PyTorch --- CUDA notes

Note: this post is a partial translation of the CUDA semantics section of the PyTorch 0.4.1 docs, together with some notes I added while reading.

 

1. torch.cuda

  torch.cuda sets up and runs CUDA operations. A Tensor must be converted to a CUDA tensor before CUDA computation can be used on it.
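As a quick sketch (my own addition, not from the original docs), here is a common device-agnostic pattern that falls back to the CPU when CUDA is unavailable, so the same script runs on either:

```python
import torch

# Pick CUDA when present, otherwise fall back to CPU.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

x = torch.tensor([1., 2.], device=device)  # created directly on `device`
y = (x * 2).to('cpu')                      # copy the result back to host memory
print(y.tolist())  # [2.0, 4.0]
```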

2. Switching the CUDA device

print("Outside device is 0")  # On device 0 (default in most scenarios)
with torch.cuda.device(1):
    print("Inside device is 1")  # On device 1
print("Outside device is still 0")  # On device 0

3. Making a tensor a CUDA tensor

  Three ways:

  a. torch.tensor([list], device=...)

  b. to(device), which is actually a copy() operation rather than a move

  c. cuda(device)

cuda = torch.device('cuda')     # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2')  # GPU 2 (these are 0-indexed)

x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)

    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)

    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)

    c = a + b
    # c.device is device(type='cuda', index=1)

    z = x + y
    # z.device is device(type='cuda', index=0)

    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
    # d.device, e.device, and f.device are all device(type='cuda', index=2)

  A CUDA tensor created as above is "bound" to its device: any arithmetic on it (addition, subtraction, multiplication, division, and so on) runs on the GPU it is bound to.
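A small illustration of that binding (my sketch; it uses CPU tensors here so it also runs without a GPU, but the same rule holds for CUDA tensors):

```python
import torch

# The result of an arithmetic op lives on the same device as its operands.
# With two tensors on cuda:1, c.device would likewise be cuda:1.
a = torch.tensor([1., 2.])
b = torch.tensor([3., 4.])
c = a + b
print(c.device, c.tolist())  # cpu [4.0, 6.0]

# Mixing operands on different devices raises a RuntimeError; you must
# first move one of them, e.g. a.to(b.device).
```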

4. torch.cuda.comm.broadcast

  Signature: torch.cuda.comm.broadcast(tensor, devices)

  The second argument is a tuple of device ids of the form (src, dst1, dst2, ...); the first device id is the device of the source tensor.

  Example:

cuda0 = torch.device('cuda:0')
x = torch.rand((3, 4)).cuda(cuda0)  # set to the default CUDA device
# the second parameter should be like (src, dst1, dst2, ...)
xt = torch.cuda.comm.broadcast(x, (0, 1))
# xt[0]: torch.Tensor on cuda:0
# xt[1]: torch.Tensor on cuda:1
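To make the behavior checkable without two GPUs, here is a hedged sketch: it only calls the real broadcast when at least two CUDA devices are present, and otherwise emulates the per-device copies with clone() purely for illustration:

```python
import torch
from torch.cuda import comm

x = torch.rand(3, 4)
if torch.cuda.device_count() >= 2:
    # real broadcast: one copy per device id in (src, dst1, ...)
    copies = comm.broadcast(x.cuda(0), (0, 1))
else:
    # CPU stand-in for illustration only
    copies = [x.clone(), x.clone()]

# every copy holds the same values as the source tensor
assert all(torch.equal(c.cpu(), x) for c in copies)
```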

 

posted @ 2018-09-03 14:15 赵小春