PyTorch common data types
tensor, variable, numpy
| numpy -> tensor | tensor -> numpy |
| --- | --- |
| `x_numpy = np.arange(100).reshape(10, 10)`; `x_tensor = torch.from_numpy(x_numpy)` (on CPU by default; the resulting tensor and the numpy array share memory) | `x_tensor = torch.randn((3, 2))`; `x_numpy = x_tensor.numpy()` (the tensor and the numpy array share memory) |
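A minimal sketch of the shared-memory behavior described above (variable names are illustrative): mutating one side is visible from the other.

```python
import numpy as np
import torch

# numpy -> tensor: from_numpy shares the underlying buffer (CPU only)
x_numpy = np.arange(100).reshape(10, 10)
x_tensor = torch.from_numpy(x_numpy)
x_numpy[0, 0] = -1                # mutate the numpy array ...
print(x_tensor[0, 0].item())      # ... the tensor sees the change: -1

# tensor -> numpy: .numpy() also shares memory
y_tensor = torch.randn(3, 2)
y_numpy = y_tensor.numpy()
y_tensor[0, 0] = 7.0              # mutate the tensor ...
print(y_numpy[0, 0])              # ... the numpy array sees it: 7.0
```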
| tensor -> variable | variable -> tensor |
| --- | --- |
| `from torch.autograd import Variable`; `x_variable = Variable(x_tensor, requires_grad=True)` (since PyTorch 0.4, `Variable` and `Tensor` are merged and `Variable` is deprecated) | `x_tensor = x_variable.detach()` |
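A small sketch of the round trip above. `Variable` still works in modern PyTorch but simply returns a `Tensor`; `detach()` returns a tensor that shares storage with the original but is cut off from gradient tracking.

```python
import torch
from torch.autograd import Variable  # deprecated since 0.4; returns a plain Tensor

x_tensor = torch.randn(3, 2)
x_variable = Variable(x_tensor, requires_grad=True)

x_variable.sum().backward()      # populate x_variable.grad
x_back = x_variable.detach()     # shares storage, no grad tracking
print(x_back.requires_grad)      # False
```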
| cpu variable -> gpu variable | gpu variable -> cpu variable |
| --- | --- |
| `if torch.cuda.is_available(): gpu_variable = x_variable.cuda()` | `cpu_variable = gpu_variable.cpu()` |
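A guarded sketch of the device transfer above: `.cuda()` raises if no GPU is present, so the `torch.cuda.is_available()` check matters. Either way the result of `.cpu()` lives in host memory.

```python
import torch

x = torch.randn(3, 2)
if torch.cuda.is_available():
    x_gpu = x.cuda()     # copy to the default GPU
    x_cpu = x_gpu.cpu()  # copy back to host memory
else:
    x_cpu = x            # already on the CPU; calling .cuda() here would fail
print(x_cpu.device)      # cpu
```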
PyTorch explicit type casting
| scalar (the value at a single position of a tensor) | tensor |
| --- | --- |
| `x_scalar = x_tensor[0, 0]` | `x_tensor = torch.randn(3, 2)` |
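Note that indexing a single position, as above, still yields a 0-dim tensor; `.item()` converts it to a plain Python number. A quick sketch:

```python
import torch

x_tensor = torch.randn(3, 2)
x_scalar = x_tensor[0, 0]   # a 0-dim tensor, not a Python number
value = x_scalar.item()     # extract as a plain Python float
print(type(value))          # <class 'float'>
```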
| float tensor | float tensor -> integer tensor |
| --- | --- |
| `x_float32 = torch.randn((3, 2), dtype=torch.float32)` | `x_int = x_float32.type(torch.int32)` (not in-place; if `x_float32` is a variable carrying gradients, the int result is detached from the computation graph) |
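A sketch of the cast and its gradient-stripping side effect: the conversion is out-of-place, and since integer tensors cannot require grad, the result is detached from the graph.

```python
import torch

x_float32 = torch.randn((3, 2), dtype=torch.float32, requires_grad=True)
x_int = x_float32.type(torch.int32)  # out-of-place; x_float32 is unchanged

print(x_int.dtype)            # torch.int32
print(x_int.requires_grad)    # False: cut off from the autograd graph
print(x_float32.dtype)        # still torch.float32
```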
Boolean masks built with comparison ops can be used to pick out elements via torch.masked_select:

>>> x = torch.randn(3, 4)
>>> x
tensor([[-0.6089,  1.0113, -0.5017,  0.0393],
        [-0.8978, -0.0118,  0.3297, -0.4590],
        [ 0.9305, -0.6148, -0.8959,  1.0758]])
>>> mask = x.ge(0.5)
>>> mask
tensor([[False,  True, False, False],
        [False, False, False, False],
        [ True, False, False,  True]])
>>> torch.masked_select(x, mask)
tensor([1.0113, 0.9305, 1.0758])
Comparing two tensors in torch
torch.equal(tensor1, tensor2)  # returns a Python bool (True or False), not a tensor
torch.gt(tensor1, tensor2)  # element-wise greater-than; returns a bool Tensor
torch.ge(tensor1, tensor2)  # element-wise greater-or-equal; returns a bool Tensor
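The three comparison forms above can be sketched side by side (values are illustrative):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([1, 2, 3])
c = torch.tensor([0, 2, 4])

print(torch.equal(a, b))  # True: same shape and values, returned as a Python bool
print(torch.gt(a, c))     # tensor([ True, False, False]): element-wise a > c
print(torch.ge(a, c))     # tensor([ True,  True, False]): element-wise a >= c
```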