
Loading PyTorch 0.4 models in PyTorch 0.3, and the changes between the two versions

1. In 0.4, models and tensors are placed on a device with .to(device).
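
A minimal sketch of the 0.4 idiom (the Linear layer and input tensor below are placeholders of my own, not from the post):

import torch
import torch.nn as nn

# Pick the GPU when available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)   # move the parameters to the device
x = torch.randn(4, 10).to(device)     # move the input tensor as well
y = model(x)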

2. 0.4 removes the need for Variable; plain tensors can be used directly.
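
For example, code that wrapped tensors in Variable under 0.3 can be written directly with tensors in 0.4 (a small sketch of my own):

import torch

# 0.3 style (no longer needed):
# from torch.autograd import Variable
# x = Variable(torch.ones(3), requires_grad=True)

# 0.4 style: a plain tensor carries requires_grad itself
x = torch.ones(3, requires_grad=True)
y = (x * 2).sum()
y.backward()
print(x.grad)   # tensor([2., 2., 2.])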

3. with torch.no_grad(): replaces volatile. volatile is deprecated; when gradients are not needed, e.g. during testing, wrap the code in with torch.no_grad():.
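
A minimal sketch of the replacement (the Linear layer and random input are placeholders of my own):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)
inputs = torch.randn(4, 10)

model.eval()
with torch.no_grad():            # replaces volatile=True: no graph is built, no gradients computed
    outputs = model(inputs)

print(outputs.requires_grad)     # False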

4. Use .detach() instead of .data. x.detach() returns a Tensor with requires_grad=False that shares data with x, and if x is needed in the backward pass, autograd will detect in-place changes to the Tensor returned by x.detach() and report an error. In contrast, changes to the Tensor returned by x.data are not tracked by autograd, so if the backward pass needs x, the gradients will silently be wrong.
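
The difference shows up in a small sketch like the following (my own illustration, not from the original post): zeroing the result of .data silently corrupts the gradient, while zeroing the result of .detach() makes backward() raise an error.

import torch

a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
out = a.sigmoid()
c = out.data          # shares storage; the change is invisible to autograd
c.zero_()
out.sum().backward()
print(a.grad)         # all zeros -- silently wrong gradient

a = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
out = a.sigmoid()
c = out.detach()      # shares storage, but the in-place change is recorded
c.zero_()
out.sum().backward()  # RuntimeError: a tensor needed for backward was modified in place
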
5. torchvision

- Some interfaces changed in PyTorch 0.4, and saved models are backward compatible but not forward compatible: 0.4 can load models saved with 0.3, but 0.3 cannot directly load models saved with 0.4.

When loading a model saved with PyTorch 0.4 into PyTorch 0.3:

# Monkey-patch because I trained with a newer version.
# This can be removed once PyTorch 0.4.x is out.
# See https://discuss.pytorch.org/t/question-about-rebuild-tensor-v2/14560
import torch._utils
try:
    torch._utils._rebuild_tensor_v2
except AttributeError:
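    # PyTorch 0.3 has no _rebuild_tensor_v2 (it was added in 0.4), so define a shim here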
    def _rebuild_tensor_v2(storage, storage_offset, size, stride, requires_grad, backward_hooks):
        tensor = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
        tensor.requires_grad = requires_grad
        tensor._backward_hooks = backward_hooks
        return tensor
    torch._utils._rebuild_tensor_v2 = _rebuild_tensor_v2
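
With the patch applied before torch.load, the 0.4 checkpoint can then be read by 0.3 as usual (the file name below is a placeholder):

checkpoint = torch.load("model_saved_with_0.4.pth",
                        map_location=lambda storage, loc: storage)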
posted @ 2018-08-15 22:03 ranjiewen