PyTorch error

The error

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [544, 768]], which is output 0 of ReluBackward0, is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
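
This class of error appears whenever an in-place operation modifies a tensor that autograd saved for the backward pass. A minimal sketch (not the code from this post) that should raise the same kind of RuntimeError:

import torch

x = torch.randn(5, requires_grad=True)
y = torch.relu(x)   # autograd saves the ReLU output for its backward pass
y.mul_(2)           # in-place op bumps y's version counter from 0 to 1
y.sum().backward()  # RuntimeError: ... output 0 of ReluBackward0 ... is at version 1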

Troubleshooting

Use torch.autograd.set_detect_anomaly(True).

Before training starts, enable it globally:

torch.autograd.set_detect_anomaly(True)

When running the backward pass, wrap it in the context manager:

with torch.autograd.detect_anomaly():  # reports the forward op that breaks backward
    loss.backward()
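
A minimal end-to-end sketch of where the two calls go (the model, optimizer, and data below are placeholders, not taken from the original post):

import torch
import torch.nn.functional as F

model = torch.nn.Linear(768, 10)                    # placeholder model
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

torch.autograd.set_detect_anomaly(True)             # 1) enable globally before training

for step in range(10):
    x = torch.randn(4, 768)
    target = torch.randint(0, 10, (4,))
    loss = F.cross_entropy(model(x), target)

    opt.zero_grad()
    with torch.autograd.detect_anomaly():           # 2) or wrap just the backward pass
        loss.backward()
    opt.step()

With anomaly detection enabled, the traceback points at the forward-pass operation whose saved tensor was later modified, instead of only at loss.backward().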

Finding

The error was traced to x = F.relu(x); the exact reason is unknown.
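
The post does not pin down which in-place operation touched the ReLU output. A common fix in this situation (an assumption here, not something stated in the post) is to keep the activation out-of-place (the default inplace=False) and to replace any later in-place update on its output with an out-of-place one:

import torch
import torch.nn.functional as F

x = torch.randn(544, 768, requires_grad=True)

h = F.relu(x)        # ReLU saves h for its backward pass; keep inplace=False (default)
# h += 1.0           # an in-place update like this would trigger the error at backward time
h = h + 1.0          # out-of-place update: new tensor, the saved h stays at version 0
h.sum().backward()   # backward now succeeds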
