Initializing model parameters in PyTorch
# Normal (Gaussian) distribution
torch.nn.init.normal_(tensor: torch.Tensor, mean: float = 0.0, std: float = 1.0) → torch.Tensor
# Uniform distribution
torch.nn.init.uniform_(tensor: torch.Tensor, a: float = 0.0, b: float = 1.0) → torch.Tensor
# Constant value
torch.nn.init.constant_(tensor: torch.Tensor, val: float) → torch.Tensor
# All zeros
torch.nn.init.zeros_(tensor: torch.Tensor) → torch.Tensor
# All ones
torch.nn.init.ones_(tensor: torch.Tensor) → torch.Tensor
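A minimal runnable sketch of these in-place initializers (the trailing underscore means each one modifies the tensor in place and also returns it). The layer shape and the `mean`/`std`/`a`/`b` values here are arbitrary examples:

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 3)

# Gaussian: weight ~ N(0, 0.01^2)
nn.init.normal_(layer.weight, mean=0.0, std=0.01)
# Uniform: bias ~ U(-0.1, 0.1)
nn.init.uniform_(layer.bias, a=-0.1, b=0.1)

# Constant / zeros / ones on raw tensors
t = nn.init.constant_(torch.empty(2, 2), 3.0)  # every element becomes 3.0
z = nn.init.zeros_(torch.empty(2, 2))          # every element becomes 0.0
o = nn.init.ones_(torch.empty(2, 2))           # every element becomes 1.0
```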
Example code:
self.encoder_att = nn.Linear(encoder_dim, attention_dim)  # linear layer to transform encoded image
self.decoder_att = nn.Linear(decoder_dim, attention_dim)  # linear layer to transform decoder's output
self.full_att = nn.Linear(attention_dim, 1)  # linear layer to calculate values to be softmax-ed
# initialize the weight and the bias of each linear layer separately
torch.nn.init.zeros_(self.encoder_att.weight)
torch.nn.init.zeros_(self.encoder_att.bias)
torch.nn.init.zeros_(self.decoder_att.weight)
torch.nn.init.zeros_(self.decoder_att.bias)
torch.nn.init.zeros_(self.full_att.weight)
torch.nn.init.zeros_(self.full_att.bias)
# freeze all parameters so they are not updated during training
for param in self.parameters():
    param.requires_grad = False
self.relu = nn.ReLU()
self.softmax = nn.Softmax(dim=1)  # softmax layer to calculate weights
Initialization covers both the weight and the bias, and the two must be initialized separately.
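One common way to apply separate weight/bias initialization across a whole module is to walk `self.modules()` and match on layer type. The sketch below wraps the attention layers above in a small `nn.Module`; the class name, default dimensions, and the choice of `normal_` for weights with `zeros_` for biases are illustrative assumptions, not part of the original code:

```python
import torch
import torch.nn as nn

class Attention(nn.Module):
    # encoder_dim/decoder_dim/attention_dim defaults are arbitrary examples
    def __init__(self, encoder_dim=8, decoder_dim=8, attention_dim=4):
        super().__init__()
        self.encoder_att = nn.Linear(encoder_dim, attention_dim)
        self.decoder_att = nn.Linear(decoder_dim, attention_dim)
        self.full_att = nn.Linear(attention_dim, 1)
        # visit every submodule; initialize weight and bias differently
        for m in self.modules():
            if isinstance(m, nn.Linear):
                nn.init.normal_(m.weight, mean=0.0, std=0.01)
                nn.init.zeros_(m.bias)

att = Attention()
```

The same pattern extends to other layer types (e.g. an extra `isinstance(m, nn.Conv2d)` branch) without touching each layer by name.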
