In PyTorch, I wanted to inspect my model's structure with torchsummary in order to fine-tune it, so I ran:

```python
print(summary(model, input_size=([(10,1684,40),(10)])))
```

Since the model's forward function takes two inputs, I passed two arguments; inside forward they are 3-dimensional and 1-dimensional, respectively. However, this raised the following error:
`TypeError: rand() argument after * must be an iterable, not int`
Apparently `(10)` is not iterable, so as a quick experiment I tried:

```python
print(summary(model, input_size=([(10,1684,40),(10,20)])))
```

which produced this error instead:
`'lengths' argument should be a 1D CPU int64 tensor, but got 2D cuda:0 Long tensor`
I understand that the second argument, `(10)` or `(10,20)`, corresponds to `lengths`. But even though the original tensor is 1-dimensional, feeding it as a 1-dimensional input fails as shown above. Is there a way around this?
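(As far as I can tell, part of the problem is Python tuple syntax rather than torchsummary itself: `(10)` is just the integer 10 in parentheses, while a one-element tuple needs a trailing comma.)

```python
# Parentheses alone do not make a tuple -- the trailing comma does.
a = (10)   # this is the int 10
b = (10,)  # this is a one-element tuple
print(type(a).__name__, type(b).__name__)  # int tuple
```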
I am using the latest version of torchsummary.
Based on the advice I received, I revised the call:

```python
print(summary(model, input_size=([(10,1684,40),(10,)])))
```

This produced the following error message:
```python
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    267     if isinstance(x, (list, tuple)):
--> 268         _ = model.to(device)(*x, **kwargs)
    269     elif isinstance(x, dict):

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used

~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
     85         # feed into the encoder
---> 86         enc_out, enc_lengths = self.encoder(input_sequence,
     87                                             input_lengths)

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1119
-> 1120             result = forward_call(*input, **kwargs)
   1121         if _global_forward_hooks or self._forward_hooks:

~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
    101         rnn_input \
--> 102             = nn.utils.rnn.pack_padded_sequence(output,
    103                                                 output_lengths,

~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
    248     data, batch_sizes = \
--> 249         _VF._pack_padded_sequence(input, lengths, batch_first)
    250     return _packed_sequence_init(data, batch_sizes, sorted_indices, None)

RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_704668/614744292.py in <module>
      1 from torchinfo import summary
----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
    199         input_data, input_size, batch_dim, device, dtypes
    200     )
--> 201     summary_list = forward_pass(
    202         model, x, batch_dim, cache_forward_pass, device, **kwargs
    203     )

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    275     except Exception as e:
    276         executed_layers = [layer for layer in summary_list if layer.executed]
--> 277         raise RuntimeError(
    278             "Failed to run torchinfo. See above stack traces for more details. "
    279             f"Executed layers up to: {executed_layers}"

RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to:
```
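The inner RuntimeError says that `pack_padded_sequence` wants its `lengths` argument as a 1D int64 tensor on the CPU, even when the rest of the batch lives on the GPU. A minimal sketch with toy shapes (not my actual encoder) of a call that satisfies this:

```python
import torch
import torch.nn as nn

# Toy batch: 2 sequences of up to 5 steps with 3 features (batch_first layout).
output = torch.rand(2, 5, 3)
output_lengths = torch.tensor([5, 3])  # 1D int64 -- must live on the CPU

# Calling .cpu() on the lengths tensor satisfies pack_padded_sequence even
# if it had been moved to CUDA along with the rest of the batch.
packed = nn.utils.rnn.pack_padded_sequence(
    output, output_lengths.cpu(), batch_first=True)
print(packed.data.shape)  # (sum of lengths, features) = (8, 3)
```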
After that, I fixed the device mismatch by calling `.cpu()` on the lengths tensor and re-ran

```python
print(summary(model, input_size=([(10,1684,40),(10,)])))
```

but this time the following error occurred:
```python
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    267     if isinstance(x, (list, tuple)):
--> 268         _ = model.to(device)(*x, **kwargs)
    269     elif isinstance(x, dict):

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1101                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102             return forward_call(*input, **kwargs)
   1103         # Do not call functions when jit is used

~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
     85         # feed into the encoder
---> 86         enc_out, enc_lengths = self.encoder(input_sequence,
     87                                             input_lengths)

~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1119
-> 1120             result = forward_call(*input, **kwargs)
   1121         if _global_forward_hooks or self._forward_hooks:

~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
    101         rnn_input \
--> 102             = nn.utils.rnn.pack_padded_sequence(output,
    103                                                 output_lengths.cpu(),  # fixed here

~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
    248     data, batch_sizes = \
--> 249         _VF._pack_padded_sequence(input, lengths, batch_first)
    250     return _packed_sequence_init(data, batch_sizes, sorted_indices, None)

RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0

The above exception was the direct cause of the following exception:

RuntimeError                              Traceback (most recent call last)
/tmp/ipykernel_715630/614744292.py in <module>
      1 from torchinfo import summary
----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
    199         input_data, input_size, batch_dim, device, dtypes
    200     )
--> 201     summary_list = forward_pass(
    202         model, x, batch_dim, cache_forward_pass, device, **kwargs
    203     )

~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
    275     except Exception as e:
    276         executed_layers = [layer for layer in summary_list if layer.executed]
--> 277         raise RuntimeError(
    278             "Failed to run torchinfo. See above stack traces for more details. "
    279             f"Executed layers up to: {executed_layers}"

RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
```