
Question edit history

4

Typo correction

2022/01/05 14:52

Posted

pleasehelpme

Score 0

title CHANGED
File without changes
body CHANGED
@@ -101,7 +101,7 @@
  -> 1120 result = forward_call(*input, **kwargs)
  1121 if _global_forward_hooks or self._forward_hooks:

- ~/06rnn_attentionf5/encoder.py in forward(self, sequence, lengths)
+ ~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
  101 rnn_input \
  --> 102 = nn.utils.rnn.pack_padded_sequence(output,
  103 output_lengths.cpu(), # fixed here
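The fix quoted in this revision moves `output_lengths` onto the CPU before calling `pack_padded_sequence`. A minimal runnable sketch of that requirement, with toy shapes chosen for illustration:

```python
import torch
import torch.nn as nn

# pack_padded_sequence requires `lengths` to be a 1-D int64 tensor on the
# CPU, even when the padded input itself lives on the GPU -- which is why
# the revision above adds `.cpu()` to output_lengths.
padded = torch.randn(2, 5, 4)   # (batch, max_len, feature)
lengths = torch.tensor([5, 3])  # true length of each sequence

packed = nn.utils.rnn.pack_padded_sequence(
    padded, lengths.cpu(), batch_first=True
)
print(packed.data.shape)  # 5 + 3 valid timesteps flattened: (8, 4)
```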

3

Code fixes and additions

2022/01/05 14:52

Posted

pleasehelpme

Score 0

title CHANGED
File without changes
body CHANGED
@@ -1,5 +1,4 @@
- Sorry. Let me write just the key points before going into detail. I will fill in the details once things calm down.
- In PyTorch, I want to visualize the model structure with torchsummary in order to fine-tune it, but
+ In PyTorch, in order to fine-tune the model structure with torchsummary,
  `print(summary(model, input_size=([(10,1684,40),(10)])))`
  I ran the line above. Since the forward function takes two inputs, I pass two arguments. The arguments of the model's forward function are 3-dimensional and 1-dimensional, respectively. However, the following error occurred:

@@ -18,7 +17,9 @@
  I made corrections based on the advice I received.
  `print(summary(model, input_size=([(10,1684,40),(10,)])))`
  Then the following error message appeared.
+ ```python
+ ---------------------------------------------------------------------------
- ```RuntimeError Traceback (most recent call last)
+ RuntimeError Traceback (most recent call last)
  ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
  267 if isinstance(x, (list, tuple)):
  --> 268 _ = model.to(device)(*x, **kwargs)
@@ -72,7 +73,66 @@
  278 "Failed to run torchinfo. See above stack traces for more details. "
  279 f"Executed layers up to: {executed_layers}"

+ RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to:
+ ```
+ After this, I also matched the devices to the CPU with .cpu() and ran
+ `print(summary(model, input_size=([(10,1684,40),(10,)])))`
+ again, but this time the following error occurred.
+ ```python
+ ---------------------------------------------------------------------------
+ RuntimeError Traceback (most recent call last)
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+ 267 if isinstance(x, (list, tuple)):
+ --> 268 _ = model.to(device)(*x, **kwargs)
+ 269 elif isinstance(x, dict):
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+ 1101 or _global_forward_hooks or _global_forward_pre_hooks):
+ -> 1102 return forward_call(*input, **kwargs)
+ 1103 # Do not call functions when jit is used
+
+ ~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
+ 85 # feed into the encoder
+ ---> 86 enc_out, enc_lengths = self.encoder(input_sequence,
+ 87 input_lengths)
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+ 1119
+ -> 1120 result = forward_call(*input, **kwargs)
+ 1121 if _global_forward_hooks or self._forward_hooks:
+
+ ~/06rnn_attentionf5/encoder.py in forward(self, sequence, lengths)
+ 101 rnn_input \
+ --> 102 = nn.utils.rnn.pack_padded_sequence(output,
+ 103 output_lengths.cpu(), # fixed here
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
+ 248 data, batch_sizes = \
+ --> 249 _VF._pack_padded_sequence(input, lengths, batch_first)
+ 250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
+
+ RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0
+
+ The above exception was the direct cause of the following exception:
+
+ RuntimeError Traceback (most recent call last)
+ /tmp/ipykernel_715630/614744292.py in <module>
+ 1 from torchinfo import summary
+ ----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
+ 199 input_data, input_size, batch_dim, device, dtypes
+ 200 )
+ --> 201 summary_list = forward_pass(
+ 202 model, x, batch_dim, cache_forward_pass, device, **kwargs
+ 203 )
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+ 275 except Exception as e:
+ 276 executed_layers = [layer for layer in summary_list if layer.executed]
+ --> 277 raise RuntimeError(
+ 278 "Failed to run torchinfo. See above stack traces for more details. "
+ 279 f"Executed layers up to: {executed_layers}"
+
  RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
-
- code
  ```
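The "Length of all samples has to be greater than 0" error in this revision is plausibly an artifact of how `summary(model, input_size=...)` works: torchinfo fabricates random float inputs for every shape it is given, so the `(10,)` "lengths" argument reaches `pack_padded_sequence` as values that truncate to 0. A hedged sketch of a workaround, building real tensors and passing them through `summary`'s `input_data` parameter (which is visible in the traceback's signature; `model` is the questioner's model and is not defined here):

```python
import torch

# torchinfo's input_size path generates random float tensors, so a (10,)
# lengths input arrives as near-zero floats and pack_padded_sequence rejects
# it. Instead, construct concrete tensors with valid, positive lengths
# (shapes taken from the question) and pass them via input_data.
features = torch.randn(10, 1684, 40)                 # (batch, time, feature)
lengths = torch.full((10,), 1684, dtype=torch.long)  # positive, valid lengths

# from torchinfo import summary
# print(summary(model, input_data=[features, lengths]))
```

The `dtypes` parameter, also visible in the quoted `summary` signature, is another avenue for controlling the generated inputs' types.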

2

Tag correction

2022/01/05 14:40

Posted

pleasehelpme

Score 0

title CHANGED
@@ -1,1 +1,1 @@
- Model summary with torchinfo (about visualization
+ pytorch: model visualization, error between cpu and cuda devices
body CHANGED
File without changes

1

Corrected input data

2022/01/05 14:06

Posted

pleasehelpme

Score 0

title CHANGED
File without changes
body CHANGED
@@ -13,4 +13,66 @@

  I understand that the (10) or (10,20) in the second argument corresponds to lengths, but even though its original number of dimensions is 1, passing it as 1-dimensional results in the error above. Is there any solution?

- I am using the latest version of torchsummary.
+ I am using the latest version of torchsummary.
+
+ I made corrections based on the advice I received.
+ `print(summary(model, input_size=([(10,1684,40),(10,)])))`
+ Then the following error message appeared.
+ ```RuntimeError Traceback (most recent call last)
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+ 267 if isinstance(x, (list, tuple)):
+ --> 268 _ = model.to(device)(*x, **kwargs)
+ 269 elif isinstance(x, dict):
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+ 1101 or _global_forward_hooks or _global_forward_pre_hooks):
+ -> 1102 return forward_call(*input, **kwargs)
+ 1103 # Do not call functions when jit is used
+
+ ~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
+ 85 # feed into the encoder
+ ---> 86 enc_out, enc_lengths = self.encoder(input_sequence,
+ 87 input_lengths)
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+ 1119
+ -> 1120 result = forward_call(*input, **kwargs)
+ 1121 if _global_forward_hooks or self._forward_hooks:
+
+ ~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
+ 101 rnn_input \
+ --> 102 = nn.utils.rnn.pack_padded_sequence(output,
+ 103 output_lengths,
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
+ 248 data, batch_sizes = \
+ --> 249 _VF._pack_padded_sequence(input, lengths, batch_first)
+ 250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
+
+ RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
+
+ The above exception was the direct cause of the following exception:
+
+ RuntimeError Traceback (most recent call last)
+ /tmp/ipykernel_704668/614744292.py in <module>
+ 1 from torchinfo import summary
+ ----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
+ 199 input_data, input_size, batch_dim, device, dtypes
+ 200 )
+ --> 201 summary_list = forward_pass(
+ 202 model, x, batch_dim, cache_forward_pass, device, **kwargs
+ 203 )
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+ 275 except Exception as e:
+ 276 executed_layers = [layer for layer in summary_list if layer.executed]
+ --> 277 raise RuntimeError(
+ 278 "Failed to run torchinfo. See above stack traces for more details. "
+ 279 f"Executed layers up to: {executed_layers}"
+
+ RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
+
+ code
+ ```
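The traceback in this first revision shows the `'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor` failure: the lengths tensor was moved to CUDA along with the model. A minimal sketch of the encoder pattern from the traceback, with hypothetical toy dimensions, keeping `lengths` on the CPU inside `forward` so the call works regardless of the model's device:

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy stand-in for the encoder in the traceback (names are illustrative)."""

    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(input_size=4, hidden_size=8, batch_first=True)

    def forward(self, sequence, lengths):
        # pack_padded_sequence only accepts CPU int64 lengths, so call
        # .cpu() here rather than relying on the caller's device placement.
        packed = nn.utils.rnn.pack_padded_sequence(
            sequence, lengths.cpu(), batch_first=True, enforce_sorted=False
        )
        out, _ = self.rnn(packed)
        # Unpack back to a padded (batch, max_len, hidden) tensor.
        out, out_lengths = nn.utils.rnn.pad_packed_sequence(out, batch_first=True)
        return out, out_lengths

enc = TinyEncoder()
x = torch.randn(2, 5, 4)                  # (batch, max_len, feature)
y, y_len = enc(x, torch.tensor([5, 3]))   # lengths stay valid on any device
print(y.shape)  # (2, 5, 8): padded back to (batch, max_len, hidden)
```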