Question edit history

4

Typo correction

2022/01/05 14:52

Posted

pleasehelpme

Score 0

test CHANGED
File without changes
test CHANGED
@@ -204,7 +204,7 @@
 
 
 
- ~/06rnn_attentionf5/encoder.py in forward(self, sequence, lengths)
+ ~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
 
  101 rnn_input \
 

3

Code fix and addition

2022/01/05 14:52

Posted

pleasehelpme

Score 0
test CHANGED
File without changes
test CHANGED
@@ -1,6 +1,4 @@
- Sorry. Let me write just the key points before going into detail; I will fill in the details once things settle down.
-
- In PyTorch, I want to visualize the model structure with torchsummary in order to fine-tune it, but
+ In PyTorch, in order to fine-tune the model structure with torchsummary,
 
  `print(summary(model, input_size=([(10,1684,40),(10)])))`
 
@@ -38,7 +36,11 @@
 
  Then the following error message appeared.
 
+ ```python
+
+ ---------------------------------------------------------------------------
+
- ```RuntimeError Traceback (most recent call last)
+ RuntimeError Traceback (most recent call last)
 
  ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
 
@@ -146,10 +148,128 @@
 
 
 
+ RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to:
+
+ ```
+
+ Also, after this, I matched devices to the CPU via .cpu() and ran
+
+ `print(summary(model, input_size=([(10,1684,40),(10,)])))`
+
+ again, but this time the following error occurred.
+
+ ```python
+
+ ---------------------------------------------------------------------------
+
+ RuntimeError Traceback (most recent call last)
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+
+ 267 if isinstance(x, (list, tuple)):
+
+ --> 268 _ = model.to(device)(*x, **kwargs)
+
+ 269 elif isinstance(x, dict):
+
+
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+
+ 1101 or _global_forward_hooks or _global_forward_pre_hooks):
+
+ -> 1102 return forward_call(*input, **kwargs)
+
+ 1103 # Do not call functions when jit is used
+
+
+
+ ~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
+
+ 85 # feed into the encoder
+
+ ---> 86 enc_out, enc_lengths = self.encoder(input_sequence,
+
+ 87 input_lengths)
+
+
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+
+ 1119
+
+ -> 1120 result = forward_call(*input, **kwargs)
+
+ 1121 if _global_forward_hooks or self._forward_hooks:
+
+
+
+ ~/06rnn_attentionf5/encoder.py in forward(self, sequence, lengths)
+
+ 101 rnn_input \
+
+ --> 102 = nn.utils.rnn.pack_padded_sequence(output,
+
+ 103 output_lengths.cpu(), # fixed here
+
+
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
+
+ 248 data, batch_sizes = \
+
+ --> 249 _VF._pack_padded_sequence(input, lengths, batch_first)
+
+ 250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
+
+
+
+ RuntimeError: Length of all samples has to be greater than 0, but found an element in 'lengths' that is <= 0
+
+
+
+ The above exception was the direct cause of the following exception:
+
+
+
+ RuntimeError Traceback (most recent call last)
+
+ /tmp/ipykernel_715630/614744292.py in <module>
+
+ 1 from torchinfo import summary
+
+ ----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))
+
+
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
+
+ 199 input_data, input_size, batch_dim, device, dtypes
+
+ 200 )
+
+ --> 201 summary_list = forward_pass(
+
+ 202 model, x, batch_dim, cache_forward_pass, device, **kwargs
+
+ 203 )
+
+
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+
+ 275 except Exception as e:
+
+ 276 executed_layers = [layer for layer in summary_list if layer.executed]
+
+ --> 277 raise RuntimeError(
+
+ 278 "Failed to run torchinfo. See above stack traces for more details. "
+
+ 279 f"Executed layers up to: {executed_layers}"
+
+
+
  RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
 
-
-
- Code
-
  ```
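The second traceback in this revision ends in `Length of all samples has to be greater than 0`. A likely cause, stated here as an assumption rather than a confirmed diagnosis: when torchinfo receives only `input_size`, it fabricates random input tensors, so the `(10,)` "lengths" input can easily contain values that `pack_padded_sequence` rejects. A minimal sketch of that constraint (the tensors below are illustrative, not the question's data):

```python
import torch
import torch.nn as nn

# pack_padded_sequence rejects any sample length <= 0, which randomly
# generated lengths (as torchinfo produces from a bare input_size) can hit.
padded = torch.randn(2, 5, 4)        # (batch, max_len, features)
bad_lengths = torch.tensor([5, 0])   # second sample has length 0
zero_length_rejected = False
try:
    nn.utils.rnn.pack_padded_sequence(padded, bad_lengths, batch_first=True)
except RuntimeError:
    zero_length_rejected = True      # "Length of all samples has to be greater than 0"

# Strictly positive, descending lengths pack without error:
packed = nn.utils.rnn.pack_padded_sequence(
    padded, torch.tensor([5, 3]), batch_first=True
)
```

If that assumption holds, passing real tensors via torchinfo's `input_data` parameter instead of `input_size` would avoid the randomly generated lengths.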

2

Tag fix

2022/01/05 14:40

Posted

pleasehelpme

Score 0

test CHANGED
@@ -1 +1 @@
- About model summary (visualization) with torchinfo
+ pytorch: model visualization, error between cpu and cuda devices
test CHANGED
File without changes

1

Fixed input data

2022/01/05 14:06

Posted

pleasehelpme

Score 0
test CHANGED
File without changes
test CHANGED
@@ -29,3 +29,127 @@
 
 
  torchsummary: I am using the latest version.
+
+
+
+ I made the fix based on the advice I received.
+
+ `print(summary(model, input_size=([(10,1684,40),(10,)])))`
+
+ Then the following error message appeared.
+
+ ```RuntimeError Traceback (most recent call last)
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+
+ 267 if isinstance(x, (list, tuple)):
+
+ --> 268 _ = model.to(device)(*x, **kwargs)
+
+ 269 elif isinstance(x, dict):
+
+
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+
+ 1101 or _global_forward_hooks or _global_forward_pre_hooks):
+
+ -> 1102 return forward_call(*input, **kwargs)
+
+ 1103 # Do not call functions when jit is used
+
+
+
+ ~/06rnn_attentionf6/my_model.py in forward(self, input_sequence, input_lengths, label_sequence)
+
+ 85 # feed into the encoder
+
+ ---> 86 enc_out, enc_lengths = self.encoder(input_sequence,
+
+ 87 input_lengths)
+
+
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
+
+ 1119
+
+ -> 1120 result = forward_call(*input, **kwargs)
+
+ 1121 if _global_forward_hooks or self._forward_hooks:
+
+
+
+ ~/06rnn_attentionf6/encoder.py in forward(self, sequence, lengths)
+
+ 101 rnn_input \
+
+ --> 102 = nn.utils.rnn.pack_padded_sequence(output,
+
+ 103 output_lengths,
+
+
+
+ ~/.local/lib/python3.8/site-packages/torch/nn/utils/rnn.py in pack_padded_sequence(input, lengths, batch_first, enforce_sorted)
+
+ 248 data, batch_sizes = \
+
+ --> 249 _VF._pack_padded_sequence(input, lengths, batch_first)
+
+ 250 return _packed_sequence_init(data, batch_sizes, sorted_indices, None)
+
+
+
+ RuntimeError: 'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor
+
+
+
+ The above exception was the direct cause of the following exception:
+
+
+
+ RuntimeError Traceback (most recent call last)
+
+ /tmp/ipykernel_704668/614744292.py in <module>
+
+ 1 from torchinfo import summary
+
+ ----> 2 print(summary(model, input_size=([(10,1684,40),(10,)])))
+
+
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in summary(model, input_size, input_data, batch_dim, cache_forward_pass, col_names, col_width, depth, device, dtypes, row_settings, verbose, **kwargs)
+
+ 199 input_data, input_size, batch_dim, device, dtypes
+
+ 200 )
+
+ --> 201 summary_list = forward_pass(
+
+ 202 model, x, batch_dim, cache_forward_pass, device, **kwargs
+
+ 203 )
+
+
+
+ ~/.local/lib/python3.8/site-packages/torchinfo/torchinfo.py in forward_pass(model, x, batch_dim, cache_forward_pass, device, **kwargs)
+
+ 275 except Exception as e:
+
+ 276 executed_layers = [layer for layer in summary_list if layer.executed]
+
+ --> 277 raise RuntimeError(
+
+ 278 "Failed to run torchinfo. See above stack traces for more details. "
+
+ 279 f"Executed layers up to: {executed_layers}"
+
+
+
+ RuntimeError: Failed to run torchinfo. See above stack traces for more details. Executed layers up to: []
+
+
+
+ Code
+
+ ```
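The root error in this revision's traceback, `'lengths' argument should be a 1D CPU int64 tensor, but got 1D cuda:0 Long tensor`, reflects a documented constraint of `pack_padded_sequence`: its `lengths` argument must live on the CPU even when the padded input is on a CUDA device, which is exactly what the later revision's `output_lengths.cpu()` addresses. A minimal sketch of the fixed call (shapes and tensors here are illustrative, not the question's encoder):

```python
import torch
import torch.nn as nn

# pack_padded_sequence requires `lengths` on the CPU even when the padded
# input is on a CUDA device; moving lengths with .cpu(), as the question's
# fix does, satisfies this on both CPU-only and GPU machines.
device = "cuda" if torch.cuda.is_available() else "cpu"
padded = torch.randn(3, 4, 2, device=device)      # (batch, max_len, features)
lengths = torch.tensor([4, 2, 1], device=device)  # fails on cuda without .cpu()
packed = nn.utils.rnn.pack_padded_sequence(
    padded, lengths.cpu(), batch_first=True       # lengths moved to CPU
)
# 4 + 2 + 1 = 7 time steps remain after packing
```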