I trained a model with PyTorch (GPU) and saved it as "file_1.pth",
converted "file_1.pth" to "file_2.pt",
and then tried to load "file_2.pt" from a C++ program, but the error message below appears.
For reference:
The training script is test_train.py.
The conversion script is test_convert.py.
The C++ program is test_load.cpp.
The training result was as follows:
epoch:11, Test set: Average loss: 0.000058, Accuracy: 5000/5000 (100.00%)
Could you tell me which part is wrong?
Error message
error loading the model
Could not run 'aten::empty_strided' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty_strided' is only available for these backends: [CPU, Meta, BackendSelect, Python, Named, Conjugate, Negative, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, UNKNOWN_TENSOR_TYPE_ID, Autocast, Batched, VmapMode].
CPU: registered at aten\src\ATen\RegisterCPU.cpp:18433 [kernel]
Meta: registered at aten\src\ATen\RegisterMeta.cpp:12703 [kernel]
BackendSelect: registered at aten\src\ATen\RegisterBackendSelect.cpp:665 [kernel]
Python: registered at ....\aten\src\ATen\core\PythonFallbackKernel.cpp:47 [backend fallback]
Named: registered at ....\aten\src\ATen\core\NamedRegistrations.cpp:7 [backend fallback]
Conjugate: fallthrough registered at ....\aten\src\ATen\ConjugateFallback.cpp:22 [kernel]
Negative: fallthrough registered at ....\aten\src\ATen\native\NegateFallback.cpp:22 [kernel]
ADInplaceOrView: fallthrough registered at ....\aten\src\ATen\core\VariableFallbackKernel.cpp:64 [backend fallback]
AutogradOther: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradCPU: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradCUDA: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradXLA: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradLazy: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradXPU: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradMLC: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradHPU: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradNestedTensor: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradPrivateUse1: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradPrivateUse2: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
AutogradPrivateUse3: registered at ....\torch\csrc\autograd\generated\VariableType_2.cpp:10491 [autograd kernel]
Tracer: registered at ....\torch\csrc\autograd\generated\TraceType_2.cpp:11425 [kernel]
UNKNOWN_TENSOR_TYPE_ID: fallthrough registered at ....\aten\src\ATen\autocast_mode.cpp:466 [backend fallback]
Autocast: fallthrough registered at ....\aten\src\ATen\autocast_mode.cpp:305 [backend fallback]
Batched: registered at ....\aten\src\ATen\BatchingRegistrations.cpp:1016 [backend fallback]
VmapMode: fallthrough registered at ....\aten\src\ATen\VmapModeRegistrations.cpp:33 [backend fallback]
test_train.py
・・・
device = "cuda"
model = Net().to(device)
optimizer = optim.Adadelta(model.parameters(), lr=1.0)
scheduler = StepLR(optimizer, step_size=1, gamma=0.7)
for epoch in range(1, 12):
    train(model, device, train_loader, optimizer)
    test(model, device, test_loader, epoch)
    scheduler.step()
torch.save(model.state_dict(), "file_1.pth")
test_convert.py
・・・
device = "cuda"
model = Net().to(device)
input = torch.rand((1, 1, 224, 224), dtype=torch.float32).cuda()
loaded_model = Net().to(device)
loaded_model.load_state_dict(torch.load("file_1.pth"))
loaded_model.eval()
with torch.no_grad():
    traced_net = torch.jit.trace(loaded_model, input)
traced_net.save("file_2.pt")
test_load.cpp
・・・
const char* s_pfn = "file_2.pt";
torch::jit::script::Module module;
try {
    module = torch::jit::load(s_pfn);
}
catch (const c10::Error& e) {
    std::cerr << "error loading the model\n";
    std::cerr << e.msg();
    return -1;
}
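For context on what the error is complaining about: test_convert.py traces the model while it and the example input live on the GPU, so the saved TorchScript module records CUDA tensors, and a libtorch build without CUDA support cannot materialize them at load time. A minimal sketch of tracing on the CPU instead is below. Note that TinyNet is a hypothetical stand-in (the question's Net definition is not shown), and the file names here are illustrative, not the question's actual files:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the question's Net (its real definition is not shown).
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 4, kernel_size=3, padding=1)
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        x = self.conv(x).mean(dim=(2, 3))  # global average pooling
        return self.fc(x)

# Keep everything on the CPU before tracing, so the saved TorchScript
# module contains no CUDA tensors and loads in a CPU-only libtorch.
device = "cpu"
model = TinyNet().to(device)
# For a checkpoint saved from a GPU run, map it onto the CPU when loading:
# model.load_state_dict(torch.load("file_1.pth", map_location=device))
model.eval()

example = torch.rand((1, 1, 224, 224), dtype=torch.float32, device=device)
with torch.no_grad():
    traced = torch.jit.trace(model, example)
traced.save("file_2_cpu.pt")

# Sanity check: the reloaded module reproduces the traced module's output.
reloaded = torch.jit.load("file_2_cpu.pt")
print(torch.allclose(traced(example), reloaded(example)))
```

Whether this matches the intended deployment depends on the C++ side; it only illustrates producing a TorchScript file that a CPU-only runtime can open.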