What I want to achieve
I want to do reinforcement learning and imitation learning using Unity's ML-Agents.
The problem / error message
I am using ML-Agents 0.11.0, the latest version, but there is so little reference material on it that I have not been able to solve this.
(base) C:\Users\user\Documents\ml-agents-master\ml-agents>mlagents-learn ../config/trainer_config.yaml --run-id=firstRun --train
2019-11-14 14:45:59.431992: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_100.dll'; dlerror: cudart64_100.dll not found
2019-11-14 14:45:59.436532: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
WARNING:tensorflow:
The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
* https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
* https://github.com/tensorflow/addons
* https://github.com/tensorflow/io (for I/O related ops)
If you depend on functionality not listed there, please file an issue.

(ML-Agents ASCII-art banner omitted)

INFO:mlagents.trainers:CommandLineOptions(debug=False, num_runs=1, seed=-1, env_path=None, run_id='firstRun', load_model=False, train_model=True, save_freq=50000, keep_checkpoints=5, base_port=5005, num_envs=1, curriculum_folder=None, lesson=0, slow=False, no_graphics=False, multi_gpu=False, trainer_config_path='../config/trainer_config.yaml', sampler_file_path=None, docker_target_name=None, env_args=None, cpu=False)
INFO:mlagents.envs:Start training by pressing the Play button in the Unity Editor.
Relevant source code
After the message above appears, pressing Play in the Unity editor produces the following error.
Process Process-1:
Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\multiprocessing\process.py", line 297, in _bootstrap
    self.run()
  File "c:\users\user\anaconda3\lib\multiprocessing\process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 82, in worker
    env = env_factory(worker_id)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 359, in create_unity_environment
    args=env_args,
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\environment.py", line 105, in __init__
    aca_output = self.send_academy_parameters(rl_init_parameters_in)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\environment.py", line 689, in send_academy_parameters
    return self.communicator.initialize(inputs)
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\rpc_communicator.py", line 88, in initialize
    "The Unity environment took too long to respond. Make sure that :\n"
mlagents.envs.exception.UnityTimeOutException: The Unity environment took too long to respond. Make sure that :
	 The environment does not need user interaction to launch
	 The Agents are linked to the appropriate Brains
	 The environment and the Python interface have compatible versions.
Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\multiprocessing\connection.py", line 312, in _recv_bytes
    nread, err = ov.GetOverlappedResult(True)
BrokenPipeError: [WinError 109] The pipe has been ended.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 59, in recv
    response: EnvironmentResponse = self.conn.recv()
  File "c:\users\user\anaconda3\lib\multiprocessing\connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "c:\users\user\anaconda3\lib\multiprocessing\connection.py", line 321, in _recv_bytes
    raise EOFError
EOFError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "c:\users\user\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\user\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\user\Anaconda3\Scripts\mlagents-learn.exe\__main__.py", line 9, in <module>
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 408, in main
    run_training(0, run_seed, options, Queue())
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\trainers\learn.py", line 222, in run_training
    options.sampler_file_path, env.reset_parameters, run_seed
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 225, in reset_parameters
    return self.env_workers[0].recv().payload
  File "c:\users\user\anaconda3\lib\site-packages\mlagents\envs\subprocess_env_manager.py", line 62, in recv
    raise UnityCommunicationException("UnityEnvironment worker: recv failed.")
mlagents.envs.exception.UnityCommunicationException: UnityEnvironment worker: recv failed.
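The UnityTimeOutException checklist points at the editor/trainer handshake, which happens over TCP on base_port (5005 in the CommandLineOptions above). One thing worth ruling out is a stale mlagents-learn process or another program already holding that port. A minimal sketch for checking this, assuming only the default port number from the log:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port over TCP."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        # connect_ex returns 0 on a successful connection, i.e. a listener exists
        return s.connect_ex((host, port)) == 0

# ML-Agents' default base_port is 5005 (see CommandLineOptions above).
# If this prints True before you run mlagents-learn, something else is
# holding the port and the editor handshake will time out.
print("port 5005 already in use:", port_in_use(5005))
```

If the port is taken, killing the leftover process or passing a different --base-port to mlagents-learn would be the next thing to try.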
What I have tried
Reinstalling, and reading related websites.
Supplementary information (framework/tool versions)
TensorFlow 1.15
ML-Agents 0.11.0