I built a reinforcement learning model, and when I try to serve the trained model file on a local host connected to FlexSim Reinforcement Learning by running "flexsim_inference.py", I get the following error:
```
Exception occurred during processing of request from ('127.0.0.1', 53187)
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\socketserver.py", line 317, in _handle_request_noblock
    self.process_request(request, client_address)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\socketserver.py", line 348, in process_request
    self.finish_request(request, client_address)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\socketserver.py", line 361, in finish_request
    self.RequestHandlerClass(request, client_address, self)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\socketserver.py", line 755, in __init__
    self.handle()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\http\server.py", line 436, in handle
    self.handle_one_request()
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.11_3.11.2032.0_x64__qbz5n2kfra8p0\Lib\http\server.py", line 424, in handle_one_request
    method()
  File "c:\Users\GAIVOTA_FLEXSIM\Documents\FLEXSIM FELIPE CAPALBO\ESTUDOS ML\Exercicio ML Warehousing\flexsim_reinforcement_learning\flexsim_inference.py", line 11, in do_GET
    self._handle_reply(params)
  File "c:\Users\GAIVOTA_FLEXSIM\Documents\FLEXSIM FELIPE CAPALBO\ESTUDOS ML\Exercicio ML Warehousing\flexsim_reinforcement_learning\flexsim_inference.py", line 30, in _handle_reply
    action, _states = FlexSimInferenceServer.model.predict(observation)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GAIVOTA_FLEXSIM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\stable_baselines3\common\base_class.py", line 553, in predict
    return self.policy.predict(observation, state, episode_start, deterministic)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GAIVOTA_FLEXSIM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\stable_baselines3\common\policies.py", line 363, in predict
    obs_tensor, vectorized_env = self.obs_to_tensor(observation)
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GAIVOTA_FLEXSIM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\stable_baselines3\common\policies.py", line 274, in obs_to_tensor
    obs_tensor = obs_as_tensor(observation, self.device)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\GAIVOTA_FLEXSIM\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\stable_baselines3\common\utils.py", line 483, in obs_as_tensor
    return th.as_tensor(obs, device=device)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
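For context, the request-handler path that fails looks roughly like the sketch below. This is not the exact contents of flexsim_inference.py: the model file name, the choice of PPO, and the query-string parsing are simplified assumptions on my part, but the failing call (FlexSimInferenceServer.model.predict(observation)) matches the traceback.

```python
# Rough sketch of the inference server, for context only.
# File name, algorithm (PPO), and query parsing are assumptions.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

import numpy as np
from stable_baselines3 import PPO


class FlexSimInferenceServer(BaseHTTPRequestHandler):
    # Trained model loaded once and shared by all requests
    model = PPO.load("trained_model.zip")  # hypothetical file name

    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        self._handle_reply(params)

    def _handle_reply(self, params):
        # The observation arrives as text in the query string and is
        # converted to a NumPy array before being passed to predict().
        observation = np.array([int(v) for v in params["observation"]])
        # This is the call that raises the CUDA device-side assert:
        action, _states = FlexSimInferenceServer.model.predict(observation)
        self.send_response(200)
        self.send_header("Content-type", "text/plain")
        self.end_headers()
        self.wfile.write(str(action).encode())


if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 5005), FlexSimInferenceServer).serve_forever()
```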
The model addresses a warehousing problem: the observation is the Type of the item to be allocated, and the action chooses one of the 3 available racks.
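In case it helps, this is conceptually how I understand the spaces (a sketch only, not my actual environment definition; the number of item Types is a placeholder):

```python
# Conceptual sketch of the observation/action spaces described above.
import gymnasium as gym

NUM_ITEM_TYPES = 5  # assumption: number of distinct item Types in my model

# Observation: the Type of the item waiting to be allocated.
# Note that gymnasium Discrete spaces expect 0-based values (0 .. n-1).
observation_space = gym.spaces.Discrete(NUM_ITEM_TYPES)

# Action: which of the 3 available racks receives the item.
action_space = gym.spaces.Discrete(3)
```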