Hello,
I have a Jetson AGX Xavier, and we have developed a deep learning model with TensorRT that uses CUDA. However, when I try to package (obfuscate) the code with PyInstaller, I cannot get past the error shown below:
[TensorRT] ERROR: /home/jenkins/workspace/TensorRT/helpers/rel-7.1/L1_Nightly_Internal/build/source/rtSafe/resources.h (460) - Cuda Error in loadKernel: 3 (initialization error)
[TensorRT] ERROR: INVALID_STATE: std::exception
[TensorRT] ERROR: INVALID_CONFIG: Deserialize the cuda engine failed.
Traceback (most recent call last):
File "yolov5/yolov5_trt_plugins.py", line 521, in <module>
File "yolov5/yolov5_trt_plugins.py", line 120, in __init__
AttributeError: 'NoneType' object has no attribute 'create_execution_context'
[10003] Failed to execute script 'yolov5_trt_plugins' due to unhandled exception!
-------------------------------------------------------------------
PyCUDA ERROR: The context stack was not empty upon module cleanup.
-------------------------------------------------------------------
A context was still active when the context stack was being
cleaned up. At this point in our execution, CUDA may already
have been deinitialized, so there is no way we can finish
cleanly. The program will be aborted now.
Use Context.pop() to avoid this problem.
------------------------------------------------------------------
Aborted (core dumped)
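To clarify where it fails: the engine-loading step in my script follows roughly the pattern below (a simplified sketch, not the actual file; the engine filename is a placeholder). In the packaged build, deserialize_cuda_engine apparently returns None, so the next call raises the 'NoneType' AttributeError, and the still-active context then triggers the PyCUDA cleanup abort.

```python
# Simplified sketch of the failing step (file name is a placeholder).
import pycuda.driver as cuda
import tensorrt as trt

cuda.init()
ctx = cuda.Device(0).make_context()  # explicit context instead of pycuda.autoinit
try:
    logger = trt.Logger(trt.Logger.WARNING)
    runtime = trt.Runtime(logger)
    with open("yolov5.engine", "rb") as f:
        engine = runtime.deserialize_cuda_engine(f.read())
    if engine is None:
        # This is where the packaged build fails: deserialization returns None,
        # so calling engine.create_execution_context() raises the AttributeError.
        raise RuntimeError("Failed to deserialize the TensorRT engine")
    context = engine.create_execution_context()
finally:
    ctx.pop()  # avoids "The context stack was not empty upon module cleanup"
```

Running this from the plain Python interpreter works; the failure only appears in the PyInstaller build.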
I obfuscate the code with the command: pyinstaller --onefile yolov5_trt_plugins.spec. yolov5_trt_plugins is the filename of the main script, where the whole pipeline is implemented. In the .spec file I specified yolov5_trt_plugins.py, and in the hidden imports I included the path to pycuda. I also tried specifying the path to tensorrt in the spec file, but that did not solve the issue.
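For completeness, the relevant part of my .spec file looks roughly like this (a simplified sketch; the exact hiddenimports list is what I have been experimenting with and may need adjusting):

```python
# yolov5_trt_plugins.spec (simplified sketch)
a = Analysis(
    ['yolov5_trt_plugins.py'],
    hiddenimports=['pycuda', 'pycuda.autoinit', 'tensorrt'],  # modules PyInstaller misses
)
pyz = PYZ(a.pure)
exe = EXE(pyz, a.scripts, a.binaries, a.datas, name='yolov5_trt_plugins')
```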
PyCUDA version: 2019.1.2
CUDA version: 10.2
Python version: 3.6
Is anyone else facing comparable errors, and does anyone know how to fix them? Also, can this obfuscation be done with the NGC docker containers?
Regards,
Chris