CUDA lib loading issue in Python / Jetson Nano

Hi,
We were able to migrate the model to ONNX. After that, we needed the pycuda libraries to convert to TensorRT. While trying to import pycuda in the Python program, we are getting an import error on the Jetson Nano. Can someone please suggest the next steps?
I am not getting an error while importing pycuda itself; however, when I try importing pycuda.autoinit, I get the error “file too short”.

The same works fine on JetPack 4.5, whereas it does not work on JetPack 4.4.1.
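
For reference, the failing sequence is just the two imports below; the “file too short” message usually comes from the dynamic loader hitting a truncated shared library when the compiled bindings are loaded:

    # Minimal repro, matching the report above: the top-level pycuda
    # package is pure Python, while pycuda.autoinit loads the compiled
    # driver bindings and creates a CUDA context.
    import pycuda           # succeeds
    import pycuda.autoinit  # fails here with "file too short"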

@karunakar.r Are you using an NGC container for converting to the ONNX and TensorRT models?

@bgiddwani I am trying to run the conversion inside the PyTorch NGC container.

Inside the l4t-pytorch container, TensorRT comes already installed.
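
A quick way to confirm from Python inside the container:

    # Sanity check that the TensorRT Python bindings are importable.
    import tensorrt as trt
    print(trt.__version__)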

Here,
PyTorch → ONNX [torch.onnx.export]
ONNX → TRT [trtexec]

Reference: TensorRT/4. Using PyTorch through ONNX.ipynb at master · NVIDIA/TensorRT · GitHub
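
A minimal sketch of the first step (the resnet18 model and the file/tensor names below are illustrative placeholders, not the ones from this thread):

    import torch
    import torchvision

    # Export a PyTorch model to ONNX with torch.onnx.export.
    model = torchvision.models.resnet18(pretrained=True).eval()
    dummy = torch.randn(1, 3, 224, 224)  # one example input for tracing
    torch.onnx.export(model, dummy, "model.onnx",
                      input_names=["input"], output_names=["output"],
                      opset_version=11)

The resulting model.onnx can then be fed to trtexec, as in the command below.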

Thanks @bgiddwani. As I have already generated the ONNX file, I tried running the statement below inside the Docker container.
!trtexec --onnx=onnx_pose.onnx --saveEngine=resnet_engine_pytorch.trt --explicitBatch

However, I am getting a syntax error. I tried running it inside python3. Can you please suggest?
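
Note: the leading ! is Jupyter/IPython shell-escape syntax, so typing that line into a plain python3 interpreter raises a SyntaxError. trtexec is a command-line tool; it can be run directly from a shell inside the container, or from a Python script via subprocess, e.g.:

    import subprocess

    # trtexec is a shell tool, not Python; launch it as a child process.
    # The arguments mirror the command above.
    subprocess.run([
        "trtexec",
        "--onnx=onnx_pose.onnx",
        "--saveEngine=resnet_engine_pytorch.trt",
        "--explicitBatch",
    ], check=True)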

Also, how can I run an .ipynb notebook within the container? I am now familiar with running a Python program within the container. Can you please share any how-to links?

The issue was because of the wrong version of Docker being used. After correcting the version, I could see CUDA.
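
For anyone hitting the same thing, one quick sanity check inside the PyTorch container:

    import torch

    # Confirm the container can actually see the GPU.
    print(torch.cuda.is_available())      # expect True
    print(torch.cuda.get_device_name(0))  # e.g. the Nano's integrated GPU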