PyCUDA CUDA context error when running a TensorRT model on Jetson Orin Nano for inference through a CSI camera

I have a Jetson Orin Nano device, and recently I have been developing a project that involves running object detection inference on it with a YOLOv7 .pt model converted to TensorRT.

I have been facing an issue with the PyCUDA library when running inference from a CSI camera:

ERROR: pycuda._driver.LogicError: cuMemcpyHtoDAsync failed: context is destroyed

The TensorRT framework initially loads the .trt model successfully and also starts the GStreamer pipeline to read frames from the attached CSI camera, but as soon as inference starts it raises the above error.
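For reference, the CSI capture on Jetson is typically done by handing OpenCV a GStreamer pipeline string built around nvarguscamerasrc. The helper below is a hypothetical sketch (not taken from my script) of how such a string is usually assembled for cv2.VideoCapture with the CAP_GSTREAMER backend; the parameter names and defaults are illustrative:

```python
# Hypothetical helper: builds an nvarguscamerasrc pipeline string of the
# kind commonly passed to cv2.VideoCapture(..., cv2.CAP_GSTREAMER) on Jetson.
def csi_pipeline(sensor_id=0, width=1280, height=720, fps=30, flip=0):
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, "
        f"format=NV12, framerate={fps}/1 ! "
        f"nvvidconv flip-method={flip} ! "          # NVMM -> system memory
        f"video/x-raw, format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink drop=1"  # BGR frames for OpenCV
    )

# Usage (requires OpenCV built with GStreamer support):
# import cv2
# cap = cv2.VideoCapture(csi_pipeline(), cv2.CAP_GSTREAMER)
```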

My understanding is that when the GStreamer pipeline starts, it destroys the CUDA context, which is why PyCUDA raises this error. The CUDA context therefore needs to be maintained and protected from being destroyed.
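A fix commonly reported for this class of error (though I cannot confirm it is the cause here) is to create the CUDA context explicitly with pycuda.driver.Device(0).make_context() instead of relying on pycuda.autoinit, and then push/pop that context around every inference call, since a context created by autoinit is bound only to the importing thread. A minimal sketch of the push/pop pattern; active_cuda_context is a hypothetical helper, and the commented usage lines assume the usual TensorRT/PyCUDA attribute names rather than the actual script:

```python
from contextlib import contextmanager

@contextmanager
def active_cuda_context(ctx):
    """Make ctx current on the calling thread for the duration of the block."""
    ctx.push()            # bind the context to this thread
    try:
        yield ctx
    finally:
        ctx.pop()         # always unbind, even if inference raises

# Hypothetical usage inside infer(), so cuMemcpyHtoDAsync runs with a
# live context (ctx created once via pycuda.driver.Device(0).make_context()):
# with active_cuda_context(self.ctx):
#     cuda.memcpy_htod_async(self.d_input, host_input, self.stream)
#     self.context.execute_async_v2(self.bindings, self.stream.handle)
#     cuda.memcpy_dtoh_async(host_output, self.d_output, self.stream)
#     self.stream.synchronize()
```

The try/finally matters: if inference raises while the context is pushed, a bare push/pop pair would leave the context stacked on the thread.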

I have not been able to fix this error, so I would appreciate assistance in resolving it and in understanding its cause.

The details of the hardware and the software framework are as follows:

Board: Jetson Orin Nano Developer Kit

JetPack: 5.1.1
CUDA: 11.4.315
OpenCV: 4.5.4
Python: 3.8.10

Below is the Python script from which I call the inference function used for object detection; this function in turn calls a method named infer just above it, which raises the above-mentioned error at line 38.

Following is a reference to the original code on GitHub; it contains the same file used above.
Reference code:

  1. GitHub - Linaom1214/TensorRT-For-YOLO-Series: tensorrt for yolo series (YOLOv8, YOLOv7, YOLOv6....), nms plugin support
  2. TensorRT-For-YOLO-Series/ at main · Linaom1214/TensorRT-For-YOLO-Series · GitHub (line 50 of this file is where the error is raised)

The code I am using is essentially the same as in the links above, with some slight modifications.

Note: This error does not occur when the same script is used with a USB camera on the Orin Nano, and on a Jetson Nano with JetPack 4.6.1 it works with the CSI camera as well; in both cases the script runs successfully.

Following are the specs of the Jetson Nano device on which the script runs without any issue.
Jetson Nano device specs:
Device: Jetson Nano Developer Kit
JetPack: 4.6.3 [L4T 32.7.3]
CUDA version: 10.2.300
cuDNN version:
TensorRT version:
VPI: 1.2.3
Vulkan: 1.2.70
OpenCV: 3.4.19 with CUDA: NO
Python version: 3.6.9
Current pycuda package version: 2021.1

The issue seems to arise only when using the CSI camera on the Jetson Orin Nano.

I have tried the same setup on a fresh Jetson Orin Nano and get the same error there as well.

Please help me resolve this issue as soon as possible.


Which CSI camera do you use?
Does the camera work correctly on Orin/Nano?

Could you check if the camera can work correctly with GStreamer command first?

$ gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=30/1' ! fakesink


I am using the Raspberry Pi Camera Module v2.1.

Yes, the camera is fine: a simple script that just shows the live feed displays correctly on both the Nano and the Orin. The CSI camera itself does not seem to be the issue.

Maybe this could help:

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one.


Is the issue fixed with easybob’s suggestion?