TensorRT threading error

Hi, I have a TensorRT project that detects and counts people on a Jetson AGX Orin. The input frames come from an IP camera.

I'm trying to add threading logic: one thread captures frames and pushes them into a queue, and another thread takes frames from the queue and processes them. At this point I'm hitting some errors; I've added the output below, and I don't know how to fix it.
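Roughly, the structure I'm aiming for looks like this (a simplified sketch, not my actual code; the camera URL is a placeholder and the detection/counting step is omitted):

import queue
import threading

import cv2

# Bounded queue so the capture thread cannot run arbitrarily far ahead of processing.
frame_queue = queue.Queue(maxsize=30)

def receive(rtsp_url):
    """Capture thread: read frames from the IP camera and push them onto the queue."""
    cap = cv2.VideoCapture(rtsp_url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        try:
            frame_queue.put(frame, timeout=1)
        except queue.Full:
            pass  # drop the frame if the consumer is falling behind
    cap.release()

def process():
    """Processing thread: pop frames from the queue and run detection/counting on them."""
    while True:
        frame = frame_queue.get()
        if frame is None:      # sentinel used to stop the thread
            break
        # ... run the TensorRT YOLO detection and people counting here ...
        frame_queue.task_done()

if __name__ == '__main__':
    t1 = threading.Thread(target=receive, args=('rtsp://<camera-ip>/stream',), daemon=True)
    t2 = threading.Thread(target=process, daemon=True)
    t1.start()
    t2.start()
    t1.join()
    frame_queue.put(None)  # tell the processing thread to finish
    t2.join()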

Thanks for your attention.

The output:

start Receive
[07/18/2022-14:42:07] [TRT] [I] [MemUsageChange] Init CUDA: CPU +300, GPU +0, now: CPU 443, GPU 8812 (MiB)
[07/18/2022-14:42:07] [TRT] [I] Loaded engine size: 127 MiB
[07/18/2022-14:42:07] [TRT] [W] Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.
[07/18/2022-14:42:09] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +738, GPU +700, now: CPU 1394, GPU 9713 (MiB)
[07/18/2022-14:42:09] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +132, GPU +131, now: CPU 1526, GPU 9844 (MiB)
[07/18/2022-14:42:09] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in engine deserialization: CPU +0, GPU +121, now: CPU 0, GPU 121 (MiB)
[07/18/2022-14:42:09] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +1, now: CPU 1401, GPU 9726 (MiB)
[07/18/2022-14:42:09] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +0, now: CPU 1401, GPU 9726 (MiB)
[07/18/2022-14:42:09] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in IExecutionContext creation: CPU +0, GPU +33, now: CPU 0, GPU 154 (MiB)
Exception in thread Thread-2:
Traceback (most recent call last):
  File "/home/hsnhsynglk/Documents/tensorrt_demos/utils/yolo_with_plugins.py", line 299, in __init__
    allocate_buffers(self.engine)
  File "/home/hsnhsynglk/Documents/tensorrt_demos/utils/yolo_with_plugins.py", line 202, in allocate_buffers
    stream = cuda.Stream()
pycuda._driver.LogicError: explicit_context_dependent failed: invalid device context - no currently active context?

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.8/threading.py", line 870, in run
    self._target(*self._args, **self._kwargs)
  File "trt_yolo.py", line 175, in main
    trt_yolo = TrtYOLO(args.model, args.category_num, args.letter_box)
  File "/home/hsnhsynglk/Documents/tensorrt_demos/utils/yolo_with_plugins.py", line 301, in __init__
    raise RuntimeError('fail to allocate CUDA resources') from e
RuntimeError: fail to allocate CUDA resources
Exception ignored in: <function TrtYOLO.__del__ at 0xffff3187d9d0>
Traceback (most recent call last):
  File "/home/hsnhsynglk/Documents/tensorrt_demos/utils/yolo_with_plugins.py", line 308, in __del__
    del self.outputs
AttributeError: outputs

Hi,

This looks like a Jetson issue. Please refer to the samples below in case they are useful.

For any further assistance, we will move this post to the Jetson-related forum.
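In the meantime, one general note that may be related (a rough, unverified sketch, not specific to your code): pycuda CUDA contexts are per-thread, so a worker thread that constructs TrtYOLO typically needs to create and push its own context before the buffers are allocated, and pop it when the thread finishes. Something along these lines, where the queue handling and conf_th value are only illustrative and the detect() call follows the tensorrt_demos API:

import pycuda.driver as cuda
from utils.yolo_with_plugins import TrtYOLO  # path taken from your traceback

def process(model, category_num, letter_box, frame_queue):
    # Create and push a CUDA context for THIS thread; without it,
    # cuda.Stream() in allocate_buffers() fails with "invalid device context".
    cuda.init()
    cuda_ctx = cuda.Device(0).make_context()
    try:
        trt_yolo = TrtYOLO(model, category_num, letter_box)
        while True:
            frame = frame_queue.get()
            if frame is None:      # sentinel to stop the thread
                break
            boxes, confs, clss = trt_yolo.detect(frame, conf_th=0.3)
            # ... people counting on the detections ...
    finally:
        cuda_ctx.pop()             # release the context when the thread exits

A function like this would be used as the target of the processing thread, instead of constructing TrtYOLO directly in a thread that has no active CUDA context.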

Thanks!