Cannot allocate memory in static TLS block

Description

On my Jetson Xavier NX, I am grabbing frames from a GStreamer pipeline and then running my TensorRT engine (Detectron2) for object detection.

The problem is that, under certain circumstances, it throws the error above.

Here is a rough version of the code.

If I import this module at the top, before cv2.VideoCapture is created, it throws the error above:

import sys
import cv2

from infer import TensorRTInfer  # importing here triggers the TLS error

cap = cv2.VideoCapture(
    'thetauvcsrc ! h264parse ! nvv4l2decoder ! '
    'nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! '
    'video/x-raw,format=BGR,width=1344,height=1344 ! '
    'queue ! videorate ! video/x-raw,framerate=10/3 ! '
    'queue ! appsink sync=false')

if not cap.isOpened():
    print('Could not capture')
    sys.exit(1)

But if I import it after the capture is opened, it works fine:

import sys
import cv2

cap = cv2.VideoCapture(
    'thetauvcsrc ! h264parse ! nvv4l2decoder ! '
    'nvvidconv ! video/x-raw,format=BGRx ! queue ! videoconvert ! '
    'video/x-raw,format=BGR,width=1344,height=1344 ! '
    'queue ! videorate ! video/x-raw,framerate=10/3 ! '
    'queue ! appsink sync=false')

if not cap.isOpened():
    print('Could not capture')
    sys.exit(1)

from infer import TensorRTInfer  # importing here works fine

Now in my actual code, I cannot simply move the import later, since I need to call TensorRTInfer in a class's __init__.
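(For anyone hitting the same constraint: Python allows imports at function scope, so one possible workaround, assuming import order is the only issue, is to defer the import into __init__ itself. It then executes at instantiation time, after cv2.VideoCapture has been constructed. The Detector class below is a hypothetical sketch; "infer" / "TensorRTInfer" mirror the names in this thread.)

```python
import importlib


class Detector:
    def __init__(self, module_name="infer", class_name="TensorRTInfer"):
        # Function-scope import: this runs when Detector() is
        # instantiated, not when this module is first imported,
        # so cv2.VideoCapture can be created beforehand.
        mod = importlib.import_module(module_name)
        self.backend_cls = getattr(mod, class_name)
```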

I saw some solutions online, especially the libgomp one, but that isn't the case here.
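(For reference, the generic workaround for "cannot allocate memory in static TLS block" is to preload the offending library so the dynamic loader reserves its static TLS space at process startup, before other libraries exhaust it. The path below is the usual libgomp example only; substitute whichever library your error message actually names.)

```shell
# Preload the library the loader complains about so its TLS block
# is allocated at startup. Path is a placeholder from the common
# libgomp case on aarch64 -- replace with your actual library.
export LD_PRELOAD=/usr/lib/aarch64-linux-gnu/libgomp.so.1
python3 main.py
```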

Any advice?

Environment

TensorRT Version: 8.5.2.2
Jetson & Jetpack: Nvidia Jetson Xavier Jetpack 5.1
CUDA Version: 11.4
Operating System + Version: Ubuntu 20.04
Python Version (if applicable): 3.8.10
PyTorch Version (if applicable): 1.14.0

nvm, it’s solved.
