USB camera + Python + OpenCV. Can these three work together somehow?

I have a See3CAM_CU135. It is a 4K USB camera.

It runs the DeepStream USB Python test example perfectly.
I also have a Python script (it uses dlib and YOLO) and want to run it with OpenCV, but I can't get it to run properly:
cv2.VideoCapture(0) gives bad results, around 2 fps at 4K.

Can you suggest a GStreamer pipeline that begins with v4l2src and ends in appsink, which I can use
with cap = cv2.VideoCapture(gst_url, cv2.CAP_GSTREAMER)?

Is there really no easy way to combine DeepStream Python with OpenCV and get the frames out… or is there?
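To make the question concrete, this is the kind of thing I have in mind (a sketch only; I am assuming the camera is /dev/video0 and outputs UYVY, which the See3CAM_CU135 lists among its formats, and that OpenCV is built with GStreamer support):

```python
# Sketch: v4l2src -> appsink pipeline string for cv2.VideoCapture.
# Assumptions: device node /dev/video0, UYVY output, 4K@30.
pipeline = (
    "v4l2src device=/dev/video0 ! "
    "video/x-raw,format=UYVY,width=3840,height=2160,framerate=30/1 ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true max-buffers=2"
)

# With a camera attached and a GStreamer-enabled OpenCV build:
# cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
# ok, frame = cap.read()
```

Note that `videoconvert` here runs on the CPU, which is exactly the part I would like DeepStream to replace.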

Please check this Python code:

You would need to modify the resolution to 4K:
    streammux.set_property('width', 1920)   # change to 3840 for 4K
    streammux.set_property('height', 1080)  # change to 2160 for 4K

Thank you for that reply, but it does not quite answer the topic question. Can this DeepStream app hand frames to OpenCV in Python, so that OpenCV can process them?


No. It renders the frames out through nveglglessink:

nvdsosd ! nvegltransform ! nveglglessink

To hand frames to OpenCV instead, you need to customize the tail of the pipeline to something like:

nvdsosd ! nvvideoconvert ! video/x-raw,format=RGBA ! videoconvert ! video/x-raw,format=BGR ! appsink
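On the Python side, pulling frames out of that appsink could look roughly like this (a sketch: it assumes the tail above delivers packed BGR and that `width`/`height` are known from the negotiated caps; the GStreamer calls are shown as comments since they need a live pipeline):

```python
import numpy as np

def buffer_to_bgr(data: bytes, width: int, height: int) -> np.ndarray:
    """Reshape raw BGR bytes pulled from appsink into an OpenCV-style
    HxWx3 uint8 array that dlib/YOLO code can consume directly."""
    return np.frombuffer(data, dtype=np.uint8).reshape(height, width, 3)

# In the DeepStream app (sketch): connect to appsink's "new-sample" and do:
#   sample = appsink.emit("pull-sample")
#   buf = sample.get_buffer()
#   ok, mapinfo = buf.map(Gst.MapFlags.READ)
#   frame = buffer_to_bgr(mapinfo.data, width, height)
#   buf.unmap(mapinfo)
```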

Thank you for your reply.
How can I get from v4l2src to nvdsosd? I have a USB camera…

Would this pipeline eventually run mostly on the GPU, or would it still run on the CPU and end up at around 2 fps?
Best regards
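Reading the deepstream-test1-usbcam sample, my untested guess at the full chain in front of the suggested appsink tail would be something like the following; the caps, resolutions, and the nvinfer config path are placeholders I made up, not verified values:

```shell
# Untested sketch: USB camera -> DeepStream inference -> OpenCV-friendly appsink.
# Element order follows deepstream-test1-usbcam; <pgie_config.txt> is a placeholder.
gst-launch-1.0 \
  v4l2src device=/dev/video0 ! \
  video/x-raw,format=UYVY,width=3840,height=2160,framerate=30/1 ! \
  videoconvert ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM),format=NV12' ! \
  mux.sink_0 nvstreammux name=mux batch-size=1 width=3840 height=2160 ! \
  nvinfer config-file-path=<pgie_config.txt> ! \
  nvvideoconvert ! 'video/x-raw(memory:NVMM),format=RGBA' ! nvdsosd ! \
  nvvideoconvert ! video/x-raw,format=RGBA ! \
  videoconvert ! video/x-raw,format=BGR ! \
  appsink drop=true
```

If that is roughly right, only the two conversions at the ends would run on the CPU, which is what makes me hope it would beat the 2 fps I get now.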