Argus camera with pytorch (or opencv)

Hi,

I’m trying to use an Argus camera with PyTorch (or OpenCV).

To do so, I plan to grab images from libargus so I can pass them to PyTorch.

Currently I’m trying to install a Python binding to libargus.

I already corrected a bad reference to tegra_multimedia_api (now jetson_multimedia_api), but now I’m getting the following error when trying to compile:

NvVideoConverter.h: No such file or directory

I’m not sure, but it may be related to DeepStream. I already have DeepStream installed, though, and I’m unable to locate this header. Any idea where it could be located, or what to install to get it?

Otherwise, if you have a better method to access camera images from PyTorch, don’t hesitate to share it. The Python binding copies the image into a NumPy array, but I imagine the image in libargus may already be on the GPU, so it may be possible to avoid a round trip through the CPU via NumPy…

Thank you for your help!

I’m not sure, this post (GPU Acceleration Support for OpenCV Gstreamer Pipeline - #3 by Honey_Patouceul) may be outdated now, but you may try using jetson-utils.

Thank you, it seems interesting indeed.

I installed jetson-utils (jetson-inference in fact) without any issue, and I tried to open my camera:

import jetson.utils

def display_csi_camera():
    # Create the camera instance
    camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")  # You may need to adjust resolution and camera index (0) accordingly.

    # Create the display instance
    display = jetson.utils.glDisplay()

    # Main loop to capture and display frames from the camera
    while display.IsOpen():
        # Capture a frame from the camera
        img, width, height = camera.CaptureRGBA(zeroCopy=1)

        # Render the frame
        display.RenderOnce(img, width, height)

        # Update the window title with the current frames per second (FPS)
        display.SetTitle("CSI Camera | {:.1f} FPS".format(display.GetFPS()))

        # Check for user exit (Esc key)
        if display.IsClosed():
            break

# Call the main function to display the camera feed
if __name__ == "__main__":
    display_csi_camera()


Unfortunately it cannot find my camera, because it tries to open it as a V4L2 device, which doesn’t work for my camera (an e-CAM82_CUOAGX; it has no internal ISP, so the V4L2 API cannot be used with it).

I’m having a hard time finding information on how I should proceed. Any idea?

Thank you for your help!

For a Bayer sensor connected through CSI, you would use Argus.
First, check that your sensor works with Argus:

gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! autovideosink

or with argus_camera.

Then, check available modes from your sensor with:

v4l2-ctl -d0 --list-formats-ext

Pick your preferred resolution and framerate, and just use csi://0 as in the link from my previous post.
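
As a side note, since your title also mentions OpenCV: if your OpenCV build has GStreamer support, you can open a similar Argus pipeline from Python. This is just a rough sketch using the same elements as the gst-launch test above, and note that appsink delivers frames to CPU memory as NumPy arrays, so it does not avoid the CPU copy you were asking about:

import cv2

# Rough sketch, assuming an OpenCV build with GStreamer support.
# nvarguscamerasrc captures through Argus/ISP; nvvidconv converts from
# NVMM to system memory, and appsink hands BGR frames to OpenCV.
pipeline = (
    "nvarguscamerasrc ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "videoconvert ! video/x-raw,format=BGR ! "
    "appsink drop=true"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ret, frame = cap.read()       # frame is a numpy array in CPU memory
    if not ret:
        break
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) == 27:      # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()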

Thank you for your answer.

gst-launch-1.0 nvarguscamerasrc ! nvvidconv ! autovideosink works perfectly; I’m able to visualize the camera correctly.

v4l2-ctl -d0 --list-formats-ext gives:

ioctl: VIDIOC_ENUM_FMT
        Type: Video Capture

        [0]: 'RG10' (10-bit Bayer RGRG/GBGB)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.011s (90.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.011s (90.000 fps)
        [1]: 'RG12' (12-bit Bayer RGRG/GBGB)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 3840x2160
                        Interval: Discrete 0.017s (60.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.011s (90.000 fps)
                Size: Discrete 1920x1080
                        Interval: Discrete 0.011s (90.000 fps)

I changed the resolution to 1920x1080 in my previous code just in case, but it still gives me:

[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0

(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed

(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[gstreamer] gstCamera -- didn't discover any v4l2 devices
[gstreamer] gstCamera -- device discovery failed, but /dev/video0 exists
[gstreamer]              support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 do-timestamp=true ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink sync=false
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
Opening in BLOCKING MODE
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> decoder
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> decoder
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> decoder
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink

I think my previous code may be wrong and doesn’t use libargus to access the camera.

So you would change to:

camera = jetson.utils.gstCamera(1280, 720, "csi://0")

See jetson-inference/docs/aux-streaming.md at master · dusty-nv/jetson-inference · GitHub
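
For reference, that page also documents the newer videoSource/videoOutput API. Here is an untested sketch of the equivalent loop (depending on your jetson-utils version, Capture() may raise on timeout instead of returning None):

import jetson.utils

# Untested sketch using the newer streaming API from aux-streaming.md.
# csi://0 selects Argus sensor 0; display://0 opens an OpenGL window.
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming():
    img = camera.Capture()   # cudaImage in GPU-mapped memory
    if img is None:          # capture timeout, just retry
        continue
    display.Render(img)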

Thank you, it works! (I feel like an idiot for not seeing that you had already written the solution earlier, sorry about that.)

Last question: do you know if, at this stage, the image is already on the GPU, such that something like torch.as_tensor(cuda_img, device='cuda') would avoid an unnecessary copy via the CPU? Or is it on the CPU?

Thank you for your precious help!

I have little experience with jetson-utils from Python, but my understanding is that yes, it is already available to the GPU, as expected by jetson-inference (and it may also be available to the CPU when using the zeroCopy option, though that may be a bit slower).
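
If it helps, my understanding is that with a recent jetson-utils the captured image is a cudaImage implementing __cuda_array_interface__, so something like this untested sketch should give a torch tensor sharing the same GPU memory, without a host copy:

import torch
import jetson.utils

# Untested sketch: cudaImage implements __cuda_array_interface__, so
# torch.as_tensor can wrap the existing GPU memory without copying
# through the CPU.
camera = jetson.utils.videoSource("csi://0")
img = camera.Capture()                        # cudaImage on the GPU

tensor = torch.as_tensor(img, device='cuda')  # shares memory, no host copy
print(tensor.shape, tensor.dtype, tensor.device)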

