I’m trying to use an Argus camera with PyTorch (or OpenCV).
To do so, I plan to grab images from libargus so I can pass them to PyTorch.
Currently I’m trying to install a Python binding to libargus.
I already corrected a bad reference to “tegra_multimedia_api” (now “jetson_multimedia_api”), but I’m now getting the following error when compiling:
NvVideoConverter.h: No such file or directory
I’m not sure, but it may be related to DeepStream. However, I already have DeepStream installed, and I’m unable to locate this header. Any idea where it could be located, or what I should install to get it?
Otherwise, if you have a better method to access camera images from PyTorch, don’t hesitate to share it. The Python binding will push the image into a numpy array, but I imagine the image in libargus may already be on the GPU, and that it may be possible to avoid going through the CPU via numpy …
I installed jetson-utils (jetson-inference, in fact) without any issue, and I tried to open my camera:
import jetson.utils

def display_csi_camera():
    # Create the camera instance
    camera = jetson.utils.gstCamera(1280, 720, "/dev/video0")  # You may need to adjust resolution and camera index (0) accordingly.

    # Create the display instance
    display = jetson.utils.glDisplay()

    # Main loop to capture and display frames from the camera
    while display.IsOpen():
        # Capture a frame from the camera
        img, width, height = camera.CaptureRGBA(zeroCopy=1)

        # Render the frame
        display.RenderOnce(img, width, height)

        # Update the window title with the current frames per second (FPS)
        display.SetTitle("CSI Camera | {:.1f} FPS".format(display.GetFPS()))

        # Check for user exit (Esc key)
        if display.IsClosed():
            break

# Call the main function to display the camera feed
if __name__ == "__main__":
    display_csi_camera()
Unfortunately it cannot find my camera, because it tries to open a v4l2 device, which mine is not (the e-CAM82_CUOAGX doesn’t have an internal ISP, so the v4l2 API cannot be used with this camera).
I’m having a hard time finding information on how I should proceed. Any idea?
I changed the resolution to 1920x1080 in my previous code just in case, but it still gives me:
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstCamera -- attempting to create device v4l2:///dev/video0
(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
(python3:104295): GStreamer-CRITICAL **: 15:59:15.916: gst_element_message_full_with_details: assertion 'GST_IS_ELEMENT (element)' failed
[gstreamer] gstCamera -- didn't discover any v4l2 devices
[gstreamer] gstCamera -- device discovery failed, but /dev/video0 exists
[gstreamer] support for compressed formats is disabled
[gstreamer] gstCamera pipeline string:
[gstreamer] v4l2src device=/dev/video0 do-timestamp=true ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv flip-method=0 ! video/x-raw ! appsink name=mysink sync=false
[gstreamer] gstCamera successfully created device v4l2:///dev/video0
[OpenGL] glDisplay -- X screen 0 resolution: 1920x1080
[OpenGL] glDisplay -- X window resolution: 1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
Opening in BLOCKING MODE
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> nvvconv0
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> decoder
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> nvvconv0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> decoder
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> nvvconv0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> decoder
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera -- end of stream (EOS)
[gstreamer] gstreamer v4l2src0 ERROR Internal data stream error.
[gstreamer] gstreamer Debugging info: gstbasesrc.c(3072): gst_base_src_loop (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
streaming stopped, reason not-negotiated (-4)
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
I think my previous code is wrong and doesn’t use libargus to access the camera.
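If I understand correctly, I would instead need a pipeline built on nvarguscamerasrc, which goes through libargus. For the OpenCV route I mentioned at the beginning, I imagine something along these lines would be the idea (untested, and it assumes OpenCV was built with GStreamer support):

import cv2

# Hypothetical pipeline: nvarguscamerasrc (libargus) instead of v4l2src,
# converting from NVMM memory to a BGR frame that OpenCV can consume
pipeline = (
    "nvarguscamerasrc sensor-id=0 ! "
    "video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1 ! "
    "nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! "
    "appsink drop=1"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
ok, frame = cap.read()  # frame is a numpy array on the CPU at this point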
Thank you, it works! (I feel a bit silly for not having seen that you had already written the solution previously, sorry about that.)
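For anyone landing on this later, the working version looks roughly like this; the key change is opening the camera with the csi:// protocol so that jetson-utils goes through nvarguscamerasrc / libargus instead of v4l2src:

import jetson.utils

# csi://0 opens MIPI CSI camera 0 through nvarguscamerasrc (libargus),
# instead of the v4l2:///dev/video0 path that failed above
camera = jetson.utils.videoSource("csi://0")
display = jetson.utils.videoOutput("display://0")

while display.IsStreaming() and camera.IsStreaming():
    img = camera.Capture()  # a cudaImage, not a numpy array
    display.Render(img)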
Last question: do you know if, at this stage, the image is already on the GPU, so that doing something like torch.as_tensor(cuda_img, device='cuda') would avoid an unnecessary copy via the CPU? Or is it on the CPU?
I don’t have much experience with jetson-utils from Python, but my understanding is that yes, the image is already on the GPU, as expected by jetson-inference (and it may also be accessible from the CPU when using the zeroCopy option, though that may be a bit slower).
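If it helps, here is a minimal sketch of what I mean (it assumes a recent jetson-utils, where cudaImage implements __cuda_array_interface__):

import torch
import jetson.utils

camera = jetson.utils.videoSource("csi://0")
img = camera.Capture()  # cudaImage whose data already lives in GPU memory

# Since cudaImage exposes __cuda_array_interface__, torch can wrap the
# existing CUDA allocation directly, without a round trip through a
# numpy array on the CPU
tensor = torch.as_tensor(img, device='cuda')
print(tensor.shape, tensor.device)  # (height, width, channels) on cuda:0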