Question on jetson.utils.videoSource

I have a 360 camera connected to Xavier NX via USB (using v4l2src device=/dev/video0).

When I open /dev/video0 in VLC, I see the full 360 video from both lenses. Similarly, when I use "gst-launch-1.0 v4l2src device=/dev/video0 ! nvvidconv ! nv3dsink sync=false", I see the full 360 video from both lenses.

However, when I call jetson.utils.videoSource("/dev/video0"), I only see the 180 video from one lens.

I am using the exact same Python code from

(I just replaced the CSI camera source with /dev/video0.)

How can I see the full 360 video using jetson.utils.videoSource()?

Alternatively, is there a way to pass the full pipeline ("v4l2src device=/dev/video0 ! nvvidconv ! nv3dsink sync=false") to jetson.utils.videoSource() so I can see the full 360 video?

Hi,
The source code is in

You may need to customize the pipeline according to your camera.

Can you try the video-viewer utility first? It should provide the full image. What is the size of the images that come from the camera, and what size does video-viewer report that it captures? Can you please post the terminal log from video-viewer /dev/video0?

Is it possible that the whole image is too big to fit on your screen at once, and hence you only see a part of it?
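
As a quick cross-check from Python (a minimal sketch using the default videoSource options), you can also print the captured dimensions directly:

import jetson.utils

# open the camera with the default options and print the capture size
# (the same numbers video-viewer reports in its log)
camera = jetson.utils.videoSource("/dev/video0")

img = camera.Capture()
print("captured frame: {:d}x{:d}, format={:s}".format(img.width, img.height, img.format))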

Here's the terminal log snippet from video-viewer /dev/video0. It is capturing the full 3840x1920 frame, but the display size is set to 1920x1080. Is it possible to change the display size?


[gstreamer] opening gstCamera for streaming, transitioning pipeline to GST_STATE_PLAYING
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> v4l2src0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> v4l2src0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer stream status ENTER ==> src
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> v4l2src0
[gstreamer] gstCamera -- onPreroll
[gstreamer] gstCamera -- map buffer size was less than max size (11059200 vs 11059207)
[gstreamer] gstCamera recieve caps: video/x-raw, format=(string)I420, width=(int)3840, height=(int)1920, framerate=(fraction)30/1, colorimetry=(string)2:4:7:1, interlace-mode=(string)progressive
[gstreamer] gstCamera -- recieved first frame, codec=unknown format=i420 width=3840 height=1920 size=11059207
[gstreamer] gstreamer changed state from READY to PAUSED ==> mysink
[gstreamer] gstreamer message async-done ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> mysink
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> pipeline0
RingBuffer -- allocated 4 buffers (11059207 bytes each, 44236828 bytes total)
RingBuffer -- allocated 4 buffers (22118400 bytes each, 88473600 bytes total)
video-viewer: captured 1 frames (3840 x 1920)
[OpenGL] glDisplay -- set the window size to 1920x1080
[OpenGL] creating 3840x1920 texture (GL_RGB8 format, 22118400 bytes)
[cuda] registered openGL texture for interop access (3840x1920, GL_RGB8, 22118400 bytes)
video-viewer: captured 2 frames (3840 x 1920)
video-viewer: captured 3 frames (3840 x 1920)

What is the size of your monitor - is it 4K or is it 1080p?

The alternative would be to resize the image from the camera as shown here:

Resizing worked! I think both VLC and GStreamer do the resizing automatically, whereas in the Python code you have to do it explicitly.

Here's the snippet of my code that worked:

# camera, net and display come from the detectnet.py example referenced above
while display.IsStreaming():
    img = camera.Capture()

    # allocate a half-resolution image to resize into
    imgOutput = jetson.utils.cudaAllocMapped(width=img.width * 0.5, height=img.height * 0.5, format=img.format)

    # rescale the image (the dimensions are taken from the image capsules)
    jetson.utils.cudaResize(img, imgOutput)

    print(imgOutput)
    detections = net.Detect(imgOutput)
    display.Render(imgOutput)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

OK cool. You may be able to speed it up a bit by only allocating the resized image once and reusing the memory:

imgOutput = None

while display.IsStreaming():
    img = camera.Capture()

    # allocate the resized image once, on the first frame, and reuse it
    if imgOutput is None:
        imgOutput = jetson.utils.cudaAllocMapped(width=img.width * 0.5, height=img.height * 0.5, format=img.format)

    # rescale the image (the dimensions are taken from the image capsules)
    jetson.utils.cudaResize(img, imgOutput)

    print(imgOutput)
    detections = net.Detect(imgOutput)
    display.Render(imgOutput)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))

That helps!

BTW, I went one step further and figured out how to change the 360 camera's native resolution from UHD to FHD in the camera's UVC driver code. This further increased the FPS, since I no longer have to resize at all. Thanks for all your help!

$ v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'YU12'
    Name        : Planar YUV 4:2:0
        Size: Discrete 1920x960
            Interval: Discrete 0.033s (30.000 fps)
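
With the camera now capturing 1920x960 natively, the detection loop no longer needs the cudaResize step at all (a minimal sketch, reusing the camera, net and display objects from the snippets above):

while display.IsStreaming():
    # frames now arrive at the native 1920x960, so they can go
    # straight to the network without an intermediate resize
    img = camera.Capture()
    detections = net.Detect(img)
    display.Render(img)
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))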


I ran into the same issue and was able to resolve it directly. The Python binding parses the second parameter as program args (jetson-utils/PyVideo.cpp at master · dusty-nv/jetson-utils · GitHub). Width and height are exposed as --input-width and --input-height respectively. Check the .Usage() method for additional options.

import jetson.utils

camera = jetson.utils.videoSource("/dev/video0", ["--input-width=640", "--input-height=320"])
camera.Usage()

Yep, that is a good point. If you don't actually need the original high-res image from the camera for visualization purposes, the DNNs downscale the images to ~300x300 (or 224x224) anyways, so you don't really need HD unless you want it for some other purpose.
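
For example (a minimal sketch, assuming the ssd-mobilenet-v2 model and a headless run), the frames can be passed to the network at whatever size the camera delivers, and the downscaling happens internally:

import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2")
camera = jetson.utils.videoSource("/dev/video0")

# detectNet resizes each frame down to the network's own input
# resolution (~300x300 for SSD-Mobilenet) before inference
img = camera.Capture()
detections = net.Detect(img)

for detection in detections:
    print(detection)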
