I’m trying to set up the camera, VNC and SSH and am looking for where to enable these. I know you can do this on rasp pi from sudo raspi-config. Is there a similar command I can type into the Xavier terminal to bring up these settings?
SSH is enabled by default; you don't need to configure anything. You just need to know the IP address, then you can connect with the ssh command.
Ok. Would you happen to have a resource for the quickest, easiest way to pull up either a Rasp Pi camera or USB webcam on the Xavier? It's probably just me (I'm just starting to learn this), but every terminal command I find somewhere seems to be outdated and no longer works.
For a USB camera, just connect it to the Xavier and run v4l2-ctl --list-devices to check whether the UVC driver has been loaded. Install v4l2-ctl with sudo apt-get install v4l-utils.
For the Pi camera you need to modify the device tree; refer to the programming guide below.
I did the apt-install for the v4l2-ctl. When I run the v4l2-ctl --list-devices I get:
vi-output, imx219 10-0010 (platform:15c10000.vi:2):
UVC Camera (046d:0825) (usb-3610000.xhci-2.1):
I think I see it’s connected. So then how then would I go about pulling up a live video window on my screen?
For the Bayer RG10 IMX219 CSI sensor you would use nvarguscamerasrc:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 ! nvoverlaysink
For your UVC camera, you may try v4l2src:
gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! nvoverlaysink
So I tried it for the UVC (Logitech) camera and it worked like a charm. But when I used the other one to bring up the Rasp Pi camera, I got an all-red screen.
Here's the terminal output to accompany that:
Setting pipeline to PAUSED …
Pipeline is live and does not need PREROLL …
Setting pipeline to PLAYING …
New clock: GstSystemClock
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected…
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1640 x 1232 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
Camera index = 0
Camera mode = 2
Output Stream W = 1920 H = 1080
seconds to Run = 0
Frame Rate = 29.999999
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
Interrupt: Stopping pipeline …
Execution ended after 0:01:11.687769178
Setting pipeline to PAUSED …
Setting pipeline to READY …
CONSUMER: Done Success
GST_ARGUS: Cleaning up
GST_ARGUS: Done Success
Setting pipeline to NULL …
Freeing pipeline …
Such a red screen may be due to your monitor not supporting the resolution. You may add -v so that you'll see what the camera provides.
Try specifying a lower mode such as 640x480:
gst-launch-1.0 -v nvarguscamerasrc sensor-id=0 ! 'video/x-raw(memory:NVMM),width=640,height=480' ! nvoverlaysink
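For reference, the sensor modes printed in the GST_ARGUS log above can also be matched programmatically. This is purely illustrative (nvarguscamerasrc selects the mode itself from the caps you request); the mode list is hard-coded from the log:

```python
# IMX219 sensor modes as reported by GST_ARGUS above: (width, height, fps)
SENSOR_MODES = [
    (3264, 2464, 21),
    (3264, 1848, 28),
    (1920, 1080, 30),
    (1640, 1232, 30),
    (1280, 720, 60),
]

def pick_mode(width, height):
    """Return the smallest sensor mode that covers the requested size, or None."""
    candidates = [m for m in SENSOR_MODES if m[0] >= width and m[1] >= height]
    return min(candidates, key=lambda m: m[0] * m[1]) if candidates else None
```

For example, requesting 640x480 as in the pipeline above selects the 1280x720@60 mode, which is then scaled down.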
That worked beautifully. And it actually had less lag than the UVC camera. I used 1024 x 600 for my monitor. Awesome!
For less lag, try xvimagesink and check:
gst-launch-1.0 nvarguscamerasrc ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1" ! nvvidconv ! xvimagesink sync=false
This works well too, thanks. And what would be the equivalent of this command for the UVC (v4l2-ctl) camera? (Since that was the one giving me the most lag)
Also, a couple of general questions about the code for my understanding:
- How come sensor-id=0 wasn’t included?
- Does the framerate=(fraction)30/1 lower the framerate to increase performance?
- Previously, nvoverlaysink was used but not in this code. Why is this? What does nvoverlaysink do and did you replace it with xvimagesink?
You may use:
gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! xvimagesink
videoconvert may not be mandatory, but it depends on what formats your USB camera provides.
Not sure I correctly understand your question, but 0 is the default value for sensor-id, so if you don't set it, nvarguscamerasrc will try to use the first sensor (which should be CSI0 and /dev/video0).
The RPi cam driver provides various sensor modes. For 1920x1080 resolution, the framerate is 30 fps. You can only decrease it if you want. Note that higher resolutions with this sensor don't support 30 fps.
These are both display sinks, used to display frames on a screen.
nvoverlaysink draws from NVMM memory into the display controller (thus overlaying X windows).
xvimagesink draws from standard memory into an X window.
nvarguscamerasrc outputs in NVMM memory.
v4l2src outputs into standard memory.
nvvidconv can be used for copying from one memory type into the other one.
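The memory-domain rules above can be summarized in a small table. This is just a sketch restating the explanation (Jetson-specific element names assumed):

```python
# Memory domain each GStreamer element works in, per the explanation above.
ELEMENT_MEMORY = {
    "nvarguscamerasrc": "NVMM",   # outputs into NVMM memory
    "v4l2src": "system",          # outputs into standard memory
    "nvoverlaysink": "NVMM",      # draws from NVMM via the display controller
    "xvimagesink": "system",      # draws from standard memory into an X window
}

def needs_nvvidconv(src, sink):
    """nvvidconv is needed to copy between differing memory domains."""
    return ELEMENT_MEMORY[src] != ELEMENT_MEMORY[sink]
```

So a v4l2src-to-nvoverlaysink pipeline needs nvvidconv, while v4l2src straight into xvimagesink does not.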
Thanks. I still get more lag on the UVC than on the Rasp Pi Camera, but maybe that is normal? I’m using a Logitech C270 HD Webcam.
From what I’ve found, your camera can provide YUY2 format. You may try, for 640x480@30fps:
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! xvimagesink
Or:
gst-launch-1.0 v4l2src device=/dev/video1 io-mode=2 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! xvimagesink
I don’t think you can do more.
Of course, this involves USB2, the driver, and V4L, so it may not be as fast as the MIPI/CSI path.
- If your system has only one MIPI Bayer sensor you can omit sensor-id, since nvarguscamerasrc doesn't support USB cameras anyway.
- The framerate here is only used for sensor mode selection; it doesn't control the frame rate.
- All of them are display sinks. You can choose whichever suits your use case.
When I try to use the Bayer sensor code portion in my program I get an error.
camSet='gst-launch-1.0 nvarguscamerasrc ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080,format=(string)NV12, framerate=(fraction)30/1" ! nvvidconv ! xvimagesink sync=false'
_, frame = cam.read()
(python3:11789): GStreamer-CRITICAL **: 16:49:34.802: gst_element_make_from_uri: assertion 'gst_uri_is_valid (uri)' failed
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (711) open OpenCV | GStreamer warning: Error opening bin: unexpected reference "gst-launch-1" - ignoring
[ WARN:0] global /home/nvidia/host/build_opencv/nv_opencv/modules/videoio/src/cap_gstreamer.cpp (480) isPipelinePlaying OpenCV | GStreamer warning: GStreamer: pipeline have not been created
Traceback (most recent call last):
File "/home/dp/Desktop/pyPro/.vscode/OpenCV/OpenCV-1.py", line 9, in
cv2.error: OpenCV(4.1.1) /home/nvidia/host/build_opencv/nv_opencv/modules/highgui/src/window.cpp:352: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'imshow'
For using the pipeline in OpenCV:
- remove gst-launch-1.0; that is the binary's name. OpenCV only takes the pipeline string (without the single quotes).
- the sink of your pipeline is xvimagesink, which displays in an X window. If you want to get frames into OpenCV, you have to convert them into a supported format such as BGR for color, and use appsink instead.
camSet = "nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1920,height=1080,format=NV12,framerate=30/1 ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1"
cam = cv2.VideoCapture(camSet, cv2.CAP_GSTREAMER)
if not cam.isOpened():
    print("Failed to open camera")
    exit()
...
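A fuller sketch of the same idea, with the pipeline string factored into a helper so the resolution is easy to change (assumes OpenCV built with GStreamer support; the capture loop only runs when executed as a script):

```python
def gst_pipeline(width=1920, height=1080, fps=30, sensor_id=0):
    """Build an nvarguscamerasrc -> appsink pipeline string for OpenCV."""
    return (
        f"nvarguscamerasrc sensor-id={sensor_id} ! "
        f"video/x-raw(memory:NVMM),width={width},height={height},"
        f"format=NV12,framerate={fps}/1 ! "
        "nvvidconv ! video/x-raw,format=BGRx ! "   # NVMM -> standard memory
        "videoconvert ! video/x-raw,format=BGR ! " # BGRx -> BGR for OpenCV
        "appsink drop=1"                           # drop stale frames to cut lag
    )

if __name__ == "__main__":
    import cv2  # requires an OpenCV build with GStreamer enabled

    cam = cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER)
    if not cam.isOpened():
        raise SystemExit("Failed to open camera")
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        cv2.imshow("CSI camera", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cam.release()
    cv2.destroyAllWindows()
```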
That does work to bring up the camera… but ooof does it ever lag. I assume because we’re not using xvimagesink anymore?
I haven’t seen this command before. What does it do for our code?