Is it possible to run a GStreamer pipeline in headless mode with the source plugin nvarguscamerasrc?

@DaneLLL Continuing the discussion from Is it possible to run a GStreamer pipeline in headless mode with the source plugin nvarguscamerasrc?:

I moved it here because the unit where I am developing is based on the Orin NX (16GB).

I could replicate the examples in:

  1. Slow GStreamer accelerated pipeline - #3 by DaneLLL

  2. Real-Time Camera UDP Stream VLC Player Problem - #5 by yergen1

The solution in a third posting, for UDP streaming (Gstreamer TCPserversink 2-3 seconds latency - #5 by DaneLLL), produces an error on the PC where I try to consume the stream: WARNING: erroneous pipeline: no element "avdec_h264".

I could use the same host to consume the stream and sink it into an app. However, that does not work on my host because it is headless, with no computer screen physically connected to the board. The only source plugin I know of that handles the NV12 colour format, nvarguscamerasrc, is the one with this limitation. That is the catch-22: I cannot create the video source for the pipeline in the first place.

Can you please point out what I’m missing?

Would migrating to JetPack 6.2 help, either through a virtual desktop such as Xvfb or by making nvarguscamerasrc independent of a physical computer screen?

Are there any other source plugins that can capture video in the NV12 colour format without a real desktop and a computer screen connected to the board?

Hi,
We don’t support nvarguscamerasrc on a virtual display by default, so it is expected to be the same on JetPack 6.2. Please set up UDP/RTSP and install a software decoder to receive and decode the stream. The avdec_h264 plugin is a software decoder, and the error indicates it is not installed.
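For example, on an Ubuntu/Debian PC the software decoder is provided by the gstreamer1.0-libav package. A minimal sketch, assuming the producer muxes H.264 into MPEG-TS over UDP on port 5003 (as in the test pipeline later in this thread); adjust the port and the demux stage to match the actual producer:

$ sudo apt-get install gstreamer1.0-libav
$ gst-launch-1.0 udpsrc port=5003 ! tsparse ! tsdemux ! h264parse ! avdec_h264 ! videoconvert ! autovideosink sync=0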

@DaneLLL, thanks for the reply. What can I use instead of nvarguscamerasrc in the UDP producer pipeline on the Jetson device?
In a real installation, that device sits high up on a pole or structure in a remote location, with its cameras connected to it inside an enclosure and pointed at the area of interest.

Below is a test that works on an Orin system without a computer monitor connected to the board's HDMI connector:

gst-launch-1.0 videotestsrc is-live=1 ! video/x-raw,width=1280,height=720,format=I420 ! timeoverlay valignment=4 halignment=1 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 ! h264parse ! mpegtsmux ! udpsink host=10.10.2.138 port=5003 sync=0

If I replace videotestsrc with nvarguscamerasrc, this pipeline fails. What other non-Argus-based options do I have to capture video that comes in the NV12 colour format?

This is what I have tried:

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=(fraction)10/1" ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 ! h264parse ! mpegtsmux ! udpsink host=10.10.2.138 port=5003 sync=0
libEGL warning: DRI3: failed to query the version
libEGL warning: DRI2: failed to authenticate
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
GST_ARGUS: Creating output stream
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function initialize(), line 89)
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function create(), line 44)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadInitialize:320 Failed to create FrameConsumer
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:241 (propagating)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, waitRunning:203 Invalid thread state 3
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:806 (propagating)
Got EOS from element "pipeline0".
Execution ended after 0:00:00.100243452
Setting pipeline to NULL ...
Freeing pipeline ...

The same pipeline can stream video over UDP when run from a system that has a desktop and a computer screen plugged in:

Any suggestions?

What video source plugin should be used instead of the nvarguscamerasrc to do this work when the desktop is not installed?

The suggestion to stream video over UDP from the Jetson unit to an external computer does not work because the pipeline on the Jetson unit would require nvarguscamerasrc, precisely the component that fails when the Jetson device runs headless. The Jetson is an edge device, usually on a pole high above the ground. This solution would work if the headless computer were the receiving one.

For context, our systems use sockets, a network connection, and our own MJPEG server to serve video to our web-based UI on remote clients. Right now, the challenge is to process the video using the Argus camera stack.

Thanks

@DaneLLL, thanks for the answer. This suggestion of streaming video out of the Jetson fails for us at the source. Can you suggest an alternative?

The Jetson unit is the one running in headless mode and also the one that needs to process the video: it works as an edge device (installed high above the ground) with cameras attached, and no desktop is available on the unit.

Hi,
The error may be due to an additional graphics setting:

(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function initialize(), line 89)
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function create(), line 44)

Please log in to the Orin NX through an ssh command to get a clean environment, and run:

$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=(fraction)10/1" ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 ! h264parse ! mpegtsmux ! udpsink host=10.10.2.138 port=5003 sync=0

Here is the output:

Mon Apr 14, 17:08:31; intelliview@orin-4:~ 
$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=(fraction)10/1" ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h264enc insert-sps-pps=1 idrinterval=15 ! h264parse ! mpegtsmux ! udpsink host=10.10.2.138 port=5003 sync=0
libEGL warning: DRI3: failed to query the version
libEGL warning: DRI2: failed to authenticate
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NvVideo: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
GST_ARGUS: Creating output stream
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function initialize(), line 89)
(Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function create(), line 44)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadInitialize:320 Failed to create FrameConsumer
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:241 (propagating)
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, waitRunning:203 Invalid thread state 3
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:806 (propagating)
Got EOS from element "pipeline0".
Execution ended after 0:00:00.101978217
Setting pipeline to NULL ...
Freeing pipeline ...

Hi,
We don’t hit the issue on the developer kit. Do you log in like:

$ ssh username@_IP_ADDRESS_OF_ORIN_NX_

@DaneLLL , thanks for testing this on the developer kit.

Yes I ssh using:
$ ssh -Y intelliview@orin-2

Here intelliview is the user name, orin-2 is the host name of the unit that has a desktop and screen, and orin-4 is the host name of the headless unit. These are development units based on the same CTI Photon board and cameras, with identical hardware.

I can ssh to either one and test with and without a desktop.

When you say that this works on a developer kit, do you mean in headless mode?

I don’t have any problem on orin-2, but the errors I posted are from orin-4, where there is no desktop.
And yes, we have tested connecting a computer screen and enabling the desktop in Ubuntu 20, and then the pipelines based on nvarguscamerasrc do stream video.

Our experience so far has been with cameras based on v4l2src (the e-CAM30 is like that), and those used to work on the same board in headless mode, but with the TX2 NX (8GB) as the SoM and, of course, a different BSP from CTI.

The teams participating in this integration have tested with the desktop environment in good faith. We had never asked anyone to test in headless mode because switching between modes had never interfered with any camera, whether for the thermal or the visible spectrum. Hopefully, this can be resolved via configuration.

Hi,
We don’t hit the issue on the Orin Nano developer kit in headless mode. Does it work if you don’t set -Y? And please try the command:

$ gst-launch-1.0 nvarguscamerasrc ! fakesink

@DaneLLL , removing the -Y from the ssh command made the two pipelines work.

Also, the pipeline to fakesink worked; I remember that basic test used to fail.

This brings hope to our team. Thank you very much, 😃.

Do you know why X11 forwarding through the tunnel interferes with the nvarguscamerasrc?

X11 forwarding was the only way we had to test streaming from other cameras that use v4l2src and the regular UYVY colour format on a headless Jetson unit.
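If the culprit is the forwarded X display, one quick test I plan to try from the -Y session (my own assumption, not something confirmed here) is to clear DISPLAY before launching the pipeline:

$ echo $DISPLAY          # typically something like localhost:10.0 when X11 forwarding is active
$ unset DISPLAY
$ gst-launch-1.0 nvarguscamerasrc ! fakesink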

With that out of the way, the UDP pipeline is extremely slow, as shown in the attached video. The source produces 10 fps; however, the received video updates at more than 30 seconds per frame!

Please note the errors from the nvargus-daemon:

Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: SCF: Error BadParameter:  (propagating from src/services/capture/CaptureService.cpp, function addSourceByGuid(), line 453)
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: SCF: Error BadParameter:  (propagating from src/api/CameraDriver.cpp, function addSourceByIndex(), line 347)
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: SCF: Error BadParameter:  (propagating from src/api/CameraDriver.cpp, function getSource(), line 519)
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: E/ libnvphs:socket: Error[2]: socket connection /var/lib/nvphs/nvphsd.ctl to PHS failed: No such file or directory
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: D/ libnvphs:socket: Warning: connecting to Power Hinting Service failed. Is PHS running?
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: === gst-launch-1.0[218037]: CameraProvider initialized (0xffff88792010)SCF: Error BadValue: NvPHSSendThroughputHints (in src/common/CameraPowerHint.cpp, function sendCameraPowerHint(), line 56)
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: E/ libnvphs:socket: Error[2]: socket connection /var/lib/nvphs/nvphsd.ctl to PHS failed: No such file or directory
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: D/ libnvphs:socket: Warning: connecting to Power Hinting Service failed. Is PHS running?
Apr 16 12:31:14 orin-4 nvargus-daemon[218012]: E/ libnvphs: Error: NvPHSSendThroughputHints[usecase=camera, hint=MinCPU, value=4294967295, timeout_ms=1000]: queue_or_send() failed

I noticed two mentions of Error BadValue: NvPHSSendThroughputHints: one when the camera provider is initialized (=== gst-launch-1.0[218037]: CameraProvider initialized (0xffff88792010)SCF:) and a second one where the nvargus-daemon is trying to use libnvphs (orin-4 nvargus-daemon[218012]: E/ libnvphs:). There are also mentions of a socket for the Power Hinting Service (PHS) not found at /var/lib/nvphs/nvphsd.ctl. I checked on orin-4: the path exists, but it is empty; the socket file is indeed not there.
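For reference, this is roughly how I checked on orin-4 (the exact name of the PHS service may differ, so I just searched the unit list):

$ ls -la /var/lib/nvphs/                          # the directory exists but there is no nvphsd.ctl socket inside
$ sudo systemctl list-units --all | grep -i phs   # look for a Power Hinting Service unit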

Is this PHS still in use on the Orin NX (16GB) with JetPack 5.1.2?


Below is a verification, on the same machine and with the same ssh connection as above, that I am producing video at 10 FPS to a fakesink (no video to show):

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! 'video/x-raw(memory:NVMM),width=1920, height=1080, framerate=10/1, format=NV12' ! nvvidconv ! fpsdisplaysink text-overlay=0 name=sink_0 video-sink=fakesink sync=0 -v
...
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0/GstFakeSink:fakesink0: sync = false
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0: last-message = rendered: 7, dropped: 0, current: 11.86, average: 11.86
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0: last-message = rendered: 12, dropped: 0, current: 9.97, average: 10.99
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0: last-message = rendered: 18, dropped: 0, current: 10.00, average: 10.64
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0: last-message = rendered: 24, dropped: 0, current: 10.02, average: 10.48
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0: last-message = rendered: 29, dropped: 0, current: 9.98, average: 10.39
/GstPipeline:pipeline0/GstFPSDisplaySink:sink_0: last-message = rendered: 34, dropped: 0, current: 10.00, average: 10.33

When I tested our software on that orin-4 unit, it still failed.

Below is a sample of the C++ and GStreamer pipeline (many details omitted due to the size of the code):

            std::stringstream aSs;
            aSs << " nvarguscamerasrc sensor-id=" << address << " sensor-mode=0 "
                << "! video/x-raw(memory:NVMM), width=" << frame_width << ", height=" << frame_height << ", format=(string)NV12, framerate=(fraction)10/1 "
                << "! queue "
                << "! nvvidconv flip-method=" << flip_method << " "
                << "! video/x-raw, format=(string)BGRx "
                << "! videoconvert "
                << "! video/x-raw, format=(string)BGR, width=" << frame_width << ", height=" << frame_height << " "
                << "! appsink drop=True";

This produces the following pipeline (copied from the application logs):

nvarguscamerasrc sensor-id=0 sensor-mode=0 ! video/x-raw(memory:NVMM), width=1280, height=720, format=(string)NV12, framerate=(fraction)10/1 ! queue ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR, width=1280, height=720 ! appsink drop=True

The C++ code then passes this pipeline string to the OpenCV capture object and starts processing it:

cv::VideoCapture aStream; // from opencv 4.2
bool retVal = aStream.open(aSs.str().c_str(), cv::CAP_GSTREAMER); // use the video I/O backend gstreamer

We can’t test video output directly on orin-4 using a sink plugin that requires an X11 server, due to the combination of these two items:

  1. We are in headless mode; there is no X11 server.
  2. The SSH connection is now made without X11 forwarding (no -Y).

How do you suggest I test that the pipeline is producing video?

The C++ application shows errors, and there are no video frames processed in the application either.
Do you see anything meaningful in these error messages from our application log and from the nvargus-daemon?

15|SmrtHVRService  | 2025-04-16 14:44:20.111 [deb] cvcam.cpp 116 CVCam::ConnectDevice() GStreamer pipeline:  nvarguscamerasrc sensor-id=0 sensor-mode=0 ! video/x-raw(memory:NVMM), width=1280, height=720, format=(string)NV12, framerate=(fraction)10/1 ! queue ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR, width=1280, height=720 ! appsink drop=True
15|SmrtHVRService  | 0:00:28.769089967 285943 0xfffe300079e0 FIXME                default gstutils.c:3980:gst_pad_create_stream_id_internal:<nvarguscamerasrc6:src> Creating random stream-id, consider implementing a deterministic way of creating a stream-id
15|SmrtHVRService  | (Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function initialize(), line 89)
15|SmrtHVRService  | (Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function create(), line 44)
15|SmrtHVRService  | Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadInitialize:320 Failed to create FrameConsumer
15|SmrtHVRService  | Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, threadFunction:241 (propagating)
15|SmrtHVRService  | Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:804 (propagating)
15|SmrtHVRService  | GST_ARGUS: Creating output stream
15|SmrtHVRService  | GST_ARGUS: Creating output stream
15|SmrtHVRService  | GST_ARGUS: Creating output stream

Thank you very much for your help!

Hi,
We don’t observe the issue when playing the UDP stream in VLC player. It is probably due to network bandwidth in your environment. You may try to set these properties on nvv4l2h264enc:

  insert-aud          : Insert H.264 Access Unit Delimiter(AUD)
                        flags: readable, writable
                        Boolean. Default: false
  insert-vui          : Insert H.264 VUI(Video Usability Information) in SPS
                        flags: readable, writable
                        Boolean. Default: false
  maxperf-enable      : Enable or Disable Max Performance mode
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  poc-type            : Set Picture Order Count type value
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 2 Default: 0

Try enabling these properties or setting poc-type=2. Or try H265 encoding.

In OpenCV, you can use udpsrc to receive the compressed stream like:

udpsrc ! rtph264depay ! h264parse ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=BGRx ! videoconvert ! appsink

Please ensure the GStreamer pipeline is good in gst-launch-1.0 with fakesink, and then apply it to cv2.VideoCapture() with appsink. There are examples in the FAQ:
Jetson AGX Orin FAQ

For streaming from one Jetson to the other Jetson device, please use rtph264pay/rtph264depay like:

[stream to the other Jetson at 10.19.115.105]
$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=10/1" ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h264enc idrinterval=15 insert-sps-pps=1 bitrate=1500000 ! video/x-h264,profile=main ! h264parse ! rtph264pay ! udpsink host=10.19.115.105 port=5003 sync=0
[the Jetson at 10.19.115.105]
$ gst-launch-1.0 udpsrc port=5003 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! queue ! h264parse ! queue ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=I420 ! xvimagesink sync=0
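To verify the receive side before moving it into cv2.VideoCapture(), the same chain can first be terminated in fakesink (a sketch based on the client pipeline above, with appsink-oriented conversion and no display needed):

$ gst-launch-1.0 -v udpsrc port=5003 ! 'application/x-rtp,encoding-name=H264,payload=96' ! rtph264depay ! queue ! h264parse ! queue ! nvv4l2decoder ! nvvidconv ! 'video/x-raw,format=BGRx' ! videoconvert ! fakesink sync=0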

@DaneLLL, thanks for the detailed reply.

I have been able to speed up the UDP streaming just enough to see some motion; the frames still look smudged, and the refresh rate is pretty bad. I have to look up what the arguments to nvv4l2h264enc mean so I can tune the bitrate and buffer sizes.

This is what I have tried:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 \
 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=(fraction)10/1" ! nvvidconv \
 ! 'video/x-raw(memory:NVMM),width=1280,height=720' \
 ! nvv4l2h264enc insert-vui=1 poc-type=2 EnableTwopassCBR=1 insert-sps-pps=1 idrinterval=2 bitrate=6400000000 vbv-size=160000000 maxperf-enable=1 \
 ! h264parse ! mpegtsmux ! udpsink host=10.10.2.138 port=5003 sync=0

I will try to polish this as an alternative for our Product Manager to consider. For a CSI (MIPI) camera to end up going through UDP into the analytics processing app, the stream would have to be rock solid and add at most 1 to 2 ms of latency to be comparable to the twin USB thermal sensor input.
I will continue tuning it and testing how to consume it in our C++ service.

As for the advice on testing on a Jetson in headless mode, I tested fakesink, and it can be used in headless mode to confirm the absence of errors in the pipeline. However, despite seeing no errors when executing this pipeline on the headless orin-4 unit:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0  \
 ! 'video/x-raw(memory:NVMM), width=1280, height=720, format=(string)NV12, framerate=(fraction)10/1' \
 ! queue ! nvvidconv flip-method=0 ! 'video/x-raw, format=(string)BGRx' \
 ! videoconvert ! 'video/x-raw, format=(string)BGR, width=1280, height=720' \
 ! fakesink

The same pipeline used from the C++ cv::VideoCapture object (consuming the appsink) with the cv::CAP_GSTREAMER option for the video I/O backend still produces the errors I described in my entry from yesterday, on the same unit. This pipeline works in the same app on a unit with a desktop and computer screen, orin-2 in our case.

I will test with a smaller C++ program to see if I get the same behaviour as in our C++ service, and will report back here.

Thanks,

Hi,
bitrate=640000000 is 640 Mbps, which is unreasonable for 720p10. Please set bitrate=1500000 or bitrate=1000000.

And if you use nvarguscamerasrc and hit the error:

15|SmrtHVRService  | (Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function initialize(), line 89)
15|SmrtHVRService  | (Argus) Error BadParameter:  (propagating from src/eglstream/FrameConsumerImpl.cpp, function create(), line 44)

The OpenCV environment does not link to the EGL libs. We suggest running a pipeline with udpsrc to decode and get the camera frame data.


I estimate the bitrate like this:

Variable              Symbol                  Value      Units
Frame height          h                       720        pixel
Frame width           w                       1,280      pixel
Frame size            h · w                   921,600    pixel
Bit density (NV12)    bd                      12         b/pixel
Frames per second     fps                     30         f/s
Bit-to-megabit ratio  bMr                     1,000,000  b/Mb
Calculated bitrate    h · w · bd · fps / bMr  332        Mb/s
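The same arithmetic as a quick shell check (integer division, so it rounds down to 331); this figure is the raw, uncompressed NV12 rate feeding the encoder:

$ echo $(( 1280 * 720 * 12 * 30 / 1000000 ))
331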

However, I achieved the best video streaming quality by setting the bitrate argument of nvv4l2h265enc to 8,600,000 rather than leaving it unset.

Can you elaborate on how nvv4l2h265enc, or its sibling plugin nvv4l2h264enc, handles an over-specified set of inputs, as in my pipeline, where the framerate, frame size, pixel format (NV12), and bitrate are all specified even though one seems redundant?

Our internal network is around 40 MB/s, so 320 Mb/s.

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 \
 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=(fraction)30/1" \
 ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' \
 ! nvv4l2h265enc insert-vui=1 EnableTwopassCBR=1 insert-sps-pps=1 idrinterval=1 bitrate=8600000 vbv-size=8000 maxperf-enable=1 \
 ! h265parse \
 ! mpegtsmux \
 ! udpsink host=10.10.2.138 port=5003 sync=0

VLC reports an input bitrate of around 5,800 kb/s. I wonder if that is a bug in the UI?

Now I am moving on to the sink in our app on the same Jetson to see if this workaround works well for this use case.

@DaneLLL, I successfully tested your suggestion for sending video from one headless Jetson to another using the pipelines you provided. On the source Photon - Jetson Orin NX (16GB), headless:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 \
 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=10/1" \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM),width=1280,height=720' \
 ! nvv4l2h264enc idrinterval=1 insert-sps-pps=1 bitrate=1500000 \
 ! video/x-h264, profile=main \
 ! h264parse \
 ! rtph264pay \
 ! udpsink host=10.10.3.62 port=5003 sync=0

and on the receiving Photon - Jetson Orin NX (16GB):

udpsrc port=5003 \
 ! application/x-rtp,encoding-name=H264,payload=96 \
 ! rtph264depay ! queue \
 ! h264parse ! queue \
 ! nvv4l2decoder \
 ! nvvidconv \
 ! video/x-raw(memory:NVMM), format=(string)I420 \
 ! nvvidconv \
 ! video/x-raw, format=(string)BGRx \
 ! videoconvert \
 ! video/x-raw, format=(string)BGR, width=1280, height=720 \
 ! appsink drop=True

Is there a need to go through the I420 pixel format?

I started the UDP pipeline targeting localhost, and that proved useful. This is an initial test on the headless unit:


(the software was built on orin-2, but it is running entirely on orin-4, headless).

I still have to work on productizing the new pipeline; I will put it into a systemd service so it is robust to restarts and the like, roughly along the lines of the sketch below.
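A rough sketch of what I have in mind (the unit name and paths are placeholders, the pipeline is the one above, and I am assuming the Argus daemon unit is called nvargus-daemon.service):

$ sudo tee /etc/systemd/system/udp-camera-stream.service > /dev/null <<'EOF'
[Unit]
Description=Headless nvarguscamerasrc UDP streaming pipeline
After=nvargus-daemon.service network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=10/1" ! nvvidconv ! "video/x-raw(memory:NVMM),width=1280,height=720" ! nvv4l2h264enc idrinterval=1 insert-sps-pps=1 bitrate=1500000 ! "video/x-h264, profile=main" ! h264parse ! rtph264pay ! udpsink host=10.10.3.62 port=5003 sync=0
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
$ sudo systemctl daemon-reload
$ sudo systemctl enable --now udp-camera-stream.service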

I replaced H264 with H265 in the original pipeline, but it did not work.
Can you suggest the syntax for H265 encoding for the video payload?
I thought the RTP envelope could stay the same.
This is what I tried:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 \
 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=(fraction)30/1" \
 ! nvvidconv \
 ! 'video/x-raw(memory:NVMM),width=1280,height=720' \
 ! nvv4l2h265enc insert-vui=1 EnableTwopassCBR=1 insert-sps-pps=1 idrinterval=1 bitrate=8000000 vbv-size=8000 maxperf-enable=1 \
 ! h265parse \
 ! mpegtsmux \
 ! udpsink host=10.10.3.62 port=5003 sync=0

The above works at the source. The following produces errors at the destination:

gst-launch-1.0 udpsrc port=5003 \
 ! 'application/x-rtp,encoding-name=H265,payload=96' \
 ! rtph265depay ! queue \
 ! h265parse ! queue \
 ! nvv4l2decoder ! nvvidconv \
 ! video/x-raw,format=I420 ! xvimagesink sync=0

The error is:

WARNING: from element /GstPipeline:pipeline0/GstRtpH265Depay:rtph265depay0: Could not decode stream.
Additional debug info:
gstrtpbasedepayload.c(505): gst_rtp_base_depayload_handle_buffer (): /GstPipeline:pipeline0/GstRtpH265Depay:rtph265depay0:
Received invalid RTP payload, dropping

The following works at the destination without mentioning RTP at all; why?

gst-launch-1.0 udpsrc port=5003 \
 ! tsparse \
 ! tsdemux ! queue \
 ! h265parse ! queue \
 ! nvv4l2decoder \
 ! nvvidconv \
 ! video/x-raw,format=I420 \
 ! xvimagesink sync=0

tsparse consumes video/mpegts (an MPEG transport stream), while tsdemux consumes video/mpegts and produces the H.265 elementary stream for h265parse to process. Since this source pipeline muxes with mpegtsmux rather than packetizing with rtph265pay, the receiver has to demux the transport stream instead of depayloading RTP. I was still a bit surprised that the consumer pipeline needs a different structure even though the two source pipelines share the same syntax and only swap in the H265 plugins.

Is there any documentation on this two-stage pipeline for routing video from cameras that produce the NV12 pixel format?

I think that product managers would benefit from being aware of the extra complexity of implementation and testing for a headless robotic deployment.

Hi,
For H265, please try rtph265pay/rtph265depay like:

.. ! h265parse ! rtph265pay ! udpsink host=10.19.115.105 port=5003 sync=0

$ gst-launch-1.0 udpsrc port=5003 ! 'application/x-rtp,encoding-name=H265,payload=96' ! rtph265depay ! queue ! h265parse ! ...

I have tested the following for H.265; I am trying to improve the image quality:

Source pipeline:

gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0  \
 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=30/1" \
 ! nvv4l2h265enc idrinterval=1 insert-sps-pps=1 bitrate=24000000  \
 ! "video/x-h265,profile=main" \
 ! h265parse  \
 ! rtph265pay  \
 ! udpsink host=0.0.0.0 port=5003 sync=0

Consumer pipeline inside the C++ app:

    // Receive the RTP/H.265 stream and hand BGR frames to the app through appsink
    std::stringstream aSs;
    aSs << " udpsrc port=5003 "
           " ! application/x-rtp,encoding-name=H265,payload=96 "
           " ! rtph265depay ! queue "
           " ! h265parse ! queue "
           " ! nvv4l2decoder "
           " ! nvvidconv ! video/x-raw(memory:NVMM), format=(string)I420 "
           " ! nvvidconv ! video/x-raw, format=(string)BGRx "
           " ! videoconvert ! video/x-raw, format=(string)BGR, width=1280, height=720 "
           " ! appsink drop=true";
    setenv("GST_DEBUG", "3", 1);   // GStreamer log level 3 shows ERROR, WARNING and FIXME messages
    cv::VideoCapture aStream;
    bool retVal = aStream.open(aSs.str(), cv::CAP_GSTREAMER);   // GStreamer video I/O backend

Errors I get in the logs using level 3 debug:

0:00:00.115548479 685476 0xfffe2c19fa40 WARN                    v4l2 gstv4l2object.c:4561:gst_v4l2_object_probe_caps:<nvv4l2decoder0:src> Failed to probe pixel aspect ratio with VIDIOC_CROPCAP: Unknown error -1
0:00:00.116781532 685476 0xfffe2c19fa40 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xfffe2c188120 Failed to determine interlace mode
0:00:00.116822525 685476 0xfffe2c19fa40 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xfffe2c188120 Failed to determine interlace mode
0:00:00.116841054 685476 0xfffe2c19fa40 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xfffe2c188120 Failed to determine interlace mode
0:00:00.116856798 685476 0xfffe2c19fa40 WARN                    v4l2 gstv4l2object.c:2420:gst_v4l2_object_add_interlace_mode:0xfffe2c188120 Failed to determine interlace mode
0:00:00.149738244 685476 0xaaaaecb42240 FIXME           rtph265depay gstrtph265depay.c:1310:gst_rtp_h265_depay_process:<rtph265depay0> Assuming DONL field is not present
0:00:00.151703282 685476 0xaaaaecb42240 FIXME           rtph265depay gstrtph265depay.c:1310:gst_rtp_h265_depay_process:<rtph265depay0> Assuming DONL field is not present
... 
(53 more lines of the same)
...
0:00:00.161791392 685476 0xaaaaecb42240 FIXME           rtph265depay gstrtph265depay.c:1310:gst_rtp_h265_depay_process:<rtph265depay0> Assuming DONL field is not present
0:00:00.163390693 685476 0xaaaaecb475e0 FIXME              h265parse gsth265parse.c:1850:gst_h265_parse_parse_frame:<h265parse0> Implement timestamp/duration interpolation based on SEI message
NvMMLiteOpen : Block : BlockType = 279 
NvMMLiteBlockCreate : Block : BlockType = 279
...
(the same pattern as above repeats over)
...
0:00:00.190400865 685476 0xaaaaecb42240 FIXME           rtph265depay gstrtph265depay.c:1310:gst_rtp_h265_depay_process:<rtph265depay0> Assuming DONL field is not present
0:00:00.190485955 685476 0xaaaaecb42240 FIXME           rtph265depay gstrtph265depay.c:1310:gst_rtp_h265_depay_process:<rtph265depay0> Assuming DONL field is not present

The video is there nonetheless. It is less grainy in H.265 than in H.264 at the same source bitrate, framerate and mode. However, I could tell there was a bit of lag between colour and thermal on the H.265 pipelines, with all the other parameters the same. After I took the screen video, I was able to improve the lag by setting the video buffering verifier (vbv-size) to 3200 bits:

 gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 \
  ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=30/1" \
  ! nvv4l2h265enc idrinterval=1 insert-sps-pps=1 bitrate=36000000 vbv-size=3200 maxperf-enable=1 \
  ! "video/x-h265,profile=main" \
  ! h265parse ! queue  \
  ! rtph265pay ! queue \
  ! udpsink host=0.0.0.0 port=5003 sync=0

Any suggestions to remove the errors from the H.265 pipelines?

Tests with the H.264 pipelines:

Tests with the H.265 pipelines:

Hi,
The bitrate setting still looks unreasonable; it looks too large. On a JetPack 6.2/Orin NX developer kit + RPi HiQ camera, we use two consoles to run the server and client like:

// server
// run jetson_clocks
$ sudo jetson_clocks
// either 720p10 in 1.5Mbps
$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12, framerate=10/1" ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h265enc idrinterval=15 insert-sps-pps=1 bitrate=1500000 ! h265parse ! rtph265pay ! udpsink host=127.0.0.1 port=5003 sync=0
// or 720p30 in 5Mbps
$ gst-launch-1.0 nvarguscamerasrc sensor-id=0 sensor-mode=0 ! "video/x-raw(memory:NVMM), format=(string)NV12" ! nvvidconv ! 'video/x-raw(memory:NVMM),width=1280,height=720' ! nvv4l2h265enc idrinterval=15 insert-sps-pps=1 bitrate=5000000 ! h265parse ! rtph265pay ! udpsink host=127.0.0.1 port=5003 sync=0
// client
$ gst-launch-1.0 udpsrc port=5003 ! 'application/x-rtp,encoding-name=H265,payload=96' ! rtph265depay ! queue ! h265parse ! queue ! nvv4l2decoder ! nvvidconv ! video/x-raw,format=I420 ! videoconvert ! xvimagesink sync=0
// if xvimagesink does not work, please try ximagesink

Are you able to try the setup on a developer kit? Let's see if we can align the results on this setup.


And you may run the server (ssh login) like:

import sys
import cv2

def read_cam():
    cap = cv2.VideoCapture("nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)1280, height=(int)720,format=(string)NV12, framerate=(fraction)30/1 ! nvvidconv ! video/x-raw, format=(string)BGRx ! videoconvert !  appsink drop=true ")

    w = 1280
    h = 720
    fps = 30
    print('Src opened, %dx%d @ %d fps' % (w, h, fps))

    gst_out = "appsrc ! video/x-raw, format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv ! nvv4l2h265enc idrinterval=15 insert-sps-pps=1 bitrate=5000000 ! h265parse ! rtph265pay ! udpsink host=127.0.0.1 port=5003 sync=0 "
    out = cv2.VideoWriter(gst_out, cv2.CAP_GSTREAMER, 0, float(fps), (int(w), int(h)))
    if not out.isOpened():
        print("Failed to open output")
        exit()

    if cap.isOpened():
        while True:
            ret_val, img = cap.read()
            if not ret_val:
                break
            out.write(img)
            cv2.waitKey(1)
    else:
        print("pipeline open failed")

    print("successfully exit")
    cap.release()
    out.release()

if __name__ == '__main__':
    read_cam()

And run the gst-launch-1.0 command in the client (remote display) to preview.


@DaneLLL,

We only keep some old Nano developer kits. Ever since the TX2 NX came out, we have stuck to working directly on hardened boards that support the I/O and environmental conditions we require in the field; that gets us to market faster for challenging applications.

Thanks for suggesting these pipeline settings. When I ran them, I noticed that I could get good image quality at lower bitrate values if the idrinterval is increased from 1 to 15.

My previous observation was that, with a higher bitrate such as 36 Mb/s and starting from an idrinterval of 1, as soon as I increased the IDR (Instantaneous Decoding Refresh) interval to 2 I would get missing moving objects, as if they were skipped for a few frames. However, by lowering the bitrate to 1.5 Mb/s and increasing the idrinterval to 15, I observed steady motion and better quality.

Below are two images taken from the video produced through the app. In both cases the source was set at idrinterval=15; in the top image the bitrate was 1.5 Mbps, while in the bottom one it was 10 Mbps. The lower bitrate + high IDR interval looks better.


The next two images both have idrinterval=1; the top has a bitrate of 1.5 Mbps and the bottom 10 Mbps. The higher bitrate + low IDR interval looks slightly better.


I measured the UDP traffic to localhost while running several configurations; it is between 2.6 and 14 Mb/s, and the average background value without streaming video is about 350 kb/s. Below are two screenshots of how I measured high and low bandwidth utilization, using sudo iftop -i lo.

Using this technique I measured the two Server pipelines you suggested above without any modifications:

Server pipeline (from @DaneLLL in the previous comment on this thread)   Average UDP rate on the loopback interface (localhost)
// either 720p10 in 1.5Mbps                                               2.3 Mbps
// or 720p30 in 5Mbps                                                     7.5 Mbps

Of course, if QA approves this, the lower bandwidth utilization/codec configuration will be favoured for the final implementation.


About the errors:

I investigated some more, and I confirm that the errors I see in the application log come from the GStreamer library when I set the GStreamer logging level to 3. I do this while I develop the C++ code in the app to get feedback.
If I set the level to 1, the WARN and FIXME messages don’t show.

I can disregard these messages if you think they can be safely ignored. However, I don’t see them while consuming the H.264 UDP pipeline in the app at the same logging level.

See the attached video, which demonstrates the effect of setting the GStreamer logging level to 3 at the command line with GST_DEBUG=3:

Below is a screen video showing that when running the H.264 source pipeline, consuming it at GStreamer logging level 3 does not show continuous errors like with H.265.

I asked an AI assistant to describe these two messages; it suggests that they can be safely ignored if certain conditions about the RTP H.265 source are met. Can NVIDIA confirm this?

  1. WARN: ... Failed to determine interlace mode. This comes from the GStreamer v4l2 element because the stream does not say whether it is interlaced or progressive. The solution, according to Copilot using GPT-4o, could be to add caps such as "video/x-raw, interlace-mode=progressive". However, the example was for a v4l2src source, not our case, and I don’t want to spend more time on something that could be a dead end.
  2. FIXME: ... Assuming DONL field is not present. This comes from the rtph265depay element, which processes the packets carrying the H.265 payload within the RTP envelope. It complains that it cannot find the optional DONL field in the RTP stream; the acronym is short for Decoding Order Number LSB. Is this an opportunity to improve the encoding of the RTP stream carrying H.265?