Transmit and receive an RTP stream on the same Jetson Orin machine

In summary:
I am following the documentation at jetson-inference/aux-streaming, using two separate terminal windows on the same Orin machine.
I transmit with: $ video-viewer --bitrate=1000000 /dev/video0 rtp://192.168.1.162:5005
I receive with: $ video-viewer --input-codec=h264 rtp://192.168.1.162:5005

Both commands repeatedly print “a timeout occurred…”

I don’t get a window with the camera output. Any ideas?

I’m transmitting on terminal session 1:

$ video-viewer --bitrate=1000000 /dev/video0 rtp://192.168.1.162:5005
[video]  created gstCamera from v4l2:///dev/video0
  .
  .
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstCamera::Capture() -- a timeout occurred waiting for the next image buffer
 (...) 
This continues until I Ctrl-C.

Then I receive on terminal session 2 (same Orin):

$ video-viewer --input-codec=h264 rtp://192.168.1.162:5005
[gstreamer] initialized gstreamer, version 1.16.3.0
[gstreamer] gstDecoder -- creating decoder for 192.168.1.162
[gstreamer] gstDecoder -- resource discovery not supported for RTP/WebRTC streams
[gstreamer] gstDecoder -- pipeline string:
[gstreamer] udpsrc port=5005 multicast-group=192.168.1.162 auto-multicast=true caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtph264depay ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv name=vidconv ! video/x-raw ! appsink name=mysink sync=false
[video]  created gstDecoder from rtp://192.168.1.162:5005
------------------------------------------------
gstDecoder video options:
------------------------------------------------
  -- URI: rtp://192.168.1.162:5005
     - protocol:  rtp
     - location:  192.168.1.162
     - port:      5005
  -- deviceType: ip
  -- ioType:     input
  -- codec:      H264
  -- codecType:  v4l2
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
  -- flipMethod: none
  -- loop:       0
  -- latency     10
------------------------------------------------
[OpenGL] glDisplay -- X screen 0 resolution:  1920x1080
[OpenGL] glDisplay -- X window resolution:    1920x1080
[OpenGL] glDisplay -- display device initialized (1920x1080)
[video]  created glDisplay from display://0
------------------------------------------------
glDisplay video options:
------------------------------------------------
  -- URI: display://0
     - protocol:  display
     - location:  0
  -- deviceType: display
  -- ioType:     output
  -- width:      1920
  -- height:     1080
  -- frameRate:  0
  -- numBuffers: 4
  -- zeroCopy:   true
------------------------------------------------
[gstreamer] opening gstDecoder for streaming, transitioning pipeline to GST_STATE_PLAYING
Opening in BLOCKING MODE 
[gstreamer] gstreamer changed state from NULL to READY ==> mysink
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter1
[gstreamer] gstreamer changed state from NULL to READY ==> vidconv
[gstreamer] gstreamer changed state from NULL to READY ==> capsfilter0
[gstreamer] gstreamer changed state from NULL to READY ==> decoder
[gstreamer] gstreamer changed state from NULL to READY ==> rtph264depay0
[gstreamer] gstreamer changed state from NULL to READY ==> udpsrc0
[gstreamer] gstreamer changed state from NULL to READY ==> pipeline0
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter1
[gstreamer] gstreamer changed state from READY to PAUSED ==> vidconv
[gstreamer] gstreamer changed state from READY to PAUSED ==> capsfilter0
[gstreamer] gstreamer changed state from READY to PAUSED ==> decoder
[gstreamer] gstreamer changed state from READY to PAUSED ==> rtph264depay0
[gstreamer] gstreamer stream status CREATE ==> src
[gstreamer] gstreamer changed state from READY to PAUSED ==> udpsrc0
[gstreamer] gstreamer changed state from READY to PAUSED ==> pipeline0
[gstreamer] gstreamer message new-clock ==> pipeline0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter1
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> vidconv
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> capsfilter0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> decoder
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> rtph264depay0
[gstreamer] gstreamer changed state from PAUSED to PLAYING ==> udpsrc0
[gstreamer] gstreamer stream status ENTER ==> src
NvMMLiteOpen : Block : BlockType = 261 
NVMEDIA: Reading vendor.tegra.display-size : status: 6 
NvMMLiteBlockCreate : Block : BlockType = 261 
[gstreamer] gstreamer message stream-start ==> pipeline0
[gstreamer] gstDecoder::Capture() -- a timeout occurred waiting for the next image buffer
 (...) 
This continues until I Ctrl-C.

Hi,

Just want to confirm first.
Why do you need the RTP stream on the same device?
Do you want to implement an IPC application?

Thanks.

Yes. For IPC.

I am using GTK3 written in C for my UI, and I’m using the jetson-inference detectnet C++ program to provide visual inference. I’d like to keep the UI coding separate from the inference code, even though it’s possible to create a gtkmm UI. It would be a lot of work for my level of expertise to code the UI in C++, so leaving it in C is easy for me. Therefore, I’d like to pass the video from detectnet to my UI using RTP as the mechanism. However, if there is a different, more appropriate way, please let me know. Understanding how video-viewer works is my first step; next I will investigate the actual pipeline elements. It seems video-viewer prints the actual pipelines it builds (pasted below), but I haven’t been able to get them to work. If you have an answer for this it would be great, or I can submit a second question.

video-viewer pipelines
Sending:
[gstreamer] v4l2-proplist, device.path=(string)/dev/video0, udev-probed=(boolean)false, device.api=(string)v4l2, v4l2.device.driver=(string)uvcvideo, v4l2.device.card=(string)"HD\ Pro\ Webcam\ C920", v4l2.device.bus_info=(string)usb-3610000.xhci-4.4, v4l2.device.version=(uint)330344, v4l2.device.capabilities=(uint)2225078273, v4l2.device.device_caps=(uint)69206017;

Receiving:
[gstreamer] udpsrc port=5005 multicast-group=192.168.1.162 auto-multicast=true caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264" ! rtph264depay ! nvv4l2decoder name=decoder enable-max-performance=1 ! video/x-raw(memory:NVMM) ! nvvidconv name=vidconv ! video/x-raw ! appsink name=mysink sync=false

Hi @steven.sparks, if the process capturing video from your camera is getting timeouts, then it isn’t sending the RTP stream either, because it has no frames to send. So your second process will be timing out too.

Can you try running your first /dev/video0 camera process with --input-codec=mjpeg and see if that helps? You should see video frames getting captured.
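
i.e. keeping the rest of your transmit command the same:

video-viewer --input-codec=mjpeg --bitrate=1000000 /dev/video0 rtp://192.168.1.162:5005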

This represents a lot of added overhead (and potentially latency) to compress/decompress the video stream just for IPC. I would integrate it under one C/C++ application and just run it in different threads. You can still keep your UI and inferencing code separate under different libraries/modules if you prefer.
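
For example, the capture side could run in a worker thread inside the same process and hand frames to your UI module through a plain callback. A rough, untested sketch (ui_present_frame() is just a placeholder for whatever hook your UI module exposes):

#include <jetson-utils/videoSource.h>   // adjust the include path to your install
#include <thread>
#include <atomic>

// Hypothetical hook exported by the UI module (your GTK3 code).
extern "C" void ui_present_frame(uchar3* rgb, uint32_t width, uint32_t height);

static std::atomic<bool> running(true);

static void captureLoop(videoSource* input)
{
    while (running)
    {
        uchar3* image = NULL;   // packed RGB frame in shared CPU/GPU memory

        if (!input->Capture(&image, 1000))
        {
            if (!input->IsStreaming())
                break;          // camera/stream closed
            continue;           // timeout -- try again
        }

        // Hand the frame pointer to the UI module (copy it there if the UI
        // draws asynchronously, since the capture ring buffer gets reused).
        ui_present_frame(image, input->GetWidth(), input->GetHeight());
    }
}

int main(int argc, char** argv)
{
    videoSource* input = videoSource::Create("v4l2:///dev/video0");

    if (!input)
        return 1;

    std::thread worker(captureLoop, input);

    // ... run the GTK main loop here ...

    running = false;
    worker.join();
    delete input;
    return 0;
}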

@dusty_nv - I added --input-codec=mjpeg and the video is now displaying. Yes, I see the video. How do I get the second part (the RTP receiver) to work?

Also, in the long run I will investigate using a single C/C++ application. I need to get my prototype completed so I can get investor money to hire smart people…

Are you still getting the timeout messages from the second application? If so, try --output-codec=mjpeg when launching the first program and --input-codec=mjpeg when launching the second.

Also, I don’t really understand this part: if you have jetson_utils running videoSource in both applications, why don’t you just use /dev/video0 as the source for the second application instead of RTP, and skip the first application entirely?
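
i.e. instead of the RTP input, the second application could just do something like:

video-viewer --input-codec=mjpeg /dev/video0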

@dusty_nv
video-viewer /dev/video0 rtp://192.168.1.162:5005 --output-codec=mjpeg --bitrate=1000000
video-viewer rtp://192.168.1.162:5005 --input-codec=mjpeg
Both have timeout messages.

I’m trying to prove I can pass video via RTP from a process in one terminal window to a process reading it in a second terminal window, and see the video. If I can do this, I believe I can output your detectnet (C++) video to a port and read it with my UI (C) program. When I first started this I was on a Jetson Nano with Ubuntu 18.04; now I’m on an Orin with Ubuntu 20.04, so the limitations around using gtkmm are gone. But I haven’t yet moved up from GTK3 to gtkmm, and my UI is in GTK3, so I’m trying to use what I have. So the real question goes beyond video-viewer: if I can determine the NVIDIA pipeline elements to display video read from RTP in my C UI program, I have a workaround until I can build the detectnet code with a gtkmm C++ UI. My UI has multiple screens and many widgets; I just need to get the video into one of the screens. I know your detectnet program can output to RTP, it’s built in. If there is another solution, please let me know, I’m still learning.
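
To make it concrete, here is roughly what I have in mind on the UI side, based on the receive pipeline video-viewer printed above but ending in GStreamer’s gtksink so the video lands inside one of my GTK3 screens. This is an untested sketch (it assumes the gtksink element from gst-plugins-good is available, and error handling is omitted):

#include <gtk/gtk.h>
#include <gst/gst.h>

int main(int argc, char** argv)
{
    gtk_init(&argc, &argv);
    gst_init(&argc, &argv);

    // Simplified version of the receive pipeline video-viewer printed,
    // with the appsink replaced by gtksink so frames render into a GtkWidget.
    GstElement* pipeline = gst_parse_launch(
        "udpsrc port=5005 "
        "caps=\"application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H264\" "
        "! rtph264depay ! nvv4l2decoder ! nvvidconv ! video/x-raw ! videoconvert "
        "! gtksink name=vsink sync=false",
        NULL);

    // Pull the GtkWidget out of gtksink; in my real UI this would be packed
    // into one of the existing screens instead of a new top-level window.
    GstElement* sink = gst_bin_get_by_name(GST_BIN(pipeline), "vsink");
    GtkWidget* video_widget = NULL;
    g_object_get(sink, "widget", &video_widget, NULL);

    GtkWidget* window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_container_add(GTK_CONTAINER(window), video_widget);
    g_object_unref(video_widget);   // the container now holds its own reference
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);
    gtk_widget_show_all(window);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);
    gtk_main();

    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(sink);
    gst_object_unref(pipeline);
    return 0;
}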

You would need to keep --input-codec=mjpeg in the first line:

video-viewer /dev/video0 rtp://192.168.1.162:5005 --input-codec=mjpeg --output-codec=mjpeg --bitrate=1000000
video-viewer rtp://192.168.1.162:5005 --input-codec=mjpeg

or if that doesn’t work:

video-viewer /dev/video0 rtp://192.168.1.162:5005 --input-decoder=cpu --bitrate=1000000
video-viewer rtp://192.168.1.162:5005 --input-decoder=cpu --input-codec=h264

Unfortunately, as you are finding out, configuring RTP isn’t what I’d call highly reliable, which is why I added support for RTSP and WebRTC.

I use OpenGL for rendering the video stream to the display, as traditional GUI widgets may have issues handling those resolutions/framerates.

The first set worked. The second set failed with an error. So it looks as if RTP could work in a limited way. Frame rates are ~30 FPS, which is sufficient. However, I think I will buckle down and learn gtkmm and pass video from detectNet somehow, understand OpenGL better, take a look at RTSP, or just position the OpenGL output exactly over my GTK3 screens (using signals to control it). I will mark this complete for proving that video-viewer can work with RTP.
@dusty_nv Thanks for your support!!!

Glad you were able to get the RTP working! Before you invest much time in other solutions, yeah, I would just go for the proper integration of detectNet + your UI code. jetson-inference/jetson-utils just works with pointers to image data (this data is typically allocated in shared CPU/GPU memory), so if your UI video widget can take in an RGB/RGBA image it should hopefully be relatively straightforward. Good luck @steven.sparks!
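
To give you an idea of the shape of it, something like this (a rough, untested sketch; it assumes a recent jetson-inference where detectNet::Create() loads a default model, and your_ui_draw_rgba() is just a placeholder for your GTK code):

#include <jetson-utils/videoSource.h>     // adjust include paths to your install
#include <jetson-inference/detectNet.h>
#include <cuda_runtime.h>

// Placeholder for your GTK3 side (e.g. copy the pixels into a GdkPixbuf / cairo surface).
extern "C" void your_ui_draw_rgba(uchar4* rgba, uint32_t width, uint32_t height);

int main(int argc, char** argv)
{
    videoSource* input = videoSource::Create("v4l2:///dev/video0");
    detectNet*   net   = detectNet::Create();   // default detection model

    if (!input || !net)
        return 1;

    while (true)
    {
        uchar4* image = NULL;   // RGBA frame in shared (mapped) CPU/GPU memory

        if (!input->Capture(&image, 1000))
        {
            if (!input->IsStreaming())
                break;          // stream ended
            continue;           // timeout -- try again
        }

        // Run detection and draw the overlay into the same buffer (on the GPU).
        detectNet::Detection* detections = NULL;
        net->Detect(image, input->GetWidth(), input->GetHeight(), &detections);

        // Make sure the GPU is done before the CPU-side UI reads the pixels.
        cudaDeviceSynchronize();

        your_ui_draw_rgba(image, input->GetWidth(), input->GetHeight());
    }

    delete net;
    delete input;
    return 0;
}

One thing to watch: Capture() hands back a pointer into a small ring buffer that gets reused, so if your UI renders asynchronously, copy the frame rather than holding onto the pointer.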
