Trying to get ultra-low live-streaming latency (<100 ms) on a drone using the Jetson Nano

Currently, I am using this command to start the stream:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=1848, framerate=28/1, format=NV12'  ! nvvidconv left=200 right=3080 top=600 bottom=1200 !  'video/x-raw,width=2880, height=600, framerate=28/1, format=NV12, pixel-aspect-ratio=1/1' ! nvvidconv ! nvv4l2h264enc ! h264parse ! flvmux ! rtmpsink location='rtmp://192.168.1.34:1935/live/'

I am using OpenCV to receive the stream on my laptop. The Python code I use to receive it is:

import cv2

# Open the RTMP stream published by the Nano
video = cv2.VideoCapture('rtmp://NANO’S IP ADDRESS/live/drone')
ret = video.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)   # was video.set(3, 1280)
ret = video.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)   # was video.set(4, 720)

while True:
    ret, frame = video.read()
    if not ret:  # stream dropped or not yet available
        break
    cv2.imshow('Object detector', frame)
    if cv2.waitKey(1) == ord('q'):
        break

video.release()
cv2.destroyAllWindows()
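One receiver-side detail worth checking: cv2.VideoCapture keeps a small internal frame queue, so displayed frames can lag behind the newest one. A best-effort tweak: CAP_PROP_BUFFERSIZE is only honored by some capture backends, so treat this as an assumption to verify rather than a guaranteed fix.

# Ask the backend to queue at most one frame; backends that ignore this
# property simply leave their default buffering in place.
video.set(cv2.CAP_PROP_BUFFERSIZE, 1)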

With this, the latency I was able to achieve was roughly 400 ms (0.4 s).

Is there a better alternative to OpenCV for receiving the live stream? Or is there a better way to receive the stream?

Thank you!

Low-latency streaming is not easy. You may try these options:

  1. If the bandwidth allows, transfer the image/video without encoding/decoding. You may write your own data transfer protocol for that (see the sketch after this list).
  2. Try another encoding format and transfer protocol, such as low-latency WebRTC or low-latency HLS.
  3. Tune GStreamer parameters for low buffering and build the decoder program in C/C++.
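As an illustration of option 1, here is a minimal sketch of shipping raw BGR frames over TCP with no codec at all. The host, port, and frame geometry are placeholders, and note the bandwidth cost: raw 1280x720 BGR at 30 fps is already about 660 Mbit/s, which is why this only works if the bandwidth allows.

import socket
import numpy as np

WIDTH, HEIGHT = 1280, 720           # placeholder frame geometry
FRAME_BYTES = WIDTH * HEIGHT * 3    # raw BGR, 8 bits per channel

def send_frames(frames, host='192.168.1.34', port=6000):
    """Sender side: push raw frames down a TCP socket, no encoder involved."""
    with socket.create_connection((host, port)) as sock:
        for frame in frames:        # each frame: HEIGHT x WIDTH x 3 uint8 array
            sock.sendall(frame.tobytes())

def recv_frames(conn):
    """Receiver side: yield HEIGHT x WIDTH x 3 uint8 frames until the peer closes."""
    buf = b''
    while True:
        chunk = conn.recv(65536)
        if not chunk:
            return
        buf += chunk
        while len(buf) >= FRAME_BYTES:
            raw, buf = buf[:FRAME_BYTES], buf[FRAME_BYTES:]
            yield np.frombuffer(raw, np.uint8).reshape(HEIGHT, WIDTH, 3)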

Hi sunxishan,

Thank you for the reply! I really appreciate it. From my limited knowledge on the matter, all I did was use a predefined pipeline to stream:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=1848, framerate=28/1, format=NV12'  ! nvvidconv left=200 right=3080 top=600 bottom=1200 !  'video/x-raw,width=2880, height=600, framerate=28/1, format=NV12, pixel-aspect-ratio=1/1' ! nvvidconv ! nvv4l2h264enc ! h264parse ! flvmux ! rtmpsink location='rtmp://192.168.1.34:1935/live/'

Could you elaborate more on your first point, transferring video without encoding and decoding? As I will be trying out 5G, I believe that would be greatly beneficial!

How do I go about writing my own data transfer protocol?

Thank you!

I made a few tests too, see this link:
https://devtalk.nvidia.com/default/topic/1064908/jetson-projects/jetson-nano-on-a-drone-multicopter/
Sony RX0 > Atomos Ninja display: ~100 ms
Sony RX0 > USB-3 adapter > Jetson Nano > Atomos Ninja display: ~200 ms
So the added time for USB-3 > Nano is about 100 ms.
Maybe your laptop is the bottleneck.
Best regards,
Wilhelm

That is excellent information! Thank you for the data points! :)

May I know how you managed to live-stream?

What protocol did you use? RTSP, RTMP?

Also, did you use GStreamer to stream and receive?

Thank you! :)

Basically I’m using this pipeline:

# gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw, framerate=30/1, width=1920, height=1080 ! nvvidconv ! nv3dsink

It is not a network stream, only a camera > Nano > HDMI-display transfer; see the pictures.
On the copter I will use the Amimon Connex downlink, which has almost zero latency.

Best regards,
Wilhelm

Hi Wilhelm,

Thank you for the help, but my project budget does not allow me to get a downlink.

My project is basically an autonomous drone where the Jetson Nano controls the autonomous flight as well as the streaming, via a cellular network (LTE/4G modem), to a receiving end.

Our current latency using the code provided above with the RTMP protocol is about 500 ms. I am looking for ways to get it down to 100 ms or less if possible.

My research so far has led me to believe GStreamer has the lowest latency for live streaming.

EDIT: I am using a Raspberry Pi Camera Module V2, if that helps.

If you want to try the encode/decode route, you may try to optimize the latency following the Accelerated GStreamer User Guide.

You may test with parameters like:

nvv4l2h264enc preset-level=4 MeasureEncoderLatency=1

Some wireless remote-display protocols, such as Miracast, can achieve sub-100 ms latency easily, and under the hood they use H264/H265 over RTSP/RTP as well. So I think targeting around 100 ms latency is possible.

Hi sunxishan,

I replaced my nvv4l2h264enc with nvv4l2h264enc preset-level=4 MeasureEncoderLatency=1:

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=3280, height=1848, framerate=28/1, format=NV12'  ! nvvidconv left=200 right=3080 top=600 bottom=1200 !  'video/x-raw,width=2880, height=600, framerate=28/1, format=NV12, pixel-aspect-ratio=1/1' ! nvvidconv ! nvv4l2h264enc preset-level=4 MeasureEncoderLatency=1 ! h264parse ! flvmux ! rtmpsink location='rtmp://164.78.185.96/live/'

But how do I read out the latency information? This is what I got when I ran the pipeline above:

Setting pipeline to PAUSED ...
Opening in BLOCKING MODE 
gst_v4l2_video_enc_open: open trace file successfully
Pipeline is live and does not need PREROLL ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Redistribute latency...
NvMMLiteOpen : Block : BlockType = 4 
===== NVMEDIA: NVENC =====
NvMMLiteBlockCreate : Block : BlockType = 4 
GST_ARGUS: Creating output stream
CONSUMER: Waiting until producer is connected...
GST_ARGUS: Available Sensor modes :
GST_ARGUS: 3264 x 2464 FR = 21.000000 fps Duration = 47619048 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 3264 x 1848 FR = 28.000001 fps Duration = 35714284 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1920 x 1080 FR = 29.999999 fps Duration = 33333334 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 59.999999 fps Duration = 16666667 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: 1280 x 720 FR = 120.000005 fps Duration = 8333333 ; Analog Gain range min 1.000000, max 10.625000; Exposure Range min 13000, max 683709000;
GST_ARGUS: Running with following settings:
   Camera index = 0 
   Camera mode  = 0 
   Output Stream W = 3264 H = 2464 
   seconds to Run    = 0 
   Frame Rate = 21.000000 
GST_ARGUS: PowerService: requested_clock_Hz=37126320
GST_ARGUS: Setup Complete, Starting captures for 0 seconds
GST_ARGUS: Starting repeat capture requests.
CONSUMER: Producer has connected; continuing.
H264: Profile = 66, Level = 0

Also, to receive an RTSP stream, we’d need an RTSP server. How do you set up the RTSP server?
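One way to stand up such a server on the Nano is GStreamer's gst-rtsp-server Python bindings. A minimal, untested sketch; the /live mount point, the 720p60 caps, and the encoder flags below are placeholder choices, not values from this thread:

import gi
gi.require_version('Gst', '1.0')
gi.require_version('GstRtspServer', '1.0')
from gi.repository import Gst, GstRtspServer, GLib

Gst.init(None)

server = GstRtspServer.RTSPServer()          # listens on port 8554 by default
factory = GstRtspServer.RTSPMediaFactory()
# gst-rtsp-server instantiates this launch string per client; the payloader
# must be named pay0.
factory.set_launch(
    '( nvarguscamerasrc ! video/x-raw(memory:NVMM),width=1280,height=720,'
    'framerate=60/1,format=NV12 ! nvv4l2h264enc insert-sps-pps=true ! '
    'h264parse ! rtph264pay name=pay0 pt=96 )'
)
factory.set_shared(True)                     # one camera pipeline for all clients
server.get_mount_points().add_factory('/live', factory)
server.attach(None)                          # stream at rtsp://<nano-ip>:8554/live

GLib.MainLoop().run()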

You may want to try preset-level=0, 1, or 2 to speed it up; 4 is the default value, I think.

Measure the latency the old way: film a running digital clock together with the screen showing the streamed view of that clock, then read the difference between the two clock readings in one photo.

Ah, I got the latency measurement done using a digital stopwatch. I thought MeasureEncoderLatency=1 would print a latency readout in the terminal, haha.

Hi,
100 ms latency may not be achievable on the Jetson Nano. You may refer to the test case below and tune ‘latency’ to see how low it can go.
https://devtalk.nvidia.com/default/topic/1043770/jetson-tx2/problems-minimizing-latency-and-maximizing-quality-for-rtsp-and-mpeg-ts-/post/5295828/#5295828

Setting nvarguscamerasrc to a 60 fps mode should bring better performance.
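For reference, the ‘latency’ knob in such test cases is typically the jitter-buffer latency on the receiving side's rtspsrc. A sketch of where to set it in an OpenCV/GStreamer receiver; the URL is a placeholder, and OpenCV must be built with GStreamer support:

import cv2

# 'latency' (in ms) is the rtspsrc jitter-buffer depth; lower it in steps
# and watch for stutter or decode artifacts. Placeholder stream URL.
pipeline = (
    'rtspsrc location=rtsp://192.168.1.34:8554/live latency=100 ! '
    'rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! '
    'video/x-raw,format=BGR ! appsink drop=true max-buffers=1 sync=false'
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)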

Hi DaneLLL,

Is the Jetson capable of handling a 5G network?

Would a live-streaming latency of 100 ms or less be possible with 5G implemented on the Nano?

Something to consider is that the increased average throughput of 5G does not necessarily translate to reduced latency. As throughput goes up, latency probably goes down, but anything wireless will add far more latency than a wired link would. Then there is the latency of the Nano itself. If you experiment with the wired gigabit network while the Nano is in performance mode, you probably have the lowest latency possible; everything else, especially wireless, will be significantly higher in comparison.

You need to pay attention to the fps. You are encoding at the maximum resolution, hence you are getting 21 fps, which means about 48 ms (one frame interval at 21 fps) is added on top of all the other encoding delays.

If you need to get below 100 ms, go for 1280 x 720 @ 60 fps; I have done that and got 80-90 ms glass-to-glass latency.

You need a good decoder machine as well, with a 60 Hz screen on the client side.

Better, of course, is a 120 Hz screen with encoding at 120 fps.
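To make the frame-interval arithmetic explicit, a quick check (pure arithmetic, using only the frame rates mentioned in this thread):

# One frame period is a hard floor on latency, before encoding, network,
# decoding, and display are even counted.
for fps in (21, 28, 60, 120):
    print(f'{fps:3d} fps -> {1000 / fps:5.1f} ms per frame')
# 21 fps -> 47.6, 28 fps -> 35.7, 60 fps -> 16.7, 120 fps -> 8.3 (ms)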

My pipelines:

Server:

gst-launch-1.0 -e nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, format=(string)NV12, framerate=(fraction)60/1' ! nvv4l2h265enc maxperf-enable=1 bitrate=8000000 iframeinterval=40 preset-level=1 control-rate=1 ! h265parse ! rtph265pay config-interval=1 ! udpsink host=192.168.1.47 port=5000 sync=false async=false

Receiver:

gst-launch-1.0 -vvv udpsrc port=5000 ! application/x-rtp,encoding-name=H265,payload=96 ! rtph265depay ! h265parse ! queue ! avdec_h265 ! autovideosink sync=false async=false -e
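If, as earlier in the thread, you want the frames inside OpenCV rather than in an autovideosink window, the tail of that receiver can be swapped for appsink. A sketch, assuming OpenCV was built with GStreamer support:

import cv2

# Same depay/decode chain as the receiver above, but ending in appsink so
# OpenCV gets BGR frames; drop/max-buffers keep only the newest frame.
pipeline = (
    'udpsrc port=5000 ! application/x-rtp,encoding-name=H265,payload=96 ! '
    'rtph265depay ! h265parse ! queue ! avdec_h265 ! videoconvert ! '
    'video/x-raw,format=BGR ! appsink drop=true max-buffers=1 sync=false'
)
cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while True:
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('drone', frame)
    if cv2.waitKey(1) == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()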

Hi EbrahimAli,

Thank you for your reply!! It really helps a ton :)

I see UDP everywhere, though. Is that the same as using RTMP/RTSP?

Also, I see you’re using H265 as opposed to the more popular H264. Is there a specific reason behind it, or is it just faster?

Once again, thank you!

udpsink is the simplest form; there is nothing faster or simpler, though it is less practical than RTSP. Once you have tested your latency with it, you can test the others to see where your delay comes from.

H265 is about 50% more bandwidth-efficient than H264; it might not be faster, though. You can try.

Be sure to enable max performance:

sudo nvpmodel -m 0
sudo jetson_clocks

It reduces the latency by a further ~10 ms.

Hi @DaneLLL,
Why might 100 ms latency not be achievable on the Jetson Nano? I’ve been trying to get an RTP stream over UDP from an IP camera, and I seem to be stuck at around 200 ms latency; I couldn’t get it down any further.

I have been trying to get down to around 150 ms of delay from an RTP stream using GStreamer.

Hi nicholas.leong,

Please open a new topic for your issue. Thanks.

100 ms can be achieved. We achieved it at the end of our project.