Hi
In my application, I use OpenCV VideoCapture to launch the gstreamer script below, stream the video over Wi-Fi to a Windows PC, and do image processing on the Jetson Nano frame by frame:
The above code works well most of the time, but the frame data received on the PC is sometimes corrupted during transmission over the UDP protocol.
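The original GStreamer script is not included in this excerpt. For context only, here is a hypothetical sketch of how such a capture pipeline is typically built and handed to OpenCV on a Jetson; the element names (nvarguscamerasrc, nvvidconv) and all parameters are assumptions, not the actual script from this post:

```python
# Hypothetical sketch of a GStreamer capture pipeline string for
# cv2.VideoCapture(..., cv2.CAP_GSTREAMER) on Jetson. Element names and
# parameters are illustrative assumptions, not the poster's actual script.

def build_capture_pipeline(width=640, height=480, fps=24):
    """Return a GStreamer pipeline string delivering BGR frames to appsink."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={width}, height={height}, framerate={fps}/1 ! "
        f"nvvidconv ! video/x-raw, format=BGRx ! "
        f"videoconvert ! video/x-raw, format=BGR ! "
        f"appsink drop=true"
    )

pipeline = build_capture_pipeline()
print(pipeline)

# In the application, this string would be passed to OpenCV, e.g.:
#   import cv2
#   cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
#   ok, frame = cap.read()
```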
I tried the TCP protocol; it shows no frame corruption, but I have noticed that a number of frames are sometimes dropped, and the latency is more than 1.5 seconds.
None of these cases (corrupted frame data, dropped frames, high latency) is acceptable.
Therefore, I am looking for another method to meet my application requirements:
The Jetson Nano does image processing frame by frame. The processing frame size is 640 x 480.
Alongside the image processing, the Jetson Nano streams the video to the Windows PC over Wi-Fi at full frame size (2592 x 1944), 24 fps, with no corrupted or dropped frames.
The latency must be less than 500 ms.
I don’t know which method can do the job:
DeepStream?
webRTC?
or something else?
Would you please give me your suggestions?
Thank you very much.
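To put the requirement in perspective, a quick back-of-the-envelope check shows why the 2592 x 1944 @ 24 fps stream must be hardware-compressed before it can fit on Wi-Fi at all. The only assumption here is raw frames in NV12 (12 bits per pixel), a common format on the Jetson camera/encoder path:

```python
# Back-of-the-envelope bandwidth check for the stated requirement.
# Assumption: raw frames in NV12 (12 bits per pixel).

WIDTH, HEIGHT, FPS = 2592, 1944, 24
BYTES_PER_PIXEL = 1.5  # NV12: 12 bits/pixel

raw_bytes_per_frame = WIDTH * HEIGHT * BYTES_PER_PIXEL
raw_mbit_per_s = raw_bytes_per_frame * FPS * 8 / 1e6

print(f"raw frame size:  {raw_bytes_per_frame / 1e6:.1f} MB")
print(f"raw stream rate: {raw_mbit_per_s:.0f} Mbit/s")
# Roughly 1450 Mbit/s raw, i.e. far beyond typical Wi-Fi throughput,
# so H.264/H.265 encoding on the Nano is unavoidable here.
```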
I don’t mind the first few frames taking more than 400 ms each to encode. But I need to know why encoding a single frame takes more than 300 ms among the remaining frames (frames 1092 to 1094, and frames 4463 to 4465), and how to resolve it.
Below is the log data for the frames with encode latency above 300 ms:
KPI: v4l2: frameNumber= 1090 encoder= 12 ms pts= 46947206264
KPI: v4l2: frameNumber= 1091 encoder= 17 ms pts= 46988884785
KPI: v4l2: frameNumber= 1092 encoder= 403 ms pts= 47030550421
KPI: v4l2: frameNumber= 1093 encoder= 368 ms pts= 47072215420
KPI: v4l2: frameNumber= 1094 encoder= 323 ms pts= 47113893889
KPI: v4l2: frameNumber= 1095 encoder= 11 ms pts= 47155555525
KPI: v4l2: frameNumber= 1096 encoder= 15 ms pts= 47197221785
KPI: v4l2: frameNumber= 1097 encoder= 12 ms pts= 47238895369
KPI: v4l2: frameNumber= 4461 encoder= 11 ms pts= 187621020785
KPI: v4l2: frameNumber= 4462 encoder= 18 ms pts= 187662694785
KPI: v4l2: frameNumber= 4463 encoder= 596 ms pts= 187704361785
KPI: v4l2: frameNumber= 4464 encoder= 560 ms pts= 187746031212
KPI: v4l2: frameNumber= 4465 encoder= 518 ms pts= 187787696265
KPI: v4l2: frameNumber= 4466 encoder= 17 ms pts= 187829374577
KPI: v4l2: frameNumber= 4467 encoder= 13 ms pts= 187871030525
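The spike frames called out above can be pulled out of the KPI log mechanically. A small sketch, using the log format exactly as printed above (a shortened excerpt of the log is inlined for illustration):

```python
import re

# Parse "KPI: v4l2: frameNumber= N encoder= M ms pts= P" lines and flag
# frames whose encode latency exceeds a threshold. The log format is
# taken verbatim from the KPI output above; LOG is a shortened excerpt.

LOG = """\
KPI: v4l2: frameNumber= 1091 encoder= 17 ms pts= 46988884785
KPI: v4l2: frameNumber= 1092 encoder= 403 ms pts= 47030550421
KPI: v4l2: frameNumber= 1093 encoder= 368 ms pts= 47072215420
KPI: v4l2: frameNumber= 1094 encoder= 323 ms pts= 47113893889
KPI: v4l2: frameNumber= 1095 encoder= 11 ms pts= 47155555525
"""

PATTERN = re.compile(r"frameNumber=\s*(\d+)\s+encoder=\s*(\d+)\s+ms")

def find_spikes(log, threshold_ms=300):
    """Return (frame_number, latency_ms) pairs above the threshold."""
    return [(int(frame), int(ms))
            for frame, ms in PATTERN.findall(log)
            if int(ms) > threshold_ms]

print(find_spikes(LOG))
# → [(1092, 403), (1093, 368), (1094, 323)]
```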
Hi,
The pipeline looks optimal. You may run with sudo nvpmodel -m 0 and sudo jetson_clocks. If the issue is still present, it looks to be a constraint of the Jetson Nano. It can achieve 30 fps, but occasionally the system is too busy, leading to the delay.
I just ran with sudo nvpmodel -m 0 and sudo jetson_clocks, and re-ran my test. It still shows some frame encoding latencies above 550 ms. Please refer to the log below:
KPI: v4l2: frameNumber= 1728 encoder= 13 ms pts= 72444785621
KPI: v4l2: frameNumber= 1729 encoder= 10 ms pts= 72486445350
KPI: v4l2: frameNumber= 1730 encoder= 641 ms pts= 72528114141
KPI: v4l2: frameNumber= 1731 encoder= 603 ms pts= 72569791193
KPI: v4l2: frameNumber= 1732 encoder= 556 ms pts= 72611448194
KPI: v4l2: frameNumber= 1733 encoder= 16 ms pts= 72653118037
KPI: v4l2: frameNumber= 1734 encoder= 11 ms pts= 72694798715
Does that mean this is a known issue that can’t be sorted out?
Thanks
I did another test with a static target in front of the camera.
I just ran the video stream from the Jetson Nano with no other operations running except Ubuntu OS-related processes. Spikes of encoding latency (169 ms) are still detected. Can I say that nvv4l2h264enc is not a purely hardware encoder, i.e., it involves some CPU-side operations, and that Ubuntu OS processes may therefore affect the encoder latency?
Hi,
The encoding is done on the hardware engine, but some of the stack runs on the CPU, such as sending frames to the hardware encoder, getting the encoded H.264 stream back from the encoder, and passing it to the upper application layer. So if you run encoding at high fps or with multiple encoding instances, there is a certain amount of CPU usage.
Compared to jetson_multimedia_api, gstreamer has more layers and more latency.
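Whichever API is used, one generic way to keep occasional encode-latency spikes from stalling capture is to decouple the capture and encode stages with a bounded queue that drops the oldest frame when the encoder falls behind, so latency stays bounded instead of accumulating. This is a general single-producer pattern sketch, not Jetson-specific code:

```python
import queue
import threading

# Generic producer/consumer sketch: the capture side never blocks on a
# slow encoder. When the bounded queue is full, the oldest frame is
# dropped so end-to-end latency stays bounded. Assumes a single
# producer thread; not Jetson-specific code.

frames = queue.Queue(maxsize=4)

def submit_frame(frame):
    """Enqueue a frame, dropping the oldest one if the encoder is behind."""
    try:
        frames.put_nowait(frame)
    except queue.Full:
        try:
            frames.get_nowait()  # discard the oldest queued frame
        except queue.Empty:
            pass
        frames.put_nowait(frame)

def encoder_loop(encode, stop):
    """Consume frames until `stop` is set and the queue is drained."""
    while not (stop.is_set() and frames.empty()):
        try:
            frame = frames.get(timeout=0.1)
        except queue.Empty:
            continue
        encode(frame)  # e.g. hand the frame to the hardware encoder
```

With maxsize=4, a 400-600 ms encoder stall costs at most a handful of stale frames rather than a growing backlog.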
Hi,
The jetson_multimedia_api consists of low-level APIs, and there are samples demonstrating the hardware functions. For high-level software stacks such as muxing into mp4 or mkv, or UDP/RTSP streaming, you would need to do the implementation yourself. It is a tradeoff between jetson_multimedia_api and gstreamer: you can construct a gstreamer pipeline quickly from existing plugins, whereas to reduce latency you can use jetson_multimedia_api but need to implement the software stack from head to tail.
@DaneLLL
Is jetson_multimedia_api included in the SDK? I tried a quick Google search but didn’t see much information about this API. What is the header file? I’d like to take a look and see whether my application could use it.