Hi
I want to set up an RTSP server to stream 4K video on a Jetson Nano. I used gst-rtsp-server (Python) for setting up the RTSP server. Frames are captured using the OpenCV API (with a GStreamer pipeline) and pushed using the pipeline shown below.
('appsrc name=source format=GST_FORMAT_TIME '
 'caps=video/x-raw,format=BGR,width=3840,height=2160,framerate={}/1 '
 '! videoconvert ! video/x-raw,format=I420 '
 '! omxh264enc control-rate=2 bitrate=4000000 ! video/x-h264,stream-format=byte-stream '
 '! rtph264pay config-interval=1 name=pay0 pt=96').format(self.fps)
Then I am converting RTSP to RTMP using ffmpeg. The overall fps is 4, and sometimes the video freezes. Even with local RTSP streaming the fps is too low. What could be the reason? Kindly help me.
Thanks in advance
Hi,
In the pipeline there is a format conversion from BGR to I420 and a copy from a CPU buffer to an NVMM buffer. These have high CPU consumption. Please run sudo tegrastats to check whether the CPU load is at ~100% and capping the performance.
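To see why that software conversion is heavy, here is a back-of-the-envelope estimate (an illustration only; the 30 fps figure is an assumption, the question's pipeline uses self.fps) of the data videoconvert must read and write per second for 4K BGR frames:

```python
# Rough data-volume estimate for a software BGR -> I420 conversion of 4K frames.
WIDTH, HEIGHT, FPS = 3840, 2160, 30  # FPS is an assumed value

bgr_bytes = WIDTH * HEIGHT * 3        # 3 bytes per pixel for packed BGR
i420_bytes = WIDTH * HEIGHT * 3 // 2  # 1.5 bytes per pixel for planar I420

mib = 1024 * 1024
print(f"BGR frame:  {bgr_bytes / mib:.1f} MiB")   # ~23.7 MiB per frame
print(f"I420 frame: {i420_bytes / mib:.1f} MiB")  # ~11.9 MiB per frame
print(f"Read+write at {FPS} fps: {(bgr_bytes + i420_bytes) * FPS / mib:.0f} MiB/s")
```

Roughly a gigabyte per second has to move through the CPU caches before the encoder even sees a frame, on top of the per-pixel color-space math, which is why the ARM cores saturate well before 30 fps.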
Thanks for your reply. I will try this. Is there any other efficient method for RTMP streaming on the Jetson Nano besides converting RTSP to RTMP with ffmpeg? Does the Nano support NVENC, NVDEC, and ffmpeg with GPU acceleration?
Hi,
May need other users to share their experience. Usually we run RTSP on Jetson platforms.
We support hardware video decoding in ffmpeg. Please check the developer guide.
What about NVENC and NVDEC (in JetPack 4.3)? I tried to compile the Video Codec SDK, but it failed. Does Jetson support NVENC?
Hi,
The Video Codec SDK is specific to desktop GPUs. We have gstreamer and jetson_multimedia_api on Jetson platforms. Please refer to the documents:
https://docs.nvidia.com/jetson/l4t-multimedia/index.html
https://docs.nvidia.com/jetson/l4t/index.html#page/Tegra%2520Linux%2520Driver%2520Package%2520Development%2520Guide%2Faccelerated_gstreamer.html%23
The hardware engines are NVENC and NVDEC. You can check the status through sudo tegrastats.
OK, thanks for the reply. I checked the status.
Status log:
RAM 3308/3965MB (lfb 5x4MB) SWAP 242/8126MB (cached 12MB) IRAM 0/252kB(lfb 252kB) CPU [57%@1479,69%@1479,65%@1479,76%@1479] EMC_FREQ 32%@1600 GR3D_FREQ 0%@76 NVENC 716 APE 25 PLL@42.5C CPU@43C PMIC@100C GPU@40C AO@51.5C thermal@41.5C POM_5V_GPU 5390/5541 POM_5V_IN 38/49 POM_5V_CPU 2133/2224
This means the hardware engine is on, right?
Hi,
NVENC is present, so hardware video encoding is running.
Can I use nvvidconv instead of videoconvert in the above pipeline? I tried to implement the same pipeline with nvvidconv, but it is not working. Kindly help me.
nvvidconv cannot handle BGR, only BGRx or RGBA, so you would first use videoconvert to convert into one of these, then use nvvidconv to convert to YUV in NVMM memory as expected by the encoder:
appsrc ! video/x-raw,format=BGR,width=3840,height=2160,framerate=30/1 ! videoconvert ! video/x-raw,format=RGBA ! nvvidconv ! omxh264enc control-rate=2 bitrate=4000000 ! video/x-h264,stream-format=byte-stream ! rtph264pay config-interval=1 name=pay0 pt=96
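Wired back into the Python server code from the question, the launch string might be assembled like this (a sketch; a plain fps variable stands in for the question's self.fps, and the element names are exactly those from the pipeline above):

```python
fps = 30  # assumed frame rate; the question's code used self.fps

launch = (
    'appsrc name=source format=GST_FORMAT_TIME '
    'caps=video/x-raw,format=BGR,width=3840,height=2160,framerate={}/1 '
    '! videoconvert ! video/x-raw,format=RGBA '  # CPU: cheap BGR -> RGBA repack
    '! nvvidconv '                               # hardware: RGBA -> YUV in NVMM memory
    '! omxh264enc control-rate=2 bitrate=4000000 '
    '! video/x-h264,stream-format=byte-stream '
    '! rtph264pay config-interval=1 name=pay0 pt=96'
).format(fps)

print(launch)
```

The key change from the original pipeline is that videoconvert now only repacks BGR to RGBA instead of doing the full BGR-to-I420 color-space conversion; the expensive conversion and the copy into NVMM memory are offloaded to nvvidconv.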