Limit of multiple nvv4l2h264enc & nvvidconv instances at the same time

Hi,
Based on JP 4.5.1, I’m developing a camera which supports multiple RTSP video streams at different resolutions at the same time. Here is my brief pipeline:

                                      /--> interpipesrc --> nvvidconv (full HD) --> nvv4l2h264enc --> rtph264pay
                                      |
nvarguscamerasrc --> interpipesink ---+--> interpipesrc --> nvvidconv (HD) --> nvv4l2h264enc --> rtph264pay
                                      |
                                      \--> interpipesrc --> nvvidconv (SD) --> nvv4l2h264enc --> rtph264pay

gst-rtsp-server will create multiple (3 in my case) rtsp server pipelines from interpipesrc to rtph264pay, and each pipeline can have multiple stream connections.
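Each factory launch string is roughly like the sketch below (the interpipesink name, caps values and element properties here are illustrative placeholders, assuming the interpipesink is named camera_sink):

interpipesrc listen-to=camera_sink is-live=true format=time ! nvvidconv ! "video/x-raw(memory:NVMM), width=(int)1920, height=(int)1080" ! nvv4l2h264enc ! rtph264pay name=pay0 pt=96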

Now I have a problem when there are multiple connections from different clients.
On the FHD pipeline, if I use VLC on client PCs to play the RTSP stream, I can play 2 connections concurrently, but it is hard to play 3 (it only works sometimes). When the stream fails to connect, the GStreamer log on the camera side is:

Apr 21 09:51:45 : rtsp_client_connected_callback() RTSP client 0x7f20378180 connected
Apr 21 09:51:45 :           s_rtsp_connection_cb() RTSP no of connection 4
Apr 21 09:51:45 : 0:34:30.570570356 208156   0x7f24004f70 ERROR             rtspclient rtsp-client.c:1020:find_media: client 0x7f20378180: not authorized for factory path /video2
Apr 21 09:51:45 : 0:34:30.570602075 208156   0x7f24004f70 ERROR             rtspclient rtsp-client.c:2899:handle_describe_request: client 0x7f20378180: no media
Apr 21 09:51:45 : 0:34:30.571868898 208156   0x7f24004f70 WARN               rtspmedia rtsp-media.c:3268:gst_rtsp_media_prepare: media 0x7ef40073c0 was not unprepared
Apr 21 09:51:45 : 0:34:30.571908169 208156   0x7f24004f70 ERROR             rtspclient rtsp-client.c:1044:find_media: client 0x7f20378180: can't prepare media
Apr 21 09:51:45 : 0:34:30.571971606 208156   0x7f24004f70 ERROR             rtspclient rtsp-client.c:2899:handle_describe_request: client 0x7f20378180: no media
Apr 21 09:51:45 :    rtsp_client_closed_callback() RTSP client 0x7f20378180 rtsp://10.10.1.187:63979(null) closed
Apr 21 09:51:45 :           s_rtsp_connection_cb() RTSP no of connection 3

It looks like the media pipeline is not created for the 3rd connection.

My question is: is there any limit on how many nvvidconv or nvv4l2h264enc instances can run concurrently? Or could it be NVENC/GPU/hardware usage?
This is the result of tegrastats:

RAM 1028/3962MB (lfb 486x4MB) IRAM 0/252kB(lfb 252kB) CPU [13%@1479,12%@1479,15%@1479,100%@1479] EMC_FREQ 6%@1600 GR3D_FREQ 0%@921 NVENC 268 VIC_FREQ 0%@192 APE 25 PLL@38C CPU@41.5C PMIC@100C GPU@36.5C AO@44.5C thermal@38.75C POM_5V_IN 4033/4311 POM_5V_GPU 159/159 POM_5V_CPU 1198/1294

Thank you for reading; any help is appreciated.

Hi,
There are 3 nvvidconv plugins in the pipeline, but somehow there is no load on the VIC (hardware converter): VIC_FREQ 0%@192. This is a bit strange. For optimal throughput of the hardware converter, you can run the engine at the maximum clock:
Nvvideoconvert issue, nvvideoconvert in DS4 is better than Ds5? - #3 by DaneLLL
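As a rough sketch of what the linked post describes (the sysfs paths below are an assumption for Jetson Nano on JetPack 4.x; please use the exact commands from the link for your board):

sudo su
# keep the VIC powered on and switch its devfreq governor to userspace
echo on > /sys/devices/50000000.host1x/54340000.vic/power/control
echo userspace > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/governor
# lock the VIC clock at the maximum supported rate
cat /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/max_freq > /sys/devices/50000000.host1x/54340000.vic/devfreq/54340000.vic/userspace/set_freq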

You can also run a pipeline like:

nvv4l2h264enc --> fpsdisplaysink text-overlay=0 video-sink=fakesink

and check whether the three pipelines can achieve the target fps. Here is a C patch for reference:
FPS in test apps. - #2 by DaneLLL
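For example, a stand-alone test of the FHD branch could look like this sketch (the caps values are assumptions matching your use case):

gst-launch-1.0 nvarguscamerasrc ! 'video/x-raw(memory:NVMM), width=1920, height=1080, framerate=30/1' ! nvvidconv ! nvv4l2h264enc ! fpsdisplaysink text-overlay=0 video-sink=fakesink sync=0 -v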

Hi Dane,

The VIC usage varies.

RAM 652/3962MB (lfb 759x4MB) IRAM 0/252kB(lfb 252kB) CPU [32%@921,20%@921,27%@921,21%@921] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 0%@192 APE 25 PLL@45C CPU@48.5C PMIC@100C GPU@47C AO@51.5C thermal@47.75C POM_5V_IN 3714/3701 POM_5V_GPU 0/0 POM_5V_CPU 718/704
RAM 652/3962MB (lfb 759x4MB) IRAM 0/252kB(lfb 252kB) CPU [29%@614,19%@614,27%@614,29%@614] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 0%@192 APE 25 PLL@45C CPU@48.5C PMIC@100C GPU@47C AO@51.5C thermal@47.75C POM_5V_IN 3714/3702 POM_5V_GPU 0/0 POM_5V_CPU 718/705
RAM 652/3962MB (lfb 759x4MB) IRAM 0/252kB(lfb 252kB) CPU [27%@518,18%@403,29%@518,25%@518] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 0%@192 APE 25 PLL@45C CPU@49C PMIC@100C GPU@47C AO@52C thermal@47.75C POM_5V_IN 3714/3703 POM_5V_GPU 0/0 POM_5V_CPU 678/703
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [29%@518,25%@518,23%@518,26%@518] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 5%@192 APE 25 PLL@45C CPU@49C PMIC@100C GPU@47C AO@52C thermal@48C POM_5V_IN 3674/3701 POM_5V_GPU 0/0 POM_5V_CPU 718/704
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [32%@710,23%@710,25%@710,26%@710] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 47%@192 APE 25 PLL@45C CPU@48.5C PMIC@100C GPU@47C AO@52C thermal@48C POM_5V_IN 3674/3699 POM_5V_GPU 0/0 POM_5V_CPU 678/703
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [32%@921,20%@921,25%@921,31%@921] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 66%@192 APE 25 PLL@45C CPU@49C PMIC@100C GPU@47C AO@52C thermal@48C POM_5V_IN 3793/3705 POM_5V_GPU 0/0 POM_5V_CPU 758/706
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [29%@1036,22%@1036,23%@1036,27%@1036] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 71%@192 APE 25 PLL@45C CPU@49C PMIC@100C GPU@47C AO@52C thermal@48C POM_5V_IN 3753/3707 POM_5V_GPU 0/0 POM_5V_CPU 758/709
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [27%@825,25%@825,28%@825,23%@825] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 24%@192 APE 25 PLL@45.5C CPU@49C PMIC@100C GPU@47.5C AO@52C thermal@48C POM_5V_IN 3753/3710 POM_5V_GPU 0/0 POM_5V_CPU 718/709
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [31%@710,23%@710,26%@710,27%@710] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 8%@192 APE 25 PLL@45C CPU@48.5C PMIC@100C GPU@47C AO@52C thermal@48C POM_5V_IN 3674/3708 POM_5V_GPU 0/0 POM_5V_CPU 678/708
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [29%@710,20%@710,27%@710,27%@710] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 1%@192 APE 25 PLL@45.5C CPU@49C PMIC@100C GPU@47.5C AO@52.5C thermal@48C POM_5V_IN 3680/3706 POM_5V_GPU 0/0 POM_5V_CPU 678/706
RAM 652/3962MB (lfb 758x4MB) IRAM 0/252kB(lfb 252kB) CPU [29%@518,21%@518,28%@518,25%@518] EMC_FREQ 11%@1600 GR3D_FREQ 0%@76 NVENC 268 VIC_FREQ 0%@192 APE 25 PLL@45C CPU@49C PMIC@100C GPU@47C AO@52C thermal@48C POM_5V_IN 3634/3703 POM_5V_GPU 0/0 POM_5V_CPU 638/703

I did try setting the VIC to its maximum speed, 627 MHz, but it doesn’t help.
The FPS of the running streams, and the FPS measured at the interpipe, are stable at 30 FPS.

So, does this mean nvvidconv uses the VIC, and since the VIC usage is low, it should be fine with this number of nvvidconv instances?
How about nvv4l2h264enc? I see the NVENC value in tegrastats is 268, and it is constant: it doesn’t change whether I use one or many nvv4l2h264enc elements, or multiple resolutions. So how can I tell whether the encoder is fully loaded or not?
Thank you.

Hi,
There is a property to run the encoder at the maximum clock. Please execute gst-inspect-1.0 nvv4l2h264enc to find the property.

Thanks. I see it is the maxperf-enable property. Is there any conflict if I enable this property on every nvv4l2h264enc element?

Hi,
It is fine to set the property on every nvv4l2h264enc in the pipeline. You can check tegrastats to confirm that the clock always stays at the maximum value.
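For instance, each encoder in the factory launch string can simply set the property, roughly like this sketch (the surrounding elements are placeholders from the pipeline above):

... ! nvvidconv ! nvv4l2h264enc maxperf-enable=1 ! rtph264pay name=pay0 pt=96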

