DeepStream on Jetson Nano hangs when 8 streams are separated (demuxed) and then written to output files. I used a separate pipeline for each stream and nvv4l2h264enc encoding in each pipeline. Is it because of the encoding?
Is there any other method to separate the streams and sink them separately without creating individual pipelines?
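For reference, the kind of single-pipeline alternative I am asking about would look roughly like this. This is only a minimal sketch with test sources; the element names, properties, and file names are illustrative, not my actual code:

#!/usr/bin/env python3
# Minimal sketch of one pipeline that batches N sources with nvstreammux,
# splits them again with nvstreamdemux, and writes each branch to its own file
# through nvv4l2h264enc. Test sources stand in for real decoders; names are illustrative.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

NUM_SOURCES = 4

def make(factory, name):
    elem = Gst.ElementFactory.make(factory, name)
    if not elem:
        sys.stderr.write("Unable to create %s\n" % factory)
        sys.exit(1)
    return elem

Gst.init(None)
pipeline = Gst.Pipeline.new("demux-pipeline")

streammux = make("nvstreammux", "streammux")
streammux.set_property("batch-size", NUM_SOURCES)
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
streammux.set_property("batched-push-timeout", 4000000)
streamdemux = make("nvstreamdemux", "streamdemux")
pipeline.add(streammux)
pipeline.add(streamdemux)
streammux.link(streamdemux)

for i in range(NUM_SOURCES):
    # Input side: test source converted into NVMM memory for nvstreammux.
    src = make("videotestsrc", "src-%u" % i)
    src.set_property("num-buffers", 300)
    conv_in = make("nvvideoconvert", "conv-in-%u" % i)
    caps_in = make("capsfilter", "caps-in-%u" % i)
    caps_in.set_property(
        "caps", Gst.Caps.from_string("video/x-raw(memory:NVMM), format=NV12"))
    # Output side: one branch per demuxed stream, HW-encoded to a raw .h264 file.
    conv_out = make("nvvideoconvert", "conv-out-%u" % i)
    enc = make("nvv4l2h264enc", "enc-%u" % i)
    sink = make("filesink", "sink-%u" % i)
    sink.set_property("location", "out_stream_%u.h264" % i)
    for e in (src, conv_in, caps_in, conv_out, enc, sink):
        pipeline.add(e)
    src.link(conv_in)
    conv_in.link(caps_in)
    caps_in.get_static_pad("src").link(streammux.get_request_pad("sink_%u" % i))
    streamdemux.get_request_pad("src_%u" % i).link(conv_out.get_static_pad("sink"))
    conv_out.link(enc)
    enc.link(sink)

loop = GLib.MainLoop()
bus = pipeline.get_bus()
bus.add_signal_watch()
bus.connect("message",
            lambda b, m: loop.quit()
            if m.type in (Gst.MessageType.EOS, Gst.MessageType.ERROR) else None)

pipeline.set_state(Gst.State.PLAYING)
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)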
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing the issue.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
• Hardware Platform (Jetson / GPU) : Jetson Nano
• DeepStream Version : 5.1
• JetPack Version : 4.5
• TensorRT Version : 7.1.3
• How to reproduce the issue?
I am using DeepStream Python sample app 3 (deepstream-test3). I am running ResNet on 8
streams and trying to write the output of each stream to a separate file.
I used demux to separate each stream and nvv4l2h264enc for encoding.
It works with a 7-stream pipeline, but if another pipeline is added it freezes after
running 4 to 5 frames.
configuration file:
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel
proto-file=../../../../samples/models/Primary_Detector/resnet10.prototxt
model-engine-file=../../../../samples/models/Primary_Detector/resnet10.caffemodel_b8_gpu0_fp16.engine
labelfile-path=../../../../samples/models/Primary_Detector/labels.txt
int8-calib-file=../../../../samples/models/Primary_Detector/cal_trt.bin
force-implicit-batch-dim=1
batch-size=8
process-mode=1
model-color-format=0
network-mode=1
num-detected-classes=4
interval=2
gie-unique-id=1
output-blob-names=conv2d_bbox;conv2d_cov/Sigmoid
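For reference, the config is attached to the nvinfer element from the Python app in the usual way; a minimal sketch (the file name is an assumption based on sample app 3, saved next to the app):

# Minimal sketch (assumption: the [property] group above is saved as
# dstest3_pgie_config.txt next to the app, as in the sample).
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "dstest3_pgie_config.txt")
# batch-size should match the number of sources fed into nvstreammux, here 8.
pgie.set_property("batch-size", 8)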
Since you modified the source code, the problem may be related to the code you changed. I cannot use deepstream-test3-app to reproduce your problem.
Can you please tell me:
How many concurrent NVENC encoding sessions are allowed on the Jetson Nano?
There is no software limitation; it depends on the system performance (memory usage, CPU loading, HW encoder loading, …).
So could the issue be related to using HW encoding?
You can run tegrastats to check the loading. NVIDIA Jetson Linux Developer Guide : Applications and Tools | NVIDIA Docs
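For example, a minimal way to log tegrastats output while the pipeline is running (a sketch; it assumes tegrastats is on PATH, as it is on JetPack, and uses its --interval option in milliseconds):

# Hypothetical helper: log utilization to a file while the 7- and 8-stream runs
# execute, so the NVENC/NVDEC/CPU columns can be compared afterwards.
import subprocess

def start_tegrastats(logfile="tegrastats.log", interval_ms=1000):
    log = open(logfile, "w")
    # --interval sets the sampling period in milliseconds.
    return subprocess.Popen(
        ["tegrastats", "--interval", str(interval_ms)],
        stdout=log, stderr=subprocess.STDOUT)

if __name__ == "__main__":
    proc = start_tegrastats()
    # ... launch the DeepStream pipeline here and let it run ...
    input("Press Enter to stop logging\n")
    proc.terminate()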
With 7 streams, NVENC and NVDEC are loaded and the program works correctly:
RAM 3609/3964MB (lfb 7x2MB) SWAP 1773/8126MB (cached 74MB) IRAM 0/252kB(lfb 252kB) CPU [81%@1479,70%@1479,74%@1479,75%@1479] EMC_FREQ 49%@1600 GR3D_FREQ 13%@921 NVENC 716 NVDEC 716 VIC_FREQ 0%@192 APE 25 PLL@45C CPU@48C PMIC@100C GPU@43C AO@53C thermal@46.5C POM_5V_IN 4415/6565 POM_5V_GPU 405/1307 POM_5V_CPU 1458/1278
With 8 streams, NVENC and NVDEC are not loaded and the program freezes:
RAM 3636/3964MB (lfb 4x4MB) SWAP 680/8126MB (cached 40MB) IRAM 0/252kB(lfb 252kB) CPU [8%@1224,8%@1224,7%@1224,6%@1224] EMC_FREQ 1%@1600 GR3D_FREQ 0%@76 VIC_FREQ 0%@192 APE 25 PLL@36.5C CPU@40C PMIC@100C GPU@36C AO@45.5C thermal@38.5C POM_5V_IN 1810/2273 POM_5V_GPU 41/87 POM_5V_CPU 246/568
Is it a code-related issue?
Are you using deepstream-app? If so, can you show us your deepstream-app config file?
deepstream-app works correctly with multiple streams and their outputs.
I changed the code in test-app-3 (Python), and the issue is: if I remove the nvosd for the 8th stream,
everything works fine, but then the 8th stream cannot have bounding boxes, etc.
Have you tried 8 separate stream outputs with deepstream-app?
Yes, it works fine with deepstream-app.
So it may be related to your modification of deepstream-test3-app. Can you show the code?
nvvidconv = [nvvidconv1, nvvidconv2, nvvidconv3, nvvidconv4, nvvidconv5, nvvidconv6, nvvidconv7]
nvosd = [nvosd1, nvosd2, nvosd3, nvosd4, nvosd5, nvosd6, nvosd7]
nvvidconv_postosd = [nvvidconv_postosd1, nvvidconv_postosd2, nvvidconv_postosd3, nvvidconv_postosd4, nvvidconv_postosd5, nvvidconv_postosd6, nvvidconv_postosd7]
encoder = [encoder1, encoder2, encoder3, encoder4, encoder5, encoder6, encoder7]
sink = [sink1, sink2, sink3, sink4, sink5, sink6, sink7]
qu = [queue1, queue2, queue3, queue4, queue5, queue6, queue7]
streammux.link(pgie)
# pgie.link(nvvidconv)
pgie.link(streamdemux)
#######################
for i in range(number_sources):
    print("demux source", i, "\n")
    srcpad1 = streamdemux.get_request_pad("src_%u" % i)
    if not srcpad1:
        sys.stderr.write(" Unable to get the src pad of streamdemux \n")
    sinkpad1 = nvvidconv[i].get_static_pad("sink")
    if not sinkpad1:
        sys.stderr.write(" Unable to get sink pad of nvvidconv \n")
    srcpad1.link(sinkpad1)
    #######################
    nvvidconv[i].link(nvosd[i])
    nvosd[i].link(nvvidconv_postosd[i])
    nvvidconv_postosd[i].link(qu[i])
    qu[i].link(encoder[i])
    encoder[i].link(sink[i])
Everything else is the same as in test-app-3 (Python).
Can you send out the complete code?