Please provide complete information as applicable to your setup.
**• Hardware Platform (Jetson / GPU)**: RTX 3050
**• DeepStream Version**: 6.4
I am using deepstream-app with PeopleNet.
I can connect to 17 CCTV cameras successfully.
When I add one more camera, for 18 in total, I get the error below.
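For context, adding the 18th camera amounts to this kind of change in the deepstream-app config. This is only a minimal sketch with a placeholder URI and the standard deepstream-app group names; the actual iFocus_main.txt has many more groups and settings.

```
# Minimal sketch, not the actual iFocus_main.txt contents.

# New 18th camera (source groups are 0-indexed); type=4 is RTSP.
[source17]
enable=1
type=4
uri=rtsp://<camera-18-address>/stream
gpu-id=0

# Bump the muxer batch size from 17 to match the new source count.
[streammux]
batch-size=18
```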
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
tst1!
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:00:04.503105856 1350039 0x63ed0c14bcc0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/configs/conf-app/../../models/iFocus/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60
2 OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60
0:00:04.641283671 1350039 0x63ed0c14bcc0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/configs/conf-app/../../models/iFocus/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
0:00:04.645671984 1350039 0x63ed0c14bcc0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.4/samples/configs/conf-app/iFocus_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
** INFO: <bus_callback:338>: Pipeline ready
*** stack smashing detected ***: terminated
Aborted (core dumped)
ser@va:/opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/iFocus$ valgrind --leak-check=full --show-leak-kinds=all --track-origins=yes --verbose --log-file=valgrind-out.txt ./deepstream-app -c ../../../../samples/configs/conf-app/iFocus_main.txt
** WARN: <parse_dsexample:908>: Unknown key 'batch-size' for group [ds-example]
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
tst1!
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream/lib/libnvds_nvmultiobjecttracker.so
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvTrackerParams::getConfigRoot()] !!![WARNING] File doesn't exist. Will go ahead with default values
[NvMultiObjectTracker] Initialized
0:03:29.176800697 1363565 0x1406c6920 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-6.4/samples/configs/conf-app/../../models/iFocus/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
WARNING: [TRT]: The getMaxBatchSize() function should not be used with an engine built from a network created with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag. This function will always return 1.
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:610 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1:0 3x544x960
1 OUTPUT kFLOAT output_cov/Sigmoid:0 3x34x60
2 OUTPUT kFLOAT output_bbox/BiasAdd:0 12x34x60
0:03:39.833621910 1363565 0x1406c6920 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-6.4/samples/configs/conf-app/../../models/iFocus/resnet34_peoplenet_int8.onnx_b1_gpu0_int8.engine
0:03:40.122489031 1363565 0x1406c6920 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-6.4/samples/configs/conf-app/iFocus_primary.txt sucessfully
Runtime commands:
h: Print this help
q: Quit
p: Pause
r: Resume
Segmentation fault (core dumped)
The following is the GPU memory usage for 17 CCTV cameras. 17 cameras run very stably; only after changing to 18 do I have this issue.
With 17 cameras, GPU memory usage is only ~1 GB, and the RTX 3050 has 8 GB.
Have you made any changes to the deepstream-app, especially to perf_cb and perf_measurement_callback? There seems to be an overflow here:
==1363565== 1 errors in context 1 of 777:
==1363565== Invalid write of size 4
==1363565== at 0x1293B8: perf_cb (in /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/iFocus/deepstream-app)
==1363565== by 0x14BE20: perf_measurement_callback (in /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/iFocus/deepstream-app)
==1363565== by 0x51552C7: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.7200.4)
==1363565== by 0x5154C43: g_main_context_dispatch (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.7200.4)
==1363565== by 0x51AA2B7: ??? (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.7200.4)
==1363565== by 0x51542B2: g_main_loop_run (in /usr/lib/x86_64-linux-gnu/libglib-2.0.so.0.7200.4)
==1363565== by 0x12BD90: main (in /opt/nvidia/deepstream/deepstream-6.4/sources/apps/sample_apps/iFocus/deepstream-app)
==1363565== Address 0x0 is not stack'd, malloc'd or (recently) free'd
Yes, I made a slight modification in perf_cb (gpointer context, NvDsAppPerfStruct * str). I'll check it, thanks. Is the "stack smashing detected" because of that? It can't be, right?
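A typical pattern that produces both the Valgrind "Invalid write" and "stack smashing detected" is copying str->fps[] into buffers that are still sized for the old number of streams. The sketch below is hypothetical and not the actual iFocus code (MY_MAX_STREAMS and the buffer names are assumptions); the stock sample app sizes its FPS arrays for the maximum supported number of sources, and the simple guard is to clamp the loop to the buffer capacity.

```c
#include <glib.h>
#include "deepstream_app.h"   /* declares NvDsAppPerfStruct in the deepstream-app sources */

/* Hypothetical sketch of a modified perf_cb, NOT the actual iFocus code. */
#define MY_MAX_STREAMS 17     /* assumption: buffers still sized for the old 17-camera setup */

static gdouble fps[MY_MAX_STREAMS];
static gdouble fps_avg[MY_MAX_STREAMS];

static void
perf_cb (gpointer context, NvDsAppPerfStruct * str)
{
  guint numf = str->num_instances;  /* becomes 18 once the extra camera is added */
  guint i;

  /* Without this clamp, the loop below writes past the buffers as soon as
   * numf exceeds MY_MAX_STREAMS -- the kind of out-of-bounds write Valgrind
   * flags, and one that can trip the stack protector ("stack smashing
   * detected") if the buffers are local to the function. */
  if (numf > MY_MAX_STREAMS)
    numf = MY_MAX_STREAMS;

  for (i = 0; i < numf; i++) {
    fps[i] = str->fps[i];
    fps_avg[i] = str->fps_avg[i];
  }
}
```

Note that the Valgrind trace reports the faulting address as 0x0, so the modified code may also be writing through an uninitialized or NULL pointer for the 18th stream; it is worth checking for both.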
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.