Hi, I wanted to know if the deepstream-app application allows to put two detectors in the pipeline (detector1-trafficam → detector2-peoplenet), or will I have to rely on the back-to-back-detector?
thanks
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e., for which plugin or which sample application, and the function description.)
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 5.0-GA
• JetPack Version (valid for Jetson only): —
• TensorRT Version: 7.0
• NVIDIA GPU Driver Version (valid for GPU only): 450
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? –
• Requirement details
I am using the settings below (see the sketch after the attachment list), but when I run deepstream-app -c deepstream_source1.txt, the output never mentions PeopleNet; it is as if that detector is omitted.
config_infer_primary_peoplenet.txt (759 Bytes)
config_infer_primary_trafficcamnet.txt (891 Bytes)
deepstream_source1.txt (2.3 KB)
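For comparison, chaining two detectors in deepstream-app is normally expressed with a [primary-gie] section plus a [secondary-gie0] section in the app config. A minimal sketch of those sections, assuming the attached file names (the enable and ID values here are illustrative, not taken from the attachments):

[primary-gie]
enable=1
# TrafficCamNet runs first, on the full frame.
gie-unique-id=1
config-file=config_infer_primary_trafficcamnet.txt

[secondary-gie0]
enable=1
# PeopleNet runs second; operate-on-gie-id ties it to the primary above.
gie-unique-id=2
operate-on-gie-id=1
config-file=config_infer_primary_peoplenet.txt

If no [secondary-gie0] section is present (or it has enable=0), deepstream-app only loads the primary model, which would be consistent with the log below, where only the TrafficCamNet engine is loaded.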
root@ca57a2ba58a1:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models# deepstream-app -c deepstream_app_source1_trafficcamnet.txt
Warning: 'input-dims' parameter has been deprecated. Use 'infer-dims' instead.
gstnvtracker: Loading low-level lib at /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_mot_klt.so
gstnvtracker: Optional NvMOT_RemoveStreams not implemented
gstnvtracker: Batch processing is OFF
gstnvtracker: Past frame output is OFF
WARNING: ../nvdsinfer/nvdsinfer_func_utils.cpp:36 [TRT]: Current optimization profile is: 0. Please ensure there are no enqueued operations pending in this context prior to switching profiles
0:00:01.301849272 7031 0x55ba0107f810 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
INFO: ../nvdsinfer/nvdsinfer_model_builder.cpp:685 [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x544x960
1 OUTPUT kFLOAT output_bbox/BiasAdd 16x34x60
2 OUTPUT kFLOAT output_cov/Sigmoid 4x34x60
0:00:01.301932064 7031 0x55ba0107f810 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/../../models/tlt_pretrained_models/trafficcamnet/resnet18_trafficcamnet_pruned.etlt_b1_gpu0_int8.engine
0:00:01.302691325 7031 0x55ba0107f810 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/opt/nvidia/deepstream/deepstream-5.0/samples/configs/tlt_pretrained_models/config_infer_primary_trafficcamnet.txt sucessfully
Thanks
Can you set process-mode=2 for PeopleNet?
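A minimal sketch of what that change might look like in the PeopleNet nvinfer config, assuming PeopleNet should run as a secondary engine on the TrafficCamNet detections (the ID values are illustrative):

[property]
# 1 = primary (runs on the full frame), 2 = secondary (runs on objects
# detected by another GIE).
process-mode=2
# Illustrative IDs: this engine's unique ID, and the ID of the primary
# engine whose detections it should operate on.
gie-unique-id=2
operate-on-gie-id=1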
There has been no update from you for a while, so we assume this is no longer an issue.
Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Any update? Have you tried setting process-mode=2 for PeopleNet?