- deepstream-app version 6.1.0
- DeepStreamSDK 6.1.0
- CUDA Driver Version: 11.4
- CUDA Runtime Version: 11.0
- TensorRT Version: 8.2
- cuDNN Version: 8.4
- libNVWarp360 Version: 2.0.1d3
- Device: A6000
With the multi-model feature of deepstream-app, can the data flow continue to the second model even when it is not gated by the first model's classification results?
What I actually want is to run two independent engine models serially, each performing its own detection.
Please refer to /opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/source4_1080p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt. The first model does detection; the subsequent models do color, make, and type classification respectively.
Yes, I have tested this case, but it performs secondary classification on the primary detections. For example, a vehicle is detected first, and then the second model classifies the color of that vehicle.
What I want is for the second model to be independent of the first: for example, the first model detects vehicles and the second model detects persons.
Please refer to GitHub - NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream. Both the first and second models are detectors, but the second operates on the first's inference results. You can modify this: set the second model's operate-on-gie-id to -1.
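For illustration, a minimal sketch of what the deepstream-app configuration for two independent detectors might look like, following the advice above. The config-file names below are hypothetical placeholders, not files from the LPR sample:

```ini
# Sketch of a deepstream-app config with two independent detectors.
# The config-file paths are hypothetical placeholders.

[primary-gie]
enable=1
gie-unique-id=1
config-file=config_infer_vehicle_detector.txt

[secondary-gie0]
enable=1
gie-unique-id=2
# -1: do not operate on another GIE's objects; run independently
operate-on-gie-id=-1
config-file=config_infer_person_detector.txt
```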
Is it possible to link two models by modifying the configuration file of deepstream-app?
There is no update from you for a period, assuming this is not an issue anymore.
Hence we are closing this topic. If need further support, please open a new one.
Yes, you can modify the SGIE's configuration: use a detection model, set network-type to 0, and set process-mode to 1.
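As a sketch, the second model's nvinfer config file could then contain the properties mentioned above. The model-engine-file name here is a hypothetical placeholder:

```ini
# Sketch of the SGIE nvinfer config, per the advice above.
# Engine file name is a hypothetical placeholder.
[property]
gpu-id=0
gie-unique-id=2
# 0 = detector
network-type=0
# 1 = process full frames (primary mode) rather than objects from another GIE
process-mode=1
model-engine-file=person_detector.engine
```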
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.