Please provide complete information as applicable to your setup.
• Hardware Platform (GPU)
• DeepStream Version 5.0.1
• TensorRT Version 7.1
• NVIDIA GPU Driver Version (450)
In DeepStream, the pipeline usually consists of a primary detector and a secondary classifier. How can I add a third-level classifier? Can I add an ID directly to the secondary classifier?
For example: the primary detector performs vehicle detection, the second-level model performs license plate detection, and the third-level model performs license plate recognition. How can I add the license plate recognition stage?
Hi,
This workflow may need some customization when handling or extracting the ROI.
You can check our example of manual ROI handling:
/opt/nvidia/deepstream/deepstream-5.0/sources/gst-plugins/gst-dsexample
In general, you don’t need two detectors to capture the object you want.
For example, it is possible to train one model that detects both vehicles and license plates simultaneously.
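That said, if you keep the cascaded design, a third nvinfer stage can be chained the same way as the second one, using the operate-on-gie-id property. Here is a minimal config sketch (the file names and unique IDs below are hypothetical, not actual SDK files):

# config_infer_primary_vehicle.txt -- hypothetical PGIE: vehicle detector
[property]
gie-unique-id=1
process-mode=1

# config_infer_secondary_plate_det.txt -- hypothetical SGIE1: plate detector
[property]
gie-unique-id=2
process-mode=2
operate-on-gie-id=1

# config_infer_secondary_plate_rec.txt -- hypothetical SGIE2: plate recognition
[property]
gie-unique-id=3
process-mode=2
operate-on-gie-id=2

Each secondary stage runs only on the objects produced by the stage named in its operate-on-gie-id, which is how the levels are chained.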
Thanks.
Thanks!
How can I convert my PyTorch license plate recognition model into a model DeepStream can use? What is the specific process, and how do I add it to the DeepStream pipeline?
Please give me some advice. Thank you!
Hi,
First, you will need to convert it into the ONNX intermediate format.
Then you can run it with DeepStream by simply updating the corresponding path in the configuration file.
Here is an example that uses an ONNX-based model with DeepStream, for your reference:
https://github.com/NVIDIA-AI-IOT/deepstream_pose_estimation/blob/master/deepstream_pose_estimation_config.txt#L45
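As a rough sketch, the relevant entries in the nvinfer [property] section look like the following (the file names here are placeholders, not taken from the linked config):

[property]
...
# Path to your exported ONNX model (placeholder name)
onnx-file=lpr_model.onnx
# TensorRT engine that DeepStream builds and caches on the first run
model-engine-file=lpr_model.onnx_b1_gpu0_fp16.engine
...

On the first run DeepStream converts the ONNX file into a TensorRT engine, which can take a few minutes; on later runs the cached engine file is loaded directly.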
Thanks.
Thank you for your reply!
I know how to convert the model to ONNX or TensorRT, and I know how to configure the DeepStream config file, but I don't know how to write the model's output-parsing .cpp file.
I mean something like nvdsinfer_yolo_engine.cpp and nvdsparsebbox_Yolo.cpp in /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo.
Can you give me the detailed steps for writing the parsing .cpp file, or a routine to follow?
Hi,
The output parser depends on the semantic meaning of your detector's output layers, and it needs to be implemented in C++.
As you already know, you can find an example in the objectDetector_Yolo folder.
For example, in config_infer_primary_yoloV3.txt:
[property]
...
parse-bbox-func-name=NvDsInferParseCustomYoloV3
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
So DeepStream will look up the NvDsInferParseCustomYoloV3 function in libnvdsinfer_custom_impl_Yolo.so to parse the output.
You can find the NvDsInferParseCustomYoloV3 function in nvdsparsebbox_Yolo.cpp.
extern "C" bool NvDsInferParseCustomYoloV3(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    ...
}
In general, this function's input is the output layers of the model (defined in the configuration file),
and you will need to manipulate the tensor data to fill objectList (a vector of NvDsInferObjectDetectionInfo).
You can find the detailed structure of NvDsInferObjectDetectionInfo in the document below:
https://docs.nvidia.com/metropolis/deepstream/sdk-api/Gst_Infer/NvDsInferObjectDetectionInfo.html
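To make this concrete, below is a minimal, hypothetical parser sketch. It assumes a model whose single output layer is a flat float tensor of numDetections x 6 values laid out as (x, y, w, h, confidence, classId); your real output layout will differ, so treat this only as a template for filling objectList:

#include <vector>
#include "nvdsinfer_custom_impl.h"

// Hypothetical parser: assumes one output layer shaped [numDetections x 6]
// with each row laid out as x, y, w, h, confidence, classId.
extern "C" bool NvDsInferParseCustomMyModel(
    std::vector<NvDsInferLayerInfo> const& outputLayersInfo,
    NvDsInferNetworkInfo const& networkInfo,
    NvDsInferParseDetectionParams const& detectionParams,
    std::vector<NvDsInferParseObjectInfo>& objectList)
{
    (void) networkInfo; // not needed for this simple layout

    if (outputLayersInfo.empty())
        return false;

    const NvDsInferLayerInfo& layer = outputLayersInfo[0];
    const float* data = static_cast<const float*>(layer.buffer);

    // Assuming the first dimension is the number of detections.
    const unsigned int numDetections = layer.inferDims.d[0];

    for (unsigned int i = 0; i < numDetections; i++) {
        const float* det = data + i * 6;
        float confidence = det[4];
        unsigned int classId = static_cast<unsigned int>(det[5]);

        // Skip detections below the per-class threshold from the config file.
        if (classId >= detectionParams.perClassPreclusterThreshold.size() ||
            confidence < detectionParams.perClassPreclusterThreshold[classId])
            continue;

        NvDsInferParseObjectInfo obj;
        obj.classId = classId;
        obj.left = det[0];
        obj.top = det[1];
        obj.width = det[2];
        obj.height = det[3];
        obj.detectionConfidence = confidence;
        objectList.push_back(obj);
    }
    return true;
}

// Lets DeepStream validate the function prototype at library load time.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMyModel);

Compile this into a shared library (the YOLO sample's Makefile is a good starting point) and point parse-bbox-func-name and custom-lib-path in your config file at it, the same way the YOLO sample does.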
Thanks.
Thank you very much!
I have learned how to write the post-processing program.
Now the problem is how to debug the post-processor.
How do you usually debug C++ programs?