These are the attached config files we are using to run the custom model in deepstream-app, but we are getting errors (error log attached as DD_error2.txt).
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
Currently, we cannot share this as the project is confidential.
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
NA
Yes, I tried to load the engine file with trtexec and it failed:
[03/07/2024-14:08:13] [I] Note: The application clock rates do not reflect the actual clock rates that the GPU is currently running at.
[03/07/2024-14:08:13] [I]
[03/07/2024-14:08:13] [I] TensorRT version: 8.6.2
[03/07/2024-14:08:13] [I] Loading standard plugins
[03/07/2024-14:08:13] [E] Error opening engine file: best_emodle_engine_pytorch.trt
[03/07/2024-14:08:13] [E] Failed to create engine from model or file.
[03/07/2024-14:08:13] [E] Engine set up failed
&&&& FAILED TensorRT.trtexec [TensorRT v8602] # /usr/src/tensorrt/bin/trtexec --loadEngine=best_emodle_engine_pytorch.trt
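For what it is worth, trtexec's "Error opening engine file" only means the file at that path could not be opened. A minimal check-and-rebuild sketch is below; the ONNX filename is a placeholder for your own export:

ls -lh best_emodle_engine_pytorch.trt   # confirm the engine file exists where --loadEngine points
/usr/src/tensorrt/bin/trtexec --onnx=best_model.onnx --saveEngine=best_emodle_engine_pytorch.trt   # rebuild the serialized engine from the ONNX export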
You can keep .trt as the extension, or .engine, or anything else.
It is basically a serialized engine, which is deserialized before inference begins; the extension has nothing to do with it.
It is just that this time your model was built successfully.
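To make "serialized engine" concrete, here is a minimal sketch of the deserialization step using the TensorRT C++ API (filename taken from the log above; error handling omitted):

#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

// Minimal logger required by the TensorRT runtime.
class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char* msg) noexcept override {
        if (severity <= Severity::kWARNING) std::cout << msg << std::endl;
    }
} gLogger;

int main() {
    // The extension (.trt, .engine, ...) is irrelevant; only the serialized bytes matter.
    std::ifstream file("best_emodle_engine_pytorch.trt", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime* runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine* engine = runtime->deserializeCudaEngine(blob.data(), blob.size());
    std::cout << (engine ? "Engine deserialized" : "Failed to deserialize") << std::endl;
    return 0;
}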
Is there a reason you are using sudo while creating the engine?
What is the error this time?
Last time, as I said before, your DeepStream simply could not find the engine, so it started building the engine from the ONNX file, and that is where it failed.
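In config terms: when the [property] group points at a valid prebuilt engine, nvinfer deserializes it directly, and it only falls back to building from the ONNX model when that file cannot be loaded. A sketch with placeholder paths:

[property]
model-engine-file=/path/to/best_emodle_engine_pytorch.trt   # used first: the prebuilt, serialized engine
onnx-file=/path/to/best_model.onnx                          # fallback: nvinfer rebuilds the engine from this if the file above cannot be loaded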
For every custom model, you need to write your own custom post-processing function.
You can search it on this forum and you’ll find some helpful links.
You can also search for yolo deepstream to see how custom post-processing is written for custom models, and then modify it for your own model (a minimal sketch of such a parser follows).
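The function prototype comes from nvdsinfer_custom_impl.h; the output-layer layout assumed here (one row of left, top, width, height, score, class per detection) is purely illustrative and must be adapted to your model's actual outputs:

#include "nvdsinfer_custom_impl.h"
#include <vector>

extern "C" bool NvDsInferParseCustomMyModel(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    if (outputLayersInfo.empty())
        return false;

    // Assumed layout: float output of shape [numDetections, 6]
    // with (left, top, width, height, score, class) per row.
    const NvDsInferLayerInfo &layer = outputLayersInfo[0];
    const float *data = static_cast<const float *>(layer.buffer);
    const unsigned int numDets = layer.inferDims.d[0];

    for (unsigned int i = 0; i < numDets; ++i) {
        const float *row = data + i * 6;
        float score = row[4];
        unsigned int classId = static_cast<unsigned int>(row[5]);
        if (classId >= detectionParams.numClassesConfigured ||
            score < detectionParams.perClassPreclusterThreshold[classId])
            continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = classId;
        obj.left = row[0];
        obj.top = row[1];
        obj.width = row[2];
        obj.height = row[3];
        obj.detectionConfidence = score;
        objectList.push_back(obj);
    }
    return true;
}

// Verifies that the function matches the prototype nvinfer expects.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMyModel);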
@rajupadhyay59 Thanks for your help, things are working for me now.
I came to know that I need to add the following lines to my DD_model_basic.txt file under [property] to handle custom bounding-box parsing for my model.
At first, though, it did not work when I added those lines, because the .so file and the corresponding function did not exist.
After some reading I came across the following directory:
t-tech@ubuntu:/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser$ ls
Makefile nvdsinfer_custombboxparser.cpp nvdsinfer_customclassifierparser.cpp nvdsinfer_customsegmentationparser.cpp README
After reading the README file I was able to build libnvds_infercustomparser.so in the same directory:
t-tech@ubuntu:/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser$ ls
libnvds_infercustomparser.so Makefile nvdsinfer_custombboxparser.cpp nvdsinfer_customclassifierparser.cpp nvdsinfer_customsegmentationparser.cpp README
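For anyone following the same steps, the build there is just make with the CUDA version exported first (12.2 is an assumption for DeepStream 6.4; match it to your installed toolkit):

cd /opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser
export CUDA_VER=12.2   # assumption: set this to your installed CUDA toolkit version
make                   # produces libnvds_infercustomparser.so in this directory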
Then I added the following lines to DD_model_basic.txt:
network-type=1
# ... other properties ...
parse-bbox-func-name=NvDsInferParseCustomTfSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-6.4/sources/libs/nvdsinfer_customparser/libnvds_infercustomparser.so
and now things work. But I have some questions about how this actually works.
My model is mobilenet_v3_large, which only does classification (there is no SSD head), so how do network-type=1 and NvDsInferParseCustomTfSSD help to detect bounding boxes?
Note that network-type and network-mode are different keys: network-type=1 selects a classifier, while network-mode=1 means that your engine was built using INT8 precision.
You can go through TensorRT's documentation for more information on precision modes.
If your model is a classifier, then you will use parse-classifier-func-name instead of parse-bbox-func-name.
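For a pure classifier like mobilenet_v3_large, the relevant [property] keys would look roughly like this; the function name and library path below are placeholders for your own classifier parser:

[property]
network-type=1                  # 0 = detector, 1 = classifier, 2 = segmentation
network-mode=1                  # precision: 0 = FP32, 1 = INT8, 2 = FP16
parse-classifier-func-name=MyClassifierParser   # placeholder: the function exported by your library
custom-lib-path=/path/to/libnvds_infercustomparser.so
classifier-threshold=0.5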
NvDsInferParseCustomTfSSD does not help with the detection.
Your nvinfer element does the inference.
NvDsInferParseCustomTfSSD is basically a post-processing function: it takes the raw results from nvinfer and converts them into the expected bounding-box format. Different models use different formats, for example YOLO uses center x, center y, width and height, while Detectron2 uses left, top, right, bottom.
You should refer to the DeepStream documentation for more info.
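As a concrete example of that format handling, converting a YOLO-style (center x, center y, width, height) box into the left/top/width/height fields that DeepStream's NvDsInferObjectDetectionInfo uses looks roughly like this (a sketch, not code from any particular parser):

#include "nvdsinfer_custom_impl.h"

// Sketch: map a YOLO-style box (cx, cy, w, h) to the left/top/width/height
// fields used by DeepStream's object metadata.
static NvDsInferObjectDetectionInfo toDetectionInfo(
    float cx, float cy, float w, float h,
    unsigned int classId, float score)
{
    NvDsInferObjectDetectionInfo obj{};
    obj.classId = classId;
    obj.detectionConfidence = score;
    obj.left = cx - w / 2.0f;   // center-based -> top-left corner
    obj.top = cy - h / 2.0f;
    obj.width = w;
    obj.height = h;
    return obj;
}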