• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 7.0
• TensorRT Version: 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only): 555.42.02

We are getting an error while loading an engine file when running the following pipeline:
Please suggest where we are going wrong. Also, please suggest whether there are other ways of running the inference pipeline (any other element).
Refer to this image for error details:
You are using different TensorRT versions for building the engine and for deserializing it, or you are building and deserializing the engine on different devices.
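One way to avoid the mismatch (a sketch, assuming the standard container layout where trtexec lives under /usr/src/tensorrt/bin) is to build the engine with the trtexec on the same device and TensorRT stack that will run the pipeline, e.g.:

/usr/src/tensorrt/bin/trtexec --onnx=model.onnx --saveEngine=model.engine

Here model.onnx and model.engine are placeholder names; the point is that the TensorRT version that serializes the engine must match the one that deserializes it.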
Yes, we are building the engine file in an NGC Docker container using the following command:
trtexec --onnx=processed_ssd_characterization_Jit_traced_cuda.onnx --optShapes=input:1x3x300x300 --saveEngine=processed_ssd_characterization_Jit_traced_cuda.engine
And the container has the following TensorRT and CUDA versions:
Thanks for the reply; you were correct about the engine file. I regenerated the engine file on the same device where I am running the pipeline, and that issue is resolved.
Now the issue is with parse-bbox-func-name. Do we need to build our own custom library (.so file) for our own AI model? We are not sure whether we can use the custom library files that are already provided.
For output-blob-names and infer-dims, we have matched them correctly. Example:
I think so. If your model is a detector, you can refer to /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo as a template for your own custom library.
Usually the output layer's tensor is parsed into an object list; see the sketch below.
If your model is based on SSD, there is some reference code (sources/objectDetector_SSD) in the legacy versions; download DS 6.2 from the NVIDIA Developer site.
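For orientation, here is a minimal sketch of such a custom parser. The per-detection output layout assumed below (one output layer shaped [numDets x 7], each row being [image_id, class_id, confidence, x1, y1, x2, y2] in normalized coordinates) and the function name NvDsInferParseCustomMySSD are illustrative assumptions; adapt them to your model's real output tensor.

#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Sketch of a custom bbox parser for nvinfer.
 * Assumption: one output layer of shape [numDets x 7] with each row as
 * [image_id, class_id, confidence, x1, y1, x2, y2] in normalized coords. */
extern "C" bool NvDsInferParseCustomMySSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    const NvDsInferLayerInfo &layer = outputLayersInfo[0];
    const float *data = static_cast<const float *>(layer.buffer);
    const int numDets = layer.inferDims.d[0];

    for (int i = 0; i < numDets; ++i) {
        const float *det = data + i * 7;
        unsigned int classId = static_cast<unsigned int>(det[1]);
        float confidence = det[2];
        /* Skip detections below the per-class pre-cluster threshold. */
        if (classId >= detectionParams.numClassesConfigured ||
            confidence < detectionParams.perClassPreclusterThreshold[classId])
            continue;

        NvDsInferObjectDetectionInfo obj;
        obj.classId = classId;
        obj.detectionConfidence = confidence;
        /* Scale normalized coordinates to the network resolution. */
        obj.left   = det[3] * networkInfo.width;
        obj.top    = det[4] * networkInfo.height;
        obj.width  = (det[5] - det[3]) * networkInfo.width;
        obj.height = (det[6] - det[4]) * networkInfo.height;
        objectList.push_back(obj);
    }
    return true;
}

/* Compile-time check that the function matches the expected prototype. */
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMySSD);

Compile it into a .so (the Makefile in nvdsinfer_custom_impl_Yolo is a usable template; it typically expects CUDA_VER to be set in the environment) and point custom-lib-path at the result.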
Thank you for your inputs. We are looking into the reference code.
To be more precise, we have our own postprocessing step that handles the NMS logic. How can we integrate that logic into 'NvDsPostProcessParseCustomSSD'?
I see that these are specific to the standard SSD model and not to our customized model.
Please let us know how we can proceed in this custom postprocessing situation.
Where should we add our custom postprocess code/function? Here is the nvsd_postprocess.yml file:

property:
  gpu-id: 0                    # Set the GPU id
  process-mode: 1              # Set the mode as primary inference
  num-detected-classes: 1      # Change according to the model's output
  gie-unique-id: 1             # This should match the one set in the inference config
  ## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
  cluster-mode: 4              # Set the appropriate clustering algorithm
  network-type: 0              # Set the network type as detector
  labelfile-path: labels.txt   # Set the path of labels relative to this config file
  parse-bbox-func-name: NvDsPostProcessParseCustomSSD  # Set custom parsing function

class-attrs-all:               # Set as done in the original infer configuration
  nms-iou-threshold: 0.5
  pre-cluster-threshold: 0.7
And this is the nvinfer_config.txt file we are using: nvinfer_config.txt (1.8 KB)
1. What is your model input? Is it a tensor or an image? If it is an image, I think the preprocess element is unnecessary. If the model input is a tensor, there should be no problem.
2. The postprocess element is unnecessary. The parse-bbox-func-name and custom-lib-path properties in the nvinfer configuration file will parse the output layer and put the result into std::vector<NvDsInferObjectDetectionInfo> &objectList.
There should be a definition similar to the following in your code: CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsPostProcessParseCustomSSD);
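Since you have your own NMS, a common pattern (a sketch, assuming you set cluster-mode: 4 so DeepStream does no further clustering; computeIoU and customNms are hypothetical helpers, not DeepStream APIs) is to run the NMS at the end of the parse function, before it returns objectList:

#include <algorithm>
#include <vector>
#include "nvdsinfer_custom_impl.h"

/* Intersection-over-union of two detections. */
static float computeIoU(const NvDsInferObjectDetectionInfo &a,
                        const NvDsInferObjectDetectionInfo &b)
{
    float x1 = std::max(a.left, b.left);
    float y1 = std::max(a.top, b.top);
    float x2 = std::min(a.left + a.width, b.left + b.width);
    float y2 = std::min(a.top + a.height, b.top + b.height);
    float inter = std::max(0.f, x2 - x1) * std::max(0.f, y2 - y1);
    float uni = a.width * a.height + b.width * b.height - inter;
    return uni > 0.f ? inter / uni : 0.f;
}

/* Greedy per-class NMS: keep the highest-confidence boxes, suppress
 * overlapping boxes of the same class, and append survivors to objectList. */
static void customNms(std::vector<NvDsInferObjectDetectionInfo> &candidates,
                      float iouThreshold,
                      std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    std::sort(candidates.begin(), candidates.end(),
              [](const NvDsInferObjectDetectionInfo &a,
                 const NvDsInferObjectDetectionInfo &b) {
                  return a.detectionConfidence > b.detectionConfidence;
              });
    std::vector<bool> suppressed(candidates.size(), false);
    for (size_t i = 0; i < candidates.size(); ++i) {
        if (suppressed[i]) continue;
        objectList.push_back(candidates[i]);
        for (size_t j = i + 1; j < candidates.size(); ++j) {
            if (!suppressed[j] &&
                candidates[i].classId == candidates[j].classId &&
                computeIoU(candidates[i], candidates[j]) > iouThreshold)
                suppressed[j] = true;
        }
    }
}

In your parse function, collect raw detections into a local candidates vector and call customNms(candidates, iouThreshold, objectList) as the last step instead of pushing candidates into objectList directly.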
Is the NvDsPostProcessParseCustomSSD function executed? You can add a log to check it.
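For example, something like this at the top of the parse function:

/* Temporary debug log to confirm the custom parser is invoked
 * (assumes <iostream> is included). */
std::cout << "NvDsPostProcessParseCustomSSD called, output layers: "
          << outputLayersInfo.size() << std::endl;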
I think the postprocessing doesn't affect accuracy; it just parses the bounding boxes.
This function should be implemented by you; it is the parse-bbox-func-name of nvinfer.
This is why you need to add parse-bbox-func-name and custom-lib-path to the nvinfer configuration file.
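For example (the function name and library path below are placeholders for your own build):

[property]
# ... other nvinfer properties ...
parse-bbox-func-name=NvDsInferParseCustomMySSD
custom-lib-path=/path/to/libnvds_custom_parser.so
cluster-mode=4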
nvinfer is open source; you can refer to the source code at /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer and /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.