"Error Code 1: Serialization (Serialization assertion plan->header.magicTag == rt::kPLAN_MAGIC_TAG failed.)"

• Hardware Platform (Jetson / GPU) : GPU
• DeepStream Version : 7.0
• TensorRT Version : 8.6.1.6
• NVIDIA GPU Driver Version (valid for GPU only) : 555.42.02
• Getting the error while loading an engine file when running the following pipeline:

gst-launch-1.0 filesrc location=/home/salim/Desktop/Nvidia_Pipeline_test/demo.mp4 ! qtdemux ! \
h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! \
nvdspreprocess config-file=/home/salim/Desktop/Nvidia_Pipeline_test/config_preprocess.txt ! \
nvinfer config-file-path=/home/salim/Desktop/Nvidia_Pipeline_test/nvinfer_config.txt \
input-tensor-meta=1 batch-size=1 ! nvmultistreamtiler width=1920 height=1080 ! nvvideoconvert ! nvdsosd ! nveglglessink

Following is the nvinfer_config.txt file:
nvinfer_config.txt (1.3 KB)

Please suggest where I am going wrong. Also, please suggest whether there are other ways of running the inference pipeline (any other element).
Refer to this image for error details:

Is the error related to the engine file we are using or is it related to the config.txt file ?

You are using different TensorRT versions for building the engine and for deserializing it, or you are building the engine on a different device from the one where it is deserialized.
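
If it helps to confirm the mismatch, a small (purely illustrative) program like the one below prints the TensorRT library version that a given environment actually links against; run it both in the container and on the host. getInferLibVersion() is part of the TensorRT runtime API, and the digit decoding below assumes the TensorRT 8.x version-number scheme.

// trt_version_check.cpp - print the linked TensorRT library version.
// Build command is an assumption: g++ trt_version_check.cpp -o trt_version_check -lnvinfer
#include <cstdio>
#include <NvInferRuntime.h>   // declares getInferLibVersion()

int main()
{
    // Returns e.g. 8601 for TensorRT 8.6.1 (major*1000 + minor*100 + patch).
    int32_t v = getInferLibVersion();
    std::printf("Linked TensorRT version: %d.%d.%d\n",
                v / 1000, (v / 100) % 10, v % 100);
    return 0;
}

If the number printed in the NGC container differs from the one on the host, the engine built in the container cannot be deserialized on the host.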

Yes, we are building the engine file in an NGC Docker container using the following command:
trtexec --onnx=processed_ssd_characterization_Jit_traced_cuda.onnx --optShapes=input:1x3x300x300 --saveEngine=processed_ssd_characterization_Jit_traced_cuda.engine

And the Docker container has the following TensorRT and CUDA versions:


And on the local host:

Can you also please check the infer config file? Are all the key values correct for the inference to run?

The *.engine file is hardware-specific; you need to generate it on the device where it will be deployed.

Some configuration items, such as output-blob-names, infer-dims, and parse-bbox-func-name, are model-specific, so I can't confirm whether they are correct.

Thanks for the reply, you were correct regarding the engine file. I generated the engine file on the same device where I am running the pipeline, and that issue is resolved.

Now the issue is with parse-bbox-func-name. Do we need to build our own custom library (.so) file for our own AI model? We are not sure whether we can use the custom library files that are already provided.
For output-blob-names and infer-dims we have matched them correctly. Example:

output-blob-names=OUTPUT__LABEL;OUTPUT__LOC
force-implicit-batch-dim=1
parse-bbox-func-name=NvDsPostProcessParseCustomSSD
custom-lib-path=/opt/nvidia/deepstream/deepstream-7.0/lib/libnvds_infercustomparser.so

Please suggest how we should set parse-bbox-func-name and custom-lib-path.

Please check this error:

Is your model a detector? If yes, network-type=0 needs to be added to the configuration file.

network-type (integer):

0: Detector
1: Classifier
2: Segmentation
3: Instance Segmentation

I think so. If your model is a detector, you can refer to /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo/nvdsinfer_custom_impl_Yolo as a reference for your own custom library.

Usually the output layer’s tensor is parsed into an object list.
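
For orientation, here is a minimal sketch of such a parse function, assuming the standard custom-parser interface from nvdsinfer_custom_impl.h. The function name NvDsInferParseCustomMySSD is a placeholder, the layer names are taken from the output-blob-names quoted earlier in this thread, and the single pushed box only illustrates which fields must be filled in.

// Sketch of a custom bbox parser for nvinfer (not a drop-in implementation).
#include <cstring>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomMySSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // Locate the model's output layers by name.
    const NvDsInferLayerInfo *locLayer = nullptr;
    const NvDsInferLayerInfo *labelLayer = nullptr;
    for (const auto &layer : outputLayersInfo) {
        if (!strcmp(layer.layerName, "OUTPUT__LOC"))
            locLayer = &layer;
        else if (!strcmp(layer.layerName, "OUTPUT__LABEL"))
            labelLayer = &layer;
    }
    if (!locLayer || !labelLayer)
        return false;

    // Model-specific decoding goes here: read the raw tensors from
    // locLayer->buffer / labelLayer->buffer, convert each detection to
    // pixel coordinates, and append it to objectList. detectionParams
    // carries the per-class thresholds from the config if you need them.
    // The single box below is only a placeholder showing the fields to set.
    NvDsInferObjectDetectionInfo obj{};
    obj.classId = 0;                   // placeholder class id
    obj.detectionConfidence = 0.9f;    // placeholder score
    obj.left = 0.0f;
    obj.top = 0.0f;
    obj.width = networkInfo.width;     // placeholder full-frame box
    obj.height = networkInfo.height;
    objectList.push_back(obj);
    return true;
}

// Lets nvinfer verify the function signature when loading the library.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMySSD);

The library is then compiled into a .so and referenced from the nvinfer configuration through parse-bbox-func-name and custom-lib-path.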

Yes, our model is an SSD-based detector, and we have already added network-type=0.

If your model is based on SSD, there is reference code (sources/objectDetector_SSD) in the legacy version; download DS-6.2 from the NVIDIA Developer site.

You can migrate it to DS-7.0.

Hi @junshengy,

Thank you for your inputs. We are checking the reference code.

To be more precise, we have our own postprocessing step to handle the NMS logic. How can we integrate that logic into NvDsPostProcessParseCustomSSD? I see these parsers are specific to a particular SSD model, not to our customized model.

Please let us know how we can proceed with a custom postprocessing setup.

This is not a conflict. Add cluster-mode=4 to your configuration file, then do your custom postprocessing to filter the bboxes.

You can add the NMS logic in your custom postprocess function, as sketched after the link below.

https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_plugin_gst-nvinfer.html#clustering-algorithms-supported-by-nvinfer
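
As a rough illustration, a greedy per-class NMS helper like the one below could be called at the end of the custom parse function once cluster-mode=4 disables nvinfer's own clustering; the helper names and the IoU threshold are only example values, not part of the SDK.

// Minimal greedy NMS over the parsed detections (illustrative only).
#include <algorithm>
#include <vector>
#include "nvdsinfer_custom_impl.h"

static float iou(const NvDsInferObjectDetectionInfo &a,
                 const NvDsInferObjectDetectionInfo &b)
{
    float x1 = std::max(a.left, b.left);
    float y1 = std::max(a.top, b.top);
    float x2 = std::min(a.left + a.width, b.left + b.width);
    float y2 = std::min(a.top + a.height, b.top + b.height);
    float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
    float uni = a.width * a.height + b.width * b.height - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

static void nms(std::vector<NvDsInferObjectDetectionInfo> &objects,
                float iouThreshold)
{
    // Sort by confidence and keep a box only if it does not overlap a kept
    // box of the same class above the threshold.
    std::sort(objects.begin(), objects.end(),
              [](const NvDsInferObjectDetectionInfo &a,
                 const NvDsInferObjectDetectionInfo &b) {
                  return a.detectionConfidence > b.detectionConfidence;
              });
    std::vector<NvDsInferObjectDetectionInfo> kept;
    for (const auto &obj : objects) {
        bool keep = true;
        for (const auto &k : kept) {
            if (obj.classId == k.classId && iou(obj, k) > iouThreshold) {
                keep = false;
                break;
            }
        }
        if (keep)
            kept.push_back(obj);
    }
    objects.swap(kept);
}

A call such as nms(objectList, 0.5f) just before returning from the parse function would then take the place of the clustering that nvinfer no longer performs.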

Hi @junshengy, thanks for the reply.

We are able to run the pipeline as shown in the image below, but we are getting the following error:

Where should we add our custom postprocess code/function? Here is the nvsd_postprocess.yml file:
property:
  gpu-id: 0                    # set the GPU id
  process-mode: 1              # set the mode as primary inference
  num-detected-classes: 1      # change according to the model's output
  gie-unique-id: 1             # this should match the one set in the inference config
  ## 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS hybrid, 4=None (no clustering)
  cluster-mode: 4              # set the appropriate clustering algorithm
  network-type: 0              # set the network type as detector
  labelfile-path: labels.txt   # set the path of labels relative to this config file
  parse-bbox-func-name: NvDsPostProcessParseCustomSSD  # set the custom parsing function

class-attrs-all:               # set as done in the original infer configuration
  nms-iou-threshold: 0.5
  pre-cluster-threshold: 0.7

And this is the nvinfer_config.txt file we are using:
nvinfer_config.txt (1.8 KB)

1. What is your model input? Is it a tensor or an image? If it is an image, I think the preprocess element is unnecessary. If the model input is a tensor, there should be no problem.
2. The postprocess element is unnecessary. The parse-bbox-func-name and custom-lib-path properties in the nvinfer configuration file will parse the output layer and put the result into std::vector<NvDsInferObjectDetectionInfo> &objectList.

There should be a definition similar to the following in your code.
CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsPostProcessParseCustomSSD);

Is the NvDsPostProcessParseCustomSSD function executed? You can add a log to check it.

Our model will not give good accuracy results without the preprocess and postprocess steps.

Where and how do we add the log for NvDsPostProcessParseCustomSSD?

I think the postprocess step doesn't affect accuracy; it just parses the bboxes.

This function should be implemented by you; it is the parse-bbox-func-name of nvinfer.
This is why you need to add parse-bbox-func-name and custom-lib-path to the nvinfer configuration file.

nvinfer is open source; you can refer to the source code at /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer and /opt/nvidia/deepstream/deepstream/sources/libs/nvdsinfer.
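
To make the logging suggestion concrete: the log simply goes at the top of your own parse function body, for example as below (reusing the placeholder function name from the earlier sketch; the name must match parse-bbox-func-name in your nvinfer config).

// Entry-point log to verify that nvinfer actually reaches the custom parser.
#include <iostream>
#include <vector>
#include "nvdsinfer_custom_impl.h"

extern "C" bool NvDsInferParseCustomMySSD(
    std::vector<NvDsInferLayerInfo> const &outputLayersInfo,
    NvDsInferNetworkInfo const &networkInfo,
    NvDsInferParseDetectionParams const &detectionParams,
    std::vector<NvDsInferObjectDetectionInfo> &objectList)
{
    // If this line never appears on stderr, nvinfer is not loading the
    // function (check custom-lib-path and parse-bbox-func-name).
    std::cerr << "[custom-parser] invoked, output layers: "
              << outputLayersInfo.size() << std::endl;

    // ... bbox decoding and NMS as in the earlier sketches ...
    return true;
}

CHECK_CUSTOM_PARSE_FUNC_PROTOTYPE(NvDsInferParseCustomMySSD);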

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.