Onnx model on deepstream5.0: Nvinfer error: Could not find NMS layer buffer while parsing

• Hardware Platform (Jetson / GPU) Jetson Nano
• DeepStream Version 5.0
• JetPack Version (valid for Jetson only) 4.4
• TensorRT Version 7.1
• Issue Type( questions, new requirements, bugs) question

  1. Trained an SSD-Mobilenet-v1 model on a custom dataset and converted it to the ONNX format following the instructions here: https://github.com/dusty-nv/jetson-inference/blob/master/docs/pytorch-ssd.md
  2. Checked whether the generated ONNX model is compatible with DeepStream by running /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx
    The output I got is:

&&&& PASSED TensorRT.trtexec # /usr/src/tensorrt/bin/trtexec --onnx=/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx

which implies that the model is compatible with DeepStream.

  3. Built the custom parser from /opt/nvidia/deepstream/deepstream-5.0/sources/objectDetector_SSD
  4. Modified the following in the nvinfer config file:

input-blob-name = input_0

The resulting output after running the DeepStream app is:

Could not find NMS layer buffer while parsing
0:00:13.542989487 20826 0x1806d5e0 ERROR nvinfer gstnvinfer.cpp:613:gst_nvinfer_logger: NvDsInferContext[UID 1]: Error in NvDsInferContextImpl::fillDetectionOutput() <nvdsinfer_context_impl_output_parsing.cpp:725> [UID = 1]: Failed to parse bboxes using custom parse function
Segmentation fault (core dumped)

Is the issue with the config parameters or the model conversion? What additional steps should be followed while converting the model? The conversion was done using this script: https://github.com/dusty-nv/pytorch-ssd/blob/e7b5af50a157c50d3bab8f55089ce57c2c812f37/onnx_export.py
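For reference, the nvinfer section of such a config typically looks something like the sketch below. The values and paths here are assumptions pieced together from the objectDetector_SSD sample and this thread, not the poster's actual file:

```
[property]
gpu-id=0
onnx-file=/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx
labelfile-path=labels.txt
# network-mode=2 selects FP16, matching the engine name in the logs below
network-mode=2
num-detected-classes=3
input-blob-name=input_0
output-blob-names=scores;boxes
parse-bbox-func-name=NvDsInferParseCustomSSD
custom-lib-path=nvdsinfer_custom_impl_ssd/libnvdsinfer_custom_impl_ssd.so
```

Note that `output-blob-names` uses the `scores`/`boxes` names that the pytorch-ssd ONNX export produces, while the stock objectDetector_SSD parser looks up layers named `NMS`/`NMS1`, so the two have to be brought into agreement.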


You will also need to update the model info in nvdsparsebbox_ssd.cpp.

An incorrect layer name causes the error above.
Please update "NMS" to the corresponding output layer name of your model:

  if (nmsLayerIndex == -1) {
    for (unsigned int i = 0; i < outputLayersInfo.size(); i++) {
      if (strcmp(outputLayersInfo[i].layerName, "NMS") == 0) {
        nmsLayerIndex = i;
        break;
      }
    }
  }


Hi @AastaLLL, thanks for the reply. My network's outputs are scores and boxes, so I changed "NMS" and "NMS1" to "scores" and "boxes" and rebuilt the custom parser. Now I'm not getting that error, but the script stops with

Segmentation fault (core dumped)

It parsed successfully, as shown below, but I couldn't find where the error is:

0:00:12.321735558 25973 0x3d53a000 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1701> [UID = 1]: deserialized trt engine from :/home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_0 3x300x300
1 OUTPUT kFLOAT scores 3000x3
2 OUTPUT kFLOAT boxes 3000x4

0:00:12.326119852 25973 0x3d53a000 INFO nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger: NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1805> [UID = 1]: Use deserialized engine model: /home/jetson/videostreams/samples/models/onx/ssd-mobilenet.onnx.1.1.7103.GPU.FP16.engine
0:00:12.396903621 25973 0x3d53a000 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus: [UID 1]: Load new model:dstest1_pgie_config_onx.txt sucessfully
Warning: gst-library-error-quark: Rounding muxer output width to the next multiple of 8: 304 (5): /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvmultistream/gstnvstreammux.c(2299): gst_nvstreammux_change_state (): /GstPipeline:pipeline0/GstNvStreamMux:Stream-muxer
Segmentation fault (core dumped)


Could you run the DeepStream pipeline with debug enabled?

$ deepstream-app -c <config> --gst-debug=5