Problem converting ONNX model to TensorRT Engine for SSD Mobilenet V2

Hi, anujfulari

We can run your model with the configuration file shared in this topic after updating the class count to 4. Note that numClasses counts the background class as well, so a model trained on 3 object classes needs numClasses=4.

diff --git a/config.py b/config.py
index 499a605..444af99 100644
--- a/config.py
+++ b/config.py
@@ -36,7 +36,7 @@ NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
     nmsThreshold=0.6,
     topK=100,
     keepTopK=100,
-    numClasses=3,
+    numClasses=4,
     inputOrder=[0, 2, 1],
     confSigmoid=1,
     isNormalized=1)
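For context, the diff above edits the NMS plugin node inside config.py. A minimal sketch of that node is below; the parameters not shown in the diff (shareLocation, varianceEncodedInTarget, backgroundLabelId, confidenceThreshold) follow the TensorRT sampleUffSSD defaults and may need adjusting for your model:

```python
import graphsurgeon as gs

# NMS_TRT plugin node, as in the sampleUffSSD config.py.
# Values outside the diff are the sample defaults (an assumption here).
NMS = gs.create_plugin_node(name="NMS", op="NMS_TRT",
    shareLocation=1,              # one box regression shared across classes
    varianceEncodedInTarget=0,
    backgroundLabelId=0,          # class 0 is background
    confidenceThreshold=1e-8,
    nmsThreshold=0.6,
    topK=100,
    keepTopK=100,
    numClasses=4,                 # 3 object classes + background
    inputOrder=[0, 2, 1],
    confSigmoid=1,
    isNormalized=1)
```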
$ sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o sample_ssd_relu6.uff -O NMS -p config.py
$ /usr/src/tensorrt/bin/trtexec --uff=./sample_ssd_relu6.uff --uffInput=Input,3,300,300 --output=NMS
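Once the engine is built and executed, the NMS plugin writes its detections into a flat output buffer. As a rough sketch of how to read it back (the helper below and its names are ours, not part of the sample), assuming the standard NMS_TRT layout of keepTopK rows of 7 floats each ([image_id, label, confidence, xmin, ymin, xmax, ymax], normalized coordinates since isNormalized=1):

```python
import numpy as np

KEEP_TOP_K = 100   # must match keepTopK in config.py
NUM_FIELDS = 7     # [image_id, label, confidence, xmin, ymin, xmax, ymax]

def decode_detections(nms_output, keep_count, conf_threshold=0.5):
    """Decode the flat NMS buffer into (label, confidence, box) tuples.

    Assumes the standard NMS_TRT output layout described above;
    keep_count comes from the plugin's second output.
    """
    dets = np.asarray(nms_output, dtype=np.float32).reshape(KEEP_TOP_K, NUM_FIELDS)
    results = []
    for image_id, label, conf, xmin, ymin, xmax, ymax in dets[:int(keep_count)]:
        if conf >= conf_threshold:
            results.append((int(label), float(conf), (xmin, ymin, xmax, ymax)))
    return results

# Example with a dummy buffer holding a single confident detection.
buf = np.zeros((KEEP_TOP_K, NUM_FIELDS), dtype=np.float32)
buf[0] = [0, 2, 0.9, 0.1, 0.1, 0.5, 0.5]   # class 2 at 90% confidence
print(decode_detections(buf.ravel(), keep_count=1))
```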

Please give it a try and let us know the result.
Thanks.