Description
I am trying to follow jkjung-avt/tensorrt_demos (TensorRT MODNet, YOLOv4, YOLOv3, SSD, MTCNN, and GoogLeNet) to convert a custom-trained SSD MobileNet V2 model to TensorRT format. I get this error while running the conversion:
#assertion/opt/tensorrt/TensorRT/plugin/nmsPlugin/nmsPlugin.cpp,246
I have tried changing inputOrder on this line https://github.com/jkjung-avt/tensorrt_demos/blob/a061e44a82e1ca097f57e5a32f20daf5bebe7ade/ssd/build_engine.py#L58
to [0, 2, 1], since the generated pbtxt file lists the NMS node's inputs in the following order:
id: "NMS"
inputs: "Squeeze"
inputs: "concat_priorbox"
inputs: "concat_box_conf"
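If it helps, here is how I understand inputOrder (a sketch, assuming the NMS plugin expects its inputs in the order [loc_data, conf_data, prior_data], and assuming "Squeeze" is the loc tensor; neither assumption is confirmed here, and derive_input_order is just a helper I wrote for illustration):

```python
# Sketch: derive the NMS_TRT "inputOrder" from the order in which the
# three tensors appear under the NMS node in the generated .pbtxt.
# Assumption: the plugin expects [loc_data, conf_data, prior_data], and
# inputOrder[i] is the position of the i-th expected tensor in the
# node's actual input list. "Squeeze" -> loc_data is an assumption
# based on the usual SSD export, not confirmed in this thread.

def derive_input_order(pbtxt_inputs, loc, conf, prior):
    """Return inputOrder = [index of loc, index of conf, index of prior]."""
    return [pbtxt_inputs.index(name) for name in (loc, conf, prior)]

order = derive_input_order(
    ["Squeeze", "concat_priorbox", "concat_box_conf"],
    loc="Squeeze", conf="concat_box_conf", prior="concat_priorbox")
print(order)  # [0, 2, 1]
```

This is why I tried [0, 2, 1]: loc ("Squeeze") is input 0, conf ("concat_box_conf") is input 2, and priors ("concat_priorbox") is input 1.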
Environment
TensorRT Version: 7.0.0
TensorFlow Version: 1.15
@AastaLLL
I am attaching my config file below as well.
pipeline.config (5.7 KB)
NVES
May 17, 2022, 7:38am
Hi,
The UFF and Caffe parsers have been deprecated from TensorRT 7 onwards, so we request that you try the ONNX parser instead.
Please check the link below for the same.
Thanks!
The ONNX parser doesn't support a few of the layers, while the UFF parser has been shown to work on the current ssd_mobilenet_v2 graph.
I think this is a dimension mismatch error. Any help with this would be appreciated.
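For what it's worth, my understanding is that the NMS plugin asserts that the loc/conf tensor sizes are consistent with the prior count and class count. Below is a rough sanity check of that idea; check_ssd_nms_shapes is a helper I made up for illustration, and the exact condition that fires at nmsPlugin.cpp line 246 may differ:

```python
# Hypothetical sanity check mirroring the kind of shape validation the
# NMS plugin performs on its inputs (the exact failing assertion in
# nmsPlugin.cpp is not confirmed here). For an SSD head:
#   loc size  == num_priors * num_loc_classes * 4   (4 box coordinates)
#   conf size == num_priors * num_classes
def check_ssd_nms_shapes(loc_size, conf_size, num_priors, num_classes,
                         share_location=True):
    # With shareLocation=1 all classes share one box per prior.
    num_loc_classes = 1 if share_location else num_classes
    ok_loc = loc_size == num_priors * num_loc_classes * 4
    ok_conf = conf_size == num_priors * num_classes
    return ok_loc and ok_conf

# Example: 1917 priors, 91 classes (COCO + background), shared boxes.
print(check_ssd_nms_shapes(1917 * 4, 1917 * 91, 1917, 91))  # True
```

If a check like this fails for your graph, that would point to a wrong numClasses, a wrong prior count, or the inputs being wired in the wrong order.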
Hi,
Please refer to the official TensorRT object detection API models, which use the ONNX parser and also support SSD MobileNet v2.
Thank you.
Thanks.
But this doesn't support TensorRT 7.0, which I currently want to use. I need help with TensorRT 7.
Sorry, TensorRT 7 is a very old version, and the UFF and Caffe parsers are deprecated from TRT 7 onwards.
We recommend you use the latest TensorRT version.
https://developer.nvidia.com/nvidia-tensorrt-8x-download
Thank you.