SSD object detection benchmarking on TX2

Hello,

I’m evaluating the SSD model (VGG16) at 512 and 300 input resolutions on the TX2 platform. I’d like to see benchmarks for GitHub - weiliu89/caffe at ssd with and without cuDNN, for 32-bit and 16-bit models, under the Caffe framework, and similar benchmarks for TensorRT-based optimization with 32/16/8-bit quantization. Please also share the accuracy drop due to quantization.

I would appreciate your quick support.

Thanks,
Mahsky

Hi,

Sorry, there is no benchmark result available for the SSD model in its Caffe representation.
The most relevant data we have is based on TensorFlow models:
https://github.com/NVIDIA-AI-IOT/tf_trt_models
https://github.com/NVIDIA-AI-IOT/tf_to_trt_image_classification

You should be able to test it on your own:

Caffe: use app.py in the branch you shared

$ python examples/web_demo/app.py [-g]

TensorRT: execute ./trtexec in the /usr/src/tensorrt directory:

./trtexec --deploy=xxx.prototxt [--fp16]
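
For a longer benchmarking run, you can also pass the standard timing options, for example (a sketch; the output layer name is a placeholder you need to take from your own prototxt):

./trtexec --deploy=xxx.prototxt --output=[output layer name] --batch=1 --iterations=10 --avgRuns=100 [--fp16]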

Thanks.

Hi,

Thanks for your reply.

Can you please point me to a document/blog on how to convert and run the Caffe “SSD VGG16” model with TensorRT on the TX2 board, starting from GitHub - weiliu89/caffe at ssd?

Thanks,
Hemant

Hi,

The branch you shared doesn’t include TensorRT support.
Please follow these steps to execute your model with TensorRT on Jetson directly:

$ cp -r /usr/src/tensorrt/ .
$ cd tensorrt/samples/trtexec/
$ make
$ cd ../../bin/
$ ./trtexec --deploy=[/path/to/your/model] --output=[output layer name]
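
For the VGG16 SSD deploy.prototxt from that branch, the call would look something like this (a sketch; detection_out is the conventional name of the final DetectionOutput layer in the SSD deploy file, so please verify it against your own prototxt):

$ ./trtexec --deploy=deploy.prototxt --output=detection_out --fp16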

Thanks.

Hi,

Thanks for sharing the tool details for converting the Caffe-based VGG16 SSD model to TensorRT. However, while running the tool we got the following errors.

[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 817:14: Message type "ditcaffe.LayerParameter" has no field named "norm_param".
CaffeParser: Could not parse deploy file
Engine could not be created

Could you please help us resolve it?

Also, is there any document listing the layers/primitives supported by the TensorRT runtime?

Thanks,
Hemant

Hi,

Which JetPack/TensorRT version are you using on your TX2?

With the latest JetPack 4.2 / TensorRT 5.0, you have two approaches to evaluate the performance of VGG16 SSD:

  1. the new sampleSSD (see the sketch after this list)
  2. the trtexec way mentioned by Aasta (TensorRT 4.0 doesn’t support parsing the SSD prototxt directly).
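
For approach 1, here is a minimal sketch of building and running the sample from its default JetPack location (the binary name sample_ssd is assumed from the TensorRT samples’ naming convention; the sample’s README describes the small prototxt modifications it expects):

$ cd /usr/src/tensorrt/samples/sampleSSD
$ make
$ cd ../../bin/
$ ./sample_ssd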

BTW, regarding accuracy: in our experiment we didn’t observe significant accuracy loss (mAP drop within 1%) from FP32 to INT8 for VGG16_SSD_300x300_VOC2007.
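
If you only need INT8 timing numbers, a sketch like the one below should work with the TensorRT 5.0 trtexec. Note that trtexec without calibration data falls back to dummy scaling factors, so --int8 runs are meaningful for performance measurement only, not accuracy; detection_out is again an assumed output layer name:

$ ./trtexec --deploy=deploy.prototxt --output=detection_out --int8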