I’m evaluating the SSD model (VGG16 backbone) at 512 and 300 input resolutions on the TX2 platform. I’d like to see benchmarks of https://github.com/weiliu89/caffe/tree/ssd with and without cuDNN, in 32-bit and 16-bit precision, using the Caffe framework, as well as the equivalent benchmarks for TensorRT-based optimization with 32/16/8-bit quantization. Please also share the drop in accuracy caused by quantization.
Thanks in advance for your quick support.