I’m evaluating the SSD model (VGG16) at 512 and 300 input resolutions on the TX2 platform. I would like to see benchmarks of GitHub - weiliu89/caffe at ssd with and without cuDNN, in 32-bit and 16-bit precision, under the Caffe framework, and similar benchmarks for TensorRT-based optimization with 32/16/8-bit quantization. Please also share the accuracy drop caused by quantization.
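To make the 32-bit vs 16-bit comparison concrete, here is a minimal sketch of how I would expect to enable the FP16 path on the TensorRT builder. This assumes the TensorRT 2.x/3.x API that JetPack ships for TX2 (setHalf2Mode was renamed setFp16Mode in later releases):

#include <iostream>
#include "NvInfer.h"

// Minimal logger the TensorRT builder requires.
class Logger : public nvinfer1::ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    nvinfer1::IBuilder* builder = nvinfer1::createInferBuilder(gLogger);

    // TX2 reports fast FP16 support; guard the 16-bit build on it.
    if (builder->platformHasFastFp16())
        builder->setHalf2Mode(true); // build engines with FP16 kernels

    std::cout << "fast FP16 available: "
              << builder->platformHasFastFp16() << std::endl;
    builder->destroy();
    return 0;
}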
Can you please point me to a document or blog post on converting and executing the Caffe “SSD VGG 16” model with TensorRT on the TX2 board, starting from GitHub - weiliu89/caffe at ssd?
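For reference, my understanding is that the conversion goes through TensorRT’s built-in Caffe parser, roughly as sketched below. This assumes the legacy TensorRT 3.x C++ API; the file names are placeholders for whatever the SSD training exported, and "detection_out" is the usual output blob name in the weiliu89 fork:

#include <iostream>
#include "NvInfer.h"
#include "NvCaffeParser.h"

using namespace nvinfer1;
using namespace nvcaffeparser1;

class Logger : public ILogger
{
    void log(Severity severity, const char* msg) override
    {
        if (severity != Severity::kINFO)
            std::cout << msg << std::endl;
    }
} gLogger;

int main()
{
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    ICaffeParser* parser = createCaffeParser();

    // Parse the SSD deploy/weights files (placeholder names).
    const IBlobNameToTensor* blobs = parser->parse(
        "deploy.prototxt", "VGG_VOC0712_SSD_300x300.caffemodel",
        *network, DataType::kFLOAT);
    if (!blobs)
    {
        std::cerr << "CaffeParser: could not parse the model" << std::endl;
        return 1;
    }

    // SSD's detection output blob in the weiliu89 fork.
    network->markOutput(*blobs->find("detection_out"));

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(16 << 20); // 16 MB of builder scratch space

    ICudaEngine* engine = builder->buildCudaEngine(*network);
    if (!engine)
    {
        std::cerr << "Engine could not be created" << std::endl;
        return 1;
    }

    // ... run inference, then clean up.
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}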
Thanks for sharing the tool details for converting the Caffe-based VGG16 SSD model to TensorRT. However, while running the tool we got the following errors:
[libprotobuf ERROR google/protobuf/text_format.cc:298] Error parsing text-format ditcaffe.NetParameter: 817:14: Message type "ditcaffe.LayerParameter" has no field named "norm_param".
CaffeParser: Could not parse deploy file
Engine could not be created
Could you please help us resolve this?
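From what I can tell, norm_param comes from the Normalize layer that the SSD fork adds to Caffe, so the parser’s internal ditcaffe proto rejects that field before any layer mapping happens. If I understand the usual workaround, the SSD-specific parameters are stripped from deploy.prototxt and the affected layers are then supplied through the parser’s plugin interface. A skeleton only, assuming the legacy IPlugin/IPluginFactory API; MyNormalizePlugin is a hypothetical user-written implementation, not something TensorRT ships:

#include <cstring>
#include "NvInfer.h"
#include "NvCaffeParser.h"

// Hypothetical user-written plugin implementing SSD's Normalize layer
// (derived from nvinfer1::IPlugin); the kernel code is omitted here.
class MyNormalizePlugin;

// Routes layers the Caffe parser does not recognize to custom plugins.
class PluginFactory : public nvcaffeparser1::IPluginFactory
{
public:
    bool isPlugin(const char* name) override
    {
        // The stock SSD prototxt names its Normalize layer "conv4_3_norm".
        return std::strstr(name, "norm") != nullptr;
    }

    nvinfer1::IPlugin* createPlugin(const char* layerName,
                                    const nvinfer1::Weights* weights,
                                    int nbWeights) override
    {
        // Construct the hypothetical Normalize plugin from the layer weights:
        // return new MyNormalizePlugin(weights, nbWeights);
        return nullptr; // placeholder until the plugin is implemented
    }
};

// Registered with the parser before parsing:
//   PluginFactory factory;
//   parser->setPluginFactory(&factory);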
Is there a document listing which layers/primitives are supported by the TensorRT runtime?