This sample demonstrates how to perform inference on the Caffe SSD network in TensorRT, use TensorRT plugins to speed up inference, and perform INT8 calibration on an SSD network. To generate the required prototxt file for this sample, perform the following steps:
Download models_VGGNet_VOC0712_SSD_300x300.tar.gz from: https://drive.google.com/file/d/0BzKzrI_SkD1_WVVTSmQxU0dVRzA/view
Extract the contents of the tar file.
Edit the deploy.prototxt file and change all the Flatten layers to Reshape operations with the following parameters:
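A sketch of the replacement, based on the stock 300x300 SSD model (the exact dim values are an assumption; verify against your deploy.prototxt):

```
reshape_param {
    shape {
        dim: 0
        dim: -1
        dim: 1
        dim: 1
    }
}
```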
Update the detection_out layer by adding the keep_count output, for example, add:
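A minimal sketch of that edit, assuming keep_count is the extra output blob name expected by TensorRT's DetectionOutput plugin:

```
top: "keep_count"
```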
Rename the deploy.prototxt file to ssd.prototxt and run the sample.
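The Flatten-to-Reshape edit above can also be sketched as a small script. This is a hypothetical helper, not part of the sample: the function name and the regex-based approach are illustrative, the dim values (0, -1, 1, 1) are an assumption based on the stock SSD model, and it assumes each Flatten layer carries a flatten_param block as in the reference deploy.prototxt.

```python
import re

# Reshape parameters assumed for the converted Flatten layers
# (flatten each blob to N x C x 1 x 1).
RESHAPE_PARAM = """reshape_param {
    shape {
      dim: 0
      dim: -1
      dim: 1
      dim: 1
    }
  }"""

def flatten_to_reshape(prototxt: str) -> str:
    """Return a copy of `prototxt` with Flatten layers turned into Reshape.

    Assumes every Flatten layer has a flatten_param block, as in the
    reference SSD deploy.prototxt.
    """
    out = prototxt.replace('type: "Flatten"', 'type: "Reshape"')
    # flatten_param contains no nested braces, so [^}]* matches its body.
    out = re.sub(r'flatten_param\s*\{[^}]*\}', RESHAPE_PARAM, out)
    return out
```

Applying this to deploy.prototxt and writing the result out as ssd.prototxt would cover steps 3 and 5, though hand-editing remains the safest route if your prototxt deviates from the reference layout.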
To run the sample in INT8 mode, first install Pillow by issuing the $ pip install Pillow command, then follow the instructions from the README.