Python wrapper for SSD Caffe


Hi, I am trying to use the Caffe SSD model downloaded from the official repository for inference. I know there is C++ code at samples/sampleSSD, but due to some business requirements I have to use Python for this task. Is there any Python code for the same task?



TensorRT Version: 7.1.0
GPU Type: Jetson Nano device
Nvidia Driver Version:
CUDA Version: 10.2
CUDNN Version:
Operating System + Version: Ubuntu 18.04
Python Version (if applicable): 3.6.9
TensorFlow Version (if applicable):
PyTorch Version (if applicable):
Baremetal or Container (if container which image + tag):


Hi @adeelz92,
Please check the link below.
The relevant Python code resides at /usr/src/tensorrt/samples/python/uff_ssd inside the TensorRT NGC container, which you can pull from the link below.

Hi, thanks for the reply. I have already checked this sample, but it is based on the TensorFlow version of the SSD model. I am looking for the Caffe version, which can read a .caffemodel and .prototxt to run inference, just like the C++ code in samples/sampleSSD.

Hi @adeelz92,
Hope this link answers your query.
There you will find a sample that uses a Caffe ResNet-50 model to create a TensorRT inference engine.