Hi,
Sorry for the late update.
1. For an SSD-based model, you can convert it with this command:
sudo python3 /usr/lib/python3.6/dist-packages/uff/bin/convert_to_uff.py frozen_inference_graph.pb -o [output].uff -O NMS -p /usr/src/tensorrt/samples/sampleUffSSD/config.py
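For reference, here is a minimal sketch of loading the resulting .uff into a TensorRT engine with the legacy UFF Python API (TensorRT 5/6 era, matching the python3.6 paths above). The input name/shape and the NMS output follow sampleUffSSD's config.py and may differ for your model; the .uff filename is just a placeholder:

# Minimal sketch: build a TensorRT engine from the converted .uff file.
# Assumes the legacy UFF parser API (TensorRT 5/6); the input name/shape
# and the "NMS" output follow sampleUffSSD's config.py and may differ.
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # registers the NMS plugin

def build_engine(uff_path):
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_workspace_size = 1 << 28  # 256 MB of scratch space
        builder.max_batch_size = 1
        parser.register_input("Input", (3, 300, 300))  # CHW, assumed SSD-300
        parser.register_output("NMS")
        parser.parse(uff_path, network)
        return builder.build_cuda_engine(network)

engine = build_engine("model.uff")  # hypothetical output filename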
2. The benchmark results you shared are based on pure TensorRT inference time.
But the FPS reported by DeepStream covers the whole pipeline (decoding, pre-processing, inference, post-processing, and on-screen rendering), so it is expected to be lower than the pure TensorRT number.
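If you want to reproduce the pure-inference number yourself, below is a rough sketch that times only execution of the TensorRT context (no decode or render), assuming an engine built as in the sketch above and pycuda installed; the function name and iteration count are illustrative:

# Rough sketch: measure pure TensorRT inference time (no decode/OSD/render).
# Assumes an engine built as above and the pre-TensorRT-8 Python API.
import time
import tensorrt as trt
import pycuda.autoinit  # creates a CUDA context on import
import pycuda.driver as cuda

def average_inference_time(engine, iterations=100):
    with engine.create_execution_context() as context:
        # Allocate device buffers for every binding. In a real app the
        # page-locked host buffers would also be used for H2D/D2H copies;
        # they are only used here to size the device allocations.
        bindings = []
        for name in engine:
            size = trt.volume(engine.get_binding_shape(name)) * engine.max_batch_size
            dtype = trt.nptype(engine.get_binding_dtype(name))
            host = cuda.pagelocked_empty(size, dtype)
            bindings.append(int(cuda.mem_alloc(host.nbytes)))
        stream = cuda.Stream()
        start = time.time()
        for _ in range(iterations):
            context.execute_async(batch_size=1, bindings=bindings,
                                  stream_handle=stream.handle)
        stream.synchronize()
        return (time.time() - start) / iterations  # seconds per frame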
Thanks.
