Convert SSD-mobilenetv2 to int8

I have a custom-trained, single-class SSD-MobileNetV2 detector (in .uff format) trained using TensorFlow. How do I run it in INT8 mode? I cannot use tlt-converter because I did not use TLT to train the model.

Hi sivaishere96,
The tlt-converter tool is only compatible with models trained in TLT. Please see the TLT user guide: https://docs.nvidia.com/metropolis/TLT/tlt-getting-started-guide/index.html

You would also need to follow the TLT workflow end to end, i.e. train the model with TLT and then convert it to an INT8 engine with tlt-converter.

As I mentioned above, I am using SSD-MobileNetV2, which is not supported by TLT, so I have to train the model with TensorFlow. Now I need to convert the trained model to INT8. How do I do that?

Do you mean you want to convert the UFF model into a TensorRT INT8 engine? That sounds like a TensorRT topic.

Yes, I want to convert the UFF model to an INT8 engine. It is meant to be deployed in DeepStream, which is why I asked the question here.

You can find some useful topics in the TensorRT forum, for example:
https://devtalk.nvidia.com/default/topic/1056739/tensorrt/does-tensorrt-python-api-support-uff-model-int8-calibration/post/5360226/#5360226
https://devtalk.nvidia.com/default/topic/1039120/tensorrt/sampleuffssd-int8-calibration-segmentation-fault/post/5324295/#5324295
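
For reference, here is a minimal sketch of an INT8 entropy calibrator using the TensorRT Python API (TensorRT 6.x/7.x era, which still ships the UFF parser). The class name, input shape, and the way calibration images are supplied are my own assumptions; you would feed it images preprocessed exactly the way your SSD-MobileNetV2 expects:

```python
import numpy as np
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context
import tensorrt as trt


class SSDEntropyCalibrator(trt.IInt8EntropyCalibrator2):
    """Feeds preprocessed calibration images to TensorRT, one batch at a time."""

    def __init__(self, input_shape, calib_images, cache_file):
        # input_shape: e.g. (3, 300, 300); calib_images: list of float32 NCHW arrays
        trt.IInt8EntropyCalibrator2.__init__(self)
        self.cache_file = cache_file
        self.data = calib_images
        self.index = 0
        self.device_input = cuda.mem_alloc(int(np.prod(input_shape)) * 4)

    def get_batch_size(self):
        return 1

    def get_batch(self, names):
        # Return None when calibration data is exhausted
        if self.index >= len(self.data):
            return None
        batch = np.ascontiguousarray(self.data[self.index], dtype=np.float32)
        cuda.memcpy_htod(self.device_input, batch)
        self.index += 1
        return [int(self.device_input)]

    def read_calibration_cache(self):
        try:
            with open(self.cache_file, "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def write_calibration_cache(self, cache):
        with open(self.cache_file, "wb") as f:
            f.write(cache)
```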

Can the SampleUffSSD sample be used for calibrating SSD-MobileNet-v2 to INT8? From what I have seen, it is meant for running inference with the SSD Inception network.

Please refer to the two samples below:
/usr/src/tensorrt/samples/python/end_to_end_tensorflow_mnist/
/usr/src/tensorrt/samples/python/int8_caffe_mnist/

Doc: https://docs.nvidia.com/deeplearning/sdk/tensorrt-sample-support-guide/index.html
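
As a rough sketch of how the pieces fit together for a UFF SSD model (the input/output tensor names, input resolution, and file names below are assumptions; they depend on how the .uff was exported and on the config used with the UFF converter):

```python
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.INFO)


def build_int8_engine(uff_path, calibrator, engine_path="ssd_mobilenet_v2_int8.engine"):
    # UFF parser workflow, available up to TensorRT 7.x (deprecated in later releases)
    with trt.Builder(TRT_LOGGER) as builder, \
         builder.create_network() as network, \
         trt.UffParser() as parser:
        builder.max_batch_size = 1
        builder.max_workspace_size = 1 << 30   # 1 GiB workspace
        builder.int8_mode = True               # request INT8 kernels
        builder.int8_calibrator = calibrator   # e.g. the calibrator sketched above

        # Names/shapes are assumptions; they must match the exported UFF graph
        parser.register_input("Input", (3, 300, 300))
        parser.register_output("NMS")
        if not parser.parse(uff_path, network):
            raise RuntimeError("Failed to parse UFF file")

        engine = builder.build_cuda_engine(network)
        with open(engine_path, "wb") as f:
            f.write(engine.serialize())
        return engine
```

Once the serialized engine is written out, my understanding is that you can point DeepStream at it from the Gst-nvinfer config (model-engine-file=... together with network-mode=1 for INT8), rather than having DeepStream rebuild the engine from the UFF file.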