Run a TLT pruned model on TensorRT

Hello everyone,

I have just started using the Transfer Learning Toolkit. I am now able to run inference for an SSD_RESNET18 pruned model on an NVIDIA Jetson Nano using DeepStream (I followed the steps of the ‘Transfer Learning Toolkit Getting Started Guide’).

All the examples I found export the Transfer Learning Toolkit models (.tlt, .etlt) to DeepStream.

I would like to know if it is possible to run inference on a model created with the Transfer Learning Toolkit using only TensorRT?

Thank you.

Hi,

YES.

DeepStream uses TensorRT as its inference engine, so you can execute the model with TensorRT directly.
You can start from this TensorRT sample:

$ cd /usr/src/tensorrt/bin/
$ ./trtexec [your/model/info]
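
For reference, once you have built a TensorRT engine from your exported model (for example with tlt-converter), a typical call could look like this; the engine file name here is just a placeholder:

$ ./trtexec --loadEngine=ssd_resnet18.engine --batch=1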

Thanks.

Thank you for your answer. I’m now able to run the trtexec example with my pruned model
by executing the following command:

./trtexec --loadEngine=<path_to_the_engine_file>

I have modified the source code to keep only the inference part.

Is it possible to redefine the model’s input buffer so I can run inference on my own images?

Thanks.

Hi,

Sure. You can read the image with your preferred library (e.g. OpenCV) and feed it into the inference buffer,
e.g. buffers[0] in this sample:

context->enqueue(gParams.batchSize, &buffers[0], stream, nullptr);
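
A minimal, untested sketch of that idea is below. It assumes the engine, context, CUDA stream and device buffers are already created as in trtexec, with a single 3x300x300 float input binding; the function name inferImage and the fixed dimensions are placeholders you should adapt to your model:

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <opencv2/opencv.hpp>
#include <vector>

// Read one image, convert it to the CHW float layout the engine expects,
// copy it into the input device buffer and run inference.
void inferImage(nvinfer1::IExecutionContext* context,
                void** buffers,          // device bindings, buffers[0] = input
                cudaStream_t stream,
                const char* imagePath)
{
    const int C = 3, H = 300, W = 300;   // assumed SSD input size, adjust to your model

    cv::Mat img = cv::imread(imagePath);
    cv::resize(img, img, cv::Size(W, H));

    // HWC/BGR uint8 -> CHW float (apply whatever mean/scale preprocessing
    // your TLT model was trained with; omitted here)
    std::vector<float> input(C * H * W);
    for (int c = 0; c < C; ++c)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                input[c * H * W + y * W + x] = img.at<cv::Vec3b>(y, x)[c];

    cudaMemcpyAsync(buffers[0], input.data(), input.size() * sizeof(float),
                    cudaMemcpyHostToDevice, stream);
    context->enqueue(1, buffers, stream, nullptr);   // batch size 1
    cudaStreamSynchronize(stream);
}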

We also have an inference sample that reads .ppm images for your reference:
/usr/src/tensorrt/samples/sampleMNIST

Thanks.