Running a .engine file outside the DeepStream SDK


I’ve used TLT to train, prune, and export a model, and converted it to a .engine file on the Jetson Xavier platform. However, I want to use the .engine file without the DeepStream SDK, or else include the DeepStream SDK in my Python code (or C++, if that is required).

I have previously used TensorFlow with its Object Detection API and NVIDIA's TF-TRT API to optimize models for my embedded platforms.

Is this possible?

Best regards

Hi kevingrooters,
Take FasterRCNN as an example: how to run inference with trained FasterRCNN models from TLT is shown on GitHub: . The pre-processing and post-processing code is already exposed in C++ inside the nvdsinfer_customparser_frcnn_uff folder.

Furthermore, to configure the TRT engine directly instead of the etlt model, you can refer to the 4th comment of