Running a .engine file outside the DeepStream SDK

Hi,

I’ve used TLT to train, prune, and export a model, and converted it to a .engine file on the Jetson Xavier platform. However, I want to use the .engine file without the DeepStream SDK, or else include the DeepStream SDK in my Python code (or C++, if that is required).

I have previously used TensorFlow with its object detection API and NVIDIA's TF-TRT API to optimize models for my embedded platforms.

Is this possible?

Best regards
Kevin

Hi kevingrooters,
Taking FasterRCNN as an example, how to run inference with trained FasterRCNN models from TLT is shown on GitHub: https://github.com/NVIDIA-AI-IOT/deepstream_4.x_apps . The pre-processing and post-processing code is already exposed in C++ inside the nvdsinfer_customparser_frcnn_uff folder.

Furthermore, to configure the TRT engine directly instead of the etlt model, you can refer to the 4th comment of https://devtalk.nvidia.com/default/topic/1063940/transfer-learning-toolkit/transfert-learning-toolkit-gt-export-model-/
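Since you asked about Python specifically: a serialized .engine file can also be deserialized and executed directly with the TensorRT Python API, with no DeepStream involved. Below is a minimal sketch, assuming `tensorrt` and `pycuda` are installed on the Jetson (they ship with JetPack). The engine path, input shape, and normalization are placeholders you would replace with your model's actual values; the exact pre/post-processing your network expects can be read out of the C++ parser code linked above.

```python
# Minimal sketch: run a serialized TensorRT .engine in Python without DeepStream.
# ASSUMPTIONS: engine path, input shape (3, 544, 960), and [0, 1] normalization
# are placeholders; adapt them to your exported TLT model.
import numpy as np

try:
    import tensorrt as trt
    import pycuda.driver as cuda
    import pycuda.autoinit  # noqa: F401  (creates a CUDA context on import)
except ImportError:
    trt = None  # not on the target device; the infer() path needs TensorRT


def preprocess(image, shape=(3, 544, 960)):
    """Cast and reorder an HWC uint8 image to CHW float32 scaled to [0, 1]."""
    chw = np.transpose(image.astype(np.float32) / 255.0, (2, 0, 1))
    return np.ascontiguousarray(chw.reshape(shape))


def infer(engine_path, input_array):
    """Deserialize the engine and run one synchronous inference (batch size 1)."""
    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())
    context = engine.create_execution_context()

    # Allocate one host/device buffer pair per binding (inputs and outputs).
    bindings, host_bufs = [], []
    for i in range(engine.num_bindings):
        size = trt.volume(engine.get_binding_shape(i))
        host = np.empty(size, dtype=np.float32)
        dev = cuda.mem_alloc(host.nbytes)
        bindings.append(int(dev))
        host_bufs.append((host, dev, engine.binding_is_input(i)))

    # Copy input to the device, execute, copy outputs back.
    outputs = []
    for host, dev, is_input in host_bufs:
        if is_input:
            cuda.memcpy_htod(dev, input_array.ravel())
    context.execute(batch_size=1, bindings=bindings)
    for host, dev, is_input in host_bufs:
        if not is_input:
            cuda.memcpy_dtoh(host, dev)
            outputs.append(host)
    return outputs


if __name__ == "__main__" and trt is not None:
    frame = np.zeros((544, 960, 3), dtype=np.uint8)  # dummy frame
    raw_outputs = infer("model.engine", preprocess(frame))
    # raw_outputs still need the model-specific post-processing (e.g. the
    # bbox decoding done in nvdsinfer_customparser_frcnn_uff for FasterRCNN).
```

The raw output tensors are exactly what DeepStream's custom parser receives, so the C++ post-processing in nvdsinfer_customparser_frcnn_uff is the reference for turning them into boxes and classes.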