I want to use PeopleNet and train it with my own custom dataset. After training, how do I use this model in my own custom pipeline?
Please suggest how to achieve this.
To deploy a model trained with TLT to DeepStream, you have multiple options:
Option 1: Integrate the model (.etlt) together with its encryption key directly in the DeepStream app. The .etlt file is generated by tlt-export.
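For Option 1, the nvinfer element reads the .etlt model and key from its config file. A minimal sketch of the relevant [property] keys is below; the file paths, the key placeholder, and the input/output names shown are the usual PeopleNet defaults, not values taken from your setup, so verify them against your own export:

```
[property]
# Encrypted TLT model and the key used at tlt-export time (placeholder)
tlt-encoded-model=models/peoplenet/resnet34_peoplenet.etlt
tlt-model-key=<your-export-key>
labelfile-path=labels.txt
# PeopleNet (DetectNet_v2) defaults -- confirm against your training spec
input-dims=3;544;960;0
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid
network-type=0
num-detected-classes=3
```

On first run, DeepStream builds a TensorRT engine from the .etlt file and caches it, so subsequent launches start faster.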
Option 2: Generate a device-specific, optimized TensorRT engine using tlt-converter. The TensorRT engine file can also be ingested by DeepStream.
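A typical tlt-converter invocation for Option 2 looks like the sketch below. The output node names and input dimensions are the PeopleNet defaults and the key is a placeholder; check them against your tlt-export log before running:

```shell
# Build a device-specific engine from the exported .etlt model.
# $KEY must match the key used at tlt-export time.
tlt-converter resnet34_peoplenet.etlt \
    -k $KEY \
    -o output_cov/Sigmoid,output_bbox/BiasAdd \
    -d 3,544,960 \
    -e peoplenet_fp16.engine \
    -t fp16 \
    -m 1
```

Note that the resulting .engine file is specific to the GPU and TensorRT version it was built on, so run tlt-converter on the deployment device itself.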
You can also deploy the TensorRT engine without DeepStream.
Thanks for the reply. I have to deploy the TensorRT engine without DeepStream. My concern is how to run inference with the TensorRT engine in my own custom pipeline. In the Jupyter notebook we have tlt-infer, so we can run inference with that, but in my own pipeline how do I run inference with this TensorRT engine?
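Outside DeepStream, you drive the engine with the plain TensorRT Python API: deserialize the .engine file, allocate host/device buffers for each binding, copy the preprocessed frame in, execute, and copy the raw outputs back. Below is a minimal sketch assuming the default PeopleNet input (3x544x960 RGB planar, scaled to [0, 1]) and an implicit-batch engine as produced by tlt-converter; none of this is tlt-infer itself, just the standard TensorRT workflow:

```python
import numpy as np

# PeopleNet (DetectNet_v2) default input size -- assumption, check your spec file.
NET_H, NET_W = 544, 960

def preprocess(frame_bgr):
    """Convert one already-resized HxWx3 BGR uint8 frame to the
    3xHxW float32 planar layout the engine expects (assumed scaling: /255)."""
    rgb = frame_bgr[:, :, ::-1]             # BGR -> RGB
    chw = np.transpose(rgb, (2, 0, 1))      # HWC -> CHW
    return np.ascontiguousarray(chw, dtype=np.float32) / 255.0

def infer(engine_path, batch_chw):
    """Run one frame through a serialized TensorRT engine.

    Needs TensorRT + PyCUDA on the target device, so the imports are
    deferred; this follows the standard implicit-batch TensorRT pattern.
    """
    import tensorrt as trt
    import pycuda.autoinit              # noqa: F401 -- creates a CUDA context
    import pycuda.driver as cuda

    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        engine = runtime.deserialize_cuda_engine(f.read())

    context = engine.create_execution_context()
    stream = cuda.Stream()

    # One pinned-host/device buffer pair per binding (input first, then outputs).
    host_bufs, dev_bufs, bindings = [], [], []
    for binding in engine:
        size = trt.volume(engine.get_binding_shape(binding)) * engine.max_batch_size
        dtype = trt.nptype(engine.get_binding_dtype(binding))
        host = cuda.pagelocked_empty(size, dtype)
        dev = cuda.mem_alloc(host.nbytes)
        host_bufs.append(host)
        dev_bufs.append(dev)
        bindings.append(int(dev))

    # Copy input in, execute (implicit batch), copy outputs back.
    np.copyto(host_bufs[0], batch_chw.ravel())
    cuda.memcpy_htod_async(dev_bufs[0], host_bufs[0], stream)
    context.execute_async(batch_size=1, bindings=bindings,
                          stream_handle=stream.handle)
    for host, dev in zip(host_bufs[1:], dev_bufs[1:]):
        cuda.memcpy_dtoh_async(host, dev, stream)
    stream.synchronize()

    # DetectNet_v2 outputs a coverage heatmap and a bbox grid; you still
    # have to threshold and cluster (e.g. NMS/DBSCAN) to get final boxes.
    return [h.copy() for h in host_bufs[1:]]

# The preprocessing step can be exercised without a GPU:
dummy = np.zeros((NET_H, NET_W, 3), dtype=np.uint8)
print(preprocess(dummy).shape)   # (3, 544, 960)
```

The part that tlt-infer hides from you is the postprocessing: the engine returns the raw DetectNet_v2 grid tensors, and turning those into boxes (coverage threshold + clustering) is code you have to write or port yourself.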