Reusing the generated .engine file

Hello,

I’m looking for how to register the engine file yolov3_b1_gpu0.engine generated after inference, so that I am not forced to rebuild it every time I run my application. My configuration file is attached: inference.txt (2.0 KB)

Thank you

We don’t know your model type and features. Was your model exported according to Exporting the Model — Transfer Learning Toolkit 2.0 documentation (nvidia.com)? If so, you can refer to Deploying to Deepstream — Transfer Learning Toolkit 2.0 documentation (nvidia.com).

From the DeepStream point of view, DeepStream supports Caffe models, UFF files, ONNX models, and TLT-encoded models; the configuration differs for each model type. See Gst-nvinfer — DeepStream 6.1.1 Release documentation.
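To avoid rebuilding the engine on every run, the gst-nvinfer config can point at the serialized engine via the `model-engine-file` property; when that file exists, nvinfer deserializes it instead of regenerating it. A minimal sketch (the paths and values here are placeholders, not taken from your inference.txt):

```ini
[property]
gpu-id=0
# Engine serialized by nvinfer on the first run; on later runs it is
# loaded directly from this path instead of being rebuilt
model-engine-file=yolov3_b1_gpu0_fp16.engine
batch-size=1
# 0=FP32, 1=INT8, 2=FP16 -- must match the precision the engine was built with
network-mode=2
```

Note that the engine is tied to the batch size, GPU, and precision it was built with; if any of those change, nvinfer will rebuild the engine.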

Thanks