Hello,
I have trained my model and converted it to a TensorRT engine.
How do I create an nvinfer plugin with my engine?
or
Any guidance for creating an nvinfer plugin with a custom deep learning network?
Thanks.
Is there any reason you don't use the default nvinfer? It has ample/flexible preprocessing and postprocessing, and you can get any tensor from the TensorRT inference.
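For example, preprocessing is controlled by config keys, custom postprocessing can be plugged in through a parser library, and raw output tensors can be attached as metadata. A minimal sketch (the library and function names are placeholders for your own; output-tensor-meta is only available in recent DeepStream releases):

# preprocessing: pixel scaling, per-channel mean offsets, color format
net-scale-factor=0.0039215697906911373
offsets=103.939;116.779;123.68
model-color-format=0
# custom postprocessing: bounding-box parser from your own library (placeholder names)
parse-bbox-func-name=NvDsInferParseCustomMyModel
custom-lib-path=/path/to/libnvds_infercustomparser.so
# attach raw output tensors to the frame metadata so the app can read them
output-tensor-meta=1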
Thanks for the reply.
To put it another way, I'd like to run my own model with nvinfer in my project.
Just the same pipeline, but with nvinfer using my own model.
Thanks.
In addition, is the DeepStream SDK just for the models in ./samples/models?
And does DeepStream support only Caffe-framework models?
No. You can replace them with your own model.
Caffe models, ONNX models, and UFF (TensorFlow) models are all supported.
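The framework is selected simply by which model keys you set in the nvinfer config file, for example (file and blob names below are placeholders and must match your network):

# Caffe
proto-file=model.prototxt
model-file=model.caffemodel

# ONNX
onnx-file=model.onnx

# UFF (TensorFlow)
uff-file=model.uff
uff-input-dims=3;368;640;0
uff-input-blob-name=input_1
output-blob-names=output_bbox/BiasAdd;output_cov/Sigmoid

# optional in every case: a serialized TensorRT engine;
# if this file already exists it is deserialized directly instead of rebuilt
model-engine-file=model.engine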
I see. Thanks.
Are there any instructions for applying an ONNX model to the nvinfer plugin?
Or how do I apply a TensorRT engine to nvinfer?
Sorry, I still have not found out how to replace the original model with our custom model.
For example, how do I change the [property] section of dstest3_pgie_config.txt in ./sample_apps to use a custom ONNX model?
I think this is crucial for adapting these sample apps into customized apps.
Any advice would be appreciated.
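For reference, this is the kind of minimal [property] section I am experimenting with for a custom ONNX detector (file names, class count, and precision are placeholders; as far as I understand, nvinfer builds and caches the engine on the first run):

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=my_model.onnx
# generated and cached on first run if it does not already exist
model-engine-file=my_model.onnx_b1_gpu0_fp16.engine
labelfile-path=my_labels.txt
batch-size=1
# 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=4
interval=0
gie-unique-id=1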