Custom model in DeepStream 5.1

Hi, I have a question about custom models in DeepStream. Is it possible in DeepStream to use a TensorRT engine that is not a detector, a classifier, or a segmentation model? I have an engine that takes a frame and a mask as inputs and returns the frame with the objects blurred according to the mask. Is there any option to use it, or should I write a custom plugin for it?
Thanks

Currently nvinfer does not support models with multiple inputs; you can use the videotemplate plugin (Gst-nvdsvideotemplate) to do that.
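
For example, a pipeline with nvdsvideotemplate looks roughly like this (a sketch only; libcustom_blur.so is a placeholder for your own custom library):

gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvdsvideotemplate customlib-name=./libcustom_blur.so ! \
  nvvideoconvert ! nvdsosd ! fakesink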

Thank you for the reply. I implemented the OpenCV blurring code in a custom library for the videotemplate plugin and everything works fine. Now I want to run my blurring engine inside that custom library instead, because the OpenCV code is really slow. Can you give me some instructions or a template for what I should implement to run a TensorRT engine? Gst-nvinfer is an extensive plugin, and I believe the code for my engine can be implemented in a much shorter form.
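
For reference, the mask-based blur in OpenCV comes down to roughly the following (an illustrative sketch, not the exact code from this thread; it assumes the frame has already been mapped to a cv::Mat and the mask is a single-channel CV_8UC1 image):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Blur the whole frame once, then keep the blurred pixels only where the
// mask marks an object. 'frame' is the decoded frame mapped to a cv::Mat.
static void blurMaskedObjects(cv::Mat &frame, const cv::Mat &mask)
{
    cv::Mat blurred;
    cv::GaussianBlur(frame, blurred, cv::Size(31, 31), 0);  // heavy blur of the full frame
    blurred.copyTo(frame, mask);                             // apply only where mask != 0
}
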
Thank you

Can you refer to deepstream_tao_apps/apps/tlt_others/deepstream-emotion-app at release/tlt3.0 (NVIDIA-AI-IOT/deepstream_tao_apps on GitHub) and to Gst-nvdsvideotemplate in the DeepStream 6.1.1 Release documentation?
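
Not an official template, but the core of running a prebuilt serialized engine inside the custom library comes down to something like the sketch below (TensorRT 7.x C++ API; the function name runBlurEngine, the device pointers dFrame/dMask/dOut, and the binding names image/mask/output are assumptions taken from the trtexec command later in this thread; error handling and buffer management are omitted):

#include <NvInfer.h>
#include <cuda_runtime_api.h>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <vector>

class Logger : public nvinfer1::ILogger {
    void log(Severity severity, const char *msg) override {
        if (severity <= Severity::kWARNING) std::printf("[TRT] %s\n", msg);
    }
} gLogger;

// Deserialize a prebuilt engine (e.g. blendblur.engine) and run one inference.
// dFrame, dMask and dOut must be CUDA device buffers sized for height x width.
void runBlurEngine(const char *enginePath, void *dFrame, void *dMask, void *dOut,
                   int height, int width, cudaStream_t stream)
{
    // 1. Load the serialized engine from disk and deserialize it.
    std::ifstream f(enginePath, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    nvinfer1::IRuntime *runtime = nvinfer1::createInferRuntime(gLogger);
    nvinfer1::ICudaEngine *engine =
        runtime->deserializeCudaEngine(blob.data(), blob.size(), nullptr);
    nvinfer1::IExecutionContext *ctx = engine->createExecutionContext();

    // 2. Dynamic-shape engine: the actual input dimensions must be set
    //    on the context before every enqueue.
    const int frameIdx = engine->getBindingIndex("image");
    const int maskIdx  = engine->getBindingIndex("mask");
    const int outIdx   = engine->getBindingIndex("output");
    ctx->setBindingDimensions(frameIdx, nvinfer1::Dims3{height, width, 3});
    ctx->setBindingDimensions(maskIdx,  nvinfer1::Dims2{height, width});

    // 3. Bindings are device pointers, ordered by binding index.
    void *bindings[3] = {};
    bindings[frameIdx] = dFrame;
    bindings[maskIdx]  = dMask;
    bindings[outIdx]   = dOut;

    ctx->enqueueV2(bindings, stream, nullptr);
    cudaStreamSynchronize(stream);

    ctx->destroy();
    engine->destroy();
    runtime->destroy();
}

The bindings passed to enqueueV2() must be CUDA device pointers; copying the frame and mask into those buffers, and the result back into the output surface, is up to the custom library.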

I implemented the code to run the engine in the custom library, but I get an error when I run inference. Can you help me find the reason for this error? I sent my script and ONNX file in a PM. Error:

…/rtSafe/cuda/genericReformat.cu (1294) - Cuda Error in executeMemcpy: 1 (invalid argument)
FAILED_EXECUTION: std::exception
free(): invalid pointer
Aborted (core dumped)

Hardware Platform: Tesla T4
DeepStream Version: 5.1
TensorRT Version: 7.2.1.6
NVIDIA GPU Driver Version: 460.73.01

My engine works with dynamic axes. It has two input images and one output image. Shapes: 'frame': (-1, -1, 3), 'mask': (-1, -1), 'output': (-1, -1, 3). To create the engine:

trtexec --onnx=blend_blur_2inputs_up_to_1280.onnx --explicitBatch --fp16 --workspace=1024 --minShapes=image:220x220x3,mask:220x220 --optShapes=image:1280x1280x3,mask:1280x1280 --maxShapes=image:1280x1280x3,mask:1280x1280 --buildOnly --saveEngine=blendblur.engine

Thanks
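
For a dynamic-shape engine like this one, every binding handed to enqueueV2() has to be a CUDA device pointer sized for the shapes actually set on the execution context; an invalid-argument error inside executeMemcpy is often a sign that a host pointer or a mismatched buffer was passed. A minimal sizing sketch (same illustrative binding names as above, FP32 input/output bindings assumed):

#include <NvInfer.h>
#include <cuda_runtime_api.h>

// Size and allocate the device buffers from the shapes the context will
// actually run with. All three bindings passed to enqueueV2() must be
// valid CUDA device pointers of exactly these sizes.
void allocateBindings(nvinfer1::ICudaEngine *engine,
                      nvinfer1::IExecutionContext *ctx,
                      int height, int width, void *bindings[3])
{
    ctx->setBindingDimensions(engine->getBindingIndex("image"),
                              nvinfer1::Dims3{height, width, 3});
    ctx->setBindingDimensions(engine->getBindingIndex("mask"),
                              nvinfer1::Dims2{height, width});

    for (int i = 0; i < engine->getNbBindings(); ++i) {
        nvinfer1::Dims d = ctx->getBindingDimensions(i);   // fully resolved shape
        size_t count = 1;
        for (int j = 0; j < d.nbDims; ++j) count *= d.d[j];
        cudaMalloc(&bindings[i], count * sizeof(float));   // assuming FP32 I/O bindings
    }
}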

Could you create a new topic for the new issue? We would like one topic to track one issue.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.