DeepStream custom layer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2
• Issue Type (questions, new requirements, bugs): questions

I want to run a custom model on DeepStream. Can I also add a custom layer (addPluginV2)?

If so, please tell me how.

Thank you
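
For context, addPluginV2 is TensorRT's INetworkDefinition call for inserting a custom layer while building a network with the C++ API. A minimal sketch of the usual pattern, assuming a plugin was already registered with the plugin registry (the plugin name "MyCustomLayer", its version, and the variable names here are illustrative placeholders, not names from this thread):

```cpp
// Minimal sketch: inserting a registered custom plugin into a TensorRT
// network via addPluginV2(). "MyCustomLayer" and prevOutput are
// illustrative placeholders.
#include "NvInfer.h"

nvinfer1::ILayer* addCustomLayer(nvinfer1::INetworkDefinition* network,
                                 nvinfer1::ITensor* prevOutput)
{
    // Look up the creator that REGISTER_TENSORRT_PLUGIN (or
    // initLibNvInferPlugins) registered under this name/version.
    auto* creator = getPluginRegistry()->getPluginCreator("MyCustomLayer", "1");
    if (!creator)
        return nullptr;

    // No plugin fields in this sketch; real plugins usually take parameters.
    nvinfer1::PluginFieldCollection fc{0, nullptr};
    nvinfer1::IPluginV2* plugin = creator->createPlugin("my_custom_layer", &fc);

    // Wire the previous layer's output tensor into the plugin layer.
    nvinfer1::ITensor* inputs[] = {prevOutput};
    return network->addPluginV2(inputs, 1, *plugin);
}
```

In a DeepStream pipeline this network-building step would typically live inside the custom engine-building library that gst-nvinfer loads, rather than in the application itself.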

What kind of custom model? How many input layers and how many output layers?

gst-nvinfer supports the following types of models; which type is your model? (Each type is selected by a different key in the nvinfer config file, as sketched after the list.)

  • Caffe Model and Caffe Prototxt
  • ONNX
  • UFF file
  • TAO Encoded Model and Key
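
For orientation, a rough sketch of how each model type maps onto standard gst-nvinfer config keys; the file names are placeholders, and only one model type would be active at a time:

```
[property]
# Caffe: weights plus prototxt
model-file=model.caffemodel
proto-file=model.prototxt

# ONNX: a single file
# onnx-file=model.onnx

# UFF: file plus input-binding details
# uff-file=model.uff
# uff-input-blob-name=input_1
# infer-dims=3;544;960

# TAO: encoded model plus decryption key
# tlt-encoded-model=model.etlt
# tlt-model-key=nvidia_tlt
```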

Actually, the model I made is YOLOv5s6, based on C++ and TensorRT, with yolov5s6.cfg and yolov5s6.weights files.
So, if possible, I want to run inference from DeepStream by calling a .dll/.h library. Can I do that?

There is a sample for YOLOv3 cfg and weights files in /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo. Please read the README and source code there.
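
If your cfg/weights pair follows the Darknet-style layout that sample expects, its config shows the general pattern: gst-nvinfer is pointed at the network files and at a custom library that builds the TensorRT engine, including any custom layers. A trimmed sketch modelled on the sample's config_infer_primary_yoloV3.txt, with the file names swapped for placeholders (the YOLOv3 engine builder and output parser would still need adapting for yolov5s6):

```
[property]
# Darknet-style network description and weights (placeholders)
custom-network-config=yolov5s6.cfg
model-file=yolov5s6.weights

# Custom library that creates the TensorRT engine and implements
# any uncommon layers declared in the cfg
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
engine-create-func-name=NvDsInferYoloCudaEngineGet

# Custom parser for the detector's output layers
parse-bbox-func-name=NvDsInferParseCustomYoloV3
```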

I’d appreciate your kind answer.

Can I run DeepStream in Visual Studio (.cpp) or PyCharm (.py) with my custom cfg and weights files?

My cfg has a custom layer, so the layer name is not a common one.

There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one. Thanks.

No. There is no DeepStream for Windows OS.
