Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): Jetson
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only): 5.1
• TensorRT Version: 8.5.2
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — which plugin or which sample application — and the function description.)
I want to run a custom model on DeepStream. Can I also add a custom layer (addPluginV2)?
Actually, the model I built is YOLOv5s6, implemented in C++ with TensorRT, using yolov5s6.cfg and yolov5s6.weights.
If that is possible, I would like to run inference from DeepStream by calling my .dll and .h. Can I do that?
There is a sample using YOLOv3 cfg and weights files in /opt/nvidia/deepstream/deepstream/sources/objectDetector_Yolo. Please read the README and the source code there.
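For reference, that sample wires the Darknet cfg/weights and a custom library (containing the bounding-box parser and engine builder, which is also where custom TensorRT layers live) into the pipeline through the nvinfer config file. A sketch of the relevant `[property]` keys, based on the sample's `config_infer_primary_yoloV3.txt` (you would swap in your yolov5s6 files and your own parser/library names):

```ini
[property]
# Darknet-style network definition and weights
custom-network-config=yolov3.cfg
model-file=yolov3.weights
# Shared library built from the sample's nvdsinfer_custom_impl_Yolo sources;
# custom layers and parsing code are compiled into this .so
custom-lib-path=nvdsinfer_custom_impl_Yolo/libnvdsinfer_custom_impl_Yolo.so
# Exported functions the library provides
parse-bbox-func-name=NvDsInferParseCustomYoloV3
engine-create-func-name=NvDsInferYoloCudaEngineGet
```

Note that on Jetson (Linux) the custom code is loaded as a .so via `custom-lib-path`, not as a .dll; a Windows .dll cannot be loaded there, so the C++/TensorRT code would need to be rebuilt as a Linux shared library.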
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.