Inference with custom model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): RTX 3060
• DeepStream Version: 6.1
• JetPack Version (valid for Jetson only): N/A
• TensorRT Version: 8.4.3-1+cuda11.6
• NVIDIA GPU Driver Version (valid for GPU only): 510.85.02
• Issue Type (questions, new requirements, bugs): questions

Say I have trained a custom model. How can I run inference with my model in DeepStream 6.1 on multiple RTSP streams, and save a frame from each stream whenever my model detects objects, using Python?

Thanks in advance

You can refer to the link below:
https://github.com/NVIDIA-AI-IOT/deepstream_python_apps/blob/master/apps/deepstream-imagedata-multistream/deepstream_imagedata-multistream.py
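
For reference, the core of that sample is a pad probe (attached, e.g., to the tiler's sink pad) that maps each batched frame into a NumPy array and saves it when detections are present. A minimal sketch of that probe follows; the output paths and folder layout are illustrative, and it assumes the pipeline converts frames to RGBA before this element (and uses CUDA unified memory on dGPU), as the sample does:

```python
import os
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst
import numpy as np
import cv2
import pyds

def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK
    # Batch metadata holds one frame-meta entry per source in the batch
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        if frame_meta.num_obj_meta > 0:
            # Map the RGBA frame into a NumPy array and copy it, since the
            # underlying buffer is owned by the pipeline.
            n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer),
                                                frame_meta.batch_id)
            frame_copy = np.array(n_frame, copy=True, order='C')
            frame_copy = cv2.cvtColor(frame_copy, cv2.COLOR_RGBA2BGRA)
            # One folder per RTSP source, keyed by the source pad index
            # (folder layout is illustrative)
            out_dir = f"frames/stream_{frame_meta.pad_index}"
            os.makedirs(out_dir, exist_ok=True)
            cv2.imwrite(f"{out_dir}/frame_{frame_meta.frame_num}.jpg",
                        frame_copy)
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```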

Is there a way to do it with a custom model? Thank you!

You can refer to the link below:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_using_custom_model.html
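
Broadly, pointing the sample at your own model comes down to editing the nvinfer config file it loads. Below is a sketch assuming an ONNX detector; all file names and values are placeholders, and a model whose outputs nvinfer cannot parse natively also needs a custom bounding-box parser supplied via parse-bbox-func-name and custom-lib-path:

```
[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
onnx-file=my_custom_model.onnx
model-engine-file=my_custom_model.onnx_b4_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=4
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=2
num-detected-classes=3
interval=0
gie-unique-id=1
# Only needed for non-standard output layers:
# parse-bbox-func-name=NvDsInferParseCustomMyModel
# custom-lib-path=/path/to/libnvdsinfer_custom_impl.so
```

If the engine file does not exist yet, nvinfer builds it from the ONNX file on first run, so the first launch takes noticeably longer.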

Thank you!
