How to use YOLOv8 in AI NVR on Jetson Orin Nano with a self-trained YOLOv8 model

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Orin Nano
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.1
• TensorRT Version: 10.3

Do you mean you want to use YOLOv8 with JPS? It is already included in DeepStream Perception: DeepStream Perception — Jetson Platform Services documentation

I mean running YOLOv8 inside the DeepStream container of AI NVR and showing the detections in VST.

You can find “compose_agx_yolov8s.yaml” in the JPS package “ai_nvr-2.0.1.tar.gz”. You can refer to it to modify “compose_nano.yaml” so that it supports yolov8s.
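For illustration only, such a change could be expressed as a compose override merged on top of compose_nano.yaml. This is a sketch: the service name, mount paths, and environment variable below are assumptions made for the example, not values copied from the actual JPS compose files, so check them against the files in ai_nvr-2.0.1.tar.gz.

```yaml
# compose.override.yaml -- hypothetical sketch, NOT taken from ai_nvr-2.0.1.tar.gz.
# Merge it with the base file, e.g.:
#   docker compose -f compose_nano.yaml -f compose.override.yaml up -d
services:
  deepstream:                               # assumed name of the DeepStream/SDR service
    volumes:
      # Mount a custom YOLOv8 nvinfer config, labels, and ONNX model into the container.
      - ./yolov8/config_infer_primary_yoloV8.txt:/ds-config/config_infer_primary_yoloV8.txt:ro
      - ./yolov8/labels.txt:/ds-config/labels.txt:ro
      - ./yolov8/yolov8s.onnx:/ds-config/yolov8s.onnx:ro
    environment:
      # Assumed switch; check how the real compose file selects the inference config.
      - INFER_CONFIG_PATH=/ds-config/config_infer_primary_yoloV8.txt
```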

No, this won’t work. This will cause the container to fail to start.

Orin Nano does not have a DLA (see the module comparison: NVIDIA Jetson AGX Orin), and the JPS YOLOv8 configuration uses the DLA for inference. So you first need to get your trained YOLOv8 model running with a DeepStream app on Orin Nano, and then port it to the JPS AI NVR. Alternatively, use PeopleNet on Orin Nano with the default JPS release.
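As a starting point, a GPU-only nvinfer configuration (no DLA keys) for a custom-trained YOLOv8 ONNX export could look roughly like the sketch below. The file names are placeholders, and the parse-bbox-func-name / custom-lib-path entries assume you build a community YOLO output parser for DeepStream; they are not shipped with AI NVR.

```
# config_infer_primary_yoloV8.txt -- illustrative sketch for Orin Nano (GPU only, no DLA keys)
[property]
gpu-id=0
net-scale-factor=0.0039215686274509803      # 1/255, assuming the model expects 0..1 input
model-color-format=0
onnx-file=yolov8s.onnx                       # your exported, trained model
model-engine-file=yolov8s.onnx_b1_gpu0_fp16.engine
labelfile-path=labels.txt
batch-size=1
network-mode=2                               # 0=FP32, 1=INT8, 2=FP16
num-detected-classes=80                      # change to match your training
interval=0
gie-unique-id=1
process-mode=1
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
symmetric-padding=1
# The parser function/library below assume a custom YOLO bounding-box parser
# (e.g. built from a community DeepStream-YOLO implementation); adjust the path to your build.
parse-bbox-func-name=NvDsInferParseYolo
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/libnvdsinfer_custom_impl_Yolo.so

[class-attrs-all]
nms-iou-threshold=0.45
pre-cluster-threshold=0.25
topk=300
```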

I want to use YOLOv8 in the DeepStream container of AI NVR. How can I do that in detail?

You can refer to deepstream-test1 for running a detection model: C/C++ Sample Apps Source Details — DeepStream documentation
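For reference, once a YOLOv8 nvinfer config like the sketch above works, one way to try it with deepstream-test1 is roughly the following. The paths are the DeepStream 7.1 defaults and the CUDA version is the one bundled with JetPack 6.1; verify both on your device.

```
# Sketch only: verify paths and versions on your own device.
cd /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test1

# Point the sample at your YOLOv8 settings by replacing the contents of
# dstest1_pgie_config.txt with the YOLOv8 nvinfer configuration, then build:
export CUDA_VER=12.6      # CUDA version bundled with JetPack 6.1; confirm with "nvcc --version"
make

# Run on one of the sample streams (or your own H.264 elementary stream):
./deepstream-test1-app /opt/nvidia/deepstream/deepstream-7.1/samples/streams/sample_720p.h264
```

After the model runs correctly in a standalone DeepStream app on the Orin Nano, you can then move the same config and generated engine into the AI NVR DeepStream container as discussed above.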