Regarding the use of the YOLOv8s model combined with the JPS Docker container

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson AGX Orin 64GB
• DeepStream Version: 7.1
• JetPack Version (valid for Jetson only): 6.1
• TensorRT Version: 10.3
• NVIDIA GPU Driver Version (valid for GPU only): 540.4.0
• Issue Type (questions, new requirements, bugs): questions

I would like to ask two questions about using the YOLOv8s model with the JPS Docker container:

  1. The size of yolov8s_DAT_noqdq.onnx differs from that of the standard .pt model I downloaded from the official website.
  2. The program cannot generate an engine with the name yolov8s_DAT_noqdq_DLA.engine; the engine files it produces usually carry a suffix such as b8_dla0_int8.

Yes, we optimized the YOLOv8s model. Please use the YOLOv8s model from the JPS container.
You can rename the engine file.
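For example, a minimal sketch (the model-engine-file key follows the standard DeepStream nvinfer configuration format; the generated engine name and the path below are illustrative placeholders, not the exact files from your setup):

$ mv model_b8_dla0_int8.engine yolov8s_DAT_noqdq_DLA.engine

Then point the nvinfer configuration at the renamed file:

model-engine-file=/path/to/models/yolov8s_Detector/yolov8s_DAT_noqdq_DLA.engine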

deepstream-yolov8s-detect-dla0-v103.log (36.6 KB)

Setting min object dimensions as 16x16 instead of 1x1 to support VIC compute mode.
0:00:00.739920824 19 0xaaab12107d90 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/configs/…/models/yolov8s_Detector/yolov8s_DAT_noqdq_DLA.engine
0:00:00.739983959 19 0xaaab12107d90 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::checkBackendParams() <nvdsinfer_context_impl.cpp:2026> [UID = 1]: Backend has maxBatchSize 1 whereas 8 has been requested
0:00:00.739997047 19 0xaaab12107d90 WARN nvinfer gstnvinfer.cpp:681:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/configs/…/models/yolov8s_Detector/yolov8s_DAT_noqdq_DLA.engine failed to match config params, trying rebuild
0:00:00.749479482 19 0xaaab12107d90 INFO nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

deserialized backend context :/configs/…/models/yolov8s_Detector/yolov8s_DAT_noqdq_DLA.engine failed to match config params, trying rebuild

Please change the batch size of YOLOv8s based on the guide: DeepStream Perception — Jetson Platform Services documentation
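The warning above means the deserialized engine was built with maxBatchSize 1 while the pipeline requested 8, so either rebuild the engine for the larger batch or lower the requested batch size. A minimal sketch of the latter, assuming the standard DeepStream nvinfer configuration format:

In config_infer_primary_yoloV8.txt:

[property]
batch-size=1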

Can YOLOv8 use INT8?
How is it used?

YOLOv8s on DLA already uses INT8, which already gives the best performance.

May I ask how to modify the configuration file?
Which file should be used as the calibration file?

The default configuration file config_infer_primary_yoloV8.txt uses FP16.
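To switch it to INT8, a minimal sketch (the property names follow the standard DeepStream nvinfer configuration format; the calibration file name is taken from the trtexec command below):

[property]
# network-mode: 0=FP32, 1=INT8, 2=FP16
network-mode=1
int8-calib-file=yolov8s_DAT_precision_config_calib.cache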

The calibration file is in the JPS Docker container. You can rebuild the engine with the calibration file using the commands below:

# Start the JPS DeepStream container, mounting the DeepStream config directory
$ docker run -it --rm --net=host --runtime nvidia -w /root -v /tmp/.X11-unix/:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v /mnt/share/jps2/ai_nvr/config/deepstream/:/ds-config-files/ nvcr.io/nvidia/jps/deepstream:7.1-public-v1
# Back up the shipped engine before rebuilding it
$ cd yolov8s-files/
$ cp yolov8s_DAT_noqdq_DLA.engine yolov8s_DAT_noqdq_DLA.engine-bak
# Build an INT8 engine on DLA core 0 using the shipped calibration cache;
# the listed post-processing layers are pinned to FP16
$ trtexec --onnx=yolov8s_DAT_noqdq.onnx --fp16 --int8 --verbose \
    --calib=yolov8s_DAT_precision_config_calib.cache \
    --precisionConstraints=obey \
    --layerPrecisions=Split_36:fp16,Reshape_37:fp16,Transpose_38:fp16,Softmax_39:fp16,Conv_41:fp16,Sub_64:fp16,Concat_65:fp16,Mul_67:fp16,Sigmoid_68:fp16,Concat_69:fp16 \
    --saveEngine=yolov8s_DAT_noqdq_DLA.engine --useDLACore=0 --allowGPUFallback

May I ask how to generate yolov8s_DAT_precision_config_calib.cache?

I think this guide may address it: DeepStream-Yolo/docs/INT8Calibration.md at master · marcoslucianops/DeepStream-Yolo · GitHub
However, I can't open it at the moment.
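From memory of that guide, the flow is roughly the sketch below (the INT8_CALIB_* environment variable names and the calib.table output come from the DeepStream-Yolo repository and should be verified against the doc; the image path is illustrative):

# List a few hundred representative images for calibration (path is illustrative)
$ ls /path/to/calibration/images/*.jpg > calibration.txt
# Environment variables read by the DeepStream-Yolo custom library
$ export INT8_CALIB_IMG_PATH=calibration.txt
$ export INT8_CALIB_BATCH_SIZE=1

Then set network-mode=1 and int8-calib-file=calib.table in the nvinfer configuration; the calibration cache is written during the first engine build.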

Please use the model and the calibration file in the JPS DeepStream container:

(using the same docker run and trtexec commands shown above)

Hi,

I was trying to configure ai_nvr using DeepStream 7.1 and YOLOv8 with multiple streams. I followed the guide you suggested, but I wasn't able to complete it: the deepstream_test5 build failed due to missing dependencies. I tried installing the dependencies, but I couldn't resolve the error about cuda_runtime_api.h.

Additionally, I noticed that l4t-repo.nvidia.com:80 is unreachable every time I try to install something with apt.

I then repeated the process using the “deepstream:7.1-triton-multiarch” image, and it built successfully.

So, my question is: Do we need to copy the yolov8s-files folder into the Triton container from the public container, or is there a better way? Maybe I’m missing something.
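If copying is the way to go, I assume something like the sketch below would work (docker create and docker cp are standard Docker commands; the /root/yolov8s-files path is my guess based on the -w /root working directory used above):

# Create a stopped container from the public image and copy the folder out
$ docker create --name jps-tmp nvcr.io/nvidia/jps/deepstream:7.1-public-v1
$ docker cp jps-tmp:/root/yolov8s-files ./yolov8s-files
$ docker rm jps-tmp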

Thanks!

Setup:

  • Hardware Platform: Jetson Orin NX 16GB
  • JetPack 6.1 (L4T 36.4)
  • DeepStream Version 7.1
  • TensorRT Version 10.3.0.30

Please submit a new topic for your question. Thanks!