AI NVR configuration issues with DeepStream 7.1 and YOLOv8 on Jetson Orin NX

Hi,

I was trying to configure ai_nvr using DeepStream 7.1 and YOLOv8 with multiple streams. I followed the guide you suggested, but I wasn’t able to complete it because the deepstream_test5 build failed due to missing dependencies. I tried fixing it by installing the dependencies, but I couldn’t find a solution for the cuda_runtime_api.h error.

Additionally, I noticed that l4t-repo.nvidia.com:80 fails to resolve every time I try to install something with apt.

I then repeated the process using the “deepstream:7.1-triton-multiarch” image, and it built successfully.

So, my question is: Do we need to copy the yolov8s-files folder into the Triton container from the public container, or is there a better way? Maybe I’m missing something.

Thanks!

Setup:

  • Hardware Platform: Jetson Orin NX 16GB
  • JetPack 6.1 (L4T 36.4)
  • DeepStream Version 7.1
  • TensorRT Version 10.3.0.30

Can you build deepstream_test5 in deepstream:7.1-triton-multiarch and then copy the binary into the JPS deepstreamer container?

Can you share some logs to show the issue?

Well, the problem I mentioned occurred while following the steps here: YOLOv8s Deployment guide

After editing deepstream_app.c, when I ran make for deepstream-test5 I got:

root@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5# export CUDA_VER=12.6
root@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5# make
Package gstreamer-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gstreamer-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-1.0' found
Package gstreamer-video-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gstreamer-video-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-video-1.0' found
Package json-glib-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `json-glib-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'json-glib-1.0' found
Package gstreamer-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gstreamer-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-1.0' found
Package gstreamer-video-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `gstreamer-video-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'gstreamer-video-1.0' found
Package json-glib-1.0 was not found in the pkg-config search path.
Perhaps you should add the directory containing `json-glib-1.0.pc'
to the PKG_CONFIG_PATH environment variable
No package 'json-glib-1.0' found
cc -c -o deepstream_test5_app_main.o -DPLATFORM_TEGRA -I../../apps-common/includes -I../../../includes -I../deepstream-app/ -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=5 -I /usr/local/cuda-12.6/include deepstream_test5_app_main.c
deepstream_test5_app_main.c:13:10: fatal error: gst/gst.h: No such file or directory
   13 | #include <gst/gst.h>
      |          ^~~~~~~~~~~
compilation terminated.
make: *** [Makefile:61: deepstream_test5_app_main.o] Error 1

So I tried to solve the problem by installing the recommended dependencies with these steps:

  1. apt update && apt upgrade -y
  2. apt update --fix-missing
  3. apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libgstrtspserver-1.0-dev libx11-dev libjson-glib-dev libyaml-cpp-dev
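After installing these, the pkg-config errors went away. To confirm the development packages are visible before re-running make, a quick check (my addition, not from the guide) is:

$ pkg-config --modversion gstreamer-1.0 gstreamer-video-1.0 json-glib-1.0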

root@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5# make

cc -c -o deepstream_test5_app_main.o -DPLATFORM_TEGRA -I../../apps-common/includes -I../../../includes -I../deepstream-app/ -DDS_VERSION_MINOR=0 -DDS_VERSION_MAJOR=5 -I /usr/local/cuda-12.6/include -pthread -I/usr/include/gstreamer-1.0 -I/usr/include/orc-0.4 -I/usr/include/gstreamer-1.0 -I/usr/include/aarch64-linux-gnu -I/usr/include/json-glib-1.0 -I/usr/include/libmount -I/usr/include/blkid -I/usr/include/glib-2.0 -I/usr/lib/aarch64-linux-gnu/glib-2.0/include deepstream_test5_app_main.c

deepstream_test5_app_main.c:33:10: fatal error: cuda_runtime_api.h: No such file or directory
   33 | #include <cuda_runtime_api.h>
      |          ^~~~~~~~~~~~~~~~~~~~
compilation terminated.
make: *** [Makefile:61: deepstream_test5_app_main.o] Error 1

I also tried to resolve it manually using the packages from the repository index, but with no success: every time I tried to install a new CUDA-related package, it failed due to further missing dependencies.
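For reference, what I was ultimately after are the CUDA headers that the Makefile expects under /usr/local/cuda-12.6/include. In a container where the NVIDIA apt repos are reachable (they weren't in my case, see below), the check and fix would be roughly:

$ ls /usr/local/cuda-12.6/include/cuda_runtime_api.h    # missing in deepstream:7.1-public
$ apt-get install -y cuda-toolkit-12-6                  # assumed package name for the JetPack 6.x CUDA toolkit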

Regarding l4t-repo.nvidia.com:80:

root@ubuntu:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5# apt update --fix-missing
Ign:1 http://l4t-repo.nvidia.com/common r36.4 InRelease
Hit:2 http://ports.ubuntu.com/ubuntu-ports jammy InRelease
Hit:3 http://ports.ubuntu.com/ubuntu-ports jammy-updates InRelease
Hit:4 http://ports.ubuntu.com/ubuntu-ports jammy-backports InRelease
Ign:1 http://l4t-repo.nvidia.com/common r36.4 InRelease
Hit:5 http://ports.ubuntu.com/ubuntu-ports jammy-security InRelease
Ign:1 http://l4t-repo.nvidia.com/common r36.4 InRelease
Err:1 http://l4t-repo.nvidia.com/common r36.4 InRelease
  Something wicked happened resolving 'l4t-repo.nvidia.com:80' (-5 - No address associated with hostname)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
46 packages can be upgraded. Run 'apt list --upgradable' to see them.
W: Failed to fetch http://l4t-repo.nvidia.com/common/dists/r36.4/InRelease  Something wicked happened resolving 'l4t-repo.nvidia.com:80' (-5 - No address associated with hostname)
W: Some index files failed to download. They have been ignored, or old ones used instead.
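l4t-repo.nvidia.com appears to be reachable only from NVIDIA's image-build infrastructure, so a plausible workaround (my assumption, not an official fix) is to disable that repo entry so apt stops failing on it:

# the repo entry typically lives under /etc/apt/sources.list.d/ (exact filename may vary)
$ sed -i 's|^deb.*l4t-repo.nvidia.com.*|# &|' /etc/apt/sources.list.d/*.list
$ apt update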

The solution I found was to create a Dockerfile that uses the yolov8s-files and tmp99 folders taken from nvcr.io/nvidia/jps/deepstream:7.1-public-v:

FROM nvcr.io/nvidia/deepstream:7.1-triton-multiarch

# Environment variables
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility,video

# Install the additional dependencies needed for the build
RUN apt-get update && apt-get install -y \
    vim \
    build-essential \
    tar \
    && apt-get clean

# Copy the required archives and extract them at the container root
COPY config/deepstream/yolov8s/yolov8s-files.tar /
COPY config/deepstream/yolov8s/tmp99.tar /

RUN tar -xvf /yolov8s-files.tar -C / && \
    tar -xvf /tmp99.tar -C / && \
    rm /yolov8s-files.tar /tmp99.tar

# Comment out lines 1328-1333 of deepstream_app.c
RUN sed -i '1328,1333s/^/\/\/ /' \
    /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-app/deepstream_app.c

# Build deepstream_test5
ENV CUDA_VER=12.6
RUN cd /opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5 && \
    make

# Set the container entry point (optional, depends on final use)
CMD ["/bin/bash"]
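To build and run the image, something along these lines should work (ds71-yolov8 is just a tag I picked):

$ docker build -t ds71-yolov8 .
$ docker run -it --rm --runtime nvidia --network host ds71-yolov8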

I also did what you said. It works and gives the same results as the Dockerfile based on nvcr.io/nvidia/deepstream:7.1-triton-multiarch.

Furthermore, I found some interesting things I would like to comment on.

Using the default config_infer_primary_yoloV8.txt, after editing max-batch-size in the [source-list] section of yolov8s-ds-config_nx16.txt, I got very low FPS with 2 sources: down from 25 fps (1 source with everything on default settings) to 15 fps. I also noticed that the engine file was rebuilt.
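For reference, the edit was along these lines (a sketch of the relevant keys; I am quoting from memory, so the neighbouring keys in the shipped yolov8s-ds-config_nx16.txt may differ):

[source-list]
use-nvmultiurisrcbin=1
max-batch-size=2

The rebuild showed up in the startup log: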

NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2201> [UID = 1]: deserialized backend context :/yolov8s-files/yolov8s_DAT_noqdq_DLA.engine failed to match config params, trying rebuild
0:00:01.804621792    19 0xaaaab9ab0ca0 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2106> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: Using an engine plan file across different models of devices is not recommended and is likely to affect performance or even cause errors.

Active sources : 2
Fri Jan  3 12:33:30 2025
**PERF:  
LAVALLE[fe6fee95-ecf8-42a3-b80c-d0830d443b3f] 15.85 (9.79)	CORRIENTES[4e159252-81d7-427e-9c60-48878aa7ae3c] 15.85 (9.34)

I solved this by editing config_infer_primary_yoloV8.txt, changing:

  • network-mode from 2 to 1, that is, from FP16 to INT8
  • enable-dla from 1 to 0.

With those changes the throughput recovered:
Active sources : 2
Thu Jan  2 19:34:10 2025
**PERF:  
LAVALLE[fe6fee95-ecf8-42a3-b80c-d0830d443b3f] 24.87 (21.92)	CORRIENTES[4e159252-81d7-427e-9c60-48878aa7ae3c] 25.07 (22.27)
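
For clarity, the relevant lines in the [property] section of config_infer_primary_yoloV8.txt after the edit look like this (only these two keys changed):

[property]
network-mode=1
enable-dla=0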

What caught my attention is that DLA0 is working and the engine stopped being rebuilt, and I don’t really know why.

I’ve uploaded the full log of every step.

log.txt (91.2 KB)

Please don’t modify this file. YOLOv8s on DLA only supports batch size 1 currently. But you can modify the source code to support multiple streams based on the doc: DeepStream Perception — Jetson Platform Services documentation

Sorry, I don’t understand. I didn’t modify the batch size. I followed the doc: DeepStream Perception — Jetson Platform Services documentation, but it isn’t possible with the deepstream:7.1-public image, as I described above:

However, I sorted it out with the triton-multiarch container image, but the results were poor:

My suspicion was that the problem was due to the engine file being rebuilt instead of reused, so I did this:

And the engine model was correctly loaded (STEP 7 log.txt file):

0:00:02.008779534    35 0xaaab13df6a70 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:2092> [UID = 1]: deserialized trt engine from :/yolov8s-files/yolov8s_DAT_noqdq_DLA.engine
0:00:02.008846864    35 0xaaab13df6a70 INFO                 nvinfer gstnvinfer.cpp:684:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2195> [UID = 1]: Use deserialized engine model: /yolov8s-files/yolov8s_DAT_noqdq_DLA.engine
0:00:02.039590710    35 0xaaab13df6a70 INFO                 nvinfer gstnvinfer_impl.cpp:343:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/ds-config-files/yolov8s/config_infer_primary_yoloV8.txt sucessfully

I verified that DLA 0 is active because the engine model was created for DLA. However, it is mandatory to edit config_infer_primary_yoloV8.txt and change network-mode from 2 to 1, that is, from FP16 to INT8, to load the model correctly.

If you need to generate the TensorRT engine, please use the command line below in the JPS 2.0 deepstream docker container:
$ cd /yolov8s-files/
$ cp yolov8s_DAT_noqdq_DLA.engine yolov8s_DAT_noqdq_DLA.engine-bak
$ trtexec --onnx=yolov8s_DAT_noqdq.onnx --fp16 --int8 --verbose --calib=yolov8s_DAT_precision_config_calib.cache --precisionConstraints=obey --layerPrecisions=Split_36:fp16,Reshape_37:fp16,Transpose_38:fp16,Softmax_39:fp16,Conv_41:fp16,Sub_64:fp16,Concat_65:fp16,Mul_67:fp16,Sigmoid_68:fp16,Concat_69:fp16 --saveEngine=yolov8s_DAT_noqdq_DLA.engine --useDLACore=0 --allowGPUFallback
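
As a quick sanity check (my addition, not part of the official instructions), the regenerated engine can be test-loaded on DLA before wiring it into DeepStream:

$ trtexec --loadEngine=yolov8s_DAT_noqdq_DLA.engine --useDLACore=0 --allowGPUFallback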

The solution for me was:

  1. Follow the guide DeepStream Perception - Jetson Platform Services, but build the deepstream-test5 sample app outside deepstream:7.1-public; I used nvcr.io/nvidia/deepstream:7.1-triton-multiarch.
  2. You can either do everything in that container image, taking the yolov8s-files from the first image, or copy the deepstream-test5 binary into the public image (see the sketch after this list).
  3. Edit config_infer_primary_yoloV8.txt and change network-mode from 2 to 1 (FP16 to INT8) so the model loads correctly.
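
For step 2, the binary can be pulled out of the build image and dropped into the other container with docker create/cp; container and tag names here are examples only:

$ docker create --name ds-build ds71-yolov8
$ docker cp ds-build:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5/deepstream-test5-app .
$ docker rm ds-build
$ docker cp deepstream-test5-app <public-container>:/opt/nvidia/deepstream/deepstream-7.1/sources/apps/sample_apps/deepstream-test5/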

Please use the command line below to generate the TensorRT engine, as several layers need to run in FP16 to ensure accuracy:

$ trtexec --onnx=yolov8s_DAT_noqdq.onnx --fp16 --int8 --verbose --calib=yolov8s_DAT_precision_config_calib.cache --precisionConstraints=obey --layerPrecisions=Split_36:fp16,Reshape_37:fp16,Transpose_38:fp16,Softmax_39:fp16,Conv_41:fp16,Sub_64:fp16,Concat_65:fp16,Mul_67:fp16,Sigmoid_68:fp16,Concat_69:fp16 --saveEngine=yolov8s_DAT_noqdq_DLA.engine --useDLACore=0 --allowGPUFallback
