Migrating pipeline from orin/deepstream 7.1 to dgx/deepstream 8.0

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
dgx spark - NVIDIA GB10
• DeepStream Version
deepstream 8.0

• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
NVIDIA-SMI 580.95.05, Driver Version: 580.95.05, CUDA Version: 13.0
• Issue Type( questions, new requirements, bugs)
new requirements

• How to reproduce the issue? (This is for bugs. Including which sample app is used, the configuration files content, the command line used and other details for reproducing)

created a docker container FROM nvcr.io/nvidia/deepstream:8.0-triton-dgx-spark

this pipeline, which worked on orin/ds 7.1,

! nvinfer [ … ]
! nvtracker [ … ]
! queue max-size-buffers=0 leaky=2
! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! video/x-raw(memory:NVMM),format=NV12
! queue max-size-buffers=0 leaky=2
! nvjpegenc quality=80
! fakesink async=true name=jpeg_tiler_probe

was modified to

! nvinfer [ … ]
! nvtracker [ … ]
! queue max-size-buffers=0 leaky=2
! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! video/x-raw,format=I420
! queue max-size-buffers=0 leaky=2
! nvjpegenc quality=80
! fakesink async=true name=jpeg_tiler_probe

to make it work, but now the jpeg_tiler_probe probe gives this error

l_frame = batch_meta.frame_meta_list
AttributeError: 'NoneType' object has no attribute 'frame_meta_list'

GPT suggests splitting the pipeline like this:

… ! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! tee name=t

t. ! queue max-size-buffers=0 leaky=2
! fakesink async=true name=meta_sink

t. ! queue max-size-buffers=0 leaky=2
! nvvideoconvert
! video/x-raw,format=I420
! nvjpegenc quality=80
! appsink name=jpeg_sink emit-signals=true sync=false max-buffers=1 drop=true

is there a backward-compatible way to do this that avoids two probes (one for the frame and one for the detections)?

thank you, William

This problem may be caused by an incorrect Python bindings installation. Could you share the command line used to start the container, and how you installed pyds?

We generally recommend adding different probes on different pads because batchmeta/framemeta/objectmeta, etc., are modified in the pipeline, and adding only one probe might cause you to miss some information.
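For reference, a single probe on the tiler src pad that reads both frame and object metadata, guarded against a missing batch meta, might look like the sketch below. It follows the pattern used in the deepstream_python_apps samples; the exact pyds calls are assumptions against pyds 1.2.2 and should be checked against the samples for your version.

```python
def iter_meta_list(head, cast):
    """Walk a pyds meta list safely; pyds raises StopIteration at the end."""
    node = head
    while node is not None:
        yield cast(node.data)
        try:
            node = node.next
        except StopIteration:
            break

def tiler_src_probe(pad, info, user_data):
    # imports kept local so the module loads even without DeepStream installed
    import pyds
    from gi.repository import Gst

    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    if batch_meta is None:  # guards against the AttributeError seen above
        return Gst.PadProbeReturn.OK
    for frame_meta in iter_meta_list(batch_meta.frame_meta_list,
                                     pyds.NvDsFrameMeta.cast):
        for obj_meta in iter_meta_list(frame_meta.obj_meta_list,
                                       pyds.NvDsObjectMeta.cast):
            pass  # e.g. obj_meta.class_id, obj_meta.rect_params, ...
    return Gst.PadProbeReturn.OK
```

Note that `batch_meta` coming back as `None` usually means the buffer carries no NvDsBatchMeta at that pad (or the pyds build is wrong), so the guard avoids the crash but does not fix the root cause.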

hi @junshengy

the python bindings are the latest version for deepstream 8.0, so 1.2.2

this is the dockerfile i used

FROM nvcr.io/nvidia/deepstream:8.0-triton-dgx-spark

ENV DEBIAN_FRONTEND=noninteractive
ENV DS_VERSION=8.0.0

RUN /opt/nvidia/deepstream/deepstream/user_additional_install.sh
RUN /opt/nvidia/deepstream/deepstream/user_deepstream_python_apps_install.sh --version 1.2.2
RUN pip3 install /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/pyds-1.2.2-cp312-cp312-linux_aarch64.whl

the container was started with this command

docker run --runtime nvidia -v ./:/app -it nvcr.io/nvidia/deepstream:8.0-triton-dgx-spark bash

i only need one probe for the tiler + detections, at the end of the pipeline, so i don’t think i need other probes


if you compare the 2 pipelines you notice that

! video/x-raw(memory:NVMM),format=NV12

was changed to

! video/x-raw,format=I420

because i inspected nvjpegenc in deepstream 8.0 for dgx spark and

Pad Templates:

SINK template: 'sink'

Availability: Always

Capabilities:

  video/x-raw(memory:CUDAMemory)
             format: { (string)I420, (string)Y42B, (string)Y444 }
              width: [ 1, 2147483647 ]
             height: [ 1, 2147483647 ]
  video/x-raw
             format: { (string)I420, (string)Y42B, (string)Y444 }
              width: [ 1, 2147483647 ]

the input memory type changed from NVMM to CUDAMemory (so i removed memory:NVMM from the caps) and the supported format changed to I420 (instead of NV12 on orin/ds 7.1)

could this be the reason the python bindings cannot find batch_meta.frame_meta_list?

thank you for your help, William

*cp312-linux_aarch64.whl is a pre-compiled binary for Jetson, which is incompatible with Spark (an SBSA device). It's best to compile the whl file from source, like this:

ENV CMAKE_ARGS="-DIS_SBSA=on"
RUN /opt/nvidia/deepstream/deepstream/user_deepstream_python_apps_install.sh -b --version 1.2.2

Then create the container with a command line like this; these parameters affect the nvidia-container-toolkit:

docker run -it --rm --runtime=nvidia --network=host \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video,graphics --gpus all \
  --privileged -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix \
  -v /etc/X11:/etc/X11 nvcr.io/nvidia/deepstream:8.0-triton-arm-sbsa

This link explains these parameters.

There are two plugins called nvjpegenc, one provided by NVIDIA and one by the GStreamer community, and it appears they conflict with each other on the Spark platform.

Try to use nvimageenc with a DeepStream pipeline to avoid this conflict.
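The tail of the original pipeline would then become something like the sketch below (untested; nvimageenc's exact properties and caps are assumptions, so check gst-inspect-1.0 nvimageenc for what it actually supports):

```
! nvvideoconvert name=raw_tiler_probe
! video/x-raw,format=I420
! queue max-size-buffers=0 leaky=2
! nvimageenc
! fakesink async=true name=jpeg_tiler_probe
```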

thank you, it worked with nvimageenc instead of nvjpegenc!

