Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
DGX Spark (NVIDIA GB10)
• DeepStream Version
DeepStream 8.0
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
| NVIDIA-SMI 580.95.05 Driver Version: 580.95.05 CUDA Version: 13.0 |
• Issue Type( questions, new requirements, bugs)
new requirements
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
Created a Docker container from `nvcr.io/nvidia/deepstream:8.0-triton-dgx-spark`.
This pipeline, which worked on Orin with DeepStream 7.1:
```
! nvinfer [ … ]
! nvtracker [ … ]
! queue max-size-buffers=0 leaky=2
! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! video/x-raw(memory:NVMM),format=NV12
! queue max-size-buffers=0 leaky=2
! nvjpegenc quality=80
! fakesink async=true name=jpeg_tiler_probe
```
was modified to
```
! nvinfer [ … ]
! nvtracker [ … ]
! queue max-size-buffers=0 leaky=2
! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! video/x-raw,format=I420
! queue max-size-buffers=0 leaky=2
! nvjpegenc quality=80
! fakesink async=true name=jpeg_tiler_probe
```
to make it work, but now the probe attached to jpeg_tiler_probe raises this error:
```
l_frame = batch_meta.frame_meta_list
AttributeError: 'NoneType' object has no attribute 'frame_meta_list'
```
GPT suggests splitting the pipeline like this:
```
… ! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! tee name=t
t. ! queue max-size-buffers=0 leaky=2
   ! fakesink async=true name=meta_sink
t. ! queue max-size-buffers=0 leaky=2
   ! nvvideoconvert
   ! video/x-raw,format=I420
   ! nvjpegenc quality=80
   ! appsink name=jpeg_sink emit-signals=true sync=false max-buffers=1 drop=true
```
Is there a backward-compatible way to do this that avoids two probes (one for the frame and one for the detections)?
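To illustrate the single-probe arrangement I mean — assuming the batch meta really is dropped at the NVMM→system-memory conversion, the one probe would sit on a pad that is still upstream of that conversion (e.g. the sink pad of the nvvideoconvert named raw_tiler_probe), where NvDsBatchMeta should still be attached, and the JPEG branch downstream would need no meta probe of its own; roughly:

```
… ! nvmultistreamtiler rows=1 columns=1 width=1920 height=1080
! nvvideoconvert name=raw_tiler_probe
! video/x-raw,format=I420
! queue max-size-buffers=0 leaky=2
! nvjpegenc quality=80
! fakesink async=true name=jpeg_sink
```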
thank you, William