Secondary GIE - deepstream python

• Hardware Platform (Jetson / GPU) : Jetson
• DeepStream Version : 7.1
• JetPack Version (valid for Jetson only) : 6.2
• TensorRT Version : 10.3.0.30

Hi,

I am working on DeepStream with Python using the deepstream_python_apps repository (GitHub: NVIDIA-AI-IOT/deepstream_python_apps — DeepStream SDK Python bindings and sample applications) inside the nvcr.io/nvidia/deepstream:7.1-triton-multiarch docker image.

I am looking for an example app that uses a secondary GIE; I need to know the syntax and how to build the pipeline after the pgie.

However, the apps I found don't include an example for that. Could you help me create a DeepStream pipeline with an sgie?

One more clarification: in the deepstream-imagedata-multistream-redaction app, the config files reference a model file path and a labels.txt file, but those files are not provided in the docker image. Could you tell me where to get this model?

thanks

The sample at /opt/nvidia/deepstream/deepstream/sources/deepstream_python_apps/apps/deepstream-test2 demonstrates a pipeline consisting of one pgie and multiple sgies.
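As a minimal sketch of what deepstream-test2 does after the pgie, the sgie is just another nvinfer element with its own config file, linked downstream of the pgie (usually with a tracker in between). The config file names below are placeholders; the sgie's config must set operate-on-gie-id to match the pgie's gie-unique-id:

```python
#!/usr/bin/env python3
# Sketch of a pgie -> tracker -> sgie chain, modeled on deepstream-test2.
# Requires a DeepStream install; config file paths here are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)
pipeline = Gst.Pipeline.new("pgie-sgie-demo")

# Primary detector (e.g. gie-unique-id=1 in its config file)
pgie = Gst.ElementFactory.make("nvinfer", "primary-inference")
pgie.set_property("config-file-path", "pgie_config.txt")

# Tracker keeps object IDs stable so the sgie classifies tracked objects
tracker = Gst.ElementFactory.make("nvtracker", "tracker")

# Secondary classifier; its config sets operate-on-gie-id=1 so it only
# runs on objects produced by the pgie above
sgie = Gst.ElementFactory.make("nvinfer", "secondary-inference")
sgie.set_property("config-file-path", "sgie_config.txt")

for elem in (pgie, tracker, sgie):
    pipeline.add(elem)

# Upstream (source -> nvstreammux -> pgie) and downstream
# (sgie -> nvvideoconvert -> nvdsosd -> sink) are omitted here.
pgie.link(tracker)
tracker.link(sgie)
```

Additional sgies can be chained the same way (sgie1.link(sgie2), and so on), each with its own config file, exactly as deepstream-test2 does.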

However, due to changes on GitHub, the URL in /opt/nvidia/deepstream/deepstream/samples/configs/tao_pretrained_models/README.md is invalid. You can get the models directly with the download_models.sh script from the NVIDIA-AI-IOT/deepstream_tao_apps repository (master branch) on GitHub.
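A quick sketch of fetching and running that script (assuming the script still lives at the repository root on the master branch, and that you have network access from the container):

```shell
# Clone the TAO apps repo and run its model download script
git clone https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
cd deepstream_tao_apps
chmod +x download_models.sh
./download_models.sh
```

Check the script itself for where it places the models; you may need to adjust the model-file and labels paths in your app's config files to match.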
