DeepStream 7.1 docker python GazeNet pipeline

• Hardware Platform (Jetson / GPU)
Jetson Orin 4012, NVIDIA Jetson Orin NX Bundle, 8x 2GHz, 16GB DDR5
• DeepStream Version
Container: deepstream:7.1-triton-multiarch
• JetPack Version (valid for Jetson only)
see Container: deepstream:7.1-triton-multiarch
• TensorRT Version
see Container: deepstream:7.1-triton-multiarch
• NVIDIA GPU Driver Version (valid for GPU only)
$ nvidia-smi
Returns: Driver Version: N/A
• Issue Type( questions, new requirements, bugs)
Question

I’m currently trying to set up the following video processing pipeline from within the deepstream:7.1-triton-multiarch container:

USB-video camera input → Face Detection → Facial Landmarks → Gaze Detection

I want to include this pipeline in my Python application and use the outputs of each pipeline step, both within my own application code and as inputs for the next pipeline step. Therefore I would love to set up the pipeline in Python.
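For illustration, the intended chain could be sketched as a gst-launch-style description string assembled in Python. Note this is a minimal sketch: the element names, resolutions, and config-file names (`facenet_config.txt`, etc.) are assumptions, not the actual files from deepstream_tao_apps, and the real application would hand the string to `Gst.parse_launch()` from the GI bindings and attach pad probes to read each model's output.

```python
# Sketch of the intended chain as a gst-launch-style description string.
# Config-file names are placeholders; in DeepStream the three models would
# run as one PGIE (face detection) followed by two SGIEs (landmarks, gaze).
ELEMENTS = [
    "v4l2src device=/dev/video0",
    "nvvideoconvert",
    # nvstreammux batches frames before the inference elements; the
    # "mux.sink_0 nvstreammux name=mux ..." form links the converter
    # into the muxer's request pad, as in the DeepStream gst-launch docs.
    "mux.sink_0 nvstreammux name=mux batch-size=1 width=1280 height=720",
    "nvinfer config-file-path=facenet_config.txt",   # face detection (PGIE)
    "nvinfer config-file-path=fpenet_config.txt",    # facial landmarks (SGIE)
    "nvinfer config-file-path=gazenet_config.txt",   # gaze estimation (SGIE)
    "fakesink",
]

def build_pipeline_description(elements=ELEMENTS):
    """Join the element list into a single pipeline description string."""
    return " ! ".join(elements)

print(build_pipeline_description())
```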

So far I’m not sure if I’ve understood the concept correctly, and therefore don’t know if that’s even possible and, if so, where to start.

Thank you in advance!

DS-7.1 no longer supports deepstream-gaze-app; please use DS-7.0.

You can use Python, but you need to rewrite the post-processing here as Python code, which is very complicated.
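To give a sense of what "rewrite the post-processing" means: the landmark model's raw tensor output (reachable in Python via the tensor metadata a pad probe exposes) has to be decoded into coordinates in your own code. Below is a hypothetical pure-Python sketch of the general shape of such a decoder; the real FPENet post-processing in deepstream_tao_apps is C++ and considerably more involved (soft-argmax, confidence handling), so this only illustrates the kind of logic that would need porting.

```python
import numpy as np

def decode_landmark_heatmaps(heatmaps):
    """Hypothetical sketch: turn per-landmark heatmaps of shape (N, H, W)
    into (x, y) coordinates by taking the argmax of each map. Not the
    actual deepstream_tao_apps algorithm, just the general shape of it."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)
    ys, xs = np.unravel_index(flat_idx, (h, w))
    return list(zip(xs.tolist(), ys.tolist()))

# Toy example: one 4x4 heatmap whose peak sits at (x=2, y=1).
hm = np.zeros((1, 4, 4), dtype=np.float32)
hm[0, 1, 2] = 1.0
print(decode_landmark_heatmaps(hm))  # → [(2, 1)]
```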

I tried setting up the gaze demo the following way:

  1. Created a docker container with DeepStream 7.0 installed:
    Dockerfile:
    FROM nvcr.io/nvidia/deepstream:7.0-samples-multiarch
    WORKDIR /home

  2. Cloned the corresponding branch of deepstream_tao_apps repo (GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.3_ds7.0ga) inside the container.

  3. Followed “Download” & “Build & Run” from corresponding README (deepstream_tao_apps/apps/tao_others/README.md at release/tao5.3_ds7.0ga · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub)

Here a new issue arises when executing “make”:

make -C post_processor
make[1]: Entering directory '/home/deepstream_tao_apps/post_processor'
g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-7.0/sources/includes -I/usr/local/cuda-12.2/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -L/usr/local/cuda-12.2/lib64 -lcudart -lcublas -Wl,--end-group
In file included from nvdsinfer_custombboxparser_tao.cpp:25:
/opt/nvidia/deepstream/deepstream-7.0/sources/includes/nvdsinfer_custom_impl.h:127:10: fatal error: NvCaffeParser.h: No such file or directory
  127 | #include "NvCaffeParser.h"
      |          ^~~~~~~~~~~~~~~~~
compilation terminated.
make[1]: *** [Makefile:49: libnvds_infercustomparser_tao.so] Error 1
make[1]: Leaving directory '/home/deepstream_tao_apps/post_processor'
make: *** [Makefile:24: all] Error 2

Please use deepstream:7.0-triton-multiarch.

7.0-samples-multiarch can only be used for deployment.
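Putting that advice together with the original steps, the Dockerfile could be adjusted along these lines (a sketch only: the `RUN` line is an assumption that mirrors the manual clone step, and the triton image is presumably the one that ships the TensorRT development headers such as NvCaffeParser.h that the build above was missing):

```dockerfile
FROM nvcr.io/nvidia/deepstream:7.0-triton-multiarch
WORKDIR /home

# Clone the matching release branch, as in step 2 above.
RUN git clone -b release/tao5.3_ds7.0ga \
    https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
```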

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
