Cannot run gaze demo

Please provide complete information as applicable to your setup.
Follow-up to this thread: DeepStream 7.1 docker python GazeNet pipeline - #4 by clemens.richter

• Hardware Platform (Jetson / GPU)
Jetson AGX Orin
• DeepStream Version
Container: nvcr.io/nvidia/deepstream:7.0-triton-multiarch
• JetPack Version (valid for Jetson only)
see: Container: nvcr.io/nvidia/deepstream:7.0-triton-multiarch
• TensorRT Version
see: Container: nvcr.io/nvidia/deepstream:7.0-triton-multiarch
• NVIDIA GPU Driver Version (valid for GPU only)
see: Container: nvcr.io/nvidia/deepstream:7.0-triton-multiarch
• Issue Type( questions, new requirements, bugs)
Bug
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
I tried setting up the gaze demo the following way:

  1. Created a docker container with DeepStream 7.0 installed:
    Dockerfile:
    FROM nvcr.io/nvidia/deepstream:7.0-triton-multiarch
    WORKDIR /home
  2. Cloned the corresponding branch of deepstream_tao_apps repo (GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.3_ds7.0ga) inside the container.
  3. Followed "Build" from the README (GitHub - NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.3_ds7.0ga); the commands are sketched below.
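
For reference, the README's "Build" step boils down to the following inside the container; CUDA_VER=12.2 is an assumption inferred from the /usr/local/cuda-12.2 paths in the build output below:

cd /home/deepstream_tao_apps
export CUDA_VER=12.2   # assumption: matches the CUDA toolkit shipped in this container
make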

Running make gives the following error:

make -C post_processor
make[1]: Entering directory '/home/deepstream_tao_apps/post_processor'
g++ -o libnvds_infercustomparser_tao.so nvdsinfer_custombboxparser_tao.cpp -I/opt/nvidia/deepstream/deepstream-7.0/sources/includes -I/usr/local/cuda-12.2/include -Wall -std=c++11 -shared -fPIC -Wl,--start-group -lnvinfer -L/usr/local/cuda-12.2/lib64 -lcudart -lcublas -Wl,--end-group
nvdsinfer_custombboxparser_tao.cpp: In lambda function:
nvdsinfer_custombboxparser_tao.cpp:371:90: error: 'INT64' was not declared in this scope
  371 |             if ((layer.dataType == FLOAT || layer.dataType == INT32 || layer.dataType == INT64) &&
      |                                                                                          ^~~~~
make[1]: *** [Makefile:49: libnvds_infercustomparser_tao.so] Error 1
make[1]: Leaving directory '/home/deepstream_tao_apps/post_processor'
make: *** [Makefile:24: all] Error 2

It seemed like I had the wrong version of the file: the default branch apparently targets a newer DeepStream release whose NvDsInferDataType enum already includes INT64, which the DeepStream 7.0 headers lack. After cloning the matching release branch with the following command, the error was gone:

git clone -b release/tao5.3_ds7.0ga https://github.com/NVIDIA-AI-IOT/deepstream_tao_apps.git
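
As a quick sanity check (my addition, not from the original thread), the checkout can be verified afterwards:

cd deepstream_tao_apps
git branch --show-current   # should print: release/tao5.3_ds7.0ga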

Thanks for your feedback

I got the deepstream-gaze-app up and running for video file input using this gazenet_appconfig.yml entry:

source-list:
  list: file:///home/videos/input/looking_up.mp4

Now a few questions have arisen:
The NGC Catalog model card (Gaze Estimation | NVIDIA NGC) lists a real-time inference performance of 87 FPS on the Jetson Nano. We are running the app on a Jetson AGX Orin (from inside a Docker container), yet the average FPS the application reports on a file input is only ~8-10.

  • What are ways to boost performance?
  • How is it possible to use a connected webcam as a stream input?

Extra information:
The command used to start the container:

docker run \
	-it \
	--net=host \
	--gpus all \
	-v /tmp/.X11-unix/:/tmp/.X11-unix \
	-v /mnt/SSD/projects/testing/deepstream-7.0-gaze-test:/home \
	-w /home \
	deepstream-7.0-gaze-test:latest

In the final setup, no video output is needed; only the inferred gaze data has to be processed. Will dropping the video output increase performance?
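
In GStreamer terms, skipping the video output usually means ending the pipeline in fakesink instead of an on-screen sink; whether the gaze app's config exposes such a switch is an assumption on my part. A generic illustration (not the gaze app's actual pipeline):

gst-launch-1.0 videotestsrc num-buffers=300 ! videoconvert ! fakesink sync=false   # fakesink discards buffers, so no rendering cost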

The 87 FPS figure is measured with /usr/src/tensorrt/bin/trtexec --loadEngine=xxxx.engine, i.e. it benchmarks the gaze engine alone.

For this DeepStream app, the measured time also includes the facenet/faciallandmark inference as well as the decoding, nvstreammux, post-processing, and nvosd stages, etc.

Set the AGX Orin to MAXN mode, set the interval property of nvinfer, and optimize the models; a sketch of the first two follows.
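
A minimal sketch of those two knobs. The power-mode commands run on the Jetson host, not inside the container:

sudo nvpmodel -m 0   # mode 0 = MAXN on AGX Orin
sudo jetson_clocks   # lock the clocks at their maximum

The interval key belongs in the [property] group of the nvinfer config file the app loads (the exact file name depends on the app configuration):

[property]
interval=1   # skip 1 frame between inferences; 0 (default) = infer on every frame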

Yes, use the v4l2src element as input; a sketch follows.
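
A hedged sketch, not the snippet from the original thread. First, a standalone pipeline to confirm the webcam reaches DeepStream's NVMM memory; the device path and 1280x720@30 caps are assumptions, adjust them to your camera:

gst-launch-1.0 v4l2src device=/dev/video0 ! \
	video/x-raw,width=1280,height=720,framerate=30/1 ! \
	videoconvert ! nvvideoconvert ! 'video/x-raw(memory:NVMM),format=NV12' ! \
	m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
	nvvideoconvert ! nvegltransform ! nveglglessink

If the app builds its sources with uridecodebin, pointing the existing source-list entry at a v4l2 URI may also work (untested assumption):

source-list:
  list: v4l2:///dev/video0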