I’m currently working with the DeepStream SDK and the demo app provided here (deepstream_tao_apps/apps/tao_others/deepstream-gaze-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub) to get facial landmarks working, but I’m experiencing some serious issues.
The demo works well when only one face is visible, but as soon as there are two or more faces, the tensor output retrieved from the app seems to generate random landmarks (and random gaze values as well).
Here is our configuration:
Hardware platform: Jetson AGX Orin DevKit
DeepStream 7.0
JetPack 6
TensorRT 8.6.2.3
CUDA 12.2
NVIDIA GPU driver version 540.3.0
This is the command line I use to start the application:
./deepstream-gaze-app 3 ../../../configs/nvinfer/gaze_tao/sample_gazenet_model_config.txt v4l2:///dev/video0 ./gaze
/dev/video0 is the camera I use to capture the video stream.
Here is the config file mentioned above:
enginePath=../../../models/gazenet/gazenet_facegrid.etlt_b8_gpu0_fp16.engine
etltPath=../../../models/gazenet/gazenet_facegrid.etlt
etltKey=nvidia_tlt
## networkMode can be int8, fp16 or fp32
networkMode=fp16
batchSize=16
The models and the config file were retrieved with the download_models.sh script from the GitHub repository.
Thank you.
Stéphane