How to deploy GazeNet using DeepStream-TAO?

I want to deploy the GazeNet deployable version to test its functionality, but I’m having difficulty finding a clear deployment guideline for GazeNet. This link (Integrating TAO Models into DeepStream - NVIDIA Docs) provides general instructions for TAO model deployment, but the specific link to the DeepStream-TAO GazeNet model deployment appears to be empty or unavailable.

I’m new to using NVIDIA models. Could someone kindly provide guidance or point me to updated resources for deploying GazeNet with DeepStream? Thanks!

The GazeNet model deployment is obsolete in the latest DeepStream versions. The last DeepStream version to support the model is DeepStream 7.0. Please refer to Installation — DeepStream documentation 6.4 documentation for installation, and to deepstream_tao_apps/apps/tao_others at release/tao5.3_ds7.0ga · NVIDIA-AI-IOT/deepstream_tao_apps for the app.
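For reference, a minimal sketch of starting the DeepStream 7.0 container on a dGPU machine. The image tag and the X11 options are assumptions on my side, so please verify the exact image and run options for your platform on the installation page above (Jetson uses --runtime nvidia instead of --gpus all):

```bash
# Pull a DeepStream 7.0 image from NGC (tag is an assumption -- verify on NGC)
docker pull nvcr.io/nvidia/deepstream:7.0-triton-multiarch

# Allow the container to use the host X server so video sinks can render on screen
xhost +

# Start the container with GPU access and X11 forwarding (dGPU example)
docker run --gpus all -it --rm \
    --net=host \
    -e DISPLAY=$DISPLAY \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    nvcr.io/nvidia/deepstream:7.0-triton-multiarch
```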


Thanks for your support!
I installed DeepStream 7.0 using the Docker container and ran the test example successfully inside the container. But I hit a problem when I run the gaze estimation model with this command: ./deepstream-gaze-app 1 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt woman.jpg ./gazenet

The JPG file path should be in URI format (RFC 3986 - Uniform Resource Identifier (URI): Generic Syntax), for example:

file:///xxxxxxxxx/woman.jpg
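So with the command you posted above it would look like this (the absolute path is only illustrative; point it at wherever woman.jpg actually sits):

```bash
# Same command, but the image is passed as an absolute file:// URI
./deepstream-gaze-app 1 \
    ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt \
    file:///path/to/woman.jpg \
    ./gazenet
```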

The sample app is open source, so you can debug with the source code when you meet any issue.

Thanks for the suggestion! But I met other errors:

Could you suggest how I can solve this problem? Thanks!

Please run the download_models.sh script from NVIDIA-AI-IOT/deepstream_tao_apps at release/tao5.3_ds7.0ga to download the models first. The log tells you the calibration file is missing.
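Roughly, it is run from the root of the deepstream_tao_apps checkout; this is just a sketch, so check the repo README for any prerequisites on your setup:

```bash
# From the root of the deepstream_tao_apps repo, release/tao5.3_ds7.0ga branch.
# The script downloads the pre-trained TAO models used by the sample apps,
# including the GazeNet model files the gaze app complains about.
cd /path/to/deepstream_tao_apps    # adjust to your checkout location
chmod +x download_models.sh        # only needed if the script is not executable
./download_models.sh
```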

I appreciate your support!
I have successfully executed this command: ./deepstream-gaze-app 1 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt file:////opt/nvidia/deepstream/deepstream-7.0/my_app/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/woman_turn_left.jpg ./gazenet

But I still have the following two questions:

  1. I want to test with a camera, so I ran this command (inside the container): ./deepstream-gaze-app 3 ../../../configs/nvinfer/facial_tao/sample_faciallandmarks_config.txt v4l2:///dev/video0 ./gazenet

I got the following error:

Could you please suggest how I can solve this problem?

  2. I want to integrate this gaze estimation into my project, but I am confused about how to use the model. I see two possible ways: 1) pass the four required inputs directly to GazeNet, or 2) build an entire end-to-end video analytics application. However, I couldn’t find a suitable guideline on how to implement either. Could you please give me some advice and point me to the proper documentation?

Thanks!

The current gaze sample uses uridecodebin as the source; it does not support V4L2 cameras. See deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/deepstream_gaze_app.cpp at release/tao5.3_ds7.0ga · NVIDIA-AI-IOT/deepstream_tao_apps

Please refer to DeepStream SDK FAQ - Intelligent Video Analytics / DeepStream SDK - NVIDIA Developer Forums to find the proper pipeline settings for your camera and modify the source code accordingly. deepstream_tao_apps/apps/tao_others/deepstream-gaze-app at release/tao5.3_ds7.0ga · NVIDIA-AI-IOT/deepstream_tao_apps is open source.
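As a rough illustration (not taken from the sample itself): first check what your camera can actually output, then confirm a basic standalone V4L2 pipeline works before porting the equivalent elements into deepstream_gaze_app.cpp in place of uridecodebin. The caps below are assumptions for a typical USB webcam; adjust them to your device.

```bash
# List the formats and resolutions your camera actually supports
v4l2-ctl -d /dev/video0 --list-formats-ext

# A basic standalone V4L2 test pipeline (set width/height/framerate to values
# your camera reported above); nvvideoconvert moves the frames into NVMM
# memory, which is what the downstream DeepStream elements expect.
gst-launch-1.0 v4l2src device=/dev/video0 ! \
  'video/x-raw, width=1280, height=720, framerate=30/1' ! \
  videoconvert ! nvvideoconvert ! \
  'video/x-raw(memory:NVMM), format=NV12' ! nveglglessink
```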

Do you mean you want to use the Gaze Estimation | NVIDIA NGC model directly in your project? The model is just a pre-trained sample model; you may refer to Get Started with TAO Toolkit | NVIDIA Developer to customize and train the model for your project.

As for the app, deepstream_tao_apps/apps/tao_others/deepstream-gaze-app at release/tao5.3_ds7.0ga · NVIDIA-AI-IOT/deepstream_tao_apps is open source; you can check the source code to see how the data is transferred through the DeepStream SDK interfaces for inferencing. All the interfaces are introduced in Welcome to the DeepStream Documentation — DeepStream documentation.

You may choose the proper method based on your own requirements and the resources you already have.

If I want to use the deployable version of GazeNet directly, how can I deploy it? The following page takes me to the latest versions of the Train Adapt Optimize (TAO) Toolkit, DeepStream, and TensorRT, which are not compatible with GazeNet. Could you please give me links to the previous versions and instructions on how to deploy GazeNet? Thanks!

The GazeNet model has not changed for a very long time; Gaze Estimation | NVIDIA NGC is compatible. deepstream_tao_apps/download_models.sh at release/tao5.3_ds7.0ga · NVIDIA-AI-IOT/deepstream_tao_apps also downloads the model from that page.

The DeepStream 7.0 document: Welcome to the DeepStream Documentation — DeepStream documentation 6.4 documentation

The TAO Toolkit document: the archived older documentation at NVIDIA TAO - NVIDIA Docs

When I run an MP4 video in the container, I get a core dump. I didn’t change the source code. Could you please advise whether I have misconfigured some files?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.