How to visualise the 3D gaze vector output of the GazeNet model?

Thanks for the info. I am afraid the different data distribution between your own images and the training images of the gaze model may result in inaccurate inference results.
If possible, could you try to run training against your own dataset?

To narrow down, can you run “tao inference” with the gaze model from NVIDIA against your own dataset to check whether it works?
For the JSON file, you can leverage the facial points dumped from the deepstream-app.
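As a rough illustration only, the conversion could look like the Python sketch below. This is hypothetical: the dump format (one frame per line of x y pairs) and the output field names ("frame", "facial_landmarks") are assumptions for illustration, so please check the TAO GazeNet docs for the exact JSON schema that “tao inference” expects.

    # Hypothetical sketch: bundle facial points dumped by the deepstream-app
    # into a JSON file for "tao inference". The dump format and output schema
    # below are assumptions; match them to the TAO GazeNet documentation.
    import json

    def landmarks_to_json(dump_path, json_path):
        entries = []
        with open(dump_path) as f:
            for frame_id, line in enumerate(f):
                coords = [float(v) for v in line.split()]  # assumed: "x0 y0 x1 y1 ..."
                points = list(zip(coords[0::2], coords[1::2]))
                entries.append({
                    "frame": frame_id,               # hypothetical field name
                    "facial_landmarks": points,      # hypothetical field name
                })
        with open(json_path, "w") as f:
            json.dump(entries, f, indent=2)

    landmarks_to_json("landmarks_dump.txt", "inference_input.json")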

Is this (the example photo) an accurate prediction of where you were looking?

Hi @Morganh,

I am not sure why we need to run “tao inference” on new images, as we have everything we need to visualise from the deepstream-app.
I cannot train the model on my new images as I don’t have labels for them. Besides, I don’t want to train the model; I just want to use it on new images.
In addition, what use is the GazeNet model if it cannot be applied to new images that are not from the training dataset?

Cheers,
Tin

Running “tao inference” on a test image is meant to check whether the model itself works; this will help narrow down the issue. If it works, the gaze model has no problem, and we then need to find the gap between “tao inference” and the deepstream-app.
I will run “tao inference” against your test image.

On your side, could you please resize your test image to 1280x720 and try again with the deepstream-app?
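If it helps, here is a minimal OpenCV sketch for the resize (the filename is a placeholder; note that a plain resize to 1280x720 will change the aspect ratio unless the source is already 16:9):

    # Resize a test image to 1280x720 before feeding it to the deepstream-app.
    import cv2

    img = cv2.imread("yourtest.jpg")  # placeholder filename
    resized = cv2.resize(img, (1280, 720), interpolation=cv2.INTER_LINEAR)
    cv2.imwrite("yourtest_1280x720.jpg", resized)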

Hi,
Please run the above-mentioned approach 3 for better results.
Steps:
Refer to the 3.21.08 doc https://docs.nvidia.com/tao/tao-toolkit-archive/tao-30-2108/text/tao_cv_inf_pipeline/requirements_and_installation.html#download-the-tao-toolkit-cv-inference-pipeline-quick-start to download the scripts via TAO Computer Vision Inference Pipeline | NVIDIA NGC, or run:
ngc registry resource download-version "nvidia/tao/tao_cv_inference_pipeline_quick_start:v0.3-ga"

Set up the server:

$ cd tao_cv_inference_pipeline_quick_start_vv0.3-ga/scripts
$ bash tao_cv_init.sh 
$ bash tao_cv_start_server.sh

Open another terminal to run the client:

$ export DISPLAY=:0
$ cd tao_cv_inference_pipeline_quick_start_vv0.3-ga/scripts
$ bash tao_cv_start_client.sh

Modify several lines in samples/tao_cv/demo_gaze/demo.conf:
root@xx:/workspace/tao_cv-pkg# vim samples/tao_cv/demo_gaze/demo.conf
            video_path=/tmp/yourtest.mp4
            fps=yourvideo_fps
            is_video_path_file=true
            resolution_whc=640,480,3

root@xx:/workspace/tao_cv-pkg# ./samples/tao_cv/demo_gaze/gaze samples/tao_cv/demo_gaze/demo.conf

BTW, run “docker cp” to copy your test video to the client container:
$ docker cp yourtest.mp4 image_tao_cv_client:/tmp/

Hi @Morganh ,

Thanks for your reply. I will give it a try.
Do you know why the utils_gazeviz.py script did not work well on the data dumped from the deepstream app?

Cheers,
Tin

I will check further; I am not sure which part causes the difference.
As a quick solution, please use the inference approach above.

Thanks @Morganh.
My main purpose is to be able to visualise the gaze from the deepstream gaze app as we are developing applications on Jetson Nano using deepstream. Please let me know if you figure out what the issue was.

Thanks,
Tin

For the deepstream gaze app, the gaze visualization feature is still under development by the deepstream team. It will not be available in the short term; maybe two or three months from now.

BTW, you can also leverage the code in samples/tao_cv/demo_gaze/

root@xx:/workspace/tao_cv-pkg# ls samples/tao_cv/demo_gaze/
CMakeLists.txt  Demo.cpp  VizUtils.cpp  VizUtils.hpp  anthropometic_3D_landmarks.txt  demo.conf  gaze
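VizUtils.cpp presumably implements the visualization there; the general technique is to pick a 3D gaze origin (e.g., a point between the eyes, recoverable via solvePnP with anthropometic_3D_landmarks.txt and the detected facial landmarks), step along the gaze direction, and project both 3D points back into the image with the camera intrinsics. A minimal Python sketch of that idea, with all inputs as placeholders and assuming the points are already in the camera coordinate system:

    # Rough sketch: draw a 3D gaze vector on an image by projecting its two
    # endpoints (origin and origin + scaled direction) with the camera intrinsics.
    import cv2
    import numpy as np

    def draw_gaze(img, origin_3d, gaze_dir, camera_matrix, length_mm=100.0):
        direction = gaze_dir / np.linalg.norm(gaze_dir)
        end_3d = origin_3d + length_mm * direction  # point along the gaze ray
        pts = np.stack([origin_3d, end_3d]).astype(np.float64)
        rvec = tvec = np.zeros(3)  # points assumed already in camera coordinates
        pts_2d, _ = cv2.projectPoints(pts, rvec, tvec, camera_matrix, None)
        p0 = tuple(pts_2d[0].ravel().astype(int))
        p1 = tuple(pts_2d[1].ravel().astype(int))
        cv2.arrowedLine(img, p0, p1, (0, 255, 0), 2)
        return img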

Thanks @Morganh.

Cheers,
Tin
