How to visualise the 3d gaze vector output of the GazeNet model?

Hi everyone,

I am using the DeepStream gaze example to load the GazeNet model. I could compile and execute the pipeline, and I get the GazeNet model output (x, y, z, theta, phi), but I don’t know how to visualise it like in this example video from NVIDIA.
According to this document, the output x, y, z is the location the person is looking at, relative to the camera, but I don’t know how to convert that into 2D image coordinates because I don’t know the camera’s location in the video.

Can anyone please show me how to do that, or share some example code?

Many thanks.
Tin

Refer to utils_gazeviz.py inside the Jupyter notebook.
See TAO Toolkit Quick Start Guide — TAO Toolkit 3.22.05 documentation
and
TAO Toolkit Computer Vision Sample Workflows | NVIDIA NGC

Thanks Morganh.

Cheers,
Tin

Hi Morganh,

I read the function visualize_frame in the file utils_gazeviz.py and see that it requires calib (list): camera calibration parameters. However, that calib parameter contains the camera intrinsic parameters computed from the training dataset. In short, the visualisation function needs the training dataset to derive some camera parameters before it can visualise the gaze vector outputs.

Is there any way to visualise the gaze vectors on a test video without using the same camera setup as the one in the training dataset?

Thanks,
Tin

Could you try to run the sample jupyter notebook and leverage its camera calibration parameters?

Yes, I did, and I could visualise the images from the training dataset. But since the camera calibration parameters used in the notebook are generated from the training dataset, will the visualisation function still work on images captured with a different camera setup?

In addition, is there any documentation for the function visualize_frame in the script utils_gazeviz.py?

Thanks,
Tin

I suggest you try visualising your own images with the existing parameters in the notebook.

In addition, you can also download the official gaze model and run inference against your own images. See Gaze Estimation — Transfer Learning Toolkit 3.0 documentation and
Gaze Estimation | NVIDIA NGC
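
Also, if your own camera has never been calibrated, a rough pinhole approximation built from the image size is often good enough for a first visualization. This is only an assumption on my side, not the calibrated values shipped with the notebook:

import numpy as np

def approx_camera_matrix(width, height):
    # Rough pinhole intrinsics when no calibration is available:
    # focal length ~ image width, principal point at the image centre.
    f = float(width)
    return np.array([[f,   0.0, width / 2.0],
                     [0.0, f,   height / 2.0],
                     [0.0, 0.0, 1.0]])

# Example for a 1280x720 test video (the values are only an approximation)
K = approx_camera_matrix(1280, 720)
dist = np.zeros(5)  # assume no lens distortion

Replacing the notebook’s calib values with such an approximation will shift the drawn arrows somewhat, but it lets you sanity-check images from an arbitrary camera.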

No, there is no more documentation. The only information is in def visualize_frame().
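
Roughly, the visualization boils down to projecting a 3D gaze origin (around the eyes) and a 3D gaze target into the image plane with the camera intrinsics and drawing an arrow between the two projections. A minimal sketch of that general idea, with placeholder intrinsics and values (not the actual utils_gazeviz.py code):

import cv2
import numpy as np

# Placeholder camera intrinsics and distortion (replace with your calibration)
camera_matrix = np.array([[960.0,   0.0, 640.0],
                          [  0.0, 960.0, 360.0],
                          [  0.0,   0.0,   1.0]])
dist_coeffs = np.zeros(5)

# Gaze origin (e.g. mid-point between the eyes) and the point being looked at,
# both expressed in the camera coordinate frame (millimetres). Both points must
# be in front of the camera (positive z) for this naive projection to work.
gaze_origin_3d = np.array([[0.0, 0.0, 600.0]])
gaze_target_3d = np.array([[100.0, -50.0, 50.0]])

# The points are already in the camera frame, so no extra rotation/translation
rvec = np.zeros(3)
tvec = np.zeros(3)
pts_2d, _ = cv2.projectPoints(
    np.vstack([gaze_origin_3d, gaze_target_3d]), rvec, tvec,
    camera_matrix, dist_coeffs)

origin_2d = tuple(int(v) for v in pts_2d[0].ravel())
target_2d = tuple(int(v) for v in pts_2d[1].ravel())

frame = cv2.imread("frame.png")  # any test frame
cv2.arrowedLine(frame, origin_2d, target_2d, (0, 255, 0), 2)
cv2.imwrite("frame_with_gaze.png", frame)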

Hi Morganh,

Thanks for your answer.

When I run the inference on one image:

tao gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/test-dataset/Data/test0001.png -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt -o $USER_EXPERIMENT_DIR/experiment_result/test-dataset/ -k $KEY

I get the error:

NotADirectoryError: [Errno 20] Not a directory: '/home/ubuntu/sources/tao/cv_samples_v1.3.0/gazenet/data/MPIIFaceGaze/test-dataset/Data/test0001.png/errors'
Traceback (most recent call last):
  File "/usr/local/bin/gazenet", line 8, in <module>
    sys.exit(main())
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/entrypoint/gazenet.py", line 13, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/common/entrypoint/entrypoint.py", line 300, in launch_job
AssertionError: Process run failed.

It looks like the inference step does not accept a single file, contrary to the model documentation.
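
(For reference, this error pattern is what you get when a sub-folder name is joined onto a path that is actually a file; a minimal reproduction of the same exception, assuming the tool appends an errors/ folder to whatever is passed to -i:)

import os

input_path = "Data/test0001.png"  # an existing image file, not a directory

# Something equivalent to this appears to happen internally:
os.makedirs(os.path.join(input_path, "errors"))
# NotADirectoryError: [Errno 20] Not a directory: 'Data/test0001.png/errors'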

When I run the inference on one directory:

tao gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/test-dataset -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt -o $USER_EXPERIMENT_DIR/experiment_result/test-dataset -k $KEY

I get the error:

Traceback (most recent call last):
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/scripts/inference.py", line 121, in <module>
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/scripts/inference.py", line 107, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/dataio/custom_data_manager.py", line 58, in __init__
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/dataio/custom_jsonlabels_strategy.py", line 60, in __init__
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/dataio/custom_jsonlabels_strategy.py", line 65, in _extract_json
  File "/usr/lib/python3.6/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
  File "/usr/local/bin/gazenet", line 8, in <module>
    sys.exit(main())
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/entrypoint/gazenet.py", line 13, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/common/entrypoint/entrypoint.py", line 300, in launch_job
AssertionError: Process run failed.
2022-01-14 22:27:00,494 [INFO] tlt.components.docker_handler.docker_handler: Stopping container.

My input directory has the following structure:

The Config folder is copied from the inference-set dataset. Because the test-dataset contains new images, I don’t have the JSON files that the notebook generated for the inference-set.
[screenshot of the directory structure]

In short, do you have any suggestions for running the model inference on new images that are not from the training dataset?

Thanks,
Tin

For the error above, I am afraid something is wrong with the folder mapping.
What is your ~/.tao_mounts.json ?

I suggest you run all the steps in a terminal instead of the notebook.
Then, log in to the docker directly.
$ tao gazenet run /bin/bash

Then inside the docker, try to run
# gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/test-dataset/Data/test0001.png -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt -o $USER_EXPERIMENT_DIR/experiment_result/test-dataset/ -k $KEY

If the path is not available or test0001.png is not available, please try to find it inside the docker.

Hi Morganh,

Here is the content of ~/.tao_mounts.json on my machine:

{
    "Mounts": [
        {
            "source": "/home/ubuntu/sources/tao/cv_samples_v1.3.0",
            "destination": "/home/ubuntu/sources/tao/cv_samples_v1.3.0"
        },
        {
            "source": "/home/ubuntu/sources/tao/cv_samples_v1.3.0/gazenet/specs",
            "destination": "/home/ubuntu/sources/tao/cv_samples_v1.3.0/gazenet/specs"
        }
    ]
}

I logged in to the docker and checked all the paths, and I am sure they are available,

but when I run the following command

gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/test-dataset/Data/test0001.png -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt -o $USER_EXPERIMENT_DIR/experiment_result/test-dataset/ -k $KEY

I get the same error

NotADirectoryError: [Errno 20] Not a directory: '/home/ubuntu/sources/tao/cv_samples_v1.3.0/gazenet/data/MPIIFaceGaze/test-dataset/Data/test0001.png/errors'
Traceback (most recent call last):
  File "/usr/local/bin/gazenet", line 8, in <module>
    sys.exit(main())
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/entrypoint/gazenet.py", line 13, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/common/entrypoint/entrypoint.py", line 300, in launch_job
AssertionError: Process run failed.

Looking at the error message, the tool treats /home/ubuntu/sources/tao/cv_samples_v1.3.0/gazenet/data/MPIIFaceGaze/test-dataset/Data/test0001.png/errors as a directory, i.e. it appends errors to the input path and therefore only accepts an input folder rather than a specific file. This is confirmed: when I run the inference step on an input folder:

gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/test-dataset/ -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt -o $USER_EXPERIMENT_DIR/experiment_result/test-dataset/ -k $KEY

the error message is different from the one I get when running on a single file:

Traceback (most recent call last):
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/scripts/inference.py", line 121, in <module>
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/scripts/inference.py", line 107, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/dataio/custom_data_manager.py", line 58, in __init__
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/dataio/custom_jsonlabels_strategy.py", line 60, in __init__
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/dataio/custom_jsonlabels_strategy.py", line 65, in _extract_json
  File "/usr/lib/python3.6/posixpath.py", line 80, in join
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType
Traceback (most recent call last):
  File "/usr/local/bin/gazenet", line 8, in <module>
    sys.exit(main())
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/mitgazenet/entrypoint/gazenet.py", line 13, in main
  File "/root/.cache/bazel/_bazel_root/ed34e6d125608f91724fda23656f1726/execroot/ai_infra/bazel-out/k8-fastbuild/bin/magnet/packages/driveix/build_wheel.runfiles/ai_infra/driveix/common/entrypoint/entrypoint.py", line 300, in launch_job
AssertionError: Process run failed.

The error happens when custom_jsonlabels_strategy.py tries to parse the JSON label file.
I also confirm that I can run the inference step successfully on the validation data in sample-dataset/inference-set (generated from the training dataset) with the following command:

tao gazenet inference -e $SPECS_DIR/gazenet_tlt_pretrain.yaml \
                       -i $DATA_DOWNLOAD_DIR/MPIIFaceGaze/sample-dataset/inference-set \
                       -m $USER_EXPERIMENT_DIR/experiment_result/exp1/model.tlt \
                       -o $USER_EXPERIMENT_DIR/experiment_result/exp1 \
                       -k $KEY

so there should be no problem with the folder mapping.
It seems that the inference step only accepts an input folder, and that folder must contain the additional JSON files in json_datafactory_v2 (which can only be generated from the training dataset).
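
(The TypeError above is also consistent with that: it is exactly what happens when a path that should come from the JSON label lookup is None; a minimal reproduction, not the actual TAO code:)

import os

json_label_dir = None  # no json_datafactory_v2 label file was found
os.path.join(json_label_dir, "p01_day03.json")
# TypeError: expected str, bytes or os.PathLike object, not NoneType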

Can you run the inference step on your machine with the input of a single image or a new image folder to validate that hypothesis?

Thanks,
Tin

I can reproduce your error. I am working on a solution for you.


Hi,
There are three approaches to visualize the gaze vector.

  1. Use “gazenet inference”.

In this approach, please leverage the existing data/MPIIFaceGaze/sample-dataset/inference-set/json_datafactory_v2/p01_day03.json
Each image has the “annotations”. Its format follows https://docs.nvidia.com/tao/tao-toolkit/text/data_annotation_format.html#emotionnet-fpenet-gazenet-json-label-data-format

You need to generate a JSON file for your custom dataset.
Then run “gazenet inference” to get result.txt, and use section 7 (Visualize Inference) of https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/cv_samples/version/v1.3.0/files/gazenet/gazenet.ipynb to visualize the gaze vector (a minimal sketch of the drawing step is included after this list).

  2. Use deepstream-gaze-app deepstream_tao_apps/apps/tao_others/deepstream-gaze-app at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub

For example, the command below will:

  • generate an image (gazenet.jpg) with 80 facial landmark points drawn on it, and
  • output the gaze info in the log.

./deepstream-gaze-app 1 ../../../configs/facial_tao/sample_faciallandmarks_config.txt file:///opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/gaze_test.png ./gazenet

Please dump the facial landmark point values from the “output array” in deepstream_tao_apps/deepstream_gaze_app.cpp at master · NVIDIA-AI-IOT/deepstream_tao_apps · GitHub
Then leverage section 7 (Visualize Inference) of https://catalog.ngc.nvidia.com/orgs/nvidia/teams/tao/resources/cv_samples/version/v1.3.0/files/gazenet/gazenet.ipynb to visualize the gaze vector.

BTW, the algorithm is mentioned in utils_gazeviz.py
( wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/cv_samples/versions/v1.3.0/files/gazenet/utils_gazeviz.py )

  3. Use the inference pipeline described in an old version of the user guide. See https://docs.nvidia.com/tao/archive/tlt-30/text/tlt_cv_inf_pipeline/quick_start_scripts.html and
    https://docs.nvidia.com/tao/archive/tlt-30/text/tlt_cv_inf_pipeline/running_samples.html#running-the-gaze-estimation-sample
    Please note that this approach is deprecated in the latest user guide and will not be maintained in the future.
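
For the drawing step in approaches 1 and 2, the core operation is turning the predicted theta/phi angles into a direction and drawing an arrow from the eye centre. A minimal sketch, assuming one common pitch/yaw convention (the exact convention used by GazeNet is the one implemented in the notebook’s visualization code):

import cv2
import numpy as np

def gaze_angles_to_vector(theta, phi):
    # Convert pitch (theta) and yaw (phi), in radians, to a unit 3D gaze
    # direction. This is one common convention; check the notebook's
    # visualization code for the exact one used by GazeNet.
    return np.array([-np.cos(theta) * np.sin(phi),
                     -np.sin(theta),
                     -np.cos(theta) * np.cos(phi)])

def draw_gaze(frame, eye_center_2d, theta, phi, length=150, color=(0, 255, 0)):
    # Draw a 2D arrow from the eye centre using the x/y components of the
    # gaze direction (a quick sanity check, not a full perspective projection).
    g = gaze_angles_to_vector(theta, phi)
    start = (int(eye_center_2d[0]), int(eye_center_2d[1]))
    end = (int(eye_center_2d[0] + length * g[0]),
           int(eye_center_2d[1] + length * g[1]))
    cv2.arrowedLine(frame, start, end, color, 2)
    return frame

# Example usage with placeholder values
img = cv2.imread("gaze_test.png")
img = draw_gaze(img, eye_center_2d=(640, 360), theta=0.1, phi=-0.2)
cv2.imwrite("gaze_vis.png", img)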

Hi Morganh,

Thanks for your reply.
I have some comments on each approach as below:

  1. Use “gazenet inference”.
    When looking at the inference-set/json_datafactory_v2/p01_day03.json file, I see that the annotations are different for each image, i.e. for each image file we need to know the location of the face, the landmarks, etc. So, I don’t think I can reuse them for a new image.

  2. Use deepstream-gaze-app
    The app can generate 80 facial landmarks, but utils_gazeviz.py requires 104 landmarks, so I guess they are not compatible.

  3. Use an old inference pipeline
    I am not clear how to proceed with this.

Did any of the approaches above work for you?
Finally, my main goal is to visualise the gaze within the deepstream-gaze-app. Do you have any suggestion for doing that directly in the deepstream-gaze-app?

Thanks,
Tin

Actually, approach 3 was verified previously. The example video from NVIDIA mentioned above was produced with it.
For utils_gazeviz.py, I will check if it works after reducing 104 to 80.

Just sharing a workaround for approach 2, which runs inference with deepstream-gaze-app. Please follow the README in the attached demo.zip.
It will dump the 80 facial landmark points in order to visualize the cropped face along with the gaze vector.
demo.zip (372.6 KB)

Main change:

cp deepstream_gaze_app.cpp  bak_deepstream_gaze_app.cpp
cp deepstream_gaze_app_update.cpp  deepstream_gaze_app.cpp

make clean && make

./deepstream-gaze-app 1 ../../../configs/facial_tao/sample_faciallandmarks_config.txt file:///opt/nvidia/deepstream/deepstream-6.0/samples/configs/tao_pretrained_models/deepstream_tao_apps/apps/tao_others/deepstream-gaze-app/frame_816_571_0.png ./gazenet  |tee log.txt

python vis.py

[result image]

Thanks Morganh. I will give it a try.

Cheers,
Tin

Hi Morganh,

I have changed the output-layer-name string comparisons on lines 419 and 423 to:

    "softargmax/strided_slice:0") == 0) {
    "softargmax/strided_slice_1:0") == 0) {

so that the pipeline works with the TAO GazeNet model on a Jetson Nano.
I could visualise the gaze on some new images, but the results did not look very accurate: when a person was looking straight ahead, the arrows pointed in different directions.
Looking at the utils_gazeviz.py script, I see that it uses a 3D face model and intrinsic camera parameters from a public training dataset. Are these camera parameters the same as those of the dataset used to train the TAO GazeNet model? If they are different, is that the reason why the visualisation is not accurate? And where does the 3D face model come from?

Thanks,
Tin

Firstly, may I know if you have trained a new model against your own dataset?

Hi Morganh,

No, I haven’t trained a new model. I am using this gaze model from NVIDIA.

Cheers,
Tin