NVGaze on Xavier

Hi guys, are there any insights on how to start implementing NVGaze on Xavier for testing and performance measurement?
There are datasets [100 GB] and a model of sorts [100 MB], but the model appears to be intended for rendering datasets rather than for estimating the pupil position.

Thanks

Hi Andrey,
By accepting the NvGaze license you can use the provided 3D eye model to render a customized synthetic dataset from a virtual camera position that fits your application. As described in the paper, changing the camera position has a significant impact on the appearance of the eye and on the resulting pupil localization accuracy. Please be aware that the 3D model includes the eye but no face mesh, due to the license of the face scans. Therefore, as you pointed out, we provide our complete dataset to work with (including synthetic data and real captured data).

Our trained model is only available with a business license.

If you want to train your own network, we provide accelerated inference via TensorRT, which comes with JetPack:
https://developer.nvidia.com/embedded/jetpack
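
As a rough illustration of the inference side, here is a minimal Python sketch of running a serialized TensorRT engine on a Jetson. It assumes a fixed-shape engine with a single input and a single output binding; the file name gaze.trt is a placeholder for an engine you would build beforehand (e.g. with trtexec from an ONNX export of your trained network).

```python
import numpy as np
import pycuda.autoinit  # creates a CUDA context on the Jetson
import pycuda.driver as cuda
import tensorrt as trt

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

# Deserialize an engine previously built with, e.g.:
#   trtexec --onnx=gaze.onnx --saveEngine=gaze.trt --fp16
with open("gaze.trt", "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Assumption: one input binding (index 0) and one output binding (index 1),
# both with fixed shapes.
h_input = np.zeros(trt.volume(engine.get_binding_shape(0)), dtype=np.float32)
h_output = np.zeros(trt.volume(engine.get_binding_shape(1)), dtype=np.float32)
d_input = cuda.mem_alloc(h_input.nbytes)
d_output = cuda.mem_alloc(h_output.nbytes)

# Copy a preprocessed eye image in, run inference, copy the result out.
h_input[:] = 0.5  # placeholder for a real normalized eye image
cuda.memcpy_htod(d_input, h_input)
context.execute_v2([int(d_input), int(d_output)])
cuda.memcpy_dtoh(h_output, d_output)
print("network output:", h_output)
```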

Hi mstengel,
Thank you for your response.
It appears that the pupil localization algorithm is not accessible at this time.
Could you provide guidelines on how to train our own network using the dataset?
Which network architecture would you recommend starting with for this task?
Maybe you could also share some insights on how to start implementing our own pupil localization model using a Jetson and a CSI camera, so that it is trained on the provided dataset and then processes the camera input using the knowledge gained from that training?
The idea I am trying to investigate is an algorithm that recognizes the pupil of an eye in the image and moves motors accordingly, to center the camera on the pupil and bring it into focus. A rough sketch of what I mean is below.
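
To make the idea concrete, here is a minimal sketch of the capture-and-detect loop I have in mind, assuming OpenCV 4 built with GStreamer support as shipped in JetPack. The threshold-based pupil detector is only a classical placeholder standing in for a trained network, the threshold value is a guess that would need tuning, and the actual motor control is left as a stub.

```python
import cv2

# GStreamer pipeline for a Jetson CSI camera (nvarguscamerasrc ships with JetPack).
GST_PIPELINE = (
    "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=1280, height=720, "
    "framerate=30/1 ! nvvidconv ! video/x-raw, format=BGRx ! "
    "videoconvert ! video/x-raw, format=BGR ! appsink"
)

cap = cv2.VideoCapture(GST_PIPELINE, cv2.CAP_GSTREAMER)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Naive pupil detector: the pupil is usually the darkest roundish blob.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (7, 7), 0)
    _, mask = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)  # 40: tune per setup
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        pupil = max(contours, key=cv2.contourArea)
        m = cv2.moments(pupil)
        if m["m00"] > 0:
            cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
            # Offset of the pupil from the image center; this error signal
            # would drive the pan/tilt motors (motor control left as a stub).
            dx = cx - frame.shape[1] / 2
            dy = cy - frame.shape[0] / 2
            print(f"pupil offset: dx={dx:.1f}px dy={dy:.1f}px")

cap.release()
```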

I have found a similar project that is available to everyone, including models and a dataset.
Its Python implementation will probably run on Xavier:

https://gazecapture.csail.mit.edu/

Some other related projects:
http://gaze360.csail.mit.edu/
http://gazefollow.csail.mit.edu/demo.html
https://www.csail.mit.edu/research/where-are-they-looking

You can try acomoeye-NN, my implementation of NVGaze gaze estimation with a few upgrades.

Hi @czero69,
Thank you for sharing!