Realtime pupil detection with Jetson Nano and DeepLabCut

Here’s a project that we’re using for neuroscience research: Realtime pupil and eyelid detection with DeepLabCut running on a Jetson Nano.
A short video of the setup is available here.
On the research side, the goal is to get a realtime readout of animal or human cognitive state, as pupil size is an excellent indicator of attention, arousal, locomotion, and decision-making processes. As one of many possible applications, you could use this setup to trigger a reward when the subject is alert.

The essential components on the hardware side are:

  1. FLIR (formerly Point Grey) Chameleon3 USB3 camera https://www.flir.com/products/chameleon3-usb3/?model=CM3-U3-13Y3M-CS
  2. Ricoh 50mm f/1.4 VGA lens (FL-BC5014A-VG) https://industry.ricoh.com/en/fa_camera_lens/lens/vga/
  3. C-mount to CS-mount adapter and a 12.5 mm extension tube
  4. NVIDIA Jetson Nano

On the software side:

  1. JetPack for Jetson Nano. I don't remember whether this comes with TensorFlow included; if not, here are the instructions. A quick way to check that TensorFlow sees the Nano's GPU is shown in the first sketch after this list.
  2. DeepLabCut (DLC). If you want to use DLC in real time, for now you'll need my realtime branch of the code: https://github.com/neurodroid/DeepLabCut/tree/realtime The change is rather trivial: instead of using cv2.VideoCapture to read from a file, you make it read from a camera (see the capture-loop sketch after this list). I've set up a virtualenv using the system's Python 3.6 (/usr/bin/python3). Since DLC imposes a lot of version restrictions on its dependencies, I've emptied the install_requires list in DLC's setup.py and installed all required packages manually with pip inside the virtualenv.
  3. Aravis 0.6.4. We use version 0.6.4 of Aravis to drive the USB3 camera on Linux. Versions >= 0.7 use a build system that isn't available in the Ubuntu 18.04 (bionic) repositories.
  4. OpenCV 3.4.7. Built with many non-default options against the Python 3.6 virtualenv mentioned above. I'm not sure how best to share these options - dump the CMakeVars.txt or CMakeCache.txt files somewhere? Most importantly, you'll need to build OpenCV with Aravis support. Moreover, if you'd like to software-trigger the camera (i.e. trigger a capture from your code rather than from a hardware input), you'll need this patch for OpenCV: https://github.com/opencv/opencv/pull/15714 . I've tried to get it integrated into OpenCV but failed to get my point across.
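
Regarding point 1, here's a minimal sanity check, assuming a TensorFlow 1.x wheel is installed inside the virtualenv, that the GPU is actually visible:

```python
# Quick check that TensorFlow sees the Nano's GPU from inside the virtualenv.
# Assumes a TensorFlow 1.x build, which is what DLC expects in this setup.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)
print("GPU available:", tf.test.is_gpu_available())  # should print True on the Nano
```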
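
And regarding points 2-4, here's a rough sketch of what the realtime capture loop looks like. This is not the exact code from the realtime branch; the device index, the resize to 160x128, and the placeholder for the DLC inference step are illustrative assumptions.

```python
# Minimal sketch: read frames from the camera through OpenCV's Aravis backend
# instead of from a video file. Not the exact code from the realtime branch.
import cv2

# If the custom OpenCV build worked, the build information should list Aravis
# under "Video I/O".
assert "Aravis" in cv2.getBuildInformation(), "OpenCV was built without Aravis support"

# Old-style device selection: backend constant plus camera index (0 = first camera).
cap = cv2.VideoCapture(cv2.CAP_ARAVIS + 0)
if not cap.isOpened():
    raise RuntimeError("Could not open the camera via Aravis")

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Downscale to roughly the frame size we use for inference.
        frame = cv2.resize(frame, (160, 128))
        # ... hand `frame` to the DLC pose-estimation step here ...
finally:
    cap.release()
```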

We’ve trained both ResNet-50 and MobileNet v2 (v2_0.35_224) as network backbones. About 200 annotated training frames were enough to get robust pupil and eyelid detection in our case. To be clear, we do not train the network on the Jetson - we only use the Jetson to detect pupil and eye size with a trained network.
Without any optimizations, we get about 20 fps throughput for a frame size of 160x128 using MobileNet v2. We’re detecting 8 points in each frame (4 poles of the pupil, 4 poles of the eye). There’s probably quite a bit of room for improvement.
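
To give an idea of how the 8 points translate into a readout (and how one might implement the reward trigger mentioned above), here's a small sketch; the keypoint ordering, the pixel threshold, and the deliver_reward() callback are hypothetical, not part of DLC.

```python
# Sketch of a pupil/eyelid size readout from the 8 detected keypoints
# (4 poles of the pupil, 4 poles of the eye). The keypoint ordering below
# is a hypothetical convention, not DLC's actual output format.
import numpy as np

def diameters(points):
    """points: (8, 2) array of (x, y), ordered as pupil left/right/top/bottom,
    then eye left/right/top/bottom."""
    pupil_h = np.linalg.norm(points[0] - points[1])   # horizontal pupil diameter
    pupil_v = np.linalg.norm(points[2] - points[3])   # vertical pupil diameter
    eye_open = np.linalg.norm(points[6] - points[7])  # eyelid aperture
    return pupil_h, pupil_v, eye_open

# Example use: trigger a reward when the pupil is dilated beyond an arbitrary
# threshold (in pixels).
# pupil_h, pupil_v, eye_open = diameters(detected_points)
# if 0.5 * (pupil_h + pupil_v) > 40:
#     deliver_reward()   # hypothetical callback
```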

Let me know if you have any questions.

Kudos to:
The DeepLabCut developers
The European Research Council
Institut Pasteur


Hello, I have some questions.

  1. Did you run DeepLabCut on the Jetson Nano module?
  2. If so, I'd like to know how to install DeepLabCut on the Jetson Nano (e.g., installing Anaconda3 and TensorFlow).

That should all be answered in the original post: see the instructions under “On the software side”.


Hi Christoph,
I also work in neuroscience and would like to use the Jetson Nano for detailed real-time mouse tracking analysis. Your work is very interesting, so may I ask you a few questions?

  1. Could you post the specific version of TensorFlow you used? As far as I remember, DeepLabCut only supports TensorFlow 1.x, so it would be very helpful to know which version you used - is it the latest one?
  2. I know nothing about Aravis. Is it necessary if I only use a normal USB camera or the Raspberry Pi Camera v2 rather than the Point Grey one?
  3. You mentioned that “Most importantly, you’ll need to build OpenCV with Aravis support.” Can you tell me more about this?


Hi Christoph,
thank you for sharing your inspiring work! Do you know of any working method to run DeepLabCut on the Jetson in Docker? Sadly there is no official Docker image, but it would make life much easier.
