WalrusEye: helping scientists count and protect walrus

Use the NVIDIA Jetson Nano and a custom-trained object detection machine learning model to detect and count walrus, and use the Sony Spresense to gather key environmental sensor data


As the walrus population faces threats such as climate change and human encroachment on its habitat, it is important for scientists to monitor the population efficiently so they can gauge the species' status.

To count the walrus population, scientists currently take drone photos and count the walrus by hand during haul-outs, when walrus gather on the shore in summer after the sea ice has melted (please see the picture below).

Counting the walrus manually is a tedious, time-consuming, and error-prone process.

Counting the walrus population allows the scientists to see how climate change is affecting the walrus (the longer the walrus stay hauled out, the faster the sea ice is melting).

WalrusEye can count & monitor walrus automatically and make scientists' lives easier by providing a scientist dashboard built with the following features:

  • Walrus Count
  • Temperature Sensor
  • Microphone Sensor
  • Accelerometer Sensor
  • GNSS Sensor

Detailed Summary:

Feature 1: Walrus Count

A custom-trained YOLOv4 model detects & counts walrus and identifies other species in the walrus’s environment (Red Fox, Puffin). The model outputs a walrus count on the scientist’s dashboard (how many walrus are detected), updated every minute.

Feature 2: Temperature Sensor

A temperature sensor measures the surrounding environment’s temperature. This can help scientists track the impact of global warming on the walrus’s habitat over the years.

Feature 3: Microphone Sensor

Another use case scientists have is detecting stampedes. When walrus sense a human intrusion into their environment, they start stampeding, which can sometimes cause some of them to be killed. It is important for scientists to be able to detect these stampedes and to know when they are about to happen.

A microphone sensor is used to detect when a stampede is about to happen. Before a stampede occurs, walrus make very loud noises. The microphone sensor takes volume readings in real time, and the scientists can see on the dashboard when the microphone readings are too high.

Feature 4: Accelerometer Sensor

An accelerometer is used to detect the vibrations walrus make when they stampede. The scientists can easily see on the dashboard any drastic changes in the accelerometer data.

Feature 5: GNSS Sensor

Scientists also need to be able to differentiate between multiple WalrusEyes deployed in the walrus’s environments.

A GNSS sensor reads the GPS coordinates of the specific WalrusEye’s location from overhead satellites. In addition to helping differentiate between devices, this lets the scientists know the walrus population at specific locations.

WalrusEye pushes all of the data collected by the 5 features to AWS S3, from where it is loaded into an AWS SPICE dataset and the WalrusEye Scientist Dashboard created in AWS QuickSight.


Before we get into the specifics of how these 5 features are implemented in WalrusEye, let’s check out an awesome demo:

Attribution: The walrus music is taken from the “Singing Walrus” National Geographic article written by Acacia Johnson.

Technical Implementation:

Feature 1: Walrus Count

The Walrus Count is created using the output result from a custom-trained object detection model.

To train the YOLOv4-based WalrusEye model yourself, follow this guide to install DarkNet (I made this guide on the Ubuntu-based Pop!_OS system, but you can use any other Ubuntu-based system after installing the NVIDIA drivers for your GPU): Pop OS (Ubuntu): Install DarkNet for YOLOv4 Object Detection with GPU and OpenCV support | by Raunak Singh Inventor | Medium

Then follow the instructions in the official AlexeyAB YOLOv4 repo using my dataset. I used labelImg to label the dataset in YOLO format. There are 50 images for each of the 4 classes.
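For reference, each image in a YOLO-format dataset gets a matching .txt label file with one line per object: a class index followed by the normalized box center and size. Here is a minimal sketch of a parser you could use to sanity-check label files before training (the function name and the 4-class assumption are mine, not part of the dataset):

```python
def parse_yolo_label_line(line: str, num_classes: int = 4):
    """Parse one line of a YOLO-format label file.

    Expected format: "<class_id> <x_center> <y_center> <width> <height>",
    with all four coordinates normalized to [0, 1].
    """
    parts = line.split()
    if len(parts) != 5:
        raise ValueError(f"expected 5 fields, got {len(parts)}")
    class_id = int(parts[0])
    x, y, w, h = (float(p) for p in parts[1:])
    if not 0 <= class_id < num_classes:
        raise ValueError(f"class id {class_id} out of range")
    for name, value in zip(("x", "y", "w", "h"), (x, y, w, h)):
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name}={value} is not normalized to [0, 1]")
    return class_id, x, y, w, h

# Example: an object of class 0 centered in the image, half its width/height
print(parse_yolo_label_line("0 0.5 0.5 0.5 0.5"))  # (0, 0.5, 0.5, 0.5, 0.5)
```

Running a check like this over every .txt file in the dataset catches mislabeled or out-of-range boxes before they silently degrade training.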

Instructions: GitHub - AlexeyAB/darknet: YOLOv4 / Scaled-YOLOv4 / YOLO - Neural Networks for Object Detection (Windows and Linux version of Darknet )

Dataset: walrus-4-classes-data-50 | Kaggle

Note: This Kaggle dataset and the one mentioned below (the trained model files) were created by me (RaunakingCoder). I had to do this because some files were over GitHub’s file size limit of 100MB.

After training, you should have a model that can detect:

  • Walrus

  • Red Fox

  • Puffin

  • Polar Bear

If you don’t want to train the model, you can get the trained model files here: WalrusEye-model-yolov4-v2 | Kaggle.
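Once the detector returns its results, the per-frame count itself is a simple filter. Here is a minimal sketch of the counting step, assuming detections come back as (class_name, confidence, bounding_box) tuples, roughly the shape DarkNet's Python bindings return; the 0.5 confidence threshold is an illustrative choice, not the project's actual setting:

```python
def count_walrus(detections, conf_threshold=0.5):
    """Count detections labeled 'walrus' above a confidence threshold.

    detections: iterable of (class_name, confidence, bbox) tuples.
    """
    return sum(
        1
        for class_name, confidence, _bbox in detections
        if class_name == "walrus" and confidence >= conf_threshold
    )

# Example frame: two confident walrus, one low-confidence walrus, one puffin
frame = [
    ("walrus", 0.91, (120, 80, 60, 40)),
    ("walrus", 0.78, (300, 95, 55, 38)),
    ("walrus", 0.31, (410, 60, 50, 35)),   # below threshold, not counted
    ("puffin", 0.88, (500, 20, 20, 18)),
]
print(count_walrus(frame))  # 2
```

This number is what gets written into the dashboard data every minute.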

Feature 2: Temperature Sensor

The temperature sensor is connected to the Sony Spresense main board + extension board. It measures the temperature of the surrounding environment and shows it in the scientist dashboard.

To connect the temperature sensor, follow schematic A:

Note: we are connecting the sensors’ power and ground pins to the power rail so that we can use parallel circuits to connect multiple sensors to the Spresense.
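The conversion from a raw analog reading to degrees happens in the Arduino sketch on the Spresense, but the idea can be sketched in Python. This assumes a TMP36-style analog sensor and a 10-bit ADC with a 3.3 V reference; both are assumptions, since the exact sensor model and ADC range depend on your wiring:

```python
def tmp36_celsius(adc_reading, vref=3.3, adc_max=1023):
    """Convert a raw ADC reading from a TMP36-style sensor to degrees C.

    TMP36 output: 0.5 V at 0 degrees C, +10 mV per degree.
    vref and adc_max describe the ADC and are assumptions here.
    """
    voltage = adc_reading / adc_max * vref
    return (voltage - 0.5) * 100.0

print(round(tmp36_celsius(232.5), 1))  # 25.0
```

If you use a different sensor, swap in its own transfer function; the overall read-convert-report flow stays the same.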

Feature 3: Microphone Sensor

The microphone sensor is connected to the Sony Spresense main board + extension board.

Solder on the 3 pin-header.

Connect it to the Spresense main board + extension board using Schematic B.

Note: The Spresense docs (https://developer.sony.com/develop/spresense/tutorials-sample-projects/spresense-tutorials/using-multiple-microphone-inputs-with-spresense) show how to connect the electret microphone to the 10-pin header, but because I didn’t need an MP3 audio file, only a measure of the volume of the walrus’s calls, I opted to use a different connection. Please refer to Schematic B (this schematic includes Schematic A).
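Since WalrusEye only needs a loudness figure rather than an audio file, the microphone samples can be reduced to an RMS (root-mean-square) volume and compared against a threshold. A minimal sketch of that idea (the sample values and the 400.0 threshold are illustrative, not the project's actual settings):

```python
import math

def rms_volume(samples):
    """Root-mean-square amplitude of a window of audio samples."""
    if not samples:
        return 0.0
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def stampede_warning(samples, threshold=400.0):
    """Flag the window as suspicious when the volume exceeds a threshold."""
    return rms_volume(samples) > threshold

quiet = [10, -12, 8, -9, 11, -10]
loud = [900, -850, 920, -880, 910, -870]
print(stampede_warning(quiet), stampede_warning(loud))  # False True
```

On the dashboard, the raw volume figure is what scientists watch; the threshold check is just a convenient way to highlight windows worth a closer look.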

Feature 4: Accelerometer Sensor

Solder the 8-pin header on the accelerometer.

The accelerometer sensor is connected to the Sony Spresense main board + extension board. It is used to detect vibrations that occur when the walrus stampede.

Note: If you are using the Adafruit ADXL335 breakout board, like me, make sure to keep the 3.3V and ST pins unconnected.
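A resting accelerometer reads roughly 1 g from gravity, so stampede vibrations show up as the acceleration magnitude swinging away from that baseline. A minimal sketch of such a check, assuming the readings have already been converted to g in the Arduino code (the 0.3 g threshold is an illustrative value):

```python
import math

def accel_magnitude(x, y, z):
    """Magnitude of the acceleration vector, in g."""
    return math.sqrt(x * x + y * y + z * z)

def vibration_alert(x, y, z, threshold_g=0.3):
    """True when acceleration deviates from 1 g (gravity) by more than threshold."""
    return abs(accel_magnitude(x, y, z) - 1.0) > threshold_g

print(vibration_alert(0.0, 0.0, 1.02))  # False  (device at rest)
print(vibration_alert(0.4, -0.5, 1.6))  # True   (strong vibration)
```

Looking at the magnitude rather than a single axis means the alert works no matter how the box is oriented when deployed.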

Feature 5: GNSS Sensor

The GNSS sensor doesn’t require any wiring, as it is onboard the Sony Spresense main board. Just make sure to be outdoors when trying out the GNSS sensor: it can only receive GPS readings from the overhead satellites when it has a clear view of the sky.

Boxing Up WalrusEye

Drill 3 holes in the glass top of the junction box (do this under parent supervision if you’re a kid like me). Use ruler measurements to make them evenly spaced.

Drill a large hole on the top to route out the webcam and power wires.

Screw the Jetson Nano and Sony Spresense onto the screw holes.

Pass the jumper wires through the 3 glass holes and then join them to the sensors using the schematics provided.

Test WalrusEye

  1. Follow the installation instructions and use JetPack 4.4

installation instructions: Get Started With Jetson Nano Developer Kit | NVIDIA Developer

JetPack 4.4 download: JetPack SDK 4.4 archive | NVIDIA Developer

Note: make sure to use the 5V DC power supply to power the Jetson.

  2. Open the Arduino IDE on your PC and upload the walruseye_sensor_code.ino file to the Spresense.

  3. Connect the Sony Spresense main board + extension board & the webcam to the NVIDIA Jetson Nano through USB.

  4. Connect a USB WiFi dongle to the Jetson or connect a pair of WiFi antennas to the Jetson.

  5. Set up the boto3 library on the Jetson using this installation guide: Python, Boto3, and AWS S3: Demystified – Real Python

  6. Create a bucket called walruseye in your AWS account.

  7. Download my Kaggle dataset on the Jetson Nano, which contains the code for deployment: walruseye-deployment | Kaggle

Note: I had to put my deployment code in this walruseye-deployment Kaggle dataset that I created because some files were over GitHub’s file size limit of 100MB.

  8. Open a terminal and run:

cd ~/Downloads/walruseye-deployment/darknet/
python CUSTOM_webcam_to_aws.py

This runs CUSTOM_webcam_to_aws.py, a Python script that I created which:

a) clicks a photo from the webcam

b) runs the machine learning inference

c) grabs the temperature, microphone, accelerometer, and GNSS data from the Sony Spresense main board + extension board by reading the serial output from the arduino code

d) does it again! 🔁
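Step c) depends on the exact line format the Arduino sketch prints over serial. As an illustration only (the real walruseye_sensor_code.ino defines its own format, and the field layout here is an assumption), a comma-separated serial line could be parsed on the Jetson like this:

```python
def parse_sensor_line(line):
    """Parse one serial line of the assumed form:

    temperature,mic_volume,accel_x,accel_y,accel_z,latitude,longitude
    """
    fields = line.strip().split(",")
    if len(fields) != 7:
        raise ValueError(f"expected 7 fields, got {len(fields)}")
    temp, mic, ax, ay, az, lat, lon = (float(f) for f in fields)
    return {
        "temperature_c": temp,
        "mic_volume": mic,
        "accel": (ax, ay, az),
        "gnss": (lat, lon),
    }

# Example line as it might arrive over the serial port
reading = parse_sensor_line("4.5,312.0,0.01,-0.02,1.01,70.6369,-160.0383")
print(reading["gnss"])  # (70.6369, -160.0383)
```

In the real deployment the line itself would come from pyserial reading the Spresense's USB serial port.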

If everything works out, you should see the datetime-marked files in S3. The Python script pushes to S3 every minute. Try downloading one of the CSV files to check that the data is being pushed properly.
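The datetime-marked naming and the S3 push can be sketched as follows. The column names and key format here are assumptions based on the steps above; check CUSTOM_webcam_to_aws.py for the real ones. The boto3 upload is left as a commented usage example since it needs AWS credentials configured:

```python
import csv
import io
from datetime import datetime, timezone

def make_key(now=None):
    """Build a datetime-marked object key, e.g. data_20240101_120000.csv."""
    now = now or datetime.now(timezone.utc)
    return now.strftime("data_%Y%m%d_%H%M%S.csv")

def make_csv(walrus_count, sensor_row):
    """Serialize one minute of readings as CSV text (column names assumed)."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["walrus_count", "temperature_c", "mic_volume",
                     "accel_x", "accel_y", "accel_z", "lat", "lon"])
    writer.writerow([walrus_count] + list(sensor_row))
    return buf.getvalue()

def push_to_s3(body, key, bucket="walruseye"):
    """Upload one CSV file to the walruseye bucket."""
    import boto3  # needs AWS credentials configured on the Jetson
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body.encode())

# Example usage on the Jetson (uncomment once credentials are set up):
#   csv_text = make_csv(2, [4.5, 312.0, 0.01, -0.02, 1.01, 70.6369, -160.0383])
#   push_to_s3(csv_text, make_key())
```

Writing one small CSV per minute keeps each S3 object tiny, and the datetime in the key makes the files self-describing when browsed later.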

You should see the following columns:

Once the files are being pushed to S3 every minute, go to AWS QuickSight and click the S3 button:

Enter walrus eye as the name for the data source.

Create a file called walruseye_manifest.json and copy-paste the following contents:

    {
        "fileLocations": [
            {
                "URIPrefixes": [
                    "s3://walruseye/"
                ]
            }
        ],
        "globalUploadSettings": {
            "format": "CSV",
            "delimiter": ",",
            "textqualifier": "'",
            "containsHeader": "true"
        }
    }
Upload the .json manifest file to the pop-up:

Click Connect.

Now you can use this AWS SPICE dataset to replicate my WalrusEye Scientist Dashboard or create a dashboard in your own style.

WalrusEye is built!

Keep on tinkering and never stop making - Raunak Singh Inventor 🤖