Making a Wizard's Wand with a Jetson Nano

ElderWand-JetsonNano: Bringing Magic to Life with Machine Learning

Imagine waving a wand to unlock a door or turn on a light, just like in the world of Harry Potter. With the ElderWand-JetsonNano project, you can experience a touch of magic in real life. This project uses machine learning and the NVIDIA Jetson Nano to recognize wand movements and perform specific actions based on the identified gestures.

How It Works

The Magic Wand

Once you start the project, you use a wand to draw letters in the air. The process involves:

  1. Swinging the Wand: A quick swing of the wand starts the gesture-recognition process.
  2. Pointing at the Green Blob: Trace the desired letter while pointing the wand at the green blob on the screen.
  3. Completing the Gesture: Point the wand at the red blob to signal that the drawing is finished.
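
The repository does not document the exact tracking pipeline, but a common way to capture a wand path with OpenCV is to threshold each camera frame for the brightly colored wand tip and accumulate its positions into a stroke. The sketch below only illustrates that idea and is not the project's actual code: the HSV color range, camera index, and minimum blob area are assumptions.

    import cv2
    import numpy as np

    # Hypothetical sketch: track a brightly colored wand tip and record its path.
    # The HSV range, camera index, and minimum blob area are illustrative guesses.
    LOWER_HSV = np.array([40, 80, 80])    # lower bound of the assumed tip color
    UPPER_HSV = np.array([80, 255, 255])  # upper bound of the assumed tip color

    def find_tip(frame):
        """Return the (x, y) centroid of the largest blob in the color range, or None."""
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        largest = max(contours, key=cv2.contourArea)
        if cv2.contourArea(largest) < 50:  # ignore tiny specks of noise
            return None
        m = cv2.moments(largest)
        return int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])

    def capture_stroke(camera_index=0):
        """Accumulate wand-tip positions into a stroke until 'q' is pressed."""
        cap = cv2.VideoCapture(camera_index)
        stroke = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            tip = find_tip(frame)
            if tip is not None:
                stroke.append(tip)
            for point in stroke:
                cv2.circle(frame, point, 3, (0, 255, 0), -1)  # draw the path so far
            cv2.imshow("wand", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()
        return stroke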

Machine Learning Model

The project employs pre-trained machine learning models to recognize the letters you draw. The models have been trained to identify specific letters and trigger corresponding actions.
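
The *_rf.pkl filenames suggest random-forest models serialized with joblib (both scikit-learn and joblib appear in the library list below). A minimal, hypothetical example of loading one of the bundled model files and classifying a feature vector might look like the following; the feature length is only a placeholder, since the exact features the real models expect come from train.py.

    import joblib
    import numpy as np

    # Hypothetical usage sketch: load one of the bundled model files and classify
    # a feature vector extracted from a wand stroke. The feature length (64) is a
    # placeholder; the real models expect whatever features train.py produced.
    model = joblib.load("ACLO_rf.pkl")

    features = np.random.rand(1, 64)          # stand-in for a real stroke feature vector
    predicted_letter = model.predict(features)[0]
    print(f"Predicted letter: {predicted_letter}")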

Recognized Letters and Actions

  • ‘A’ for Alohomora: If the model predicts the letter ‘A’, it triggers the Alohomora spell, which opens a solenoid lock for 2 seconds before closing it.
  • ‘L’ for Lumos: If the model predicts the letter ‘L’, it activates the Lumos spell, turning on an LED bulb.
  • ‘N’ for Nox: If the model predicts the letter ‘N’, it invokes the Nox spell, turning off the LED bulb.
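
On the hardware side, each spell comes down to driving a GPIO pin on the Jetson Nano. The snippet below is a rough sketch of how that letter-to-action mapping could be wired up with Jetson.GPIO; the two-second lock timing follows the description above, while the pin numbers and function name are assumptions.

    import time
    import Jetson.GPIO as GPIO

    # Hypothetical pin assignments -- the actual project may use different pins.
    LOCK_PIN = 12   # solenoid lock driver
    LED_PIN = 16    # LED bulb (via relay or transistor)

    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(LOCK_PIN, GPIO.OUT, initial=GPIO.LOW)
    GPIO.setup(LED_PIN, GPIO.OUT, initial=GPIO.LOW)

    def cast_spell(letter):
        """Map a predicted letter to its spell, as described above."""
        if letter == "A":                  # Alohomora: open the lock for 2 seconds
            GPIO.output(LOCK_PIN, GPIO.HIGH)
            time.sleep(2)
            GPIO.output(LOCK_PIN, GPIO.LOW)
        elif letter == "L":                # Lumos: turn the LED bulb on
            GPIO.output(LED_PIN, GPIO.HIGH)
        elif letter == "N":                # Nox: turn the LED bulb off
            GPIO.output(LED_PIN, GPIO.LOW)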

Technical Details

Project Structure

The project’s directory structure includes the following key components:

ElderWand-JetsonNano/
├── venv/                # Virtual environment directory
├── ACLO_rf.pkl          # Pre-trained model file for ACLO
├── ACNO_rf.pkl          # Pre-trained model file for ACNO
├── ACPN_rf.pkl          # Pre-trained model file for ACPN
├── LICENSE              # License file
├── NPAC_rf.pkl          # Pre-trained model file for NPAC
├── main.py              # Main script for the project
├── main2.py             # Secondary main script
├── predict.py           # Script for making predictions
└── train.py             # Script for training the models

Libraries Used

The project utilizes several Python libraries to handle various tasks:

  • scikit-learn: For machine learning models.
  • pandas: For data manipulation and analysis.
  • joblib: For model serialization.
  • numpy: For numerical operations.
  • pillow: For image processing.
  • opencv-python: For computer vision tasks.
  • Jetson.GPIO: For interfacing with the GPIO pins on the Jetson Nano.

Setting Up and Running the Project

To set up the project, follow these steps:

  1. Create and Activate a Virtual Environment:

    python3 -m venv venv
    source venv/bin/activate  # the Jetson Nano runs Linux, so use the Linux activation command
    
  2. Install the Required Libraries:

    pip install scikit-learn pandas joblib numpy pillow opencv-python Jetson.GPIO
    
  3. Train the Models (if not already trained; an illustrative training sketch follows these steps):

    python train.py
    
  4. Run the Main Script:

    python main.py
    

    Alternatively, you can run the secondary main script:

    python main2.py
    
  5. Make Predictions:

    To use the pre-trained models for making predictions:

    python predict.py
    
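The internals of train.py are not shown here, but given the *_rf.pkl outputs, a plausible, purely illustrative training flow with scikit-learn is to fit a RandomForestClassifier on labeled stroke features and save it with joblib. The CSV filename, column names, and feature format below are assumptions, not the project's actual data layout.

    import joblib
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Illustrative training sketch -- not the project's actual train.py.
    # "gestures.csv" and its "label" column are hypothetical placeholders.
    data = pd.read_csv("gestures.csv")
    X = data.drop(columns=["label"])      # stroke feature columns
    y = data["label"]                     # letters such as 'A', 'L', 'N'

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42
    )

    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.2f}")

    joblib.dump(model, "letters_rf.pkl")  # saved like the bundled *_rf.pkl files
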

Conclusion

The ElderWand-JetsonNano project brings a touch of magic to the real world by combining machine learning and simple hardware components. Whether it’s unlocking a door or controlling a light, this project offers a fascinating glimpse into the possibilities of gesture recognition and automation. Dive into the code, train your models, and start casting your own spells!

For a demonstration of the project in action, check out this demo video.

For more information, check out the GitHub repository. Contributions and feedback are always welcome!


Bring a little bit of magic into your life with ElderWand-JetsonNano. Happy coding, and may your spells always work perfectly!