Hi, just wanted to share the super fun project I took on for my class at UCSD last quarter!
We used a Jetson Nano running JetPack 4.5, so we built OpenCV from source to get CUDA support. For our open-ended project I came up with the concept of a self-driving rideshare for UCSD students: a student requests a ride, waits for the car to arrive via autonomous GPS navigation, and on arrival the car verifies the rider's identity with facial recognition over a live video stream from an OAK-D Lite we mounted to the chassis. The recognition code was adapted from ageitgey's Python face_recognition library, which uses dlib, OpenCV (cv2), and NumPy to run face recognition on webcam video, but I modified it to work as a ROS2 package.
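For anyone curious how the identity check works under the hood: this is a minimal sketch of the matching step only, not our actual ROS2 node. ageitgey's face_recognition represents each face as a 128-dimensional encoding and, by default, calls two faces a match when the Euclidean distance between their encodings is at most 0.6; the toy 4-d vectors below just stand in for real encodings.

```python
import math

# face_recognition encodes each face as a 128-d vector of floats.
# Its compare_faces() simply thresholds the Euclidean distance between
# encodings; the default tolerance is 0.6.

def face_distance(enc_a, enc_b):
    """Euclidean distance between two face encodings."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(enc_a, enc_b)))

def is_match(known_enc, candidate_enc, tolerance=0.6):
    """True if the candidate face is close enough to the known rider's face."""
    return face_distance(known_enc, candidate_enc) <= tolerance

# Toy 4-d vectors stand in for real 128-d encodings here.
rider = [0.1, 0.2, 0.3, 0.4]
same_person = [0.12, 0.19, 0.31, 0.41]  # tiny distance -> match
stranger = [0.9, 0.8, 0.1, 0.0]         # large distance -> no match

print(is_match(rider, same_person))  # True
print(is_match(rider, stranger))     # False
```

In the real pipeline the encodings come from face_recognition.face_encodings() on frames pulled off the camera topic, and the known rider's encoding is computed once at ride-request time.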
We only had 10 weeks for this project, and at the beginning of the class I’d actually never touched an SBC in my life (or Python, or Linux, or ROS, or Git, lol), so we didn’t get to optimize navigation in ROS2, but we were able to train the car in path following using DonkeyCar.
I named it “UCSDrive!” and had so much fun with it that I wound up joining Triton AI, my school’s autonomous racing team. You can check out the documentation here! I containerized it with Docker so it should be reproducible, but the quarter ended before I could troubleshoot the arm64 image I tried to build for the Jetson. Right now the x86 container works, but the Jetson image doesn’t.
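For anyone else who wants to poke at the arm64 image: here's a hedged sketch of how a multi-arch build can be done with docker buildx from an x86 machine. The builder and image names are made up, and note that a Jetson image would typically also need to start FROM an NVIDIA L4T base image rather than a stock one, so treat this as a starting point rather than the fix.

```shell
# Register QEMU emulators so an x86 host can build arm64 layers
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Create and select a builder that supports cross-platform builds
# (the name "crossbuilder" is arbitrary)
docker buildx create --name crossbuilder --use

# Build for both architectures in one go (image tag is hypothetical)
docker buildx build --platform linux/amd64,linux/arm64 -t ucsdrive:latest .
```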
It’s definitely not perfect but it was a super awesome first experience with AI and autonomous systems! Thank you guys :)