Hello Everyone.
I am writing this because some of you requested that I share my final year project with you. I will be talking about the project's mechanical, electrical, and programming aspects.
Problem Statement
The problem statement we selected was about automating the agricultural task of weeding dry farmlands, which involves removing weeds from the soil. From the manual weeding video, it is clear that the robot should be able to maneuver over any farm-field terrain, which could include anything from small to significant obstacles. Also, the robot's motor torque should be sufficient for dragging the weeding tool.
Mechanical Aspects
The design started with selecting the number of wheels for the robot. We had the options of 3-wheel, 4-wheel, and more-than-4-wheel designs. In a 3-wheeled robot, the load is concentrated on two motors, so expensive high-torque motors would be required. In a 4-wheeled robot, costly spring-based suspension or shock absorbers are required to maintain continuous ground contact. So we were left with the more-than-4-wheel design.
We came across a paper describing the Rocker-Bogie mechanism and its use in NASA Jet Propulsion Laboratory's Mars rovers. There were two designs of this mechanism: one used a differential gearbox, as on the 'Opportunity' and 'Spirit' rovers; the other used a differential bar, as on the 'Curiosity' and 'Perseverance' rovers. We chose the differential bar because the components were easier and cheaper to procure.
The weeding tool was selected based on the effectiveness and weed removal efficiency reported in one paper, and a weeding mechanism was designed around it. The robot CAD model was created using the empirical relations between link lengths given in one of the papers. The model was built in Dassault Systèmes SolidWorks and then exported to Autodesk Fusion 360 for static stress analysis. Once the stress results were acceptable, a motion study was done back in SolidWorks to test the extremities of the Rocker-Bogie mechanism.
After the design was approved, fabrication was carried out and the robot was mechanically complete. Check out our video on the mechanical aspects.
Electrical and Electronics Aspects:
Once it was decided that a computer-vision-based solution would be used, a microprocessor was obviously needed. When single-board computers are considered, usually only the Raspberry Pi Foundation comes to mind. However, basic OpenCV-based programs struggle under changing lighting conditions, so we chose a deep-learning-based detection approach: if the dataset is large enough, it will work accurately in any lighting condition. The Raspberry Pi does not have a GPU suitable for running deep learning algorithms, so we chose NVIDIA's Jetson Nano 2GB variant. We were surprised in the end, as we were getting around 6 FPS with 2 independent cameras detecting about 12 crops in each frame.
The problem with the NVIDIA Jetson Nano 2GB is that it overheats a lot while running deep learning algorithms. So, to monitor and control its temperature, we externally added a 1.8-inch TFT (SPI) display and a cooling fan. The Jetson also needs a 5V 3A supply to run, and we only had AC-DC adapters that could source that much current, which is not practical for a mobile robot. Luckily, Xiaomi's Mi Power Bank 3i had just been released, which could source a 5V 3A supply.
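To give an idea of the temperature monitoring, here is a minimal sketch of the kind of loop we ran on the Jetson. The thermal-zone path is the standard Linux one exposed on the Jetson Nano, but the fan pin number and the on/off thresholds below are illustrative assumptions, not our exact configuration.

import time
import Jetson.GPIO as GPIO

FAN_PIN = 12        # assumption: board pin driving the fan through a transistor
ON_TEMP_C = 55.0    # assumption: switch the fan on above this temperature
OFF_TEMP_C = 45.0   # assumption: switch it off again below this temperature

GPIO.setmode(GPIO.BOARD)
GPIO.setup(FAN_PIN, GPIO.OUT, initial=GPIO.LOW)

def cpu_temp_c():
    # Standard Linux thermal zone on the Jetson Nano, reported in millidegrees C
    with open("/sys/devices/virtual/thermal/thermal_zone0/temp") as f:
        return int(f.read().strip()) / 1000.0

try:
    fan_on = False
    while True:
        t = cpu_temp_c()
        if not fan_on and t > ON_TEMP_C:
            GPIO.output(FAN_PIN, GPIO.HIGH)
            fan_on = True
        elif fan_on and t < OFF_TEMP_C:
            GPIO.output(FAN_PIN, GPIO.LOW)
            fan_on = False
        print(f"SoC temperature: {t:.1f} C, fan {'on' if fan_on else 'off'}")
        time.sleep(2)
finally:
    GPIO.cleanup()

The same temperature value can be rendered on the TFT display instead of printed; the hysteresis between the two thresholds keeps the fan from rapidly switching on and off.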
For motor drivers, the L298N board is usually chosen. The problem with it is that it cannot provide sufficient current for our high-torque motors, which draw up to 15A. So we selected the SmartElex 15S motor driver, supplied and delivered by Robu.in.
We had 6 locomotion motors, 2 weeding motors, 2 solenoid locks, and 1 cooling fan, all connected in parallel to a single 12V lithium polymer battery. It was obvious we needed a PCB for power distribution; otherwise, the wiring would be messy. So we designed and fabricated a PCB using Autodesk's EAGLE software, which helped us a lot. Thanks to Owais Shaikh & Hrithik Wani from Agnel Robotics Club for helping us design the PCB.
Finally, we made a button-based control panel for the robot, connected to the Jetson. It used software interrupts, and based on the button state, the respective Python program was launched in a new terminal. This was necessary so that the product would be easy for customers (farmers) to use. The programming aspects of this project are covered next.
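As a rough illustration, the button handler had the shape of the sketch below. The pin number, script name, and terminal command are assumptions for the sketch, not the exact ones from our robot, and the button line is assumed to have an external pull-up resistor.

import signal
import subprocess
import Jetson.GPIO as GPIO

START_BUTTON_PIN = 18                    # assumption: board pin wired to the start button
DETECTION_SCRIPT = "weed_detection.py"   # assumption: placeholder script name

GPIO.setmode(GPIO.BOARD)
GPIO.setup(START_BUTTON_PIN, GPIO.IN)    # button pulls the pin low when pressed

def on_button_pressed(channel):
    # Launch the detection program in its own terminal window so its
    # output stays visible and it can be stopped independently.
    subprocess.Popen(["lxterminal", "-e", "python3", DETECTION_SCRIPT])

# Software interrupt: the callback runs whenever the button pulls the pin low.
GPIO.add_event_detect(START_BUTTON_PIN, GPIO.FALLING,
                      callback=on_button_pressed, bouncetime=300)

signal.pause()   # keep the process alive, waiting for button presses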
Computer Vision and Programming Aspects
We used 2 cameras as the sensors for our project. We tried 2 approaches to lane detection: one based on the Hough transform and one based on deep learning.
Lane detection (environment perception) was based on the Probabilistic Hough Transform from the OpenCV library. We applied HSV color segmentation, grayscale conversion, and skeletonization by dilating and eroding the image to detect the crops. Finally, we applied the Probabilistic Hough Transform, which gave us 4 crop rows. We also used k-means clustering to merge the multiple detected lines.
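A minimal sketch of that pipeline is shown below. The HSV thresholds, kernel sizes, and Hough parameters are placeholder values assumed for illustration, not the tuned values we used on the field images.

import cv2
import numpy as np

def detect_rows(frame, n_rows=4):
    # 1. Segment green vegetation in HSV space (threshold values are illustrative).
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

    # 2. Thin the blobs with a dilate/erode pass before line fitting.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(mask, kernel, iterations=2)
    mask = cv2.erode(mask, kernel, iterations=2)

    # 3. Probabilistic Hough Transform on the binary mask.
    lines = cv2.HoughLinesP(mask, 1, np.pi / 180, threshold=50,
                            minLineLength=60, maxLineGap=40)
    if lines is None or len(lines) < n_rows:
        return []

    # 4. Cluster the many short segments into n_rows rows with k-means,
    #    clustering on the x-coordinate of each segment's midpoint.
    mids = np.float32([[(x1 + x2) / 2] for x1, y1, x2, y2 in lines[:, 0]])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(mids, n_rows, None, criteria, 10,
                              cv2.KMEANS_RANDOM_CENTERS)

    # Average the segments in each cluster into one representative crop line.
    rows = []
    for k in range(n_rows):
        cluster = lines[:, 0][labels.ravel() == k]
        if len(cluster):
            rows.append(cluster.mean(axis=0).astype(int))
    return rows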
Before our robot was manufactured, it was not possible to test the lane detection algorithm on hardware. But with ROS and Gazebo, we were able to test it in simulation and improve the detection algorithm.
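For the simulation tests, the glue code looked roughly like the ROS node below; the camera topic name and the lane_detection module holding the detect_rows sketch above are assumptions for illustration, not the exact names from our workspace.

import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

from lane_detection import detect_rows   # hypothetical module containing the sketch above

bridge = CvBridge()

def image_callback(msg):
    # Convert the simulated camera frame from Gazebo into an OpenCV image.
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
    rows = detect_rows(frame)
    rospy.loginfo("detected %d crop rows", len(rows))

rospy.init_node("lane_detection_test")
# Assumption: the Gazebo camera plugin publishes on this topic name.
rospy.Subscriber("/robot/camera/image_raw", Image, image_callback)
rospy.spin()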
Finally, when we tested the Hough-based algorithm in real life, we got poor results. The detection used to fail whenever the lighting conditions changed, and when tested in a field with high weed density, it could not differentiate between weed and crop. So we needed a new detection algorithm.
To counter the problems mentioned above, we moved to a deep learning approach with the YOLO family of models, which are known for accurate and fast detection. Initially, we had planned to buy a Raspberry Pi 4B, but we purchased the NVIDIA Jetson Nano 2GB instead, as it came with a better GPU and was available at the same price as the Raspberry Pi.
We started with data collection by taking pictures of weeds in groundnut fields in Sangli, Maharashtra, India. We collected around 100 images, labeled them with LabelImg, and trained a YOLOv3 model on a Google Colab notebook. We got an accuracy of 85-90%.
We then tested the YOLOv4 model, which was more accurate and faster than YOLOv3. This time we captured a new dataset in a strawberry field in Mahabaleshwar, Maharashtra, India, with two classes: Weed and Crop (strawberry). But we were getting only 2 FPS when we tested it on a video on the Jetson, so we shifted to the tiny version of YOLOv4. This time we were getting around 12 FPS, which was good enough for this project.
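For reference, one minimal way to run and time a YOLOv4-tiny model is sketched below using OpenCV's DNN module. The file names, input size, and thresholds are assumptions, and this is only one of several ways the model can be deployed on the Jetson.

import time
import cv2

# Assumption: darknet config/weights produced by training, plus a test video.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny-weed.cfg", "yolov4-tiny-weed.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)  # use the GPU if OpenCV was built with CUDA
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA)

model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture("field_test.mp4")
frames, start = 0, time.time()
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.4, nmsThreshold=0.4)
    frames += 1

print(f"Average FPS: {frames / (time.time() - start):.1f}")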
Before testing the robot directly in a farm field, we created an indoor mock farm field, with mock crops made of green paper wrapped around PVC pipes.
After the mock crops were detected, we found the centroid of each bounding box and sorted the centroids into 4 arrays, one per crop row. Then linear regression was done on each array to create the 4 crop lines. Finally, the steering angle was found from the angle of inclination of the detected lines. Initially, we were getting very jerky motion, but with PID controller tuning this problem was solved.
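The geometry of that step is sketched below. The line fit and angle calculation follow what is described above, but the PID gains and the exact way the angle feeds the motor command are illustrative assumptions rather than our tuned values.

import numpy as np

def steering_angle_deg(row_centroids):
    """row_centroids: list of 4 arrays of (x, y) bounding-box centroids, one per crop row."""
    angles = []
    for pts in row_centroids:
        pts = np.asarray(pts, dtype=float)
        if len(pts) < 2:
            continue
        # Fit x = m*y + c so near-vertical crop rows are handled cleanly.
        m, _ = np.polyfit(pts[:, 1], pts[:, 0], 1)
        angles.append(np.degrees(np.arctan(m)))  # inclination relative to the image vertical
    # Steer towards the average inclination of the detected rows (0 = rows are vertical).
    return float(np.mean(angles)) if angles else 0.0

class PID:
    def __init__(self, kp=0.8, ki=0.0, kd=0.2):  # illustrative gains, not our tuned values
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt if dt > 0 else 0.0
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Usage: the steering angle acts as the error signal, and the smoothed PID output
# is sent as a differential speed command to the left and right wheel motors.
# pid = PID(); command = pid.update(steering_angle_deg(rows), dt=1 / 12.0)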
Check out our video on the programming aspects of this project.
Thank You.
Owners:
Anish Dalvi
Tushar Toraskar
Airily Shefin Victor
Salvious Machado
All the code used: