Gavin's Autonomous Crop Sprayer

I have been building an autonomous crop sprayer to selectively spray weeds in my hay fields. So far I have been using my laptop to run my neural network, which uses the fast.ai library built on top of PyTorch. I have been waiting for a single-board computer like the Raspberry Pi that can handle neural networks; the Jetson Nano seems to fit the bill, and I have just bought one to find out whether it will work for my application.

Hi there. Have you got a Nano to try yet? If not, you could send me your code to try on mine. I am focusing on an audio application, but would be glad to lend a hand.

Hi there, thanks for your interest. I did get a Nano about six months ago; it's a great bit of kit.

My first step is to use a Raspberry Pi camera to trigger a spray head (solenoid) when it sees a particular object. For this I used one of the built-in models that you can download on the Nano and set the sprayer to come on when it saw, for instance, a computer keyboard. The next step is to use my own model to do the same with the particular object that I want. This is where I am stuck, or at least need more time, as I find translating models between different frameworks tricky. I used PyTorch and fast.ai to train my model and am struggling to port it to the Nano in the correct format.
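The route I am eyeing is to export the trained PyTorch model to ONNX, which TensorRT on the Nano can then consume. Something along these lines should do the export (a rough sketch, not my actual code; the file names, input size and the fastai v1 call are placeholders):

```python
# Rough sketch: pull the plain PyTorch model out of a fastai Learner and
# export it to ONNX so it can be built into a TensorRT engine on the Nano.
# 'dock_classifier.pkl', the 224x224 input size and the fastai v1 API are
# assumptions, not the actual setup from this thread.
import torch
from fastai.basic_train import load_learner  # fastai v1; the v2 API differs

learn = load_learner('.', 'dock_classifier.pkl')  # hypothetical exported learner
model = learn.model.eval().cpu()                  # underlying PyTorch module

dummy = torch.randn(1, 3, 224, 224)               # assumed input size
torch.onnx.export(model, dummy, 'dock_classifier.onnx',
                  input_names=['input'], output_names=['output'],
                  opset_version=11)
```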

What is your idea for an audio application?

I am at about the same spot. My test model was Inception_v1 on TensorFlow (I had been working with TensorFlow and Keras on the main 1080 computer), which I chopped up into a Jupyter notebook and ran. All OK.

But those models were connected to tensorflow.contrib.slim, which seemed to be obsolete, or at least looked like too much trouble to get running on the 1080 for training.

So my plan now is to continue with Keras. I have just put my notebook with the Keras model from the 1080 computer onto the Nano, and need to change a few paths to test it.

Update: just loaded the Keras model in and ran it, no problems. It says count_params = 37815, so not a particularly huge model. %timeit gives 19 ms.
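In case it helps anyone following along, the load-and-time step is only a few lines; this is a sketch rather than my exact cell (the file name and input shape are placeholders):

```python
# Sketch of loading a saved Keras model on the Nano and timing one prediction.
# 'whistle_cnn.h5' and the spectrogram input shape are placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model('whistle_cnn.h5')
print(model.count_params())                          # ~37815 in this case

x = np.random.rand(1, 64, 64, 1).astype('float32')   # dummy spectrogram batch
%timeit model.predict(x)                              # IPython magic, notebook only
```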

On the 1080 computer, I have trained a hand-rolled CNN model on spectrograms of various whistle sounds. The model then runs on snips from the microphone, and sends the results via OSC (Open Sound Control) to an animation that responds to them.
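The OSC part is tiny; with the python-osc package it is roughly this (the host, port and address pattern are placeholders, not my actual setup):

```python
# Rough sketch of pushing classification results to the animation over OSC
# using the python-osc package. Host, port and address pattern are assumptions.
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient('127.0.0.1', 9000)        # animation machine (assumed)

def send_result(label, confidence):
    # one OSC message per classified snip; address pattern is hypothetical
    client.send_message('/whistle/class', [label, float(confidence)])

send_result('rising_whistle', 0.93)                # example call
```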

I have just put a notebook on Kaggle, trying out a Keras model on the docknet images:
https://www.kaggle.com/katkitty/kerasdockmodel

I know I can get that model onto the Nano, though of course the notebook training of it might not be the best.

Thanks, repeatingshadow.

It would be great if there were an easy way to train a model and output the correct graph for running locally on the Nano. I guess it is still very early days for this stuff.

For reference, this blog post on running Keras models on the Nano also looks helpful:

https://www.dlology.com/blog/how-to-run-keras-model-on-jetson-nano/
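If I have read it right, that approach goes via freezing the Keras model into a single TensorFlow graph, which can then be optimised with TensorRT on the Nano. The freezing step looks roughly like this (a sketch assuming TF 1.x; the file names are placeholders):

```python
# Sketch of freezing a Keras model into a single .pb graph for use on the
# Nano. Assumes TF 1.x (graph mode); file names are placeholders.
import tensorflow as tf
from tensorflow.keras import backend as K

model = tf.keras.models.load_model('whistle_cnn.h5')
sess = K.get_session()

# bake the trained weights into constants in the graph
frozen = tf.graph_util.convert_variables_to_constants(
    sess, sess.graph.as_graph_def(),
    [out.op.name for out in model.outputs])

tf.train.write_graph(frozen, '.', 'frozen_model.pb', as_text=False)
```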

Thanks for the reference. I need to check that out and see if my models can be sped up.

Meanwhile, I got the MobileNet model trained to 0.99 accuracy. But it says a few of the grass pics are docks. They have some other broad-leaf plants in them; do you know offhand what they might be?

Sorry, haven’t figured out how to put images up here.

Could they be buttercups? That's the other big weed in my fields. 0.99 accuracy is more than good enough for this application. I will try to hack together some hardware in the next week or two so we can test this out in the field.

I think I'm going to use a Raspberry Pi 2 camera for the vision and an Arduino to control the solenoid.
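The Nano-to-Arduino link can be as simple as a single byte over USB serial; a minimal sketch with pyserial (the port name, baud rate and one-byte protocol are assumptions on my part):

```python
# Minimal sketch: fire the sprayer by sending a single byte to the Arduino
# over USB serial whenever the classifier flags a weed. Port name, baud rate
# and the one-byte protocol are assumptions; the Arduino side would just read
# the byte and switch the solenoid pin.
import serial

arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=1)

def set_sprayer(is_weed):
    arduino.write(b'1' if is_weed else b'0')

set_sprayer(True)   # example: weed detected, open the solenoid
```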

I did think about creating another dataset called "carrots on concrete". It would be a test dataset that could be used indoors: the pictures would be of a concrete floor, using carrots or potatoes instead of weeds in a field. That way I could test all year round in the relative comfort of a shed, using carrots or similar veg from the supermarket.