Fruit Picking Platform - Machine Autosteer

Hello all,

I am looking for some help in setting up a vision-based steering system for a fruit picking platform. The machine is already built and the electronics are CAN bus. The machine doesn't travel faster than 2 mph, and the steering system only has to keep the machine straight within an orchard row (so there is a line of trees on both the left and the right side). The machine operator will turn on auto-steer when he/she drives into an orchard row and will turn it off at the end of the row to make a manual turn. This is probably a trivial task for anyone who knows what they are doing.

So far I have an Intel RealSense D455 depth camera, a Raspberry Pi 4, and a PiCAN2 (to convert to CAN messaging).
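For reference, a minimal sketch of streaming depth and color from the D455 with the pyrealsense2 library (assuming it is installed on the Pi or Jetson; the 640x480 @ 30 fps settings are just example values):

```python
# Minimal sketch: stream depth + color frames from an Intel RealSense D455.
# Assumes pyrealsense2 is installed; resolution/FPS values are just examples.
import pyrealsense2 as rs
import numpy as np

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

try:
    while True:
        frames = pipeline.wait_for_frames()
        depth = frames.get_depth_frame()
        color = frames.get_color_frame()
        if not depth or not color:
            continue
        depth_img = np.asanyarray(depth.get_data())   # uint16 depth image
        color_img = np.asanyarray(color.get_data())   # HxWx3 BGR image
        # ... feed depth_img / color_img into the row-following logic here ...
finally:
    pipeline.stop()
```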

Does anyone have a recommendation on whether I should switch from the Pi to a Jetson? Can a Jetson communicate with a CAN-based controller (I saw this may be possible with third-party hardware)? Speaking as an engineer at an industry-leading company: once my company gets the ball rolling on AI, I think there will be huge industry demand for this in agriculture.

I am not an expert at all in this area, but maybe including GPS positioning would help here as well?

Regarding CAN - it is supported by Xavier: Enabling CAN on Nvidia Jetson Xavier Developer Kit | by Ramin Nabati | Medium
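Once the CAN interface is up as a SocketCAN device (whether via the Xavier's built-in controller or a PiCAN2 on the Pi), sending a steering command from Python could look roughly like the sketch below. The "can0" channel, the 0x123 arbitration ID, and the payload encoding are placeholders for whatever your steering controller actually expects:

```python
# Rough sketch: send a steering command over SocketCAN with python-can.
# Bring the interface up first, e.g.: sudo ip link set can0 up type can bitrate 250000
# The arbitration ID and payload layout below are placeholders.
import can

bus = can.interface.Bus(channel="can0", interface="socketcan")

def send_steer_command(angle_deg: float) -> None:
    # Example encoding: signed angle in 0.1-degree units, 2 bytes little-endian.
    raw = int(angle_deg * 10)
    msg = can.Message(
        arbitration_id=0x123,
        data=raw.to_bytes(2, "little", signed=True),
        is_extended_id=False,
    )
    bus.send(msg)

send_steer_command(-2.5)  # small correction to the left (sign convention is hypothetical)
```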

I currently have the machine self-driving on GPS alone, but the problem is that the signal isn't reliable and many orchard fields don't have good coverage. The trees also hinder the signal. I'll look into the Xavier though, thanks!

One vision-based approach to this using DNNs may be to collect data while the machine is under human operation (i.e. being driven through the orchard rows). Save the sensor data (inputs to the DNN) along with the steering controls (outputs of the DNN - the labels). If you collect enough of this data, you may be able to train a model that mimics the human driving. This is a similar concept to DriveNet.
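As a very rough illustration of the data-collection half of that idea, the sketch below logs a camera frame together with the operator's current steering value at a fixed rate. The get_frame and get_steering callables are hypothetical stand-ins for reads from the D455 and from the steering value on the CAN bus:

```python
# Rough sketch of logging (frame, steering) pairs while a human drives the rows.
# get_frame and get_steering are hypothetical callables you would wire up to the
# D455 stream and to the steering value read back over CAN.
import csv
import time
import cv2

def log_driving_session(get_frame, get_steering, out_dir="session_01", hz=10, n_samples=1000):
    with open(f"{out_dir}/labels.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["frame", "steering"])
        for i in range(n_samples):
            frame = get_frame()          # e.g. color image (numpy array) from the D455
            steering = get_steering()    # e.g. current steering angle from the CAN bus
            cv2.imwrite(f"{out_dir}/{i:06d}.png", frame)
            writer.writerow([f"{i:06d}.png", steering])
            time.sleep(1.0 / hz)
```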

Other approaches could include a segmentation network, or a model that outputs the center line of the row as opposed to driving commands. These may require more manual data labeling, however, whereas the DriveNet-style data can be collected/labeled while the machine is in operation. If you have 3D virtual assets of an orchard, you could train it in simulation too.
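If you go the center-line route, the last step is straightforward: once the network (or a classical depth-based heuristic) gives you the row's center line in the image, a small proportional controller on the lateral offset is often enough at 2 mph. A sketch, with the gain and limits as made-up values to tune on the machine:

```python
# Sketch: turn a detected row center line into a steering angle.
# kp and max_angle_deg are made-up values to be tuned on the actual machine.
def steering_from_centerline(center_x_px: float, image_width_px: int,
                             kp: float = 20.0, max_angle_deg: float = 10.0) -> float:
    # Normalized lateral offset: 0 when the row center sits in the middle of the
    # image, -1/+1 when it touches the left/right edge.
    offset = (center_x_px - image_width_px / 2) / (image_width_px / 2)
    angle = kp * offset
    return max(-max_angle_deg, min(max_angle_deg, angle))

# Example: row center detected 40 px right of image center in a 640 px wide image
# -> small correction to the right.
print(steering_from_centerline(360, 640))  # 2.5
```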

Dusty,

Is there any way I can get NVIDIA support on a process for this? What is the best way to get pricing on a recommended package, as well as guidance?

My company has plans for this machine beyond the steer assist that might affect which vision/driving stack we go with.

Hi @eddie2, you can contact our partner companies from the Jetson Ecosystem about developing custom hw/sw solutions: