Best way of utilising the Jetson’s resources


I’ve had my Jetson for a little while now - I have set it up using Grinch, and I can remote connect to it and so on using my home network. Now it’s time to start using it for what I bought it for!

I thought I’d start a topic on what can be done with Jetson in the area I am interested in - autonomous vehicles. My background is that of an aircraft structures engineer, rather than a coder. I have dabbled in C++ and Python before and am ready to learn!

I’d like to use my Jetson as part of an autonomous vehicle architecture. (The task I have taken on: I want a standard system, comprised of a number of components, that can be put into a vehicle regardless of the type of vehicle, so land, sea or air.) I see the Jetson handling high-level functions like vehicle management, navigation (including delivering commands to the autopilot device), and collision avoidance, using onboard sensors to build up a virtual volume to work within.

My current knowledge limitation is centred around how I would organise the more heavy-duty task of taking in sensor data (such as LIDAR and an IR camera) and processing it to create a usable volume within which algorithms can work out optimal paths towards goals and around ‘pop-up’ obstacles. Can I run this whole process on the GPU? Or is it the case that, using CUDA, ‘batches’ of data are sent to the GPU for processing and the results sent back to the process running on the CPU?

I guess that’s the current limit of my knowledge - how does the code and the board hardware co-exist?

I hope this can spark a bit of discussion, or if I’m totally in the wrong place, someone can show me the way!



Hi Coanda,

You’re asking some pretty interesting questions. Dustin Franklin wrote a good overview of how one might start tying things together:

In the open source world, there’s the Robot Operating System (ROS), with installation instructions for the Jetson:

To say ROS is a rich environment is an understatement.

With that said, it’s rather a brave new frontier as to what people can do with the Jetson in autonomous vehicles. At the same time, you shouldn’t feel limited to using just one Jetson for a larger vehicle. I’ll also note that it’s not for the faint of heart; it’s pretty hard-core stuff given the current state of both the Jetson and autonomous vehicles in general. Like any new hardware platform, there are a whole bunch of questions about how to get access to everything. It’s easily more complex than a desktop PC from just a few years ago once you add in power management and multiple-sensor monitoring.

This forum tends to be better for asking specific questions, which makes it easier for people to lend their particular expertise. Another resource is the Jetson TK1 embedded community page on Google Plus.

Hope this helps, and I’m looking forward to hearing about what you build.

My long-term project is to build a remotely operated vehicle based on the Jetson. It will not be autonomous but will have some simple logic in it, e.g. stopping before hitting an obstacle. It could be a good base for autonomous operation, though. I originally started to build a quadrotor from scratch but quickly noticed that I should start with something on the ground first.

I use an RC car as the base for the mechanics, as I realized it’s an easy way to get started with decent mechanics. Maybe some day all the rigid parts will be 3D printed and only the tires, suspension and motors will come from RC parts.

I’ve replaced the original electronics with a Linux board (to be Jetson) and a microcontroller board. Linux handles all the high level stuff and the microcontroller board handles motors and other real-time stuff. The Linux board sends commands to the MCU over USB.
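For the curious, the Linux-to-MCU command path can be sketched roughly like this. The frame layout, field names and checksum below are purely an illustration I made up for the example, not my actual protocol:

```python
import struct

def encode_motor_command(throttle: int, steering: int) -> bytes:
    """Frame a motor command for the MCU.

    Hypothetical layout: start byte 0xA5, message id, int16 throttle,
    int16 steering (big-endian), then a one-byte XOR checksum so the
    MCU can reject corrupted frames.
    """
    payload = struct.pack(">Bhh", 0x01, throttle, steering)
    checksum = 0
    for b in payload:
        checksum ^= b
    return b"\xa5" + payload + bytes([checksum])

# On the vehicle, the frame would then be written to the MCU's
# USB-serial port, e.g. with pyserial:
#   serial.Serial("/dev/ttyACM0", 115200).write(frame)
frame = encode_motor_command(200, -50)
```

The MCU side just scans for the start byte, reads the fixed-size payload, verifies the checksum and acts on the command; anything more real-time-critical than that stays on the microcontroller.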

On the SW side I’m currently using Qt both on the car and in the PC controller application. I’ve written everything from scratch, which probably wasn’t a good idea. The only “clever” thing I think my approach has is the communication protocol. I use a single UDP stream with a relay so that it can be used from one NAT’ed network to another. The protocol has low- and high-priority packets. The low-priority packets (e.g. the video stream) may get lost, while the high-priority packets have an acknowledge mechanism. But obsoleted high-priority packets are not resent; they are replaced by new ones. E.g. if a motor control packet is lost and a new command is issued before the retransmit of the old one, then the new command is sent and the obsoleted one is discarded. The reason for this approach is to allow longer latencies (like 3G/LTE) while still trying to control the vehicle in real time.
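The “replace instead of retransmit” rule for high-priority packets can be sketched like this. A minimal illustration only; the class and message-type names are made up for the example and the actual sockets and timers are left out:

```python
import itertools

class HighPriorityOutbox:
    """Per-message-type outbox for high-priority packets.

    Each message type (e.g. 'motor', 'camera') keeps only its newest
    unacknowledged packet, so a fresh command supersedes a lost older
    one instead of retransmitting stale state over a slow 3G/LTE link.
    """

    def __init__(self):
        self._seq = itertools.count(1)
        self._pending = {}  # msg_type -> (seq, payload)

    def send(self, msg_type, payload):
        seq = next(self._seq)
        # Overwrites any unacked older command of the same type.
        self._pending[msg_type] = (seq, payload)
        return seq

    def ack(self, msg_type, seq):
        # Drop the pending entry only if the ack matches the newest packet;
        # acks for superseded packets are simply ignored.
        if self._pending.get(msg_type, (None,))[0] == seq:
            del self._pending[msg_type]

    def to_retransmit(self):
        # On an ack timeout, resend only the newest packet per type.
        return dict(self._pending)
```

So if a “forward 50” command is lost and “forward 80” is issued before the retransmit timer fires, only “forward 80” ever goes back on the wire.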

I’ve been a bit lazy with the project lately. I’ve been able to both rotate the camera and drive the car, but in practice the “experience” wasn’t smooth enough for proper remote operation. I need to fine-tune the camera motor control.

If there is interest in building a (cheap) vehicle platform based on the Jetson, an MCU and RC parts, it would be really cool. So far I’ve been doing everything (even designing and soldering the MCU board) by myself and it’s way too slow.

Here’s a blog post about my project:

About the original question: I’m not that familiar with CUDA, but AFAIK you have a normal process running on the CPU and it uses the CUDA API; the GPU does most of the work. On Tegra, the GPU accesses the same physical memory as the CPU, but there are some considerations that need to be taken care of due to caching and whatnot. So I don’t think you always need to copy the data between the two, but you can’t just use it directly from both sides either. I’m just guessing here, but there are plenty of docs about that out there.

It might make sense to keep the CUDA questions apart from the robotics questions to get shorter and more precise answers :)

Thanks for the replies - interesting reads.

I agree - the different aspects of my original post will require their own threads in the right areas of these forums. This one is just to get a bit of chat going and, I suppose, to emphasise that the different aspects are closely intertwined.

As for my project, a bit more detail. Like you, Kulve, I am prototyping using an RC car. I’ve gone for a Tamiya CR-01 Unimog since it has four-wheel drive (2WD is easy to achieve too), good ground clearance and a great shell for mounting equipment, as more than half of it is a flatbed which can be easily modified. I will be using the Jetson and have that connected to a Pixhawk autopilot. The autopilot will be used to command the vehicle and feed vehicle-specific data back to the Jetson. The Jetson will handle comms (as yet to be defined, but it could be Wi-Fi or GSM, and will eventually have a sat-phone module as a fallback comms method), sensors (IR and LIDAR of a suitable type), and processing that data in a virtual environment for path planning.
In terms of comms, I plan to use the MAVLink protocol where I can.
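For anyone else getting started with it: MAVLink frames are protected by a CRC-16/X.25 checksum (the real protocol also mixes a per-message seed byte into the CRC, which I’ve omitted here). A minimal sketch in Python, purely as a taste of the wire format; in practice a library such as pymavlink handles all the framing for you:

```python
def x25_crc(data: bytes, crc: int = 0xFFFF) -> int:
    """Accumulate the CRC-16/X.25 checksum used by MAVLink frames.

    Starts from 0xFFFF and folds in one byte at a time; the 16-bit
    result is appended to the end of each frame.
    """
    for byte in data:
        tmp = byte ^ (crc & 0xFF)
        tmp = (tmp ^ (tmp << 4)) & 0xFF
        crc = ((crc >> 8) ^ (tmp << 8) ^ (tmp << 3) ^ (tmp >> 4)) & 0xFFFF
    return crc
```

The receiver recomputes the CRC over the incoming frame and drops the packet on a mismatch, which matters over lossy radio or cellular links.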

The overall aim is to develop autonomous vehicle systems for agriculture (ground, with air as required) and aerial systems for emergency services. The hope is that all that will be required to move from one machine to the next is a separate software upload and config file, thus easing our work as manufacturers in developing these solutions.

So, there’s a lot of work to do!

I will be taking some time to go through the CUDA education sub forum.

So here’s an update on the virtual model of the shell and electronics. I’ve only modeled the flatbed because I think everything will be contained there to begin with. You see the flatbed with forward towards the upper-left corner. The shelf is on nylon hex standoffs, and the TK1 is also going to be on standoffs, at the moment probably small MakerBeam sections for solidity. The battery which will power the TK1 and the connected Pixhawk autopilot will be located on the shell flatbed and should slide in and out one way or another! I still need to model the Pixhawk and its GPS module on a smaller shelf over the TK1. I am hoping that the aluminium shelves will shield the electronics from motor and battery noise; the vehicle propulsion battery is separate and sits in the chassis.