End-to-End Deep Learning for Self-Driving Cars

Originally published at: End-to-End Deep Learning for Self-Driving Cars | NVIDIA Technical Blog

In a new automotive application, we have used convolutional neural networks (CNNs) to map the raw pixels from a front-facing camera to the steering commands for a self-driving car. This powerful end-to-end approach means that with minimum training data from humans, the system learns to steer, with or without lane markings, on both local roads…
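
To make the "pixels in, steering out" idea concrete, here is a minimal sketch of the training setup, assuming a toy two-layer CNN, random stand-in data, and arbitrary hyperparameters. None of this is the network from the post; it only shows the shape of the approach:

```python
import torch
import torch.nn as nn

# Toy end-to-end regressor: raw frames in, one steering value out.
net = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
    nn.Flatten(),
    nn.Linear(36 * 14 * 47, 1),          # flattened features -> steering scalar
)
opt = torch.optim.Adam(net.parameters(), lr=1e-4)

frames = torch.rand(8, 3, 66, 200)       # dummy batch of camera frames
steering = torch.randn(8, 1)             # recorded human steering per frame

loss = nn.functional.mse_loss(net(frames), steering)
opt.zero_grad()
loss.backward()
opt.step()
print(loss.item())                       # squared error against the human driver
```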

What happens if there is no road?

The car drives only on established roads.

Each steering decision is based solely on the current frame? Also, I didn't see the gas or brake pedal mentioned -- is the human still responsible for these?

That's correct. Each steering decision is based solely on the current frame. Speed is controlled by the car's adaptive cruise control.

That's very impressive! Have you considered using an LRCN (https://arxiv.org/abs/1411....)? Also, is the ACC what is currently available to consumers?

I agree, it's quite amazing what the network can do with a single frame. We are working on a number of advancements whose results we will publish at some time in the future.
Yes, it's the standard ACC in the car, available to consumers.

I'm still not clear on how a self-driving car deals with situations that have not been seen by the pre-trained image model.

Hi, in the last layer (Fig. 5), after the flatten, shouldn't it be 64*1*18 = 1152 neurons? Thanks
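
For what it's worth, the conv stack from the paper can be replayed to check that number. A quick sketch, assuming the layer sizes and strides shown in Fig. 5 (three 5x5/stride-2 layers, then two 3x3/stride-1 layers); activations are omitted since they don't affect shapes:

```python
import torch
import torch.nn as nn

# Conv stack from Fig. 5, shapes only.
convs = nn.Sequential(
    nn.Conv2d(3, 24, kernel_size=5, stride=2),
    nn.Conv2d(24, 36, kernel_size=5, stride=2),
    nn.Conv2d(36, 48, kernel_size=5, stride=2),
    nn.Conv2d(48, 64, kernel_size=3, stride=1),
    nn.Conv2d(64, 64, kernel_size=3, stride=1),
)

x = torch.zeros(1, 3, 66, 200)   # the 66x200 YUV input from the paper
print(convs(x).shape)            # torch.Size([1, 64, 1, 18]) -> 64*1*18 = 1152
```

With these layer hyperparameters the flatten does come out to 1152.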

Hello!
Are there any reports on how well the network converges? What is the minimal loss achieved?

Hi all!
Is the model available somewhere? And the dataset you generated?

I have a question about section 6:

"Since human drivers might not be driving in the center of the lane all the time, we manually calibrate
the lane center associated with each frame in the video used by the simulator. We call this position
the “ground truth”."

What do you mean with "manually calibrate the lane center"? What did you do exactly?

Many thanks!

'Our system has no dependencies on any particular vehicle make or model.'
How do you make it independent of any make or model? If you are recording the steering wheel angle, it should only work in the car that was driven to collect the data.
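
One detail from the paper bears on this: the training target is the inverse turning radius 1/r rather than the raw steering wheel angle, precisely to keep the system independent of the car's geometry. A rough sketch of the conversion, assuming a simple bicycle model; the wheelbase and steering ratio below are made-up example values, not the test car's:

```python
import math

WHEELBASE_M = 2.7       # assumed distance between front and rear axles
STEERING_RATIO = 15.0   # assumed steering-wheel-to-road-wheel ratio

def inverse_turn_radius(steering_wheel_deg: float) -> float:
    """Steering wheel angle -> 1/r via a simple bicycle model."""
    road_wheel_rad = math.radians(steering_wheel_deg) / STEERING_RATIO
    return math.tan(road_wheel_rad) / WHEELBASE_M

def steering_wheel_angle(inv_r: float) -> float:
    """Network output 1/r -> steering wheel angle for a given car."""
    return math.degrees(math.atan(inv_r * WHEELBASE_M)) * STEERING_RATIO

print(inverse_turn_radius(30.0))   # ~0.013 (1/meters) for a gentle turn
```

With this encoding, the same network output can be mapped back through any target car's own wheelbase and steering ratio.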

Hi

I was wondering how the YUV form of the image helps the network. I am assuming that it might speed up processing, but it is hard to see how YUV would be faster than RGB, since there is just a linear transformation between the two.

It would be a great help if someone could answer!

Thank you
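
To the commenter's point about linearity, a quick sketch makes it concrete: a standard RGB-to-YUV conversion (BT.601 here; the post doesn't say which convention was used) is one fixed 3x3 matrix applied per pixel, so it cannot change the network's compute cost. Any benefit would come from separating luma from chroma, not from speed:

```python
import numpy as np

# BT.601 RGB -> YUV, rounded coefficients; other YUV conventions differ slightly.
RGB_TO_YUV = np.array([
    [ 0.299,  0.587,  0.114],   # Y (luma)
    [-0.147, -0.289,  0.436],   # U (blue-difference chroma)
    [ 0.615, -0.515, -0.100],   # V (red-difference chroma)
])

def rgb_to_yuv(img: np.ndarray) -> np.ndarray:
    """img: H x W x 3 RGB in [0, 1] -> H x W x 3 YUV."""
    return img @ RGB_TO_YUV.T

frame = np.random.rand(66, 200, 3)   # dummy frame at the network's input size
print(rgb_to_yuv(frame).shape)       # (66, 200, 3): same data, new basis
```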

Hello
Could someone please guide me on how to collect data for a self-driving car using a Raspberry Pi?

This is a forum for NVIDIA developers. If you need support with Raspberry Pi, you'll need to find another source, since we don't make or support that platform.

Hi, this is a powerful approach that requires minimal training data.

But recently, I found that end-to-end self-driving models can be manipulated by adding imperceptible perturbations to the input image. Adversarial Driving, which attacks autonomous driving, may raise some concerns.

I’ll further investigate some defence strategies to improve the robustness of end-to-end driving models.
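
For readers curious what such an attack looks like mechanically, here is a minimal sketch of a one-step FGSM-style perturbation against a generic steering regressor. The `model` is a placeholder for any differentiable image-to-steering network, not the blog's model, and the epsilon is an arbitrary example value:

```python
import torch
import torch.nn as nn

def fgsm_steering_attack(model: nn.Module, frame: torch.Tensor,
                         epsilon: float = 2 / 255) -> torch.Tensor:
    """One FGSM step: move each pixel by at most epsilon in the direction
    that increases the predicted steering command."""
    frame = frame.clone().requires_grad_(True)
    model(frame).sum().backward()              # d(steering)/d(pixels)
    adv = frame + epsilon * frame.grad.sign()  # imperceptible per-pixel shift
    return adv.clamp(0.0, 1.0).detach()

# Demo on a stand-in linear "network".
demo_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 66 * 200, 1))
clean = torch.rand(1, 3, 66, 200)
adv = fgsm_steering_attack(demo_model, clean)
print((adv - clean).abs().max().item())        # bounded by epsilon
```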

How does the network return the correct turn radius when the vehicle is trying to return to the center of the road without a speed input?

Does anyone know which chip/SoC Mercedes is missing supply of for the MBUX 549 High with augmented reality? Apparently the main holdup for Mercedes cars is the augmented-reality chip, and I'd like to know which one it is. If anyone has XENTRY PASSTHRU WIS/ASRA ESP, could you link a PDF of the details or the part number, if possible?

Details please