Trying to get oriented on the Nvidia approach to help my son with his Science Fair project.
Problem:
The problem he’s trying to solve is the “burning a pancake” problem. The average young man that goes camping is impatient. He turns on the stove to heat up the griddle. Since he wants it to heat quickly, he puts it on high! By the time he puts the batter on for the pancake, it burns, so he turns the heat down. However, the griddle won’t cool quickly. The boy gets frustrated turning the burner up and down trying to control the heat just right, with a lot of pancakes in the trash!
Heat transfer in this scenario is classically dominated by diffusivity, which establishes a time constant for reaching an isothermal state. A PID loop is well suited to this control problem, but it suffers from the long time constant of heating a cast-iron griddle. The hypothesis is:
Is there a way with ML to heat up the griddle faster than the classical limit of the PID loop?
This is not just a solution in a computer; he’ll be building hardware. So he needs to operate temperature controllers, thermistors, …
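To make the “long time constant” point concrete, here is a toy sketch (my own illustration, not his project code) of a PID loop driving a first-order thermal model; the gains, time constant, and temperatures are made-up values.

```python
# Toy illustration only: a PID loop driving a first-order thermal model.
# tau, the gains, and the temperatures are made-up values, not measurements.
import numpy as np

def simulate_pid(kp=2.0, ki=0.05, kd=0.0, tau=120.0, dt=1.0, t_end=600.0):
    """Plant (illustrative): dT/dt = (u - (T - T_ambient)) / tau."""
    t_ambient, t_set = 25.0, 190.0                # deg C, arbitrary pancake setpoint
    temp, integral, prev_err = t_ambient, 0.0, t_set - t_ambient
    history = []
    for _ in np.arange(0.0, t_end, dt):
        err = t_set - temp
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = np.clip(kp * err + ki * integral + kd * deriv, 0.0, 400.0)  # saturated "burner"
        temp += dt * (u - (temp - t_ambient)) / tau   # slow response set by tau
        prev_err = err
        history.append(temp)
    return np.array(history)

temps = simulate_pid()
for t in (60, 180, 300, 599):                     # sample the trajectory at a few times
    print(f"t={t:>3d} s  T={temps[t]:.1f} C")
```

With these toy numbers the temperature rises slowly while the integral term winds up, which is exactly the overshoot-and-burn behavior described above.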
The questions are:
Are the Jetson products a good fit for this application? I think the answer is yes
What is the minimum viable platform? I think the answer is to deploy on Jetson Nano b/c the solution is small
What is the best environment for training? I have no idea if this requires CUDA to train
What are the best resources for getting started on RL on Nvidia products? Like the CartPole and Lunar Lander examples
Will we get trapped up in supply chain issues? He needs results in November. Can we buy the right hardware in time?
However, if you already have a PC/laptop, you could just do a proof-of-concept on it by loading Ubuntu and PyTorch/etc. This is a popular PyTorch repo for RL algorithms that hooks up to OpenAI Gym environments: https://github.com/DLR-RM/stable-baselines3
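If it helps, a minimal smoke test with that repo might look like the sketch below (assuming a recent stable-baselines3, which uses the Gymnasium API; older releases import `gym` and return 4-tuples from `step`).

```python
# Minimal smoke test of stable-baselines3 on CartPole (recent SB3 + Gymnasium assumed).
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=50_000)          # small budget, just to confirm the setup works

# short rollout with the trained policy
obs, _ = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    if terminated or truncated:
        obs, _ = env.reset()
env.close()
```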
Hope that helps, and good luck with the science fair!
We’re working on getting started with PyTorch and OpenAI Gym. Going through the tutorials now and learning how to modify environments.
We will need to deploy to hardware. I’m not sure what the best hardware platform is for this project. Arduino seems underpowered, and a Raspberry Pi seems plausible.
My attraction to the Nano was an all-in-one solution: run PyTorch on the Nano to both train and deploy.
Is this a good strategy?
I’m totally confused by the aftermarket Yahboom product. I don’t understand how what they describe differs from the official Nvidia product.
I see that it has a 16 GB eMMC for storage and that I can add storage with a U-disk. Do you anticipate I’ll need to do that? 16 GB of storage seems slim by today’s standards for Linux / OpenAI / PyTorch / …
This is a good question, and I appreciate the case where something looks like it works at first but later turns out to fail in some cases… there is something to learn from that if you can explore the model.
Before buying, you may want to better define your needs. The Nano has many interfaces and can run NN inference, but for training you’d be better off using a host PC or a cloud service.
You may want to better define these before buying:
What are the outputs of the system?
A resistor only? This might be done with an EEPOT connected over SPI, I2C, … (OK with some HW interface; a rough I2C sketch follows these questions).
For gas, safety requirements may make it more complex.
What would be the inputs?
An IR camera? You would check which models are supported by your carrier board’s SDK.
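For the EEPOT idea above, a rough Python sketch of an I2C write is below; the bus number, device address, and register are placeholders, so check the datasheet of whatever part you pick.

```python
# Hypothetical example: setting a digital potentiometer ("EEPOT") over I2C.
# Bus number, 0x2E address, and register 0x00 are placeholders; use your part's datasheet.
from smbus2 import SMBus

I2C_BUS = 1          # e.g. /dev/i2c-1 on many boards (assumption)
POT_ADDR = 0x2E      # placeholder 7-bit device address
WIPER_REG = 0x00     # placeholder wiper register

def set_wiper(step: int) -> None:
    """Write an 8-bit wiper position (0-255)."""
    with SMBus(I2C_BUS) as bus:
        bus.write_byte_data(POT_ADDR, WIPER_REG, step & 0xFF)

set_wiper(128)       # mid-scale
```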
Training on a PC is preferred over the cloud. I’m assuming training locally is easier to set up and execute than training in the cloud. However, I need to configure a PC with native Ubuntu; examples like CartPole don’t render on Windows or in an Ubuntu VM.
The hardware will be a mock-up of a stove.
It is too dangerous to attempt to connect to and control a real gas / electric stove. The mock-up will be a TEC (thermoelectric cooler) attached to a “griddle”. The TEC will interface to a controller that takes an analog voltage input for the “set temperature” and provides an analog voltage output from the temperature sensor. The TEC will attach to a mock “griddle” with a thermistor on the side where food would normally be placed.
Specific answers to the questions:
Outputs are an analog voltage that goes to a Temperature Controller for a TEC, creating the “hot side”.
Inputs are the temperature of the TEC and the temperature of the “food side” of the griddle.
Training would begin with a finite difference model of the heat transfer (a sketch of wrapping such a model as a training environment is below). After training on this and successfully demonstrating control, it would be deployed on the mock hardware.
I think all of this is compatible with the Nano.
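Since the plan is to start from a finite-difference model, here is one way it could be wrapped as a Gym environment for training. This is a sketch under assumptions: Gymnasium-style API, a 1-D rod with made-up node count, diffusivity, drive mapping, and reward; none of these are the project’s actual values.

```python
# Sketch: a finite-difference "griddle" wrapped as a Gymnasium environment.
# All physical constants, the reward, and the episode limits are illustrative.
import numpy as np
import gymnasium as gym
from gymnasium import spaces

class GriddleEnv(gym.Env):
    """1-D rod: TEC drives node 0, 'food side' temperature is read at the last node."""
    def __init__(self, n_nodes=10, alpha=0.05, dt=0.5, t_set=190.0, t_ambient=25.0):
        super().__init__()
        self.n, self.alpha, self.dt = n_nodes, alpha, dt
        self.t_set, self.t_ambient = t_set, t_ambient
        # action: normalized TEC drive in [0, 1]; observation: TEC-side and food-side temps
        self.action_space = spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(2,), dtype=np.float32)

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        self.temps = np.full(self.n, self.t_ambient, dtype=np.float64)
        self.steps = 0
        return self._obs(), {}

    def step(self, action):
        drive = float(action[0]) * 300.0                     # map action to a max hot-side temp
        self.temps[0] += self.dt * (drive + self.t_ambient - self.temps[0])  # TEC boundary node
        # explicit finite-difference update of the interior nodes
        lap = np.zeros_like(self.temps)
        lap[1:-1] = self.temps[:-2] - 2 * self.temps[1:-1] + self.temps[2:]
        lap[-1] = self.temps[-2] - self.temps[-1]            # insulated far end
        self.temps += self.alpha * lap
        self.steps += 1
        err = abs(self.temps[-1] - self.t_set)
        reward = -err                                        # penalize distance from setpoint
        terminated = err < 1.0
        truncated = self.steps >= 2000
        return self._obs(), reward, terminated, truncated, {}

    def _obs(self):
        return np.array([self.temps[0], self.temps[-1]], dtype=np.float32)
```

With the Gymnasium API, stable-baselines3’s PPO should be able to train on `GriddleEnv()` the same way the CartPole example earlier in the thread does.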
The Nano advertises training using JetPack.
Why not train on the Nano itself? The only GPUs I have are in a couple of PCs with Nvidia video cards.
Any comment on the Yahboom implementation of the Jetson Nano?
Why shouldn’t I use this hardware?
Is the limitation of 16 GB eMMC a significant constraint? How can you fit Ubuntu, Anaconda, … on this?
Sorry, I missed that. Could you please share the source link so that I don’t keep giving this odd advice?
It may just be a GPU memory issue; your problem may not require that much GPU memory, so it might be trainable on the Nano itself.
I haven’t said you shouldn’t; I just don’t know this hardware (though I’m very curious about their robot with the $99 LIDAR).
I just advised checking which interfaces (GPIO, I2C, SPI, PWM, …) you need, and verifying that the system you’re about to buy actually provides them.
Also note that some carrier-board vendors may not provide support for very long, and NVIDIA will only release updates for NVIDIA devkits.
I cannot say what the best eMMC size will be for you… Ubuntu itself may not use that much disk; it mainly depends on what software you’ll want to install later.
It appeared to me that I could train simple RL on the Jetson Nano itself; I don’t necessarily need to set up a separate Linux box for this. I thought the Jetson Nano had enough of an on-board GPU that training on the Nano would be faster than training on a PC without an Nvidia GPU.
In my experience, a GTX 1050 in a decent (<5 years old) PC is about 3x faster than a TX1 for GPU processing (the Nano shouldn’t be that different, with half the CUDA cores).
Probably someone more skilled would further comment on this, as each case is specific.