Is there going to be a later JetPack release (after 4.6.1) with CUDA 11 that provides full hardware support for TensorFlow 2 on the Jetson Nano?
I cannot find any official information about this from NVIDIA, and my company has purchased 10s of these devices, so it would be good to know whether these boards have a future.
It would be great to hear from someone at NVIDIA on this subject, as I need to make a hardware decision soon-ish.
Thanks for the response - I changed the label to be more accurate.
It isn't really an issue as such; I want to know whether this development is likely to happen. Will there be a future JetPack version for the Jetson Nano [4GB] that supports CUDA 11 and therefore TensorFlow 2?
Regarding the links you sent: I have already invested quite a bit of time in TensorFlow 2, and moving to PyTorch would be a big step, though possibly one worth considering.
It would be great to get a more specific answer, but possibly I am asking the wrong question, and I am open to that.
Hi @ryan.byrne, no, there is no planned upgrade to CUDA 11 for Jetson Nano/TX1/TX2. For more info, please see these previous announcements about the JetPack roadmap:
However, there are TensorFlow2 wheels and containers (with CUDA acceleration) available for Nano and JetPack 4.6.x here:
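For reference, installing those wheels typically looks something like the following sketch. The `v46` index path is taken from NVIDIA's "Installing TensorFlow for Jetson Platform" docs for JetPack 4.6; substitute the path matching your JetPack release.

```shell
# Install TensorFlow 2 with CUDA acceleration from NVIDIA's
# JetPack 4.6 wheel index (sketch; adjust for your release).
sudo apt-get update
sudo apt-get install -y python3-pip libhdf5-serial-dev hdf5-tools
sudo pip3 install -U pip
sudo pip3 install --extra-index-url https://developer.download.nvidia.com/compute/redist/jp/v46 tensorflow
```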
I will have a look at this over the weekend and report back on whether I was successful! Would be great to get it working with CUDA acceleration. Many thanks for your response.
However, I got the same error, so although the default Python is supposedly 3.7.5, typing `$ python` evidently still invokes 3.6.9.
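A quick way to confirm which interpreter a command name actually resolves to (a sketch; the exact paths on your image may differ):

```shell
# Show which binary the name resolves to, follow any symlinks,
# and print the version that binary reports.
which python3
readlink -f "$(which python3)"
python3 --version
# If this still reports 3.6.9, the newer interpreter is not first on PATH.
```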
I tried this with a fresh SD card image using JetPack 4.6.1.
I will try using the link you provided to upgrade to 4.6.3 and try again.
*EDIT: I will try updating via the SDK Manager from one Jetson Nano to another to install 4.6.3, as I only have a Windows machine.
**EDIT: Is it actually possible to install the SDK Manager on another Jetson (Nano or TX2) and manage one Nano from a TX2?
P.S. I was unsuccessful with this procedure: Installing TensorFlow for Jetson Platform - NVIDIA Docs. I think it would work on the Jetson Orin Nano but not the original "Jetson Nano", although I did get it working in a virtual environment; I am not sure whether that had GPU access (TBC).
@ryan.byrne it appears that somehow a new protobuf 4 package appeared on the PyPI index for Python 3.6, even though it is in fact only for Python 3.7+ - instead, install protobuf<4. These TensorFlow wheels are for Python 3.6.
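The pin can be applied before installing the wheel, e.g. (a sketch):

```shell
# Pin protobuf below 4 so pip does not pull the Python 3.7+ builds
# onto a Python 3.6 system. Quote the specifier so the shell does not
# interpret '<' as a redirect.
pip3 install 'protobuf<4'
```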
Here are the steps I just ran in a fresh environment to install the TensorFlow wheel on JetPack 4.6:
@dusty_nv, just for my own reference: I attempted these steps on a clean JetPack 4.6.1 SD image as a user. Everything went smoothly until the install of the .whl, where there seemed to be a few errors toward the end; unfortunately I did not record them, but I could reproduce and capture that output if helpful.
I will now try this within the nvcr.io/nvidia/l4t-base:r32.7.1 container. I am sure this will work, as I have had success before with containers and venvs.
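For reference, starting that container with GPU access typically looks like this (a sketch; `--runtime nvidia` exposes CUDA inside the container, and the work-directory mount is illustrative):

```shell
# Run the L4T base container with the NVIDIA runtime so CUDA is
# available inside; mount a host directory for your code.
sudo docker run -it --rm \
    --runtime nvidia \
    -v "$HOME/work:/work" \
    nvcr.io/nvidia/l4t-base:r32.7.1
```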
Just as a follow-up question: is the best way to develop code to use containers, rather than working as a user directly on an embedded PC? My application is for these to be static devices, and due to lack of familiarity I was concerned that a container would need an internet connection, rather than me just deploying the code as a user and running it with something like crontab.
I realise this question is nuanced, but I would really appreciate any takes.
While they're by no means required, personally I use containers a lot to keep my main environment clean, to easily reproduce builds, and to be able to redistribute installs via the container images. A lot of these ML packages get quite complex with all their dependencies, and often when a new version of a package or one of its dependencies is released to PyPI or apt, things can break (this is by no means unique to Jetson or embedded).
If you check out what I have going on over at https://github.com/dusty-nv/jetson-containers with all the automated CI/CD, that gives a glimpse as to why I do it. I also have a lot of packages and support cases to handle, though - for a single "static" application, it may not be as needed.
Running containers (once they are built or pulled) doesn't require an internet connection any more than your uncontainerized application would. I get why there could be that perception, though, because containers are often used for deploying microservices that use networking (in the case of Jetson, however, those microservices typically communicate over localhost).
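This is easy to verify directly: once an image has been pulled, it runs with networking disabled entirely. A sketch using the l4t-base image:

```shell
# With the image already present locally, this runs with no
# network stack at all inside the container.
sudo docker run -it --rm --network none \
    nvcr.io/nvidia/l4t-base:r32.7.1 \
    /bin/echo "ran offline"
```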
@dusty_nv I really appreciate your response. Even with previous [non-ML] projects with only a few modules there were interdependency issues, so I really do understand your point about the advantages of a container.
Thanks for the examples; I will check them out. Now that I think about it, for my previous milestone it was actually an nginx/static-IP issue that made me change back to what I know [local usr$], so I will definitely go back to containers and explore them for deployment.