Errors Flashing Jetson Xavier AGX

Hello,

I’m trying to flash my Jetson Xavier AGX through SDK Manager from my laptop, which runs Ubuntu 20.04. The Jetson AGX does not currently boot. I put the Jetson in recovery mode by holding down the middle button and then pressing the power button. But when flashing reaches about 5% complete, an error message pops up saying I need to put the Jetson into recovery mode again.

Here are the error notes from the sdk manager terminal.
failed.txt (5.2 KB)

For what it’s worth, I have disconnected the mouse, keyboard, and HDMI cables, and it still gives me this error:

" The target is in a bad state

The Jetson target is in a bad state and cannot be flashed. Please manually put the target into recovery mode and then retry flashing."

Also, I’m not 100% sure that it ever goes into recovery mode. What I do is unplug everything (including the USB to USB-C cable), push and hold the middle button, then press and hold the power button as well for a few seconds. This is what I’ve been trying, but it still doesn’t seem to register as being in recovery mode.

Please help, thank you!

In recovery mode it is just a custom USB device. One suggestion: Keep the keyboard/mouse (and monitor if you have it), and when this occurs, unplug and replug the USB. See if it is able to continue.

Thanks for the quick response. When I thought all was lost, I used the ./flash jetson-xavier … method I’ve seen online, run from my host computer connected to the Jetson, and surprisingly that seemed to work. It still didn’t even register as being in recovery mode!
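For anyone landing here later, the manual flash method is typically run from the Linux_for_Tegra directory that SDK Manager downloads. A rough sketch only; the download path and the board config name below are assumptions that vary by JetPack version:

```shell
# Sketch only: the path and board config name are assumptions that
# depend on the JetPack version SDK Manager downloaded.
cd ~/nvidia/nvidia_sdk/JetPack_*_Linux_JETSON_AGX_XAVIER_TARGETS/Linux_for_Tegra

# Flash the internal eMMC (mmcblk0p1) with the Xavier devkit board config:
sudo ./flash.sh jetson-agx-xavier-devkit mmcblk0p1
```

The device must be in recovery mode and connected over the USB-C port for flash.sh to find it.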

Thanks

It sounds like it now works, but I’m not positive, so I thought I’d ask to confirm.

Yes, it seems to be working well enough with the OS that I can install JetPack, thanks!

Currently in the process of downloading JetPack with this guy’s help: Upgrade NVIDIA Jetson JetPack 5 - YouTube

So I have run sudo apt install nvidia-jetpack, and it downloaded a bunch of things. I saw both CUDA and cuDNN among them. When I run “apt search nvidia-jetpack”, it shows all three meta packages as installed.
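To double-check what actually landed on the system, the package database can be queried directly. A minimal sketch, using the package name patterns mentioned in this thread:

```shell
# List installed packages matching the JetPack/CUDA components discussed above.
# Guarded so the snippet is harmless on non-Debian systems.
if command -v dpkg >/dev/null 2>&1; then
  dpkg -l | grep -Ei 'nvidia-jetpack|cuda|cudnn' || echo "no matching packages found"
else
  echo "dpkg not available on this system"
fi
```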

But when I run nvcc or nvidia-smi, both pull up nothing, which suggests CUDA is not installed. Any ideas what I’m doing wrong?

Is this on the host PC? nvidia-smi requires a PCIe NVIDIA GPU (a discrete GPU, or dGPU). If run on a Jetson it will always fail, since the GPU is integrated directly into the memory controller (an iGPU).

On the host PC, nvcc is tied to a particular release version. CUDA content on the PC generally lives under “/usr/local”, with “cuda/” there as a symbolic link to a specific CUDA release. Example from a PC:

$ ls -ld /usr/local/cuda*
lrwxrwxrwx  1 root root   22 Aug  5  2021 /usr/local/cuda -> /etc/alternatives/cuda/
lrwxrwxrwx  1 root root   25 Aug  5  2021 /usr/local/cuda-10 -> /etc/alternatives/cuda-10/
drwxr-xr-x 17 root root 4096 Aug  5  2021 /usr/local/cuda-10.2/
lrwxrwxrwx  1 root root   25 Aug 23 06:29 /usr/local/cuda-11 -> /etc/alternatives/cuda-11/
drwxr-xr-x 14 root root 4096 Aug 23 06:29 /usr/local/cuda-11.4/
drwxr-xr-x  5 root root 4096 Aug 23 06:25 /usr/local/cuda-11.7/

If the “/usr/local/cuda/bin” directory exists, then nvcc would be located there for a PC. If that directory is not in your search path, then you won’t find it unless you use the full path to that nvcc.
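Put another way, if the toolkit is present but its bin directory is not on the search path, extending PATH is enough. A minimal sketch, assuming the conventional /usr/local/cuda location:

```shell
# Prepend the conventional CUDA bin directory to PATH for this session;
# add the same line to ~/.bashrc to make it permanent.
export PATH=/usr/local/cuda/bin:$PATH

# The shell can now locate nvcc, if the toolkit is actually installed:
command -v nvcc || echo "nvcc still not found; the CUDA toolkit may not be installed"
```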

Is there a way to confirm on the Jetson that cuda has been installed?

What do you see from “ls -ld /usr/local/cuda*” when run on the Jetson?

I get this:

lrwxrwxrwx 1 root root 22 Nov 28 13:22 /usr/local/cuda -> /etc/alternatives/cuda

Hmm, I do see a cuda-11.4/ folder in /usr/local

I suppose this means it actually is installed?

Jetsons normally only install one release of CUDA; PCs, though, often have multiple releases installed. Individual releases mostly have their name along with a version, e.g., the “cuda-11.4/” you noticed. Then a “default” alias is often created as just “cuda/”. If a program does not want a specific release, it consults “/usr/local/cuda”; if a particular release is desired, the software looks for that instead, e.g., “/usr/local/cuda-11.4”. They might be the same release. So the short answer is that you have at least CUDA 11.4 installed, and a Jetson would still have the “cuda/” alias even if there is just one release installed.
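A quick way to see where that alias chain ends up, assuming the conventional /usr/local layout described above:

```shell
# Resolve the symlink chain /usr/local/cuda -> /etc/alternatives/cuda -> cuda-X.Y
# (readlink -f prints the final target; the version shown depends on the system).
readlink -f /usr/local/cuda

# List any versioned CUDA directories that are actually installed:
ls -d /usr/local/cuda-* 2>/dev/null || echo "no versioned CUDA directories found"
```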

EDIT: This is in the Jetson TX2 forum, but I noticed you said Xavier. So CUDA 11.x is valid for an Xavier, but not a TX2.

I will note though that CUDA 11.4 is valid only for a PC or a Jetson with an L4T R34.x+ (JetPack 5+). If you have CUDA 11.4 on a TX2 (which cannot use JetPack 5+, and thus cannot have L4T R34.x+), then you’ve installed incompatible software. Is that CUDA directory listing from a Jetson, or is it instead from a PC?

Incidentally, if you look at “ls -l /etc/alternatives/cuda/*”, then this might tell you if the default is also CUDA 11.4 (it should be since it is a Jetson and not a PC).

That is on my AGX Xavier now running Jetpack 5.0.2.

I noticed that before I updated to JetPack 5.0.2 from 4.6, I could run nvcc --version and it pulled up CUDA 10.x. Should there be any issues if I install nvcc? Or would there be any reason to?

Are you speaking of installing nvcc on the Jetson, or on the host PC? If you use apt to install it, it should be fine on any platform where it is available. There isn’t usually any need on the Jetson, though, since most people build from the host PC and set the target architecture; e.g., passing “-gencode arch=compute_72,code=sm_72” to nvcc will compile on a PC even though the result is for execution on a Xavier. One can specify more than one architecture, but it increases file size. To execute that file on a given system, CUDA itself has to be installed there, e.g., on a TX2, but nvcc is not needed for execution (nvcc is only for compiling).
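Cross-compiling for Xavier from a PC then looks roughly like this. The file names are placeholders, and this assumes the CUDA toolkit is installed on the host:

```shell
# Placeholder file names; requires the CUDA toolkit on the host PC.
# Build for Xavier's Volta iGPU (compute capability 7.2):
nvcc -gencode arch=compute_72,code=sm_72 -o hello hello.cu

# A fat binary covering more than one architecture (larger output file),
# e.g. also adding the TX2's compute capability 6.2:
nvcc -gencode arch=compute_72,code=sm_72 \
     -gencode arch=compute_62,code=sm_62 -o hello hello.cu
```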


This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.