Hello. I am trying to flash my Jetson AGX Orin because, after installing nvidia-jetpack, I could no longer connect to it. I am using Windows 10 with WSL and Ubuntu 22.04.
I followed the steps in these links:
until I got this error:
sdkmanager --query interactive --logintype devzone
Authenticating with NVIDIA server…
Loading user information…
User information loaded successfully.
Loading server data…
Server data loaded successfully.
- select product: Jetson
- hardware configuration: Host Machine
- select target operating system: Linux
- select version: JetPack 5.1.1 (rev. 1)
- get detailed options: Yes
Failed to load server data.
There are no components available for installation.
Sorry for my poor written English. Thanks in advance.
I managed to run sdkmanager with a GUI in Windows Subsystem for Linux, but when I tried to flash the Jetson AGX Orin it threw a message saying the device was already in flash mode. Also, when I tried to install the SDK tools, the prompt asked me for an SSH user and password. I never configured an SSH connection and I don't know where I can find this information.
Finally I was able to flash the device. I changed some setup options to manual and runtime, and that was all.
New challenge T.T… I cannot check whether my CUDA configuration on the Jetson Orin is working. When I try to check it with nvcc --version it shows this message:
-bash: nvcc: command not found
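For reference, JetPack installs the CUDA toolkit but does not add it to the shell's PATH, which is the usual cause of this error. A minimal sketch of the fix, assuming the standard Jetson install location /usr/local/cuda (the version-suffixed directory, e.g. /usr/local/cuda-11.4, is normally symlinked there):

```shell
# JetPack installs the CUDA toolkit but does not add it to PATH.
# Assumption: the standard Jetson install prefix /usr/local/cuda.
CUDA_HOME=/usr/local/cuda
if [ -d "$CUDA_HOME/bin" ]; then
    export PATH="$CUDA_HOME/bin:$PATH"
    export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
fi
# Check the result; persist by appending the two exports to ~/.bashrc.
command -v nvcc >/dev/null 2>&1 && nvcc --version || echo "nvcc not found under $CUDA_HOME"
```

If nvcc is still missing after this, the CUDA toolkit itself may not be installed on the board.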
I am trying to run openai-whisper on the Jetson Orin. Any help, please?
The CUDA compiler seems to be OK, but I can't get whisper to use it as a device.
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2022 NVIDIA Corporation
Built on Sun_Oct_23_22:16:07_PDT_2022
Cuda compilation tools, release 11.4, V11.4.315
I also include this information here:
Python version: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.104-tegra-aarch64-with-glibc2.29
Is CUDA available: False
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
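One likely explanation, assuming torch was installed from PyPI: the generic aarch64 wheels there are CPU-only builds, so torch.cuda.is_available() returns False even when nvcc works. A quick sketch to confirm which build is installed:

```shell
# Report which torch build is installed and whether it can see a GPU.
# Generic PyPI aarch64 wheels are CPU-only; a Jetson needs the
# NVIDIA-built wheel matching the JetPack/CUDA release (11.4 here).
out=$(python3 - <<'PY'
try:
    import torch
    print("torch version:", torch.__version__)
    print("built with CUDA:", torch.version.cuda)  # None => CPU-only build
    print("CUDA available:", torch.cuda.is_available())
except ImportError:
    print("torch is not installed in this environment")
PY
)
echo "$out"
```

If "built with CUDA" prints None, no environment variable will help: the wheel itself has no CUDA support.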
Downgrading the torch version and adding an environment variable related to CUDA was the solution.
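Since the exact versions aren't listed above, here is only a sketch of that kind of setup, assuming JetPack 5.x with CUDA 11.4; the wheel filename is a placeholder, and the matching NVIDIA-built wheel for your JetPack release has to be substituted in:

```shell
# Sketch only: CUDA environment variables for JetPack 5.x / CUDA 11.4.
export CUDA_HOME=/usr/local/cuda
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"

# Placeholder: install the NVIDIA-built torch wheel for your JetPack release.
# pip3 install --no-cache-dir torch-<version>-cp38-cp38-linux_aarch64.whl

# Once torch sees the GPU, whisper can be pointed at it explicitly:
# whisper audio.wav --model small --device cuda
echo "CUDA_HOME=$CUDA_HOME"
```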