I am unable to install the JetPack 4.2 SDK components (CUDA, vision, AI, Multimedia API, etc.) on a TX2 EVM.
The main issue is that the Ubuntu 16.04 host does not recognize the NVIDIA USB device and therefore never creates the interface that would be assigned 192.168.55.100, so the SDK installation cannot continue after the OS flash.
With previous SDK Manager releases (0.9.11 and earlier), only network connectivity was checked, so I could get around this. Starting with 0.9.12, sdkmanager checks for both the NVIDIA USB device and network connectivity on the host, and I cannot get the installation to work even after manually reconfiguring the network.
HOST (Ubuntu)                                      | Target (TX2)                    | Notes
Make sure Python 2.7 is installed                  | Place in USB Recovery mode      | Minimizes issues
Proceed to step 3 and wait until OS flash finishes | Finish first-time configuration | l4tbr0 bridge is set up with 192.168.55.1
SDK component install fails                        |                                 | lsusb on the host does NOT show the NVIDIA device; the interface is not created; no connectivity to 192.168.55.1
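For anyone hitting the same symptom, the check described in the table can be scripted. This is only a sketch, not from the original post: `nvidia_usb_present` is a helper name I made up, and the sample `lsusb` line is illustrative (the product ID differs between recovery mode and normal boot, so the check matches on NVIDIA's vendor ID 0955 only). On a real host you would feed in live `lsusb` output and also inspect the bridge with `ip addr show l4tbr0`.

```shell
# Returns success if the given lsusb output contains an NVIDIA device.
# Matches vendor ID 0955 only, since the product ID varies by mode.
nvidia_usb_present() {
  printf '%s\n' "$1" | grep -q 'ID 0955:'
}

# Illustrative lsusb capture from a host that DOES see the device;
# on a failing host this line is simply absent.
sample='Bus 001 Device 005: ID 0955:7c18 NVIDIA Corp.'

if nvidia_usb_present "$sample"; then
  echo "NVIDIA USB device present"
else
  echo "NVIDIA USB device NOT detected"
fi
```

On a live host, replace the captured sample with `"$(lsusb)"`.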
I think a lot of people are having this same issue: the host does not detect the USB interface and therefore has no Ethernet connectivity to 192.168.55.1.
Previous versions of sdkmanager only checked for network connectivity to 192.168.55.1, so I could work around this. Starting with 0.9.12, it checks for both the NVIDIA USB device being present AND network connectivity, which means that if the host does not detect the USB device, there is no way to continue the SDK component installation.
I want to stress that when the TX2 IS in USB Recovery mode, the host detects the NVIDIA device. When the TX2 is NOT in USB Recovery mode, the host does NOT detect it.
Flashing the file system/OS onto the TX2 always succeeds, since that step runs over the micro-USB connection while the TX2 is in USB Recovery mode.
What I need:
1. Is there a script that will allow me to run the SDK component install using ONLY the target IP address?
2. What configuration/package am I missing on the Ubuntu HOST to make it detect the USB device?
NO, it is NOT automatic, and the host will NOT detect the TX2 micro-USB device after the TX2 boots up normally.
The best way to verify whether sdkmanager actually supports a 16.04 or 18.04 Ubuntu host is to perform a fresh install from scratch on the host machine and follow the published procedures to see how it fails.
I want to stress that it is the SDK component installation AFTER the OS flashing that fails, and there is no manual way to perform this install.
I figured out a way to work around the SDK component post-install issue.
The following method makes SDKM use a normal network IP instead of relying on the NVIDIA USB interface for the post-install step (i.e., installing the SDK components after the OS flash).
Step 1: open the SDKM level 3 json configuration file.
Typical location: ~/Downloads/nvidia/sdkm_downloads/sdkml3_jetpack_l4t_42.json
Step 2: Modify the target IP in the json file
"user": "nvidia", <<== Change user name and password to match that of TX2
"host": "192.168.55.1", <<== Change this to actual IP of target TX2 eth0
Step 3: Modify the following in the json file
Under "NV_L4T_DATETIME_TARGET_SETUP_COMP" section:
Step 4: Modify the following in the json file
Under "NV_ADDITIONAL_SETUP_TARGET_GROUP" section:
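Step 2 can also be applied as a scripted edit. This is a sketch under a few assumptions: the json contains `"user"` and `"host"` lines exactly like those quoted above, the heredoc below is a tiny stand-in for the real file, and `TARGET_IP` is an example value, not from the thread. Point `JSON` at your actual sdkml3_jetpack_l4t_42.json instead, and keep the backup.

```shell
TARGET_IP="10.0.0.42"    # example value: use your TX2's actual eth0 address
TARGET_USER="nvidia"     # use your TX2 login
JSON="sdkml3_jetpack_l4t_42.json"

# Demo stand-in for the real file so the edit can be shown end to end:
cat > "$JSON" <<'EOF'
{
  "user": "nvidia",
  "host": "192.168.55.1"
}
EOF

cp "$JSON" "$JSON.bak"   # keep a backup before editing

# Swap the default USB-bridge address and login for the real ones:
sed -i \
  -e "s/\"host\": \"192\.168\.55\.1\"/\"host\": \"$TARGET_IP\"/" \
  -e "s/\"user\": \"nvidia\"/\"user\": \"$TARGET_USER\"/" \
  "$JSON"

grep '"host"' "$JSON"
```

After the edit, re-run SDK Manager and it should connect to the target over eth0 rather than the missing USB interface.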
Making these changes makes SDKM run more or less like the JetPack 3.3 installer flow and avoids the issue of the USB interface not being created on the Ubuntu host.
Confirmed, this works!! After modifying the json file, run the SDKManager again and install the modules, now using the eth0 IP address and TX2 login.
Thank you so much for figuring out a workaround. I’ll link to this thread from mine addressing the same issue.
This worked for me as well. After many hours of messing with the USB bridge, this was a breeze. NVIDIA should really fix this issue, since many people are hitting it. Glad you were able to come up with a workaround. Many thanks!
Actually, I may have spoken too soon. I’m not seeing all the SDK components. The SDKM went through all the steps and showed everything installed, but when I log into my TX2 I only see VisionWorks-SFM example. I’m not sure where it would have installed the other components?
Do you see anything like “cuda” in “/usr/local/”? Some components simply won’t be in your default shell search path (“echo $PATH” to see what is searched).
Please check /usr/src. Jetpack 4.2.1 changed the default install path.
nvidia@Jetpack421-EVM:/usr/src$ ls -al
drwxr-xr-x  8 root root 4096 Aug  2 16:36 .
drwxr-xr-x 12 root root 4096 Aug  2 16:31 ..
drwxr-xr-x  5 root root 4096 Aug  2 16:25 cudnn_samples_v7
drwxr-xr-x  5 root root 4096 Aug  2 13:50 linux-headers-4.9.140-tegra-linux_x86_64
drwxr-xr-x  5 root root 4096 Aug  2 13:50 linux-headers-4.9.140-tegra-ubuntu18.04_aarch64
drwxr-xr-x  3 root root 4096 Aug  2 13:50 nvidia
drwxr-xr-x  7 root root 4096 Aug  2 16:36 tegra_multimedia_api
drwxr-xr-x  5 root root 4096 Aug  2 16:27 tensorrt
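A quick way to check which components landed, run on the TX2 itself. This is just a sketch: `list_components` is a helper name I made up, and the directory names are taken from the listing above, so adjust them for your JetPack version.

```shell
# Print which of the expected JetPack component directories exist
# under the given root (normally /usr/src on JetPack 4.2.1).
list_components() {
  root="$1"
  for d in cudnn_samples_v7 tegra_multimedia_api tensorrt nvidia; do
    if [ -d "$root/$d" ]; then
      echo "found:   $d"
    else
      echo "missing: $d"
    fi
  done
}

# On the TX2 you would run:
list_components /usr/src
```

Also check `/usr/local` for a `cuda` directory, as suggested earlier in the thread.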
The initial problem I discovered was that the target IP field could not be modified during the JetPack 4.2 install with earlier SDKM releases. The IP field is now editable, so this modification is probably not required with later SDKM versions.
I would recommend updating to the latest SDKM and trying the install again if you are still experiencing install failures caused by the default IP address (192.168.55.1).