L4T installation excludes NVIDIA tools

After installing JetPack 3.3, Ubuntu boots on the Tegra, but the NVIDIA tools such as nvidia-smi and cuda-gdb aren't installed. The manual installer at https://developer.nvidia.com/cuda-downloads?target_os=Linux isn't meant for the TX2's ARM architecture; it is meant for the host. Why weren't those tools installed, even though the JetPack installation documentation claims they are, and how can I install them manually on the ARM TX2 target?

Or, in other words, the driver tools aren't included in JetPack 3.3. How can I install them?

I found the CUDA tools under /usr/local/cuda/bin, but other tools such as nvidia-smi don't exist.

JetPack itself only runs on a desktop PC. The various tools never install during flash, but do install after flash completes and the unit reboots.

The tools from a desktop PC do not work on a Jetson. For example, “nvidia-smi” requires a PCI interface to a discrete GPU, but a Jetson integrates the GPU directly to the memory controller. This (plus being a different architecture) implies that anything from a desktop dGPU fails with a Jetson iGPU.
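If you want to confirm the integrated GPU is visible without nvidia-smi, you can ask the CUDA runtime that JetPack installs under /usr/local/cuda directly. This is just a minimal sketch using Python's ctypes; it assumes libcudart.so can be found on the library path (you may need to point LD_LIBRARY_PATH at /usr/local/cuda/lib64, or load the versioned library name instead).

[code]
# Minimal check that the iGPU is visible to the CUDA runtime (no nvidia-smi on a Jetson).
# Assumes libcudart.so is resolvable; on some installs only a versioned name
# such as libcudart.so.9.0 exists, so adjust the name if the load fails.
import ctypes

cudart = ctypes.CDLL("libcudart.so")

count = ctypes.c_int(0)
status = cudart.cudaGetDeviceCount(ctypes.byref(count))
if status != 0:
    raise RuntimeError("cudaGetDeviceCount failed with error code %d" % status)

version = ctypes.c_int(0)
cudart.cudaRuntimeGetVersion(ctypes.byref(version))

print("CUDA devices found: %d" % count.value)
print("CUDA runtime version: %d" % version.value)
[/code]

On a TX2 this should report one device (the integrated GPU), which is all a desktop-style monitoring tool would have told you anyway.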

JetPack and SDK Manager allow you to uncheck the flash step and simply install packages. For this you skip putting the Jetson in recovery mode and just make sure it is fully booted. In the case of JetPack you would connect wired ethernet. For SDKM there is a virtual ethernet in the USB (it is still "wired", but it is a fake ethernet over USB).
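Before re-running the package-install step it helps to verify the host can actually reach the booted Jetson, since the installer pushes the packages to the target over the network (SSH). The sketch below is only an illustration; JETSON_IP is a placeholder you would replace with the address on your wired LAN, or with the address of the USB virtual ethernet link if you are going through SDKM.

[code]
# Rough connectivity check from the host to the booted Jetson.
# JETSON_IP is a placeholder; substitute the Jetson's actual address.
import socket

JETSON_IP = "192.168.1.123"  # placeholder address
SSH_PORT = 22                # the installer reaches the target over SSH

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
    sock.settimeout(5)
    try:
        sock.connect((JETSON_IP, SSH_PORT))
        print("Jetson is reachable; the package install step should be able to proceed.")
    except OSError as exc:
        print("Cannot reach the Jetson: %s" % exc)
[/code]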

But why isn't the TensorFlow module installed, although it's packaged in JetPack?

I have not worked on this so I’m not sure, but if this is Python, then perhaps it is a case of needing to use pip3 for install:
[url]https://docs.nvidia.com/deeplearning/dgx/install-tf-xavier/index.html#install[/url]

Python seems to want to do its own thing, whereas the Jetson installer software tends to work at the system “.deb” package level. Someone else may have more information.
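Once the wheel is installed with pip3, a quick sanity check from Python tells you whether TensorFlow is present and whether it sees the GPU. This assumes the TensorFlow 1.x wheel from the link above; adjust if a different version ends up installed.

[code]
# Sanity check after a pip3 install of the TensorFlow wheel.
# Assumes a TensorFlow 1.x build as linked above.
import tensorflow as tf
from tensorflow.python.client import device_lib

print("TensorFlow version:", tf.__version__)
# List the devices TensorFlow can see; the Jetson's iGPU should show up as a GPU device.
for device in device_lib.list_local_devices():
    print(device.name, device.device_type)
[/code]

If the import fails, the wheel simply is not installed yet, which would be consistent with JetPack leaving Python packages to pip rather than handling them as system .deb packages.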