install NVIDIA CUDA driver

I used JetPack 3.1 to update my TX2, and CUDA and the CUDA samples were already installed. When I start to install the Clara SDK, it tells me: please install the NVIDIA CUDA driver at https://developer.nvidia.com/cuda-downloads?target_os=Linux. Why is this needed?

I have not seen the Clara docs; I’ve only worked with the Jetson products, which use JetPack. What follows hopefully explains some of the requirements, but these are guesses based on knowing the Xavier SoC (presumably the same SoC as used in the individual Jetson products, but ported to a PCIe Clara card). I assume you are working with a Clara PCIe card and not with a separate Jetson product.

The CUDA drivers at that location are for desktop PC systems. I don’t know about Clara, but generally speaking there is a video driver which has to be installed before CUDA can work with a GPU. If you have an NVIDIA GPU, and you have the NVIDIA video driver, then you can install CUDA (CUDA talks to the GPU through the video driver in most cases). If there is no NVIDIA GPU, then you can’t install the video driver, and thus no CUDA driver either (CUDA depends on the video driver, which in turn depends on the physical GPU).
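
As a rough sanity check on a typical Linux desktop host (generic commands, nothing Clara-specific), you can see whether the video driver and the CUDA toolkit are present:
lsmod | grep nvidia    # is the NVIDIA kernel (video) driver loaded?
nvidia-smi             # does that driver actually see a GPU?
nvcc --version         # is the CUDA toolkit installed and on the PATH?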

The Tegra SoC GPU itself (Xavier, for example, is one of the Tegra SoC series, and some variant of it would exist on a Clara PCIe card) differs from a desktop PC GPU in three important ways…

First, this is an integrated GPU (iGPU), not a discrete GPU (dGPU). The iGPU is wired directly to the memory controller, whereas a desktop dGPU goes through PCIe. Because of this the desktop PC video drivers cannot work on a Tegra series chip. The implication is that you need two video drivers and two CUDA installs…one of each on the host PC and on the Clara PCIe card.
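
One quick way to see the difference (again a generic check): a desktop dGPU shows up as a PCIe device, while a Tegra iGPU does not.
lspci | grep -i nvidia   # lists a desktop dGPU as a PCIe device
# On a Tegra SoC the iGPU hangs off the memory controller rather than PCIe,
# so it does not appear here; the Tegra (L4T) driver stack exposes it separately.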

The second difference is that the SoC is of arm64/aarch64/ARMv8-a architecture, whereas a desktop PC is x86_64 (sometimes shown as amd64). Even if both systems had the same iGPU or the same dGPU (which they do not), you would still need two separate driver and CUDA installs.
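
You can confirm which architecture a given system is with a generic check like this:
uname -m                    # x86_64 on a desktop PC, aarch64 on the Tegra/Xavier SoC
dpkg --print-architecture   # on Debian/Ubuntu: amd64 versus arm64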

The third difference is that video and CUDA drivers exist for a desktop PC running any of Windows, Mac, or Linux…but the SoC itself has only been ported to Linux. You cannot install a Windows or Mac video or CUDA driver onto the Xavier since it is running Linux. The SoC should be considered its own separate environment…always be careful to note whether a request or a failed package install refers to the host PC or to the embedded SoC end (which also changes which version of a package needs to be downloaded).
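
If you are ever unsure which end a given shell is running on, a couple of generic checks help (the nv_tegra_release file is something I know from Jetson/L4T; I am assuming a Clara card’s SoC root filesystem is similar):
cat /etc/os-release               # distribution info on either end
head -n 1 /etc/nv_tegra_release   # present only on the L4T (Tegra) root filesystem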

I don’t know how Clara is supposed to be updated (I’ve not seen any Clara docs, nor a real Clara), but in theory the installation, flash, and update instructions are just an extension of those for the Jetson Xavier. When using JetPack, one of the requirements is a CUDA-capable host if you are installing host-side components (with Clara this is perhaps mandatory rather than optional), so I am assuming your PC is running Ubuntu. If not, then there might be a steeper learning curve. I use Fedora, so I have to install the video and CUDA drivers on my host manually.
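
As an illustration of what a manual host install can look like on Fedora (assuming the RPM Fusion repositories are enabled; package names can change between releases):
sudo dnf install akmod-nvidia               # NVIDIA video driver (kernel module + X driver)
sudo dnf install xorg-x11-drv-nvidia-cuda   # CUDA support libraries for that driver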

If JetPack is used for Clara (and I don’t know if it is), then JetPack on Ubuntu typically has the ability to set up the Ubuntu host PC with video (dGPU) and CUDA drivers. JetPack can then flash the SoC or install extra packages on it (Linux arm64/aarch64 versions for the iGPU). If something went wrong, it would presumably name what you need to do manually, e.g., install CUDA on the host PC.
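
If it turns out the host PC end is what is missing CUDA, the usual manual route on an Ubuntu host is the repository package offered on the CUDA downloads page (the .deb file name below is a placeholder; use whatever that page actually gives you for your Ubuntu release):
sudo dpkg -i cuda-repo-<ubuntu-version>_<cuda-version>_amd64.deb   # placeholder file name
# (the downloads page also lists an apt-key step for the repository's signing key)
sudo apt-get update
sudo apt-get install cuda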

Prior to Clara, just for the separate Jetson embedded systems, JetPack was set up to understand a number of different Tegra-based systems and to act as a front-end GUI for downloading and updating both the embedded system and the Ubuntu host. I suspect Clara has only minor differences, mostly from talking to the embedded system over PCIe (Clara) instead of USB and ethernet (Jetson). When JetPack updates a host PC or an embedded Tegra system it determines which packages are needed, downloads them automatically, and installs them to the correct location. In cases where a download fails (e.g., some host PCs in certain parts of the world, or behind corporate firewalls, will have content blocked) you get a message similar to the one you’ve seen.

If this is a failure to download, then it implies you are perhaps behind a network which is blocking some content. In that case you could correct the network issue (e.g., set up your proxy or firewall differently), or you could manually download the file and run it by hand.
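
For the proxy case, a minimal sketch (the proxy address is a placeholder for whatever your network actually uses):
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
# ...then retry the installer, or download the CUDA installer by hand in a browser
# and run it locally, e.g.:
# sudo sh cuda_<version>_linux.run   # placeholder file name from the downloads page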

Is this an Ubuntu host PC? Are you behind any kind of firewall or proxy which might filter or block content?

On the other hand, I have no idea if JetPack is the appropriate way to install to Clara. JetPack 3.1 is quite old, and Clara is quite new (the Xavier SoC did not even exist in the days of JetPack 3.1). Since I don’t have access to Clara I can’t read the docs myself, but whatever instructions exist noting host PC requirements and/or JetPack versions, you will need to look at them closely.

DeepStream

Publisher: NVIDIA
Built By: NVIDIA
Latest Tag: 3.0-18.11
Modified: November 14, 2018
Size: 1.18 GB
Labels: Inference, Toolkit
Pull Command: docker pull nvcr.io/nvidia/deepstream:3.0-18.11

What is DeepStream?
DeepStream SDK delivers a complete toolkit for real-time situational awareness through intelligent video analytics (IVA). With hardware-accelerated building blocks, the application framework allows you to focus on building core deep learning networks and IP rather than designing end-to-end solutions from scratch.
The SDK supports a diversity of use cases, using AI to perceive pixels and analyze metadata, with the flexibility to span from the edge to the cloud. This includes retail analytics, intelligent traffic control, automated optical inspection, freight and goods tracking, web content filtering, ad injection, and more.
The SDK features:
NVIDIA® TensorRT™ and NVIDIA CUDA® for AI and other GPU computing tasks
Video CODEC SDK and multimedia APIs for accelerated encoding and decoding
Imaging APIs for capture and processing
A graph-based architecture and modular plugins to create a configurable processing pipeline
Running DeepStream
Prerequisites
Ensure these prerequisites are available on your system:
nvidia-docker
NVIDIA display driver version 410
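A quick way to confirm both prerequisites before pulling (exact output will vary with your setup):
nvidia-smi               # the reported driver version should be 410 or newer
which nvidia-docker      # the nvidia-docker wrapper should be on the PATH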
Pull the container
Before running the container, use docker pull to ensure an up-to-date image is installed. Once the pull is complete, you can run the container image.
Procedure
Copy the docker pull command for the deepstream container (shown above under Pull Command).
Open a command prompt and paste the pull command. The pull of the container image begins. Ensure the pull completes successfully before proceeding to the next step.
Run the container
To run the container:
Allow external applications to connect to the host’s X display:
xhost +
Run the docker container using the nvidia-docker command
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream:3.0-18.11
Note that the command mounts the host’s X11 display in the guest filesystem to render output videos.
Options explained:
-it runs the container in interactive mode
--rm deletes the container when it exits
-v mounts a host directory into the container; here it mounts the host’s X11 display socket so the container can render output videos
3.0-18.11 is the tag for the image; 3.0 refers to the DeepStream release and 18.11 refers to the version of the container for that release
Additional directories containing configuration files and models can be mounted with further -v options so that applications executed from within the container can access them (see the example below)
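For example, an extra host directory with configuration files and models could be mounted like this (the host path is a placeholder):
nvidia-docker run -it --rm -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
    -v /path/to/configs_and_models:/root/configs_and_models \
    -w /root nvcr.io/nvidia/deepstream:3.0-18.11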
See /root/DeepStream_Release/README inside the container for usage information.
Container Contents
The deepstream:3.0-18.11 container includes the plugins, binaries, and sources of the sample applications, along with models and configuration files from the DeepStream 3.0 release package. The container also has the various external package dependencies listed in the README pre-installed.
The container is intended to be a deployment container and is not set up for building sources. It does not have the toolchains, libraries, include files, etc. required for building source code within the container. It is recommended to build software on a host machine and then transfer the required binaries to the container.
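For example, a binary built on the host could be copied into a running container with docker cp (the container name and paths below are placeholders):
docker cp ./my_app <container_id_or_name>:/root/DeepStream_Release/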
License
The DeepStream SDK license is available within the container at the location /root/DeepStream_Release/LicenseAgreement.pdf. By pulling and using the DeepStream SDK (deepstream) container in NGC, you accept the terms and conditions of this license.
Suggested Reading
Please access DeepStream 3.0 documentation containing user guide, API reference manual and release notes at https://developer.nvidia.com/compute/machine-learning/deepstream-documentation
If you have any questions or feedback, please refer to the discussions on DeepStream for Tesla Forum.
For more information, including blogs and webinars, see the DeepStream SDK website.