Will an NVIDIA Docker image be provided for the DLI DeepStream online course?

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU)
Jetson Nano 4GB

• DeepStream Version:
5.1

• JetPack Version (valid for Jetson only)
JETSON_JETPACK=4.5.1

• TensorRT Version
JETSON_TENSORRT=7.1.3.0

• Issue Type: New requirement

Instead of using the DLI DeepStream SD card image, will an NVIDIA Docker image be provided for the DLI DeepStream online course? The "Getting Started With AI On Jetson Nano" online course uses a Docker image. The following command launches that container:

sudo docker run --runtime nvidia -it --rm --network host --volume ~/nvdli-data:/nvdli-nano/data --device /dev/video0 nvcr.io/nvidia/dli/dli-nano-ai:v2.0.1-r32.5.0

NVIDIA DLI DeepStream Jetson Nano SD Card Image v1.0.0

Hi kaisar.khatak,

The current DS DLI course is based on an older DeepStream version. Updating it is on our roadmap, but in the meantime you can use the Jetson DLI image and install DeepStream inside it, or use one of the DeepStream images available on NGC. We provide several DeepStream images on NGC - https://ngc.nvidia.com/catalog/containers/nvidia:deepstream-l4t
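
For reference, here is a minimal sketch of pulling and launching the DeepStream 5.1 l4t samples container from NGC on a Jetson Nano. The tag and the X11 flags are assumptions based on the DS 5.1 release; check the NGC page above for the current tags.

# Sketch: run the DeepStream 5.1 l4t samples container (tag is an assumption; verify on NGC)
sudo docker pull nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples
xhost +local:   # allow the container to use the local X display (assumes a display is attached)
sudo docker run -it --rm --net=host --runtime nvidia \
  -e DISPLAY=$DISPLAY \
  -v /tmp/.X11-unix/:/tmp/.X11-unix \
  -w /opt/nvidia/deepstream/deepstream-5.1 \
  nvcr.io/nvidia/deepstream-l4t:5.1-21.02-samples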


@kayccc I have JetPack 4.5.1 and DeepStream 5.1 installed on a Jetson Nano. The DLI course references DeepStream 4.0. I can use the DeepStream 5.1 that is already installed on the Nano, but what is the best way to access the online course Jupyter notebooks (see the sketch after the notebook list below)?

    Object Detection Application
    Notebook 1: Build a DeepStream pipeline to find objects in a video stream, annotate them with bounding boxes, and output the annotated stream along with a count of the objects found.

    Multiple Networks Application
    Notebook 2: Build a DeepStream application to find objects in a video stream, pass those images through multiple classification networks, and display detailed information about the objects in the output stream.

    Multiple Stream Input
    Notebook 3: Add the ability to run inference on multiple input streams with a tiled output.

    Video File Output
    Notebook 4: Add the ability to save an annotated video stream to a file in the format of your choice, for download and later use.

    (Optional) Using Different Neural Networks
    Requires an Internet connection to the Jetson Nano
    Notebook 5: Change the neural network in the DeepStream pipeline to another, such as YOLO (You Only Look Once).
        Disclaimer: The YOLO model is an open model taken from http://pjreddie.com/darknet and https://github.com/pjreddie/darknet. NVIDIA doesn't guarantee accuracy of this model. The accuracy might vary based on the video.
        Tip: You can try your own videos with this lab, or download some from the Internet from sites such as https://www.pexels.com/ or https://www.videvo.net/.

    (Optional) Live Stream
    Requires a USB webcam connected to the Jetson Nano
    Notebook 6: Run inference on a live stream from a webcam connected to the Jetson Nano.
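
For what it's worth, one way to open course-style notebooks without the DLI SD card image is to install JupyterLab on the Nano (or inside a DeepStream container) and point it at a directory holding the downloaded notebooks. A minimal sketch, assuming a ~/dli-deepstream-notebooks directory (a hypothetical path) and that pip3 is available:

# Sketch: serve notebooks from the Nano over the local network (paths and port are assumptions)
sudo apt-get update && sudo apt-get install -y python3-pip
pip3 install jupyterlab
jupyter lab --ip=0.0.0.0 --port=8888 --no-browser \
  --notebook-dir=$HOME/dli-deepstream-notebooks
# Then browse to http://<nano-ip>:8888 from another machine on the same network.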

Hi @kaisar.khatak
As @kayccc mentioned, the DLI course is based on an old DeepStream release, and some of its content is no longer valid.
Could you build and run some of the DeepStream samples to ramp up on DeepStream?

Here is an introduction to the DeepStream C++ sample apps:
https://docs.nvidia.com/metropolis/deepstream/dev-guide/text/DS_C_Sample_Apps.html
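
As a concrete starting point, deepstream-test1 is a good first sample. A minimal sketch, assuming DeepStream 5.1 is installed at its default location on JetPack 4.5.1 (CUDA 10.2) and that the build prerequisites from the quickstart guide are present:

# Sketch: build and run the first C++ sample app (paths assume the default DS 5.1 install)
cd /opt/nvidia/deepstream/deepstream-5.1/sources/apps/sample_apps/deepstream-test1
sudo CUDA_VER=10.2 make   # the sample Makefiles require CUDA_VER; 10.2 matches JetPack 4.5.1
./deepstream-test1-app ../../../../samples/streams/sample_720p.h264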

@mchi Thanks for responding. I have successfully built and run most of the C++ sample apps, and I have downloaded and run the Python sample apps, which seem to demonstrate all the functionality in the DLI DeepStream 4.0 notebooks. Class completed (joke)?

DeepStream is a bit complex and has quite a few layers (tech stack). It will take some time to become familiar with all the parts (e.g. TLT).

Yes. DS includes many plugins NVIDIA developed for the HW accelerators (CODEC, Video Image Compositor, GPU/CUDA/TRT, display, etc.), and it can also use 3rd-party GStreamer plugins. With some config files or simple code, it links the plugins and builds the pipeline you want.
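
For example, the reference deepstream-app builds a full pipeline (decode, batch, infer, track, tile, display) purely from a config file, so a quick way to see that plugin linking in action is to run one of the shipped configs. A sketch assuming the default DS 5.1 install paths; the Nano-tuned config name below is from memory, so list the directory to confirm what shipped with 5.1:

# Sketch: build a complete pipeline from a config file using the reference app
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/deepstream-app
ls   # confirm the exact config file names that shipped with your install
deepstream-app -c source8_1080p_dec_infer-resnet_tracker_tiled_display_fp16_nano.txt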