Getting started on IVA with DeepStream using NGC Docker

Hi there, I have just started looking into DeepStream from NGC. I completed all the setup for docker and nvidia-docker as instructed, inside my Ubuntu guest virtual machine.

Then, I opened a terminal and pulled the DeepStream docker image as follows:

user@ubuntu16:~$ sudo docker pull nvcr.io/nvidia/deepstream:4.0-19.07

After that, I granted permission to use the display.

user@ubuntu16:~$ sudo xhost +
[sudo] password for user: 
access control disabled, clients can connect from any host
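
(Side note: xhost + disables X access control for every host. If only local clients such as docker containers need the display, a tighter variant should also work; this is a sketch, not verified in this setup:

user@ubuntu16:~$ sudo xhost +local:

It restricts the grant to non-network local connections.)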

Then, I ran the docker image:

user@ubuntu16:~$ sudo nvidia-docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream:4.0-19.07
root@902800ad7200:~#
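
(For reference: with nvidia-docker2 installed, the nvidia-docker wrapper is equivalent to passing the NVIDIA runtime to plain docker, and on Docker 19.03+ with the nvidia-container-toolkit package there is a --gpus flag. Both lines below are sketches that assume the corresponding package is installed:

user@ubuntu16:~$ sudo docker run --runtime=nvidia -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream:4.0-19.07
user@ubuntu16:~$ sudo docker run --gpus all -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -w /root nvcr.io/nvidia/deepstream:4.0-19.07
)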

I looked at the README file in the deepstream-app directory and skipped the installation steps, as the README instructs:

*****************************************************************************
* Copyright (c) 2018-2019 NVIDIA Corporation.  All rights reserved.
*
* NVIDIA Corporation and its licensors retain all intellectual property
* and proprietary rights in and to this software, related documentation
* and any modifications thereto.  Any use, reproduction, disclosure or
* distribution of this software and related documentation without an express
* license agreement from NVIDIA Corporation is strictly prohibited.
*****************************************************************************

================================================================================
DeepStream SDK
================================================================================
Setup pre-requisites:
- Ubuntu 18.04
- Gstreamer 1.14.1
- NVIDIA driver 418+
- CUDA 10.1
- TensorRT 5.1+

--------------------------------------------------------------------------------
Package Contents
--------------------------------------------------------------------------------
The DeepStream packages include:
1. binaries.tbz2 - Core binaries
2. sources - Sources for sample application and plugin
3. samples - Config files, Models, streams and tools to run the sample app

Note for running with docker
-----------------------------
While running DeepStream with docker, necessary packages are already pre-installed.
Hence please skip the installation steps and proceed to "Running the samples" section of this document.

So, I ran the sample with the following command:

root@0a2289b82a96:~/deepstream_sdk_v4.0_x86_64# deepstream-app -c samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt

…and later it produced this output:

(deepstream-app:12): GLib-GObject-CRITICAL **: 05:59:06.863: g_object_set: assertion 'G_IS_OBJECT (object)' failed
** ERROR: <create_render_bin:90>: Failed to create 'sink_sub_bin_sink1'
** ERROR: <create_render_bin:168>: create_render_bin failed
** ERROR: <create_sink_bin:564>: create_sink_bin failed
** ERROR: <create_processing_instance:637>: create_processing_instance failed
** ERROR: <create_pipeline:967>: create_pipeline failed
** ERROR: <main:632>: Failed to create pipeline
Quitting
App run failed

May I know what I have missed? Please note that I am a total beginner.

It’s an EGL display issue.
You can disable [sink0] and enable [sink1] in source30_1080p_dec_infer-resnet_tiled_display_int8.txt
to save the result to a file.
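
In other words, set the two sink groups in that file like this (a minimal sketch; the remaining keys in each group can stay as they are):

[sink0]
enable=0

[sink1]
enable=1
# Type - 1=FakeSink 2=EglSink 3=File
type=3
output-file=out.mp4

With type=3, the pipeline writes the result to the file named by output-file instead of rendering to an EGL window.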

Thank you for the response.

Okay, now I’ve made changes to the configuration file.

# Copyright (c) 2018 NVIDIA Corporation.  All rights reserved.
#
# NVIDIA Corporation and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto.  Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA Corporation is strictly prohibited.

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=5
columns=6
width=1280
height=720
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=15
#drop-frame-interval=2
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[source1]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI 4=RTSP
type=3
uri=file://../../streams/sample_1080p_h264.mp4
num-sources=15
gpu-id=0
# (0): memtype_device   - Memory type Device
# (1): memtype_pinned   - Memory type Host Pinned
# (2): memtype_unified  - Memory type Unified
cudadec-memtype=0

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400


[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
live-source=0
batch-size=30
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
##Enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=0
nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.
[primary-gie]
enable=1
gpu-id=0
model-engine-file=../../models/Primary_Detector/resnet10.caffemodel_b30_int8.engine
#Required to display the PGIE labels, should be added even when using config-file
#property
batch-size=30
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
interval=0
#Required by the app for SGIE, when used along with config-file property
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[tests]
file-loop=0

And it produced this output:

root@860e36bd15d5:~/deepstream_sdk_v4.0_x86_64# deepstream-app -c samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt

(deepstream-app:45): GLib-GObject-WARNING **: 08:52:40.980: g_object_set_is_valid_property: object class 'nvv4l2h264enc' has no property named 'bufapi-version'
Error: Could not get cuda device count (cudaErrorInsufficientDriver)
Failed to parse group property
** ERROR: <gst_nvinfer_parse_config_file:943>: failed
** ERROR: <main:651>: Failed to set pipeline to PAUSED
Quitting
ERROR from sink_sub_bin_encoder1: Could not open device '/dev/nvhost-msenc' for reading and writing.
Debug info: v4l2_calls.c(656): gst_v4l2_open (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/nvv4l2h264enc:sink_sub_bin_encoder1:
system error: No such file or directory
ERROR from sink_sub_bin_encoder1: Could not initialize supporting library.
Debug info: gstvideoencoder.c(1627): gst_video_encoder_change_state (): /GstPipeline:pipeline/GstBin:processing_bin_0/GstBin:sink_bin/GstBin:sink_sub_bin1/nvv4l2h264enc:sink_sub_bin_encoder1:
Failed to open encoder
App run failed

I think that’s because I’m running docker in a VMware Workstation Ubuntu guest, and the GPU is absent (it exists only on the host OS).

Is there any workaround, maybe to utilize the CPU instead of the GPU?

It seems to be an NVIDIA driver problem.
Can you run the command $ nvidia-smi successfully?

It prints out the following:

root@23a8523ad955:~# nvidia-smi
bash: nvidia-smi: command not found
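
(As a sanity check, one could run nvidia-smi through the NVIDIA runtime directly, using the image already pulled; this is a sketch, not verified here:

user@ubuntu16:~$ sudo nvidia-docker run --rm nvcr.io/nvidia/deepstream:4.0-19.07 nvidia-smi

When the runtime and a host driver are present, nvidia-smi is normally injected into the container, so “command not found” is consistent with the host having no NVIDIA driver, as expected in a VMware guest without GPU passthrough.)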

$ sudo docker pull nvcr.io/nvidia/deepstream:4.0-19.07

1st attempt: after around 10 minutes, I got “unauthorized: authentication required”.
2nd attempt: it failed again in the middle of downloading eb0b03bec7ec.

$ sudo docker pull nvcr.io/nvidia/deepstream:4.0-19.07
4.0-19.07: Pulling from nvidia/deepstream
6abc03819f3e: Pull complete 
05731e63f211: Pull complete 
0bd67c50d6be: Pull complete 
2f87bc35d330: Pull complete 
9fe964cf4376: Pull complete 
e4732fdd9b39: Pull complete 
b6d41e19faf6: Pull complete 
94dea506ace8: Downloading [=============================>                     ]  456.5MB/763.5MB
b248e477ad27: Download complete 
4bc5c8802384: Download complete 
9c4a5b7d5776: Download complete 
213004265555: Download complete 
7092094accc1: Download complete 
9cc3233abc4a: Download complete 
daff5e5f4fe3: Download complete 
5c2adfebe05e: Downloading 
eb0b03bec7ec: Downloading [=======>                                           ]  170.5MB/1.196GB
11f636721e6b: Downloading 
67c009e8b441: Waiting 
96cddb4da6e3: Waiting 
31982d083e06: Waiting 
9d02697d31ca: Waiting 
unauthorized: authentication required

Is there a way to resume the download, instead of downloading from the start again?

$ nvidia-smi

Sat Aug 17 15:46:21 2019       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 430.40       Driver Version: 430.40       CUDA Version: 10.1     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GTX 1080    Off  | 00000000:01:00.0  On |                  N/A |
| 35%   54C    P0    43W / 180W |   1340MiB /  8118MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1284      G   /usr/lib/xorg/Xorg                           607MiB |
|    0      1532      G   /usr/bin/gnome-shell                         353MiB |
|    0      3967      G   ...quest-channel-token=6553523562115799474   316MiB |
|    0      6914      G   ...-token=4F9366D27F4FD6D7F53BA3F4E00003FC    60MiB |
+-----------------------------------------------------------------------------+

How can I overcome “unauthorized: authentication required”?

Thank you.

Good day.

I’ve tried DeepStream 4.0 L4T on a Jetson TX2
(running JetPack 4.2.1, which was flashed just this morning from SDK Manager with two components missing: TensorRT and Multimedia API).

I pulled the docker image, and it ran without problems.

sudo docker pull nvcr.io/nvidia/deepstream-l4t:4.0-19.07
sudo xhost +
sudo docker run -it --rm --net=host --runtime nvidia  -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/deepstream-l4t:4.0-19.07

In the docker container, I ran this command:

root@JetsonTX2:~/deepstream_sdk_v4.0_jetson# deepstream-app -c samples/configs/deepstream-app/source30_1080p_dec_infer-resnet_tiled_display_int8.txt

and it returned this:

deepstream-app: error while loading shared libraries: libnvinfer.so.5: cannot open shared object file: No such file or directory

I still get the same output before and after enabling sink1 and disabling sink0.

I’ve also tried this from JetPack 3.3 before, with the same output.

Am I missing some steps?
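
(For reference: libnvinfer.so.5 is part of TensorRT 5.x, which was one of the components skipped during flashing, and the L4T container normally picks these libraries up from the host through the NVIDIA runtime. A quick check inside the container, as a sketch that assumes ldconfig is available there:

root@JetsonTX2:~/deepstream_sdk_v4.0_jetson# ldconfig -p | grep libnvinfer

No output here would mean TensorRT’s runtime libraries are not visible to the container.)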

I used to have this problem. I just kept retrying until the pull succeeded.
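
If retrying does not help, it may also be worth logging in to the NGC registry first. This is a sketch, assuming you have generated an NGC API key at ngc.nvidia.com:

$ sudo docker login nvcr.io
Username: $oauthtoken
Password: <your NGC API key>

Also, docker keeps the layers that already show “Pull complete”, so a retried pull should resume from those rather than starting from scratch.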