[DeepStream 2.0] How to run in nvidia-docker

I was trying to run DeepStream 2.0 in an nvidia-docker environment.

My nvidia-docker run command:

sudo docker run -it \
--runtime=nvidia \
-e DISPLAY=unix$DISPLAY \
-e NVIDIA_VISIBLE_DEVICES=all \
-e NVIDIA_DRIVER_CAPABILITIES=compute,utility,video \
-v /tmp/.X11-unix:/tmp/.X11-unix \
nvidia/cudagl:9.2-devel \
/bin/bash
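Before running the app, it may be worth checking inside the container that the driver-side NVDEC library was actually mounted by `NVIDIA_DRIVER_CAPABILITIES=video` — a quick sketch, assuming the usual Ubuntu x86_64 library path:

```shell
# A missing libnvcuvid.so is a common reason nvdec_h264 fails to load
# inside a container; the "video" driver capability must mount it from
# the host driver installation.
ls -l /usr/lib/x86_64-linux-gnu/libnvcuvid.so* 2>/dev/null \
  || echo "driver video libraries not found - nvdec_h264 will not work"
```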

My DeepStream 2.0 config:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
flow-original-resolution=1
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=0
rows=5
columns=6
width=1280
height=720
gpu-id=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=5
gpu-id=0

[source1]
enable=0
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
uri=file://../../streams/sample_720p.mp4
num-sources=15
gpu-id=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0

[sink1]
enable=0
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
## only SW mpeg4 is supported right now.
codec=3
sync=0
bitrate=2000000
output-file=out.mp4
source-id=0

[osd]
enable=0

Error while running:

root@9b4d3cef06ae:/home/tunnel/DeepStream_Release/samples/configs/deepstream-app# deepstream-app -c source30_720p_dec_infer-resnet_tiled_display_int8.txt 
** ERROR: <main:490>: Failed to set pipeline to PAUSED
Quitting
App run failed

The error persisted after I removed the GStreamer cache.

Can you remove $HOME/.cache/gstreamer-1.0/* and try again to get the real error?

Hi Chris. I removed the cache but still get the same error.

Can you run these?
gst-inspect-1.0 nvdec_h264
gst-inspect-1.0 nvinfer

What are your TensorRT, CUDA, cuDNN, and NVIDIA driver versions?

gst-inspect-1.0 nvdec_h264

Factory Details:
  Rank                     primary + 10 (266)
  Long-name                Nvidia H.264 Video Decoder
  Klass                    Codec/Decoder/Video
  Description              Decode H.264 video streams
  Author                   Swapnil Rathi <srathi@nvidia.com>

Plugin Details:
  Name                     NvVideoCodecs
  Description              All Nvidia Video codecs
  Filename                 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstnvvideocodecs.so
  Version                  1.8.2
  License                  Proprietary
  Source module            gst-nvvideocodecs
  Binary package           libgstnvvideocodecs
  Origin URL               http://nvidia.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstVideoDecoder
                         +----Gstnvcuvid
                               +----GstNvcuvidH264Dec

Pad Templates:
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { NV12 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-h264
                 parsed: true
              alignment: au
          stream-format: byte-stream
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]


Element Flags:
  no flags set

Element Implementation:
  Has change_state() function: gst_video_decoder_change_state

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvcuvidh264dec0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  DecodeFPS           : Set DecodeFPS
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 4294967295 Default: 25 
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  source-id           : Set Source ID

(gst-inspect-1.0:33): GLib-GObject-CRITICAL **: g_value_set_boolean: assertion 'G_VALUE_HOLDS_BOOLEAN (value)' failed
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  silent              : Produce verbose output ?
                        flags: readable, writable
                        Boolean. Default: false
  smart-decode        : I frame only decode 
                        flags: readable, writable
                        Boolean. Default: false

gst-inspect-1.0 nvinfer

Factory Details:
  Rank                     primary (256)
  Long-name                NvInfer
  Klass                    NvInfer
  Description              Gstreamer Inference Element
  Author                   Bhushan Rupde <<brupde@nvidia.com>> Tushar Khinvasara <<tkhinvasara@nvidia.com>> Swapnil Rathi <<srathi@nvidia.com>>

Plugin Details:
  Name                     nvinfer
  Description              Gstreamer plugin for Inferencing
  Filename                 /usr/lib/x86_64-linux-gnu/gstreamer-1.0/libgstnvinfer.so
  Version                  1.8.2
  License                  Proprietary
  Source module            nvinfer
  Binary package           GStreamer NV Infer Plugin
  Origin URL               http://nvidia.com/

GObject
 +----GInitiallyUnowned
       +----GstObject
             +----GstElement
                   +----GstBaseTransform
                         +----GstNvInfer

Pad Templates:
  SRC template: 'src'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { NV12 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]

  SINK template: 'sink'
    Availability: Always
    Capabilities:
      video/x-raw(memory:NVMM)
                 format: { NV12 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]
      video/x-raw
                 format: { NV12 }
                  width: [ 1, 2147483647 ]
                 height: [ 1, 2147483647 ]
              framerate: [ 0/1, 2147483647/1 ]


Element Flags:
  no flags set

Element Implementation:
  Has change_state() function: gst_nv_infer_change_state

Element has no clocking capabilities.
Element has no URI handling capabilities.

Pads:
  SINK: 'sink'
    Pad Template: 'sink'
  SRC: 'src'
    Pad Template: 'src'

Element Properties:
  name                : The name of the object
                        flags: readable, writable
                        String. Default: "nvinfer0"
  parent              : The parent of the object
                        flags: readable, writable
                        Object of type "GstObject"
  qos                 : Handle Quality-of-Service events
                        flags: readable, writable
                        Boolean. Default: false
  config-file-path    : Absolute path to configuration file, this property overrides all the property set explicity.
                        flags: readable, writable
                        String. Default: null
  gie-mode            : Select GIE Mode (1=Primary Mode  2=Secondary Mode)
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 2 Default: 1 
  gie-unique-id       : Unique ID used to identify metadata generated by this GIE
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 4294967295 Default: 0 
  infer-on-gie-id     : Infer on metadata generated by GIE with this unique ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  net-stride          : Convolutional Neural Network Stride
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 16 
  class-thresh-params : Thresholding Parameters for all classes. Specified per-class
			Format: classid0(int),confidence-threshold0(float),eps0(float),group-threshold0(int),minBoxes0(int):classid1,confidence-threshold1,eps1,group-threshold1,minBoxes1:...
			 e.g. 0,0.7,0.1,3,2:1,0.5,0.1,3,2
                        flags: readable, writable
                        String. Default: "0,1.000000,0.100000,3,2"
  infer-on-class-ids  : Infer on objects with specified class ids
			 Use string with values of class ids 
			 in ClassID (int) to set the property.
			 e.g. 0:1:2:3
                        flags: readable, writable
                        String. Default: ""
  net-scale-factor    : Pixel normalization factor
                        flags: readable, writable, changeable only in NULL or READY state
                        Float. Range:               0 -    3.402823e+38 Default:               1 
  bbox-input-file     : Input file generated by the app (all_bbox.txt) which can be used to simulate
			primary gie functionality without actually inferencing the frame.
                        flags: readable, writable
                        String. Default: null
  model-path          : Absolute location of caffe model
                        flags: readable, writable
                        String. Default: null
  protofile-path      : Absolute location of caffe protofile
                        flags: readable, writable
                        String. Default: null
  int8calibrationfile-path: Absolute location of calibration file used in INT8 mode
                        flags: readable, writable
                        String. Default: null
  model-cache         : Absolute path to the optimized model cache file. If this file is mentioned and it
			exists, model-path and protofile-path will be ignored otherwise the file is
			generated using model and prototxt files. 
                        flags: readable, writable
                        String. Default: null
  uff-file-path       : Absolute path to Uff file.
                        flags: readable, writable
                        String. Default: null
  batch-size          : Number of units [frames(P.GIE) or objects(S.GIE)] to be inferred together in a batch
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 4294967295 Default: 1 
  num-buffers-in-batch: Number of Buffers in Batch
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 4294967295 Default: 0 
  max-objs-infer      : Number of Max Objects to be Infered per frame
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 1 - 4294967295 Default: 30 
  interval            : Specifies number of consecutive frames to be skipped for inference.
			Actual frames to be skipped = batch_size * interval
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 32 Default: 0 
  detected-min-w-h    : Minimum size in pixels of detected objects that will be outputted by the GIE.
			Specified per-class. Format: class-id,min-w,min-h:class-id,min-w,min-h:... 
			e.g. 0,128,128:1,128,128
                        flags: readable, writable
                        String. Default: "0,0,0:"
  detected-max-w-h    : Maximum size in pixels of detected objects that will be outputted by the GIE.
			Specified per-class. Format: class-id,max-w,max-h:class-id,max-w,max-h:... 
			e.g. 0,256,256:1,256,256
                        flags: readable, writable
                        String. Default: "0,0,0:"
  input-dims          : Dimensions of network input in (Channel,Height,Width,Order) format
			Here O=0(CHW) O=1(HWC)
			e.g. 3,224,224,0
                        flags: readable, writable
                        String. Default: "NULL"
  roi-top-offset      : Offset of the ROI from the top of the frame. Only objects within
			the ROI will be outputted.
			Format:  class-id,top-offset:class-id,top-offset:...
			e.g. 0,128:1,128
                        flags: readable, writable
                        String. Default: "0,0:"
  roi-bottom-offset   : Offset of the ROI from the bottom of the frame. Only objects within
			the ROI will be outputted.
			Format:  class-id,bottom-offset:class-id,bottom-offset:...
			e.g. 0,128:1,128
                        flags: readable, writable
                        String. Default: "0,0:"
  model-color-format  : Color format required by the model 
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvInferColorType" Default: 0, "RGB Format"
                           (0): RGB Format       - Color_Format_B8G8R8
                           (1): BGR Format       - Color_Format_R8G8B8
  meanfile-path       : Path of the mean data file (PPM format)
                        flags: readable, writable
                        String. Default: null
  detect-clr          : Detect the color of objects for given class or classes
			Use string with values to detect color of objects of given
			class in ClassID (int), detect color (boolean). 
			e.g. 0,1,2 will detect the color of objects for class-id 
			0, 1 and 2.
			Set value to -2 to detect the color of all the
			objects. Set value to -1 to not detect the color
                        flags: readable, writable
                        String. Default: ""
  labelfile-path      : Path to the text file containing the labels for the model.
                        flags: readable, writable
                        String. Default: null
  network-mode        : Network Mode : 0 for FP32, 1 for INT8, 2 for FP16
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 2 Default: 0 
  classifier-threshold: Threshold for classifier. Only when this GIE is used as classifier
                        flags: readable, writable, changeable only in NULL or READY state
                        Float. Range:               0 -               1 Default:            0.65 
  parse-func          : Detector BBOX parse function type
                        flags: readable, writable, changeable only in NULL or READY state
                        Enum "GstNvInferDetectorParseFuncType" Default: 1, "Googlenet parse function"
                           (0): Custom parse function - custom parse
                           (1): Googlenet parse function - googlenet
                           (2): Nvidia model type 0 parse function - nv0
                           (3): Nvidia model type 1 parse function - nv1
                           (4): Nvidia model resnet parse function - resnet
  is-classifier       : Whether this GIE is a classifier
                        flags: readable, writable
                        Boolean. Default: false
  offsets             : Array of mean values of color components to be subtracted from each pixel.
			e.g. 77.5;21.2;11.8
                        flags: readable, writable
                        String. Default: ""
  output-bbox-layer-name: Name of the Neural Network layer which outputs bounding box coordinates.
                        flags: readable, writable
                        String. Default: null
  output-coverage-layer-names: Array of the coverage layer names. Array should be semicolon seperated.
			e.g. coverage_layer0;coverage_layer1;coverage_layer2
                        flags: readable, writable
                        String. Default: ""
  parse-bbox-func-name: Name of the custom function for parsing bbox
                        flags: readable, writable
                        String. Default: null
  parse-bbox-lib-name : Name of the custom parsing bbox library
                        flags: readable, writable
                        String. Default: null
  parser-bbox-norm    : Parse bbox normalization information for various parse functions 
			Format: 
			 e.g. 35.0;35.0
                        flags: readable, writable
                        String. Default: "35.000000;35.000000"
  classifier-async-mode: Attach metadata that it generates asynchronously. Only for Classifier GIEs.
                        flags: readable, writable, changeable only in NULL or READY state
                        Boolean. Default: false
  gpu-id              : Set GPU Device ID
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 0 - 4294967295 Default: 0 
  queue-length-gie    : Property can be used in secondary mode, when operating on crops 
                        flags: readable, writable, changeable only in NULL or READY state
                        Unsigned Integer. Range: 4 - 32 Default: 4 
  silent              : Produce verbose output ?
                        flags: readable, writable
                        Boolean. Default: false

TensorRT Version

dpkg -l | grep nvinfer
ii  libnvinfer-dev                                             4.1.2-1+cuda9.2                          amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                         4.1.2-1+cuda9.2                          amd64        TensorRT samples and documentation
ii  libnvinfer4                                                4.1.2-1+cuda9.2                          amd64        TensorRT runtime libraries
ii  python3-libnvinfer                                         4.1.2-1+cuda9.2                          amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                     4.1.2-1+cuda9.2                          amd64        Python 3 development package for TensorRT
ii  python3-libnvinfer-doc                                     4.1.2-1+cuda9.2                          amd64        Documention and samples of python bindings for TensorRT

CUDA: release 9.2, V9.2.148; cuDNN: libcudnn.so.7.1.4; NVIDIA driver: 396.37

Thanks. That all looks good.

What’s your pipeline? Just source (sample_720p.mp4) -> EglSink?
What’s the result if you don’t modify source30_720p_dec_infer-resnet_tiled_display_int8.txt ?

I tested the untouched source30_720p_dec_infer-resnet_tiled_display_int8.txt and removed the GStreamer cache.
With the default pipeline, I get back the same error.

Can you try a command line like “gst-launch-1.0 filesrc location=./streams/sample_720p.h264 ! h264parse ! nvdec_h264 ! fakesink”?

Thanks again Chris,

This is the output

# gst-launch-1.0 filesrc location=./streams/sample_720p.h264 ! h264parse ! nvdec_h264 ! fakesink
Setting pipeline to PAUSED ...
Caught SIGSEGV
exec gdb failed: No such file or directory
Spinning.  Please run 'gdb gst-launch-1.0 60' to continue debugging, Ctrl-C to quit, or Ctrl-\ to dump core.
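One way to dig into that crash, assuming gdb can be installed in the container (it evidently wasn’t, per the “exec gdb failed” line above) — just a sketch, and setting GST_DEBUG=3 on the gst-launch line is another option:

```shell
# Install gdb, then rerun the failing pipeline under it to capture a
# backtrace from the SIGSEGV. With NVDEC problems the crash typically
# points into the driver-side decoder library (libnvcuvid).
apt-get update && apt-get install -y gdb
gdb -batch -ex run -ex bt --args \
    gst-launch-1.0 filesrc location=./streams/sample_720p.h264 \
    ! h264parse ! nvdec_h264 ! fakesink
```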

It seems your docker image has some problems.
Below are the steps I use to make a docker image:

1. $ docker pull nvcr.io/nvidia/tensorrt:18.05-py2
   This image ships with:
     Ubuntu 16.04.4
     TensorRT 3.0.4  ->  /usr/lib/x86_64-linux-gnu/libnvinfer.so -> libnvinfer.so.4.0.4
     CUDA 9.0        ->  /usr/local/cuda -> cuda-9.0
 
2. Make sure the AWS (P4) NVIDIA driver version is 396+.

   (You can install your own NVIDIA driver, CUDA 9.2, cuDNN, and TensorRT 4.)
 
3. $ ln -s /usr/lib/x86_64-linux-gnu/libnvcuvid.so.39x.xx  /usr/lib/x86_64-linux-gnu/libnvcuvid.so
 
4. $ apt-get update
   $ apt-get install libssl1.0.0 libjpeg8 libgstreamer1.0-0 gstreamer-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav libgtk-3-0
 
5. Install OpenCV 3.4.1
     The default OpenCV version on Ubuntu 16.04 via apt-get is 2.4.9, but we need 3.4.0+, so OpenCV has to be built from source.
     a. Download the sources from https://opencv.org/releases.html
     b. Assuming the package name is opencv-3.4.1.zip, follow the steps below:
         $ unzip opencv-3.4.1.zip
         $ cd opencv-3.4.1
         $ mkdir build && cd build
         $ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D WITH_CUDA=on -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 -D WITH_CUBLAS=1 -D WITH_NVCUVID=on -D CUDA_GENERATION=Auto ..
         $ make
         $ make install
 
6. Add the OpenCV lib path:
     $ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib

//7. The DeepStream 2.0 SDK package is here: /workspace/deepstream_sdk

8. To build the source code:
   $ apt-get install libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev
   $ apt-get install libx11-dev

//8. Upload noVNC: /workspace/noVNC

9. $ apt install net-tools
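For reference, steps 3-9 above can be sketched as a single provisioning script run inside the pulled container. This is only a sketch: the libnvcuvid version (396.37, taken from the driver version reported earlier in the thread) and the GitHub download URL for OpenCV are assumptions — adjust both to your environment.

```shell
#!/bin/sh
# Provisioning sketch for the nvcr.io/nvidia/tensorrt:18.05-py2 container.
set -e

# Step 3: expose the driver's decoder library under its unversioned name.
# ASSUMPTION: a 396.37 host driver; match the suffix to your installation.
ln -sf /usr/lib/x86_64-linux-gnu/libnvcuvid.so.396.37 \
       /usr/lib/x86_64-linux-gnu/libnvcuvid.so

# Steps 4, 8, 9: runtime and build dependencies.
apt-get update
apt-get install -y libssl1.0.0 libjpeg8 libgstreamer1.0-0 gstreamer-tools \
    gstreamer1.0-plugins-good gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-ugly gstreamer1.0-libav libgtk-3-0 \
    libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev libx11-dev \
    net-tools wget unzip cmake build-essential

# Step 5: OpenCV 3.4.1 from source (apt ships 2.4.9 on Ubuntu 16.04).
# ASSUMPTION: the GitHub archive mirrors the opencv.org download.
wget -O opencv-3.4.1.zip https://github.com/opencv/opencv/archive/3.4.1.zip
unzip opencv-3.4.1.zip && cd opencv-3.4.1
mkdir build && cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D WITH_CUDA=on -D ENABLE_FAST_MATH=1 -D CUDA_FAST_MATH=1 \
      -D WITH_CUBLAS=1 -D WITH_NVCUVID=on -D CUDA_GENERATION=Auto ..
make -j"$(nproc)" && make install

# Step 6: make /usr/local/lib visible (run in the shell that launches the app).
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
```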

Can you show your “$ nvidia-smi” output?

I have the same error, but I am not running the command in docker.

This is the output

Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
Got EOS from element "pipeline0".
Execution ended after 0:00:01.211326385
Setting pipeline to PAUSED ...
Setting pipeline to READY ...
Setting pipeline to NULL ...
Freeing pipeline ...

Reply #12

Your result is right.

But it’s the same error when running deepstream-app -c config.

This is the output:

** ERROR: <main:490>: Failed to set pipeline to PAUSED
Quitting
App run failed

I also have this problem. Did you solve it?

Hi haifengli,

DeepStream SDK 3.0 has been released; applications built with the DeepStream SDK can now be deployed using a Docker container:
https://developer.nvidia.com/deepstream-sdk

Thanks

Thank you very much.