Failure to run DeepStream SDK 2.0 for Tesla (GStreamer-CRITICAL) (Solved)

Hello friends,

I am an intern at NVIDIA in Santa Clara, CA.

I get the same error message every time I try to run the sample application included in the DeepStream SDK 2.0 for Tesla:

** ERROR: <parse_config_file:1320>: parse_config_file failed
** ERROR: <main:456>: Failed to parse config file './configs/deepstream-app/source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt'

Quitting

(deepstream-app:2345): GStreamer-CRITICAL **: gst_element_get_static_pad: assertion 'GST_IS_ELEMENT (element)' failed

(deepstream-app:2345): GStreamer-CRITICAL **: gst_pad_send_event: assertion 'GST_IS_PAD (pad)' failed

(deepstream-app:2345): GStreamer-CRITICAL **: gst_element_set_state: assertion 'GST_IS_ELEMENT (element)' failed

(deepstream-app:2345): GStreamer-CRITICAL **: gst_element_get_bus: assertion 'GST_IS_ELEMENT (element)' failed

(deepstream-app:2345): GStreamer-CRITICAL **: gst_bus_remove_watch: assertion 'GST_IS_BUS (bus)' failed

(deepstream-app:2345): GStreamer-CRITICAL **: gst_object_unref: assertion 'object != NULL' failed

(deepstream-app:2345): GStreamer-CRITICAL **: gst_object_unref: assertion 'object != NULL' failed
App run failed
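
For reference, the app was launched with a command along these lines (reconstructed from the config path in the log above; the exact invocation may differ):

deepstream-app -c ./configs/deepstream-app/source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt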

I am using a Titan X (Pascal) (could that be an issue?) on Ubuntu 16.04 LTS. Following the instructions in the DeepStream 2.0 user guide, I installed CUDA 9.2 (with cuDNN 7.1.4, NCCL 2.2.13, and GPU driver 396.26), TensorRT 4.0.1.6, OpenCV 3.4.0 (using exactly the commands from the user guide), and all other required packages. I also created the symlink for libnvcuvid.so.
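
For reference, the symlink step looked roughly like this (the library path below is from my setup with driver 396.26 installed via the Ubuntu driver package; it may differ on other systems):

sudo ln -sf /usr/lib/nvidia-396/libnvcuvid.so.396.26 /usr/lib/libnvcuvid.so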

I have re-installed my system multiple times to make sure I followed the instructions in the user guide exactly.

The terminal output of “nvidia-smi”, “nvcc --version”, “dpkg -l | grep TensorRT”, and “pkg-config --modversion opencv” is given below, in that order.

Thu Jul  5 16:36:00 2018       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26                 Driver Version: 396.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  TITAN X (Pascal)    Off  | 00000000:65:00.0  On |                  N/A |
| 23%   31C    P8    18W / 250W |    357MiB / 12192MiB |      8%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID   Type   Process name                             Usage      |
|=============================================================================|
|    0      1128      G   /usr/lib/xorg/Xorg                           220MiB |
|    0      1789      G   /opt/teamviewer/tv_bin/TeamViewer              2MiB |
|    0      1986      G   compiz                                        76MiB |
|    0      2443      G   ...-token=4DD2B2A558A8B22E96B9A4C707E64416    55MiB |
+-----------------------------------------------------------------------------+
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Wed_Apr_11_23:16:29_CDT_2018
Cuda compilation tools, release 9.2, V9.2.88
ii  graphsurgeon-tf                                            4.1.2-1+cuda9.2                              amd64        GraphSurgeon for TensorRT package
ii  libnvinfer-dev                                             4.1.2-1+cuda9.2                              amd64        TensorRT development libraries and headers
ii  libnvinfer-samples                                         4.1.2-1+cuda9.2                              amd64        TensorRT samples and documentation
ii  libnvinfer4                                                4.1.2-1+cuda9.2                              amd64        TensorRT runtime libraries
ii  python-libnvinfer                                          4.1.2-1+cuda9.2                              amd64        Python bindings for TensorRT
ii  python-libnvinfer-dev                                      4.1.2-1+cuda9.2                              amd64        Python development package for TensorRT
ii  python-libnvinfer-doc                                      4.1.2-1+cuda9.2                              amd64        Documention and samples of python bindings for TensorRT
ii  python3-libnvinfer                                         4.1.2-1+cuda9.2                              amd64        Python 3 bindings for TensorRT
ii  python3-libnvinfer-dev                                     4.1.2-1+cuda9.2                              amd64        Python 3 development package for TensorRT
ii  python3-libnvinfer-doc                                     4.1.2-1+cuda9.2                              amd64        Documention and samples of python bindings for TensorRT
ii  tensorrt                                                   4.0.1.6-1+cuda9.2                            amd64        Meta package of TensorRT
ii  uff-converter-tf                                           4.1.2-1+cuda9.2                              amd64        UFF converter for TensorRT package
3.4.0

Hi,

Based on the log you shared, the error occurs while parsing the configuration file.
Could you enter the configs folder and try it again?
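
For example, something like this (the paths are illustrative and assume the default samples layout of the SDK package):

cd <DeepStream_SDK_root>/samples/configs/deepstream-app
deepstream-app -c source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt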

Thanks.

I didn’t realize that would make a difference. The problem is solved. Thanks so much! :)

Hi,
I have the same problem. Could you please let me know how you solved it? Thank you.

Hi,

This error occurs because the parser cannot find the required file.
Please enter the ‘configs’ folder and try it again.

Thanks.

OK, thanks. I will try it.