Deepstream 3 SDK Error: <main:564>: Failed to set pipeline to PAUSED

I am having trouble running the sample deepstream app with the following command :

~/Downloads/DeepStream_Release$ deepstream-app -c /home/alert/Downloads/DeepStream_Release/samples/configs/deepstream-app/source4_720p_dec_infer-resnet_tracker_sgie_tiled_display_int8.txt

which results in

** ERROR: main:564: Failed to set pipeline to PAUSED
Quitting
App run failed

I have gone through the setup and have all the pre-requisites.

I’ve tried rm ${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin to remove the cache, but still no luck.
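For reference, a fuller cache-reset sketch looks like the following (the blacklist check assumes gst-inspect-1.0 from GStreamer is on your PATH; a blacklisted NVIDIA plugin is a common reason a pipeline refuses to go to PAUSED):

```shell
# Remove the cached GStreamer plugin registry so it is rebuilt on
# the next plugin scan (default cache location on x86_64).
REGISTRY="${HOME}/.cache/gstreamer-1.0/registry.x86_64.bin"
rm -f "${REGISTRY}"

# Re-scan plugins and list any that got blacklisted; the guard keeps
# this sketch harmless on machines without GStreamer installed.
if command -v gst-inspect-1.0 >/dev/null 2>&1; then
    gst-inspect-1.0 --print-blacklist
fi
```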

I also tried to run the dsexample using :

~/Downloads/DeepStream_Release$ gst-launch-1.0 filesrc location=/home/alert/Downloads/DeepStream_Release/samples/streams/sample_720p.mp4 ! decodebin ! nvvidconv ! dsexample full-frame=1 ! nvosd ! nveglglessink

but it results in :

Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
Setting pipeline to NULL ...
Freeing pipeline ...
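When a gst-launch pipeline refuses to pause, raising the GStreamer debug level usually shows which element is failing. A sketch (the sample file path is taken from the pipeline above and assumes the same install layout; the || true keeps the sketch from aborting on machines missing the NVIDIA plugins):

```shell
# GST_DEBUG=3 prints per-element warnings and errors; the first
# element that fails the READY->PAUSED transition is the culprit.
export GST_DEBUG=3
if command -v gst-launch-1.0 >/dev/null 2>&1; then
    gst-launch-1.0 filesrc location=samples/streams/sample_720p.mp4 \
        ! decodebin ! nvvidconv ! dsexample full-frame=1 \
        ! nvosd ! nveglglessink || true
fi
```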

Any help that could point me in the right direction to resolving this would be much appreciated. Thank you.

Hi,
Do you use sink type 2 (EglSink)? If yes, and you are not running on a desktop session, the error is expected. If you do not have DISPLAY set, run export DISPLAY=:0 before starting the app.
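As a sketch, the export plus an optional reachability check looks like this (xdpyinfo comes from the x11-utils package and is an assumption about your setup, not something DeepStream requires):

```shell
# Point GUI sinks (EglSink) at the local X server before launching
# deepstream-app.
export DISPLAY=:0

# Optional sanity check that the display is actually reachable.
if command -v xdpyinfo >/dev/null 2>&1 && xdpyinfo >/dev/null 2>&1; then
    echo "display ${DISPLAY} reachable"
else
    echo "no reachable display on ${DISPLAY}"
fi
```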

Hi, I am facing the same issue. My setup is a T4 on GCP with the DeepStream 3.0 Docker container.

I tried changing sink0 to type 3 as mentioned in a similar post, but I am getting the new error below. (With sink0 type 2, I was getting the main:564: Failed to set pipeline to PAUSED error.)

~/DeepStream_Release/samples# deepstream-app -c /root/DeepStream_Release/samples/configs/deepstream-app/source30_720p_dec_infer-resnet_tiled_display_int8.txt

** ERROR: <create_encode_file_bin:265>: create_encode_file_bin failed
** ERROR: <create_sink_bin:530>: create_sink_bin failed
** ERROR: <create_processing_instance:660>: create_processing_instance failed
** ERROR: <create_pipeline:957>: create_pipeline failed
** ERROR: main:544: Failed to create pipeline
Quitting
App run failed
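For reference, a sketch of the relevant [sink0] section when avoiding both the display path (type 2) and the file encoder (type 3) is below. The key names follow the shipped DeepStream 3.0 sample configs; the exact values in your file may differ:

```
[sink0]
enable=1
# 1=FakeSink, 2=EglSink (needs a display), 3=File (needs encoder support)
type=1
sync=0
```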

Could you let me know what is going wrong? Also, how do I enable host X on the VM: is there documentation on how to set this up when running in the cloud?

Hi Mouli
May I know where you got the container, and what your host environment is? Is GPU driver 410 installed, and also CUDA 10?

Hi Amycao - thanks for the response. The container is from NVIDIA NGC, and the host is an Intel Skylake CPU running a Debian OS with an Intel-optimized image, CUDA 10, and the latest 410 drivers.

Can you paste the output of ldd deepstream-app here?

Hi, I am learning to run the DeepStream 3.0 sample apps on my Tesla P4 GPU. I encountered the “failed to set pipeline to PAUSED” error, too.

I have searched the forum; some staff from NVIDIA said it might be due to the wrong driver being used. The NVIDIA DeepStream 3.0 documentation says that DeepStream 3.0 requires
(1). CUDA 10
(2). NVIDIA driver 410.72 and
(3). NVIDIA Video SDK 8.3

but from the NVIDIA website, I can only find “cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb”, which means driver 410.48 will be installed together with CUDA 10?

In the meantime, the NVIDIA Video SDK 8.3 documentation says that CUDA 8 is required to install it.

So, would you advise me how to
(1). install CUDA 10 with driver 410.72
(2). install NVIDIA Video SDK 8.3 under CUDA 10?

Or, can DeepStream 3.0 work with driver 410.48 as well as driver 410.72, and can CUDA 10 support NVIDIA Video SDK 8.3?

thanks in advance

Hi,
Please refer to the DeepStream package README.txt: it states a 410+ driver, so 410.48 is also OK for DS3.0. The NVIDIA Video SDK 8.3 is not a prerequisite for DS3.0; it is a separate NVIDIA SDK, basically for video encode/decode using NVIDIA GPU hardware (see https://developer.nvidia.com/nvidia-video-codec-sdk). As for the “failed to set pipeline to PAUSED” error you met, it is expected, since the P4 is just a compute card; rendering output to a display requires an NVIDIA display card. There are 2 options:

  1. Output to sink type 1 Fakesink or 3 File;
  2. A hacky way is to use the Tesla P4 as a virtual display, but this is suggested only for development, since it will take some percentage of device memory and ultimately impact inference performance, in this case DeepStream.

First, install the NVIDIA graphics driver with OpenGL support, then query the GPU bus IDs:

sudo nvidia-xconfig --query-gpu-info

Number of GPUs: 2

GPU #0:
Name : Tesla T4
UUID : GPU-b58f5878-b235-c28e-4e2a-44d8623d133a
PCI BusID : PCI:3:0:0
Number of Display Devices: 0

GPU #1:
Name : Tesla P4
UUID : GPU-55bc88aa-fc94-0e86-9319-abd5fadf49ab
PCI BusID : PCI:4:0:0
Number of Display Devices: 0

sudo nvidia-xconfig --busid=PCI:4:0:0 --allow-empty-initial-configuration

Reboot the system, then install NoMachine on your Windows machine and also on the Linux server that has the P4 installed, and log in to the desktop from the Windows system using NoMachine.