Hi,
Are you using sink type 2 (EglSink)? If so, and you are not running on a desktop, the error is expected. If the DISPLAY environment variable is not set, run export DISPLAY=:0 before starting the app.
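The DISPLAY fix above can be sketched as follows (assuming an X server is already running on display :0 of the machine; the config file name is a placeholder, not an actual sample file):

```shell
# Point the app at the local X server before launching it.
# Assumption: an X server is running on display :0 of this machine.
export DISPLAY=:0

# Then launch DeepStream (config file name is a placeholder):
#   deepstream-app -c <your_config>.txt
```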
Hi, I am facing the same issue. My setup is a T4 on GCP with the DeepStream 3.0 Docker container.
I tried changing sink0 to type 3 as mentioned in a similar post, but I am getting the new error below. (With sink0 type 2, I was getting the main:564: Failed to set pipeline to PAUSED error.)
Hi Amycao, thanks for the response. The container is from NVIDIA NGC, and the host is an Intel Skylake CPU running a Debian OS with an Intel-optimized image, CUDA 10, and the latest 410 drivers.
Hi, I am learning to run the DeepStream 3.0 sample apps on my Tesla P4 GPU. I also encountered the error “failed to set pipeline to PAUSE”.
I searched the forum, and some NVIDIA staff said it might be due to the wrong driver being used. The DeepStream 3.0 documentation says that DeepStream 3.0 requires
(1). CUDA 10
(2). NVIDIA driver 410.72, and
(3). NVIDIA Video SDK 8.3
However, on the NVIDIA website I can only find “cuda-repo-ubuntu1604-10-0-local-10.0.130-410.48_1.0-1_amd64.deb”, which suggests driver 410.48 is installed together with CUDA 10.
Meanwhile, the NVIDIA Video SDK 8.3 documentation says that CUDA 8 is required to install it.
So, could you advise me how to
(1). install CUDA 10 with driver 410.72
(2). install NVIDIA Video SDK 8.3 under CUDA 10?
Or, can DeepStream 3.0 work with driver 410.48 as well as driver 410.72, and can CUDA 10 support the NVIDIA Video SDK 8.3?
Hi,
Please refer to the DeepStream package README.txt; it states a 410+ driver, so 410.48 is also fine for DS3.0. The NVIDIA Video SDK 8.3 is not a prerequisite for DS3.0; it is a separate NVIDIA SDK, basically for video encode/decode using NVIDIA GPU hardware (see NVIDIA VIDEO CODEC SDK | NVIDIA Developer), and is different from DS3.0. As for the “failed to set pipeline to PAUSE” error you hit: it is expected, since the P4 is a compute-only card, and rendering output to a display requires an NVIDIA display card. You have 2 options:
Output to sink type 1 (Fakesink) or type 3 (File);
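A minimal sketch of a type 3 (File) sink group in the deepstream-app config format; the field values here (container, codec, output path) are illustrative from memory, so please check them against the sink-group table in the DS3.0 README:

```
[sink0]
enable=1
type=3
# container: 1=mp4, 2=mkv (illustrative; verify against the DS3.0 README)
container=1
# codec: 1=h264
codec=1
output-file=out.mp4
sync=0
```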
A hacky way is to use the Tesla P4 as a virtual display. This is only suggested for development, since it takes some device memory and will ultimately impact inference performance (in this case, DeepStream).
First, install the NVIDIA graphics driver with OpenGL, then query the GPU info:
sudo nvidia-xconfig --query-gpu-info
Number of GPUs: 2
GPU #0:
Name : Tesla T4
UUID : GPU-b58f5878-b235-c28e-4e2a-44d8623d133a
PCI BusID : PCI:3:0:0
Number of Display Devices: 0
GPU #1:
Name : Tesla P4
UUID : GPU-55bc88aa-fc94-0e86-9319-abd5fadf49ab
PCI BusID : PCI:4:0:0
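The query above only lists the GPUs; a hypothetical follow-up step (my assumption of the intended next command, not stated in this thread) is to feed the P4's BusID from that output to nvidia-xconfig. Both --busid and --allow-empty-initial-configuration are real nvidia-xconfig options; the latter lets X start on a GPU with no attached monitor:

```shell
# Hypothetical sketch: pull the P4's PCI BusID out of the
# `nvidia-xconfig --query-gpu-info` output shown above, then write an
# xorg.conf bound to that GPU so X can start without an attached monitor.
query_output='GPU #1:
  Name      : Tesla P4
  PCI BusID : PCI:4:0:0'

busid=$(printf '%s\n' "$query_output" | awk -F' : ' '/PCI BusID/ {print $2}')
echo "Generating xorg.conf for $busid"

# The actual step (needs root):
#   sudo nvidia-xconfig --busid="$busid" --allow-empty-initial-configuration
```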
Reboot the system. Install NoMachine on your Windows machine, and also install NoMachine on the Linux server that has the P4 installed. Then log in to the desktop using NoMachine.