dlopen "libnvcuvid.so" failed!

Hi, I am using a 2U server with dual Xeon Gold 6142 CPUs, 1x NVIDIA V100, and 1x NVIDIA P4. The server has an on-board 2D graphics output chip, the AST2400 (made by ASPEED). The AST2400 is an SoC that combines IPMI management and VGA 2D graphics output in one chip.

The operating system is Ubuntu 16.04.

I have closely followed the DeepStream install procedures, with all requisite software installed. When I try to run deepstream-app (from /DeepStream_Release/samples/configs) with "deepstream-app -c deepstream-app/source30_720p_dec_infer-resnet_tiled_display_int8.txt", I get the following error, resulting in a segmentation fault (core dumped).

adlink@adlink:~/DeepStream_Release/samples/configs$ deepstream-app -c deepstream-app/source30_720p_dec_infer-resnet_tiled_display_int8.txt

When the above command is executed, the output below is generated, and a blank window (black, with no tiled display) momentarily pops up and then closes.

Using Cached GIE model /home/adlink/DeepStream_Release/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_int8.cache crypto flags(0)

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

** INFO: <bus_callback:75>: Pipeline ready

dlopen "libnvcuvid.so" failed!
Segmentation fault (core dumped)

I"m running a second server with XEON and using single Nvidia GTX 1080Ti for GPU acceleration/inferencing AND video output and have NO Issues.

Is the problem software related, or hardware related to my 2U server having the AST2400 video output?

Below is the location of my libnvcuvid.so, which is different from the location of the symbolic link given in the DeepStream for Tesla User's Guide.

/////////////// From DeepStream 2.0 for Tesla User’s Guide ///////////////////////

sudo ln -s /usr/lib/nvidia-<version>/libnvcuvid.so /usr/lib/x86_64-linux-gnu/libnvcuvid.so

//////////////// End ///////////////////////////////////////////////

adlink@adlink:~$ sudo find / -name libnvcuvid.so
[sudo] password for adlink:

/var/lib/dkms/nvidia/396.26/build/libnvcuvid.so
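
A quick way to check whether the dynamic linker can resolve the library at all (the DKMS build directory above is not on the linker's default search path):

ldconfig -p | grep nvcuvid   # empty output means dlopen("libnvcuvid.so") has nothing to load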

NOTE: NVIDIA driver 396.26 was installed (runfile version) with the following command, because previous install attempts resulted in an indefinite login loop once the Ubuntu desktop was later installed:

sudo sh NVIDIA-Linux-x86_64-396.26.run --no-opengl-files --dkms

Hi Mamdouh,

You need an NVIDIA graphics card for display.
Otherwise, nvEGLsink in DeepStream can't run.
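
If you only need the pipeline to run headless on this server, one possible workaround is to switch the sink group in the config file away from EglSink, e.g. to write to a file instead. This is a sketch based on the sink options in the deepstream-app sample configs; please verify the keys against your DeepStream version:

[sink0]
enable=1
# Per the sample config comments: 1=FakeSink 2=EglSink 3=File
type=3
container=1        # 1=mp4
codec=1            # 1=h264
bitrate=2000000
output-file=out.mp4
sync=0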

Thanks
wayne zhu

You can do it like this and try:
sudo ln -s /var/lib/dkms/nvidia/396.26/build/libnvcuvid.so /usr/lib/x86_64-linux-gnu/libnvcuvid.so
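
After creating the link, refresh the linker cache and confirm the symlink resolves (a minimal check, using the paths from this thread):

sudo ldconfig                                   # rebuild the dynamic linker cache
ls -l /usr/lib/x86_64-linux-gnu/libnvcuvid.so   # confirm the link and its target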
But besides this, you will need an NVIDIA graphics card for display, as Wayne suggested; that's why your second server does well.

Hello Amycao/Waynezhu, thank you very much for your replies.

Another question, please. If I use the same 2U server (w/o an NVIDIA display GPU) as a video analytics platform at the cellular network edge (IoT video analytics), can I use RDMA/GPUDirect (or other means, e.g. VDI) to remotely display the video output on a remote server (located in a datacenter) that can render the video analytics tiled display using its NVIDIA graphics device?

In other words, I'd like to do the video analytics at the edge of the network, but have the video output sent over an IP network for display at a remote site.

Is this possible within the DeepStream SDK? Does DeepStream have the capability to render its display output to a remote machine that has NVIDIA graphics capability?

Hi Mamdouh,
Is my understanding correct?
You want to do inference on one server, then output the results to a remote NVIDIA server?

If yes, we can't do that now, and I don't see a clear plan for supporting a distributed system.
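
That said, if the goal is only to view the rendered output remotely, encoding it and streaming over RTP with plain GStreamer can serve as a workaround outside DeepStream itself. A minimal sketch (videotestsrc stands in for the real pipeline output; <remote-ip> and the port are placeholders):

# On the edge server: software-encode H.264 and send over RTP/UDP
gst-launch-1.0 videotestsrc ! x264enc tune=zerolatency ! rtph264pay ! udpsink host=<remote-ip> port=5000

# On the datacenter server with NVIDIA graphics: receive, decode, display
gst-launch-1.0 udpsrc port=5000 caps="application/x-rtp,media=video,clock-rate=90000,encoding-name=H264,payload=96" ! rtph264depay ! avdec_h264 ! autovideosink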

Thanks
wayne zhu