VPI linker error inside l4t-jetpack Docker container

When trying to build the VPI sample applications in the l4t-jetpack:r35.3.1 container, I receive the following linker errors:

root@orin-agx:/tmp/samples/01-convolve_2d/build# make -j
[ 50%] Linking CXX executable vpi_sample_01_convolve_2d
/usr/bin/ld: warning: libnvpvaintf.so, needed by /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0, not found (try using -rpath or -rpath-link)
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramSetDMADescriptors'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaExecutableGetSymbolMemHandleTable'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaQueueCreateCUDAWrapper'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaQueueCreate'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaContextDestroy'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaQueueDestroy'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaSyncObjCreate'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaSyncObjDestroy'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramSetParameterValue'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaFenceFillFromNvSciSyncFence'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramSetHWSequencerBin'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaFillNvSciSyncAttrList'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramStatusQuery'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaSetVPUPrintBufferSize'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaFenceSynchronizeWithConfig'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaContextCreateCUDAWrapper'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemDestroy'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaReadVPUPrintBuffer'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaContextCreate'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramCreate'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaQueueSubmitV2'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaSyncObjImportFromNvSciSync'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaExecutableDestroy'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemFillNvSciBufAttrs'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaGetCharacteristics'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramDestroy'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaExecutableCreate'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaFenceGetTimeStamp'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramSetDMAChannels'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemImportFromNvSciBuf'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaNvSciSyncFenceFillFromPvaFence'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramSetPointerValue'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaProgramInitDMAParams'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemAlloc'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemGetHostPtr'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemImportFromCudaDevicePtr'
/usr/bin/ld: /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0: undefined reference to `PvaMemImportFromHostPtr'
collect2: error: ld returned 1 exit status
make[2]: *** [CMakeFiles/vpi_sample_01_convolve_2d.dir/build.make:88: vpi_sample_01_convolve_2d] Error 1
make[1]: *** [CMakeFiles/Makefile2:76: CMakeFiles/vpi_sample_01_convolve_2d.dir/all] Error 2
make: *** [Makefile:84: all] Error 2
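
For reference, the build itself follows the standard sample flow: I copied the samples out of /opt/nvidia/vpi2/samples (where the VPI package installs them), and the configure and compile steps succeed; only the final link fails:

root@orin-agx:/# cp -r /opt/nvidia/vpi2/samples /tmp/samples
root@orin-agx:/# mkdir /tmp/samples/01-convolve_2d/build
root@orin-agx:/# cd /tmp/samples/01-convolve_2d/build
root@orin-agx:/tmp/samples/01-convolve_2d/build# cmake ..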

I have installed, or verified the installation of, the cupva-2.0-l4t, nvidia-cupva, nvidia-vpi-dev, and cuda packages.

The CUDA, cuDNN, and TensorRT samples all work as expected.

I see in the image instructions the note: “VPI currently does not support PVA backend within containers.” I do not want or need the PVA backend, but at the moment I cannot build the samples at all, which also blocks me from using the CUDA (or any other) backend.
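
For anyone checking the same thing, these commands inside the container show that the library is nowhere on the filesystem, while readelf confirms that libcupva_host.so.2.0 really does list it as a dependency:

root@orin-agx:/# find / -name 'libnvpvaintf.so*' 2>/dev/null
root@orin-agx:/# readelf -d /opt/nvidia/vpi2/lib/aarch64-linux-gnu/priv/libcupva_host.so.2.0 | grep NEEDED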

Hi,

Could you try launching the container with --security-opt=systempaths=unconfined?
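
For example (please adjust the image tag to the one you are using):

$ sudo docker run -it --rm --runtime nvidia --security-opt=systempaths=unconfined nvcr.io/nvidia/l4t-jetpack:r35.3.1
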
Thanks.

Hello,

That did not fix the issue. I pulled a fresh image and ran it with that option added, but I still get the same linker errors.

I have been able to work around the issue by installing libnvvpi2 on the host AGX Orin. That package installs libnvpvaintf.so to /usr/lib/libnvpvaintf.so on the host, after which either running with --security-opt=systempaths=unconfined as suggested or copying the .so directly into the container fixes the issue and allows the VPI samples to build and run.
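
Roughly, these are the steps that work for me (the copy-in variant; <container-id> is a placeholder for the running container):

On the host (with the JetPack apt sources configured):
$ sudo apt-get update
$ sudo apt-get install libnvvpi2

Then copy the library into the container and refresh the linker cache:
$ sudo docker cp /usr/lib/libnvpvaintf.so <container-id>:/usr/lib/libnvpvaintf.so
$ sudo docker exec <container-id> ldconfig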

However, can the container image be modified to fix this? I don’t want the Docker execution environment to depend on packages installed on the host device. Ideally, libnvpvaintf.so would be present in, or installable within, the l4t-jetpack image itself.
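
As a stopgap I can bake the library into a derived image myself, but that still means extracting libnvpvaintf.so from a flashed host first, which is exactly the host dependency I’d like to avoid. A minimal sketch, assuming the .so has already been copied into the Docker build context:

FROM nvcr.io/nvidia/l4t-jetpack:r35.3.1
COPY libnvpvaintf.so /usr/lib/libnvpvaintf.so
RUN ldconfig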

Thanks.

Hi,

Thank you for the details. We will check this issue with our internal team.

Thanks.

Hi,

We tested the sample in the l4t-jetpack:r35.4.1 container and it works correctly without any manual installation.

Thanks.

Are you able to do so without the --security-opt=systempaths=unconfined setting?

Hi,

Yes, we ran it with:

$ sudo docker run -it --rm --net=host --runtime nvidia -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix nvcr.io/nvidia/l4t-jetpack:r35.4.1

Thanks.

OK, thank you. When you run it, where does libnvpvaintf.so show up in the container’s filesystem? I don’t see it anywhere without manually copying it in, so that is likely the source of my issue.

Hi,

We just compiled the 01-convolve_2d sample and it works correctly. There are no errors when compiling or executing.

Thanks.