I’m trying to use DeepStream 6.3 with NVIDIA Docker on Jetson.
I was able to set up DeepStream 6.2 on L4T 35.3.1, but installing DeepStream 6.3 on L4T 35.4.1 (nvcr.io/nvidia/l4t-base:35.4.1) fails.
The error is as follows:
dpkg: dependency problems prevent configuration of deepstream-6.3:
deepstream-6.3 depends on libnvvpi2 (>= 2.0.2); however:
Package libnvvpi2 is not configured yet.
dpkg: error processing package deepstream-6.3 (--configure):
I attempted to install libnvvpi2 on its own, but that also failed:
sudo apt install libnvvpi2
Setting up libnvvpi2 (2.3.9) ...
pva_allow and/or /etc/pva/allow.d missing! Falling back to force-overwrite of system allowlist
cp: cannot create regular file '/lib/firmware/pva_auth_allowlist': No such file or directory
dpkg: error processing package libnvvpi2 (--configure):
installed libnvvpi2 package post-installation script subprocess returned error exit status 1
In the host environment of L4T 35.4.1, the installation succeeds.
I use this container as a CI environment, and I don’t want to run the CI runner on the host.
Is there a good way to do this?
If you want to use the DeepStream 6.3 Docker image, there are two options.
1. Use the image provided by NGC.
Run the following command:
docker pull nvcr.io/nvidia/deepstream:6.3-triton-multiarch
There are other image tags to choose from as well.
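As a sketch of how the pulled image is typically started on a Jetson device (assuming the NVIDIA container runtime is installed on the host; the exact flags depend on your setup):

```shell
# Pull the multiarch Triton image and start an interactive shell with GPU access.
# --runtime nvidia requires nvidia-container-runtime to be configured on the host.
docker pull nvcr.io/nvidia/deepstream:6.3-triton-multiarch
docker run -it --rm --runtime nvidia --network host \
    nvcr.io/nvidia/deepstream:6.3-triton-multiarch /bin/bash
```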
2. Build the image yourself. The build script is open source; here is the GitHub link.
Due to hardware limitations, some libraries are shared between the host and the Docker container,
so JetPack must be installed normally on the host.
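A minimal sketch of how to confirm this prerequisite on the host before building or running the container (assuming a standard JetPack install; the package name nvidia-jetpack is the usual metapackage):

```shell
# Run on the Jetson host, not inside a container.
# Check that the JetPack metapackage (and the shared L4T libraries) is installed:
dpkg-query -W -f='${Package} ${Version}\n' nvidia-jetpack
# Check that the Docker daemon exposes the NVIDIA runtime the container needs:
docker info 2>/dev/null | grep -i nvidia
```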
Thanks for the information.
I tried building it myself, and the DeepStream samples are working!
Just to confirm: the README mentioned two VPI-related files, vpi-dev-2.3.9-aarch64-l4t.deb and vpi-lib-2.3.9-aarch64-l4t.deb, but I couldn’t find files with those names.
Instead, I downloaded and built with what are probably the equivalent packages from Jetson’s apt repository.
Is this procedure acceptable?
The apt repository has vpi2* packages:
$ apt search vpi | grep 2.3.9
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
libnvvpi2/stable,stable,now 2.3.9 arm64 [installed]
python3.8-vpi2/stable,stable 2.3.9 arm64
python3.9-vpi2/stable,stable 2.3.9 arm64
vpi2-demos/stable 2.3.9 arm64
vpi2-dev/stable 2.3.9 arm64
vpi2-samples/stable 2.3.9 arm64
Download the packages and move them to the jetson directory:
sudo apt install --reinstall --download-only libnvvpi2 vpi2-dev
cp /var/cache/apt/archives/vpi2-dev_2.3.9_arm64.deb jetson/
cp /var/cache/apt/archives/libnvvpi2_2.3.9_arm64.deb jetson/
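Before renaming, it may be worth checking that the downloaded packages really are the VPI 2.3.9 lib/dev pair; one way is to inspect their control metadata (paths assume the copy step above):

```shell
# Print each package's control info (name, version, dependencies)
# to confirm they match what the README's vpi-lib/vpi-dev files provide.
dpkg-deb -I jetson/libnvvpi2_2.3.9_arm64.deb
dpkg-deb -I jetson/vpi2-dev_2.3.9_arm64.deb
```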
Rename them in the Dockerfile:
- ADD vpi-dev-2.3.9-aarch64-l4t.deb /root
- ADD vpi-lib-2.3.9-aarch64-l4t.deb /root
+ ADD vpi2-dev_2.3.9_arm64.deb /root
+ ADD libnvvpi2_2.3.9_arm64.deb /root
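If the build script installs these with dpkg, the runtime library should be installed before the -dev package, since dpkg -i does not resolve dependencies. A hedged sketch, assuming the stock Dockerfile layout with the packages added under /root:

```shell
# Illustrative install order inside the container build:
# install libnvvpi2 first so vpi2-dev's dependency on it is already satisfied.
dpkg -i /root/libnvvpi2_2.3.9_arm64.deb
dpkg -i /root/vpi2-dev_2.3.9_arm64.deb
```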
The sample is working:
root@38d3e2b210ba:/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app# gst-launch-1.0 -e filesrc location = /opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! m.sink_0 nvstreammux name=m width=1280 height=720 batch_size=1 ! nvinfer config-file-path=config_infer_primary.txt ! nvvideoconvert ! 'video/x-raw(memory:NVMM), format=RGBA' ! fakesink
Setting pipeline to PAUSED ...
Opening in BLOCKING MODE
WARNING: Deserialize engine failed because file path: /opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine open error
0:00:03.954638289 73 0xaaaad013e8a0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1976> [UID = 1]: deserialize engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed
0:00:04.156302917 73 0xaaaad013e8a0 WARN nvinfer gstnvinfer.cpp:679:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2081> [UID = 1]: deserialize backend context from engine from file :/opt/nvidia/deepstream/deepstream-6.3/samples/configs/deepstream-app/../../models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine failed, try rebuild
0:00:04.156440422 73 0xaaaad013e8a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2002> [UID = 1]: Trying to create engine from model files
WARNING: [TRT]: The implicit batch dimension mode has been deprecated. Please create the network with NetworkDefinitionCreationFlag::kEXPLICIT_BATCH flag whenever possible.
0:04:35.567795022 73 0xaaaad013e8a0 INFO nvinfer gstnvinfer.cpp:682:gst_nvinfer_logger:<nvinfer0> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::buildModel() <nvdsinfer_context_impl.cpp:2034> [UID = 1]: serialize cuda engine to file: /opt/nvidia/deepstream/deepstream-6.3/samples/models/Primary_Detector/resnet10.caffemodel_b30_gpu0_int8.engine successfully
INFO: [Implicit Engine Info]: layers num: 3
0 INPUT kFLOAT input_1 3x368x640
1 OUTPUT kFLOAT conv2d_bbox 16x23x40
2 OUTPUT kFLOAT conv2d_cov/Sigmoid 4x23x40
0:04:36.078418117 73 0xaaaad013e8a0 INFO nvinfer gstnvinfer_impl.cpp:328:notifyLoadModelStatus:<nvinfer0> [UID 1]: Load new model:config_infer_primary.txt sucessfully
Pipeline is PREROLLING ...
NvMMLiteOpen : Block : BlockType = 261
NvMMLiteBlockCreate : Block : BlockType = 261
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstSystemClock
nvstreammux: Successfully handled EOS for source_id=0
Got EOS from element "pipeline0".
Execution ended after 0:00:03.253293272
Setting pipeline to NULL ...
Freeing pipeline ...
I’m not sure they are exactly the same.
I usually extract the JetPack packages with SDK Manager;
sdkmanager is used to install JetPack.
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.