I have run deepstream_parallel_inference_app successfully, but that app is meant to run on DeepStream 6.1.1.
I tried to run the code on DeepStream 5.1: I copied many source files into my current environment and rebuilt the corresponding files, but got the error below:
I have checked the files mentioned in the error and found that the function is defined in that file. I am confused about what causes such an error:
Hoping for a response.
No. deepstream_parallel_inference_app is not supported on DeepStream 5.1.
Sir, I want to use this code on DeepStream 5.1, because upgrading my deployment would be a huge amount of work.
I know it is hard to deploy, but I just want to try. While deploying, I upgraded the gcc and g++ versions, reinstalled the correct version of yaml-cpp, and copied many source files to the corresponding paths, including everything under /opt/nvidia/deepstream/deepstream-6.1/sources/includes, /opt/nvidia/deepstream/deepstream-6.1/sources/libs, /opt/nvidia/deepstream/deepstream-6.1/sources/gst_plugins and /opt/nvidia/deepstream/deepstream-6.1/sources/apps/apps-common, so almost all files have been updated.
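The copy steps described above could be sketched roughly as below. This is a dry run (it only prints what would be copied); the destination variable DS51_ROOT and the exact directory layout are assumptions based on the paths quoted in this post.

```shell
# Dry-run sketch of copying DeepStream 6.1 source dirs into a 5.1 tree.
# DS51_ROOT is a hypothetical destination; adjust to your install.
DS61_SRC=/opt/nvidia/deepstream/deepstream-6.1/sources
DS51_ROOT=/opt/nvidia/deepstream/deepstream-5.1

for d in includes libs gst_plugins apps/apps-common; do
    echo "would copy $DS61_SRC/$d -> $DS51_ROOT/sources/$d"
    # cp -r "$DS61_SRC/$d" "$DS51_ROOT/sources/$d"   # uncomment to apply
done
```

Note this only illustrates the file copying the post describes; as the replies below make clear, copying sources alone cannot supply the newer DeepStream core features the app depends on.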
Unfortunately, I still got an error when building deepstream_parallel_inference_app. I checked the source code and the error seems unreasonable, because I can find the function defined in the header file. I am not clear why it causes such an error.
We understand. But DeepStream 5.1 does not support the parallel app; it needs some new features inside the DeepStream core. Sorry, it is not supported.
Thanks. Is it possible to run the Docker container on JetPack 4.2.1, which suits DeepStream 5.1? I have tried to run the deepstream-6.1.1-triton image, but got an error saying the required CUDA version must be at least 11.4. But I think CUDA 11.4 is installed in the image; only the host machine has CUDA 10.2 installed.
No. CUDA is not installed inside the Docker image; it uses the CUDA on the host machine.
So, is it possible to run the container after installing CUDA 11.4 inside it, to avoid this error?
I run the Docker container with the parameter --runtime=nvidia. If I want to use the CUDA inside the container, should I remove this parameter?
Theoretically it is possible.
No. This option is mandatory; it is not only for CUDA.
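For reference, launching the DeepStream l4t container on Jetson keeps the --runtime nvidia option discussed above. A minimal sketch of such an invocation follows; the image tag and the X11 display forwarding are assumptions, so check the NGC catalog for the exact tag matching your JetPack release:

```shell
# Hypothetical launch of the DeepStream l4t Triton container on a Jetson host.
# --runtime nvidia is required so the NVIDIA container runtime can map in
# the host's driver stack; the image tag is an assumption.
docker run -it --rm --net=host --runtime nvidia \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/deepstream-l4t:6.1.1-triton
```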
Does it mean that if CUDA is installed inside the Docker container it is detected first and the CUDA on the host machine is discarded, and that if no CUDA is detected in the container, the host machine's CUDA is used for the container?
It is clear that I can find CUDA installed inside the Docker container:
I have tried adding the CUDA path to the environment variables in the Docker image, and ran the container on a Jetson Xavier NX with JetPack 4.5.1, but got the following error:
It seems the driver version is too low? Could you please tell me how to upgrade the driver version?
Our environment is shown below. (I can't see the driver version?)
Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
Here is my current environment:
I have installed CUDA 11.7 inside the Docker image (deepstream-6.1.1), but when I run this image I just get the error mentioned before.
And this image runs successfully in a JetPack 5.0.2 environment, which is the official recommendation for the Jetson Xavier NX. I also ran a similar container based on the DeepStream 6.1.1 x86 image, and that worked as well.
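If a CUDA toolkit really is present inside the image, exposing it on the container's search paths might look like the sketch below. The install prefix /usr/local/cuda-11.7 is an assumption based on the version mentioned above; note this only exposes the toolkit, not the GPU driver, which on Jetson always comes from the host via the container runtime.

```shell
# Sketch: point the container's environment at a CUDA toolkit installed
# inside the image. /usr/local/cuda-11.7 is an assumed install prefix.
export CUDA_HOME=/usr/local/cuda-11.7
export PATH="$CUDA_HOME/bin:$PATH"
export LD_LIBRARY_PATH="$CUDA_HOME/lib64:${LD_LIBRARY_PATH:-}"
```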
No. You would have to prepare the Docker image from bare metal; everything would have to be installed manually. The CUDA dependency is not the only dependency of the DeepStream Docker containers.
Do you mean I need to build my image from the Dockerfile mentioned in the official documentation?
No, that may not help you. It is not recommended to run a Docker image with a full installation on Jetson, as it would be too large; that is why all the base l4t Docker images use drivers and libraries mapped in from the host. You would need to build a whole new image from scratch. The best way is to upgrade your Jetson device to the latest JetPack version.
We have deployed some services on the Jetson device, and I just want to upgrade my Docker image. Upgrading the whole JetPack would be too much work... That is why I am trying to test the new Docker image.
It is not recommended. Generating a whole new Docker image is also a big effort.
Yes, I want to build a new Docker image as you suggest. In my understanding, I can build the image following the information in the official document, of which I showed a snapshot before. Am I right?
No. The instructions in the document are based on the prerequisite that the Docker image shares the same drivers and libraries with the host, so they are of no use to you.
Is there any documentation I can learn from on how to build the image from a base?
What kind of base image do you mean? Something such as nvcr.io/nvidia/l4t-base:r35.1.0, or another one?
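As an illustration of the "build from a base image" direction asked about above, a hypothetical Dockerfile starting from an l4t base might begin like this. The tag and package names are assumptions, and, as the replies above stress, CUDA, cuDNN, TensorRT and DeepStream itself would all have to be installed by hand since nothing is mapped in from the host:

```dockerfile
# Hypothetical sketch: start from an L4T base image and install dependencies
# manually. Tags and package names are assumptions; match them to your JetPack.
FROM nvcr.io/nvidia/l4t-base:r35.1.0

# GStreamer runtime pieces DeepStream depends on (non-exhaustive list).
RUN apt-get update && apt-get install -y --no-install-recommends \
        libgstreamer1.0-0 gstreamer1.0-plugins-base gstreamer1.0-plugins-good \
        libgstrtspserver-1.0-0 libjansson4 && \
    rm -rf /var/lib/apt/lists/*

# CUDA, cuDNN, TensorRT and the DeepStream SDK would also need to be
# installed here manually before any DeepStream app could run.
```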