Deployment of the DeepStream app

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) Jetson Xavier NX
• DeepStream Version DeepStream 6.0
• JetPack Version (valid for Jetson only) 4.6
• TensorRT Version 8.0.1-1
• Issue Type( questions, new requirements, bugs) questions
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

Hi, I am currently using the DeepStream SDK to develop a smart CCTV program by modifying the code in sample_apps/deepstream_app.
Currently, I am trying to deploy the whole program from my development board to several test boards (all Jetson Xavier NX platforms).
What I tried was simply compressing the entire /opt/nvidia directory with the tar command (i.e. “tar -cvzf nvidia.tar.gz nvidia”), copying it to all the test boards, and extracting it there.
However, after extracting it on a test board, I found that I could neither rebuild the DeepStream program nor run it.
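For reference, the rough sequence I used was something like this (the test-board hostname and user name below are just placeholders):

cd /opt
tar -cvzf nvidia.tar.gz nvidia                                      # archive the whole /opt/nvidia tree
scp nvidia.tar.gz user@test-board:/tmp/                             # copy the archive to a test board
ssh user@test-board "cd /opt && sudo tar -xvzf /tmp/nvidia.tar.gz"  # extract it in place on the test board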

Am I doing something wrong? Or is there a simpler way to handle the deployment step for a DeepStream-based application?

Are you missing any library? Can you share the error log?

./deepstream-app: error while loading shared libraries: libnvdsgst_meta.so: cannot open shared object file: No such file or directory

But I can find “libnvdsgst_meta.so” in the lib directory…
As I mentioned above, I literally compressed everything in the /opt/nvidia directory, so all the library files and symbolic links should also be included in the archive…
Isn’t that right?
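One way I can double-check this on the test board (the default DeepStream 6.0 paths below are my assumption):

ldd ./deepstream-app | grep libnvdsgst_meta        # shows whether the dynamic loader can resolve the library
find /opt/nvidia -name "libnvdsgst_meta.so*"       # confirms the file actually exists after extraction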

It seems you need to add the library path to LD_LIBRARY_PATH so those libraries can be found.
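As a rough sketch, assuming the standard DeepStream 6.0 install location under /opt/nvidia/deepstream (the config file name is a placeholder):

# Option 1: export the library path for the current shell session
export LD_LIBRARY_PATH=/opt/nvidia/deepstream/deepstream-6.0/lib:$LD_LIBRARY_PATH
./deepstream-app -c <your_config_file>

# Option 2: register the path with the dynamic linker permanently
echo "/opt/nvidia/deepstream/deepstream-6.0/lib" | sudo tee /etc/ld.so.conf.d/deepstream.conf
sudo ldconfig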
