Deployment of DeepStream apps

Hi,
I want to compile and deploy a DeepStream app on both Jetson and dGPU platforms. Since the DeepStream deployment containers don't come with build resources, what would be the ideal procedure for compiling and containerizing the app?

• Hardware Platform (Jetson / GPU): Orin Nano/Tesla T4
• DeepStream Version: 6.3
• JetPack Version (valid for Jetson only): 5.1.1-b56
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): Question

Please refer to Docker Containers — DeepStream 6.3 Release documentation

Especially the section about building custom docker images in Docker Containers — DeepStream 6.3 Release documentation
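
For example, you can mount your source tree into a DeepStream 6.3 container and compile it there before containerizing. The sketch below is for illustration only: the image tags, the my-ds-app directory and the make invocation are assumptions, not taken from the documentation, so check NGC for the exact 6.3 tags and use your own build command.

```
# Rough sketch: build inside a DeepStream 6.3 container (names/tags are assumptions)

# dGPU (e.g. Tesla T4): the devel image ships compilers and SDK headers
docker run --gpus all -it --rm \
  -v "$(pwd)/my-ds-app:/workspace/my-ds-app" \
  -w /workspace/my-ds-app \
  nvcr.io/nvidia/deepstream:6.3-gc-triton-devel \
  make CUDA_VER=12.1          # or your project's own build command

# Jetson (Orin Nano, JetPack 5.1.x): build on the device itself
docker run --runtime nvidia -it --rm \
  -v "$(pwd)/my-ds-app:/workspace/my-ds-app" \
  -w /workspace/my-ds-app \
  nvcr.io/nvidia/deepstream:6.3-triton-multiarch \
  make CUDA_VER=11.4          # add build tools in a derived image if they are missing
```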

Thank you for the link. To clarify my question: the above link describes how to build custom docker images, but I still can't figure out how to add my own app there.

You can develop the app in the DeepStream container and then copy your app and configs into the container you generate. The ideal procedure depends on your own app: add whatever files and libraries it needs. There is no special limitation from the DeepStream point of view. We even open-source the Dockerfile of the DeepStream container, so everyone can customize their own container based on it.
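
As a minimal sketch of that flow, here is a multi-stage Dockerfile. The image tags, paths, CUDA_VER values and the my-ds-app name are assumptions for illustration; replace them with your real app and the official 6.3 tags from NGC.

```
# Stage 1: build the app against the DeepStream 6.3 SDK (names/tags are assumptions)
FROM nvcr.io/nvidia/deepstream:6.3-gc-triton-devel AS build
WORKDIR /workspace/my-ds-app
COPY . .
RUN make CUDA_VER=12.1          # dGPU build; JetPack 5.1.x would typically use 11.4

# Stage 2: ship only what the app needs at run time
FROM nvcr.io/nvidia/deepstream:6.3-triton-multiarch
WORKDIR /opt/my-ds-app
COPY --from=build /workspace/my-ds-app/my-ds-app ./
COPY --from=build /workspace/my-ds-app/configs ./configs
# models and custom parser/plugin .so files would be copied the same way
ENTRYPOINT ["./my-ds-app", "configs/app_config.txt"]
```

The idea is that the devel image only exists at build time, while the image you deploy stays close to the stock deployment container with just your binary, configs, models and custom plugin libraries added on top. For Jetson, build the image on the device (or cross-build) so the binary targets aarch64, and use a Jetson-capable image in the build stage if the devel tag is x86-only.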

