NVIDIA DeepStream - Docker Pipeline Support

Hello,

I have an AI pipeline which consists of:

  1. ML Model (TFLearn)
  2. Custom Source Code

I have containerized the business logic and the ML model with Docker, and it works as a pipeline on my laptop. I would like to know whether this containerized pipeline can be deployed on DeepStream and work as-is. If not, please guide me on how I can use my existing Docker container with DeepStream.

I have seen that an ML model alone can be copied to DeepStream and it works. But in my case I have an ML model plus associated custom code that performs some business logic alongside it.

Kindly help.

Amit

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and the function description.)

This really depends on the source, the ML model, and the other logic in your application; it's hard to say whether it can work as-is without information about them.
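That said, DeepStream itself ships as a container on NGC, so rather than deploying your existing image unchanged, a common pattern is to rebuild it on top of the DeepStream base image so that the DeepStream libraries and GStreamer plugins are available to your code. A minimal sketch, assuming the nvcr.io/nvidia/deepstream base image (the tag is an assumption; pick the one matching your platform and version on NGC) and hypothetical paths for your model and code:

```
# Sketch only: layer the existing business logic and model on top of
# the DeepStream base container instead of a plain Python/TF image.
# The 6.2-devel tag is an assumption -- choose the tag that matches
# your DeepStream/JetPack version on NGC.
FROM nvcr.io/nvidia/deepstream:6.2-devel

# Hypothetical paths: copy your TFLearn model and custom code in.
COPY model/ /opt/app/model/
COPY src/   /opt/app/src/

# If the business logic is Python, install its dependencies.
RUN pip3 install -r /opt/app/src/requirements.txt

WORKDIR /opt/app
CMD ["python3", "src/pipeline.py"]
```

Your business logic would then call the DeepStream/GStreamer APIs from inside that image, rather than running as a separate, unrelated pipeline.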

I think you could start with the DeepStream introduction in DeepStream SDK — Accelerating Real Time AI Based Video and Image Analytics - YouTube to understand what DS is and what it can do. It may help you decide whether it can do what you want.
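To make that concrete: a DeepStream application is a GStreamer pipeline built from NVIDIA's plugins, and custom business logic usually attaches to that pipeline (for example as a pad probe reading the inference metadata) rather than running as a separate container. Below is a minimal sketch of such a pipeline on the command line; sample.h264 and config_infer.txt are placeholders for your own input file and nvinfer configuration. Also note that the nvinfer element runs models through TensorRT, so a TFLearn/TensorFlow model would typically need conversion (e.g. to ONNX) first, or be served through nvinferserver/Triton instead.

```
# Sketch: decode an H.264 file, batch it, run inference, and render the
# results with on-screen display. File names are placeholders.
gst-launch-1.0 filesrc location=sample.h264 ! h264parse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```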

Hi,

Thank you for your reply. I went through that video and it's very informative. Can you please share an end-to-end demonstration of the same, or any GitHub source? I can follow it and try to implement along similar lines.