Hello,
I have an AI pipeline which comprises:
- ML model (TFLearn)
- Custom source code
I have containerized the business-logic code and the ML model with Docker, and it works as a pipeline on my laptop. I would like to know whether this containerized pipeline can be deployed on DeepStream and work as is. If not, please guide me on how I can use my existing Docker container with DeepStream.
I have seen that an ML model on its own can be copied to DeepStream and it works. But in my case I have the ML model plus associated custom code that performs business logic alongside it.
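For context, my current container is built roughly like this (a minimal sketch; the base image tag, paths, and entry-point script are illustrative, not my exact files):

```
# Illustrative sketch of the existing container (paths and versions are examples).
FROM python:3.6-slim

WORKDIR /app

# TFLearn runs on top of TensorFlow 1.x.
RUN pip install tensorflow==1.15 tflearn

# Trained TFLearn model checkpoint plus the custom business-logic code.
COPY model/ /app/model/
COPY src/ /app/src/

# Entry point that loads the model and runs the pipeline end to end.
CMD ["python", "src/pipeline.py"]
```

The question is essentially whether a container like this can run on top of DeepStream directly, or whether it needs to be rebuilt, e.g. on a DeepStream base image.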
Kindly help.
Amit
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details needed to reproduce it.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and a description of the function.)