Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU)
GPU first, then Jetson for other applications
• DeepStream Version
6.3
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
536.99
• Issue Type (questions, new requirements, bugs)
Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name — for which plugin or which sample application — and the function description.)
Hi,
I am working on a real-time object detection application. Initially, I tried using just the Triton Inference Server with a multi-GPU setup, but the throughput is insufficient. I'm using an SSD MobileNet v2 640 model without batching (I don't know how to get a model like that to perform well with batching enabled, but that's a different matter).
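For reference, here's roughly how I'm sending single-image (unbatched) requests to Triton today. This is just a minimal sketch; the model name, tensor names, and gRPC endpoint are placeholders for my actual setup:

```python
import numpy as np
import tritonclient.grpc as grpcclient

# Placeholder names -- my real model/tensor names differ slightly.
TRITON_URL = "localhost:8001"
MODEL_NAME = "ssd_mobilenet_v2_640"

client = grpcclient.InferenceServerClient(url=TRITON_URL)

def infer_single(tile_hwc_uint8: np.ndarray):
    """Send one 640x640 RGB tile as a batch-size-1 request."""
    batch = np.expand_dims(tile_hwc_uint8, axis=0)  # shape [1, 640, 640, 3]

    infer_input = grpcclient.InferInput("input_tensor", list(batch.shape), "UINT8")
    infer_input.set_data_from_numpy(batch)

    requested = [
        grpcclient.InferRequestedOutput("detection_boxes"),
        grpcclient.InferRequestedOutput("detection_scores"),
    ]

    result = client.infer(model_name=MODEL_NAME,
                          inputs=[infer_input],
                          outputs=requested)
    return result.as_numpy("detection_boxes"), result.as_numpy("detection_scores")
```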
I have a few questions and I hope that this is the right place to ask:
- Is it possible to run the Docker image on a Windows 10 computer? I've managed this with Triton, but I'm not sure whether it will work with the DeepStream image; it seems to be more complex than the Triton one.
- Do I need to maintain a separate Triton service, or does the DeepStream Docker image come with all the necessary components?
- Regarding the cameras, can I apply custom preprocessing filters to the captured images, or is the pipeline fixed?
- And the last one: I'm currently sending 640×640 tiles with some overlap to the inference server; can I achieve this with the DeepStream SDK? I don't want to resize the images due to the potential loss of quality, especially since the objects I need to detect are quite small. A sketch of my current tiling (and per-tile filtering) is below, for reference.
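This is roughly the tiling logic I use today outside of DeepStream, including where a custom per-tile filter would slot in. The tile size, overlap value, and the Gaussian-blur filter are just placeholders to illustrate what I mean:

```python
import numpy as np
import cv2  # only used here for an example per-tile filter

TILE = 640      # matches the model's 640x640 input
OVERLAP = 64    # overlap in pixels -- placeholder for my actual value

def preprocess(tile: np.ndarray) -> np.ndarray:
    """Example of a custom per-tile filter (here: a light denoise)."""
    return cv2.GaussianBlur(tile, (3, 3), 0)

def tiles_with_overlap(frame: np.ndarray):
    """Yield (x0, y0, tile) for overlapping 640x640 crops covering the frame."""
    h, w = frame.shape[:2]
    step = TILE - OVERLAP
    for y in range(0, max(h - OVERLAP, 1), step):
        for x in range(0, max(w - OVERLAP, 1), step):
            # Clamp so the last tile still fits inside the frame.
            y0 = min(y, max(h - TILE, 0))
            x0 = min(x, max(w - TILE, 0))
            tile = frame[y0:y0 + TILE, x0:x0 + TILE]
            yield x0, y0, preprocess(tile)

# Each tile currently goes to Triton as its own request; the detections are
# then shifted back by (x0, y0) and merged (NMS) in full-frame coordinates.
```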
Thank you in advance!