We mainly focus on video detection. We have been trying to run models faster by integrating them into DeepStream, and we successfully ran DINO, so we planned to run NanoOWL inside DeepStream as well. I also found that custom models can be run inside DeepStream (see "Using a Custom Model with DeepStream" in the DeepStream 6.4 documentation). Given the engine file "owl_image_encoder_patch32.engine", how can I run NanoOWL inside DeepStream? In the model config, what should the value of "output-blob-names" be?
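For context, here is a minimal sketch of the kind of nvinfer config I have been experimenting with. The property values, and especially output-blob-names, are placeholders I am unsure about rather than verified binding names from the engine:

```ini
# Hypothetical Gst-nvinfer config for the NanoOWL image encoder engine.
# The output-blob-names value is a guess; the actual binding names can be
# listed by inspecting the engine (e.g. with trtexec --loadEngine=...).
[property]
gpu-id=0
model-engine-file=owl_image_encoder_patch32.engine
batch-size=1
network-mode=2            # FP16
network-type=100          # "other" network type, since this is not a plain detector
output-tensor-meta=1      # attach raw output tensors for custom post-processing
# output-blob-names=image_embeds   # <-- placeholder, needs verification
```

My understanding is that, because the engine only covers the image encoder, the text-embedding side and the final matching would have to happen outside nvinfer, e.g. in a pad probe reading the attached tensor meta, but please correct me if there is a better pattern.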
We are using DeepStream as the framework for running our AI logic, for example a DINO model for person detection.
We want to use NanoOWL as a model running within DeepStream to get some extra insights about what happens in the video or image.
If DeepStream doesn't support multimodal models, do you have any suggestions for the best way to run a multimodal model like NanoOWL fast, combined with other models such as DINO, which is comparatively slower?
Is there an option to set the text input via custom-lib-path and implement it as a Custom Model? I am interested in using NanoOWL as a zero-shot model capable of detecting different types of objects on demand.