Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) :- dGPU
• DeepStream Version :- DS 7.1
Hello, is it possible to use VLMs with DeepStream? If not, is there any other NVIDIA service for this?
Can we add it as a model (nvinfer) with a fixed prompt?
It depends on how the VLM is defined. If the ONNX model's inputs and outputs can be produced and parsed by the DeepStream plugins or by your own customized algorithms, then yes, it can be done. However, different VLMs may need quite different inputs and outputs. Can you tell us which VLM you want to deploy with DeepStream? Have you generated its ONNX model? What are the ONNX model's inputs and outputs?
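As a starting point, here is a minimal sketch for listing an ONNX model's input and output tensor names and shapes with the `onnx` Python package, which is the information asked for above. The file name "vlm.onnx" is a placeholder for whatever model you have exported.

```python
# Minimal sketch: inspect an ONNX model's inputs and outputs.
# Assumes the `onnx` package is installed and "vlm.onnx" is your exported model.
import onnx

model = onnx.load("vlm.onnx")  # placeholder path


def describe(value_infos, kind):
    """Print each tensor's name and shape (symbolic dims shown by name)."""
    for vi in value_infos:
        dims = [
            d.dim_param if d.dim_param else d.dim_value
            for d in vi.type.tensor_type.shape.dim
        ]
        print(f"{kind}: {vi.name} shape={dims}")


describe(model.graph.input, "input")
describe(model.graph.output, "output")
```

Once the inputs and outputs are known, you can judge whether they map onto what the DeepStream plugins can supply, or whether custom pre/post-processing code would be needed (for example, a VLM typically takes a text/token input in addition to the image).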