Connect DeepStream to a Video Capture Box

Hi,
my question is: can one take the video stream from a Video Capture Box and feed it into a DeepStream Docker container?
In my case it is the AVerMedia CU511B (USB 3.0 Capture Box).

Such Video Capture Boxes (VCBs) are mainly used to grab the video signal from old VGA/DVI ports and let you record it to files, watch it, and so on.
I verified that VLC, for example, can connect to this device and show the resulting video stream. But there is no ip:port or URL that I could pass to DeepStream as an RTSP stream (I don't even know whether the stream is RTSP at all). Does anyone have experience with such setups, and how would one connect such a device's output to DeepStream?
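For what it's worth, my current understanding (from reading the DeepStream docs, not verified with this particular box) is that DeepStream can ingest a USB device directly if it enumerates as a V4L2 node (/dev/videoN) on Linux. A minimal check I would try, assuming the node is /dev/video0:

```
# Does the capture box enumerate as a V4L2 device at all?
v4l2-ctl --list-devices

# If it does, preview it with a plain GStreamer pipeline first
gst-launch-1.0 v4l2src device=/dev/video0 ! videoconvert ! autovideosink
```

If that works, the deepstream-app reference config should be able to read it as a V4L2 camera source (resolution and frame rate below are just placeholders):

```
[source0]
enable=1
# type=1 means a V4L2 camera source
type=1
camera-width=1280
camera-height=720
camera-fps-n=30
camera-fps-d=1
# /dev/video0 -> dev-node 0
camera-v4l2-dev-node=0
```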

The platform is dGPU (RTX 2080 Ti), Ubuntu (a regular desktop PC, basically)
DeepStream Version - 5.1
TensorRT Version - default in the official DS 5.1 Docker container
NVIDIA GPU Driver - 460.80
Issue Type - Question

None of the driver/DS versions are set in stone; I can install newer ones if required. I could even buy a Jetson if the dGPU-on-a-regular-PC setup doesn't work.
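One related thing I'm unsure about (an assumption on my side, not something I've confirmed): if the box does show up as /dev/video0 on the host, I suppose the device node also has to be passed into the DS 5.1 container, roughly like this:

```
# Pass the host's V4L2 node into the official DS 5.1 container
# (image tag is the one I pulled; adjust to your setup)
docker run --gpus all -it --rm \
  --device /dev/video0:/dev/video0 \
  -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY \
  nvcr.io/nvidia/deepstream:5.1-21.02-triton
```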

My question is really a bit general in that sense: can I connect DS to this type of device for real-time inference on video streams?

You are using an x86 PC, right?
What are the HW and SW interfaces used to receive the video data from the camera?

Hi, mchi, thanks for the reply.

Yes, I have a regular x86 PC that dual-boots Windows and Ubuntu.

There is no camera per se. We have an industrial X-ray scanner whose monitor has a VGA output, and we need to grab this output for our video analytics. The question was how to do it. We put a VGA splitter (FJGEAR FJ-3502) on that VGA output and a Video Capture Box (AVerMedia CU511B) to capture the signal. But that setup only works on a Windows PC (drivers and support for the AVerMedia device are Windows-only). And "works" is a bit of a stretch: yes, I can see the stream in VLC or in the default AVerMedia player, but I don't know how to feed it to DeepStream as an input.
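One thing I haven't ruled out yet (so treat this as a guess): some capture boxes are standard UVC-class devices and work on Linux without any vendor driver, even when the vendor only advertises Windows support. On the Ubuntu side I would first check:

```
# Is the box visible on the USB bus at all?
lsusb

# Did the generic uvcvideo driver bind to it and create a video node?
dmesg | grep -i uvc
ls /dev/video*
```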

Really, we need any feasible setup to pair that split VGA output (we don't have access to the internal PC of the X-ray machine itself) with our neural network for inference. I figured DeepStream might be the way to go here, but I haven't been able to work out how to make it work. Maybe I'm missing a simpler, more straightforward approach?

P.S. The only quasi-workable solution I have is to play the Video Capture Box input fullscreen in the default AVerMedia player and then use VLC to stream my screen as RTSP to a local port, from which DeepStream can grab it. But I get a 3-4 second delay, which makes it useless for our actual application.
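If the box really turns out to be Windows-only, a lower-latency variant of the same idea might be to grab the capture device directly with ffmpeg on the Windows machine (skipping the fullscreen player and the screen capture entirely) and push an MPEG-TS over UDP to the Ubuntu box. The device name, IP, and port here are placeholders:

```
rem On the Windows side: encode with low-latency x264 settings
ffmpeg -f dshow -i video="AVerMedia CU511B" ^
  -c:v libx264 -preset ultrafast -tune zerolatency ^
  -f mpegts udp://192.168.1.50:5000
```

```
# On the Ubuntu side, sanity-check the stream inside the DS container
gst-launch-1.0 udpsrc port=5000 caps="video/mpegts, systemstream=(boolean)true" \
  ! tsdemux ! h264parse ! nvv4l2decoder \
  ! nvvideoconvert ! nveglglessink
```

There would still be some encode/decode latency in this path, but I would expect it to be well under the 3-4 seconds I see with the screen-capture workaround.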