Running connected DeepStream pipelines on two machines

Please provide complete information as applicable to your setup.

• Tesla T4
• DeepStream 6.1
• TensorRT 8.2.5.1
• NVIDIA GPU Driver Version 510.47.03

We currently have multiple hardware platforms, including an NVIDIA Jetson AGX Orin Kit. I want to run two pipelines on two different machines, but those pipelines must be connected.

Meaning a camera is connected to one machine running a DeepStream pipeline; the result of that pipeline then goes to a second machine, where it is captured by a second DeepStream pipeline.

I know that I can stream from one pipeline to another machine and acquire the streamed data there with DeepStream. I was wondering whether there are any different ways to do this that wouldn't involve encoding and decoding the data?
I hope I explained the case clearly. Thank you.
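To make concrete what I mean by streaming between the two pipelines, here is a rough sketch, assuming H.264 over RTP/UDP; the camera source, IP address, port, and nvinfer config path are placeholders for our setup, not anything fixed by DeepStream:

```python
#!/usr/bin/env python3
# Rough sketch only: stream H.264/RTP between two DeepStream pipelines.
# Host 192.168.1.20, port 5000 and the nvinfer config path are placeholders.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Sender (Orin side): camera -> HW encode -> RTP payload -> UDP out
# (nvarguscamerasrc assumes a CSI camera; a USB camera would use v4l2src)
SENDER = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1 ! "
    "nvv4l2h264enc ! h264parse ! rtph264pay ! "
    "udpsink host=192.168.1.20 port=5000"
)

# Receiver (T4 side): UDP in -> depay -> HW decode -> nvstreammux -> nvinfer
RECEIVER = (
    "udpsrc port=5000 "
    "caps=\"application/x-rtp,media=video,encoding-name=H264,payload=96\" ! "
    "rtph264depay ! h264parse ! nvv4l2decoder ! mux.sink_0 "
    "nvstreammux name=mux batch-size=1 width=1920 height=1080 ! "
    "nvinfer config-file-path=config_infer_primary.txt ! "
    "nvvideoconvert ! nvdsosd ! fakesink"
)

# Run SENDER on the Orin and RECEIVER on the T4 machine.
pipeline = Gst.parse_launch(SENDER)
pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.ERROR | Gst.MessageType.EOS)
pipeline.set_state(Gst.State.NULL)
```

The same pipeline strings can also be tried on the command line with gst-launch-1.0 for a quick check.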

It depends on the physical connection between the two devices. What lies behind the "encoding and decoding of data" is that there is an Ethernet connection between the two pieces of hardware, right? If you want different ways, please first figure out how you would connect the devices differently at the physical level.

Yes, you are right. I knew all along that everything comes down to the type of connection. Currently it is an Ethernet connection, and we are looking to change it, but I am not sure where to start yet. I am not sure if I am supposed to be asking you, but it is worth a shot: is there any open-source research or material on this topic from your side using DeepStream? Or are there any references you can point me to? That is, which connection could I shift to instead of Ethernet?

Sorry if I am asking too much. I just wanted to get your opinion first before going further with my research.

It also depends on what you want to transfer between the devices. Video, text, messages, or other kinds of data/info?

Video. We want to send frames from one device to another. I know that without encoding it would require high bandwidth.

To be more specific: a camera is connected to the Orin Kit (it gets the original frames); then, after certain pre-processing with DeepStream, we want to send the pre-processed data to other hardware with an NVIDIA Tesla T4, where we would like to run the inference itself, again using DeepStream. I know you might wonder about and question the reasoning behind such a setup, but it is linked to our student research.
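To put a rough number on the bandwidth concern (my own back-of-the-envelope estimate, assuming 1080p at 30 fps in NV12, i.e. 1.5 bytes per pixel):

```python
# Uncompressed-frame bandwidth estimate (assumed: 1080p, 30 fps, NV12 = 1.5 B/px)
width, height, fps = 1920, 1080, 30
bytes_per_frame = width * height * 1.5
bits_per_second = bytes_per_frame * fps * 8
print(f"{bytes_per_frame * fps / 1e6:.0f} MB/s  ~=  {bits_per_second / 1e9:.2f} Gbit/s")
# Prints roughly: 93 MB/s ~= 0.75 Gbit/s, so one raw stream nearly fills 1 GbE
```

So even a single uncompressed 1080p30 stream comes close to saturating gigabit Ethernet, which is why we would normally encode.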

Why don’t you run inference locally on the Orin device? What kind of research do you want to do?

It is hard to explain fully. It is a fairly big project and I am only responsible for one part of it. We are able to run inference on the Orin Kit without any trouble. I have just been told to explore the possibilities of the design I described above.

So, can you suggest any reference or material on this, especially for the Orin Kit?

I think your topic has nothing to do with the Orin Kit or DeepStream. If your purpose is to find a way to transfer video between two devices, an Ethernet connection is a mature way for which you don’t need to develop any hardware driver or software stack. There are also some special interfaces, such as HDMI, LVDS, USB, …, which can be used to transfer multimedia data, but these may need extra hardware driver and software stack support.

Got your point. Can I just clarify one thing: I know that DeepStream allows streaming frames over Ethernet. I think there are no options for streaming over other mediums, right?

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

Currently there is no other option.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.