Running deepstream_imagedata-multistream_cupy on Jetson Orin – Alternatives to x86 Code

Please provide complete information as applicable to your setup.

• Hardware Platform: x86 / Jetson Orin
• DeepStream Version: 7
• JetPack Version: 5.1.2
• TensorRT Version: NA
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type: question

Hi @fanzh, all,

I am working on a project involving DeepStream’s deepstream_imagedata-multistream_cupy application, which builds on top of the deepstream-imagedata-multistream sample. This application allows for GPU-based image buffer access using CuPy arrays and supports multistream sources with uridecodebin. However, the current version of the code is supported only on x86 architectures, and I am trying to get it working on NVIDIA Jetson Orin.

Is there any alternative code or approach that can achieve similar functionality on Jetson Orin or other Jetson devices? Specifically, I am looking for:

  1. Accessing image data from the GPU in multistream pipelines using CuPy (or an alternative); the x86 pattern I am starting from is sketched after this list.
  2. Modifying the image buffers in-place with changes reflected downstream.
  3. Handling multiple RTSP or file sources with uridecodebin.
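
For reference, items 1 and 2 in the x86 sample boil down to the following buffer-probe pattern (condensed from the sample's tiler sink-pad probe, with error handling trimmed). This is the part I need an Orin alternative for, since pyds.get_nvds_buf_surface_gpu is only supported on x86/dGPU:

```python
import ctypes

import cupy as cp
import pyds
from gi.repository import Gst


def tiler_sink_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)

        # dtype, shape, strides, a PyCapsule holding the device pointer,
        # and the size of the allocation for this frame's surface.
        data_type, shape, strides, dataptr, size = pyds.get_nvds_buf_surface_gpu(
            hash(gst_buffer), frame_meta.batch_id)

        # Unwrap the PyCapsule into a raw pointer CuPy can use.
        ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
        ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object, ctypes.c_char_p]
        c_data_ptr = ctypes.pythonapi.PyCapsule_GetPointer(dataptr, None)

        # Wrap the existing NvBufSurface memory (no copy), so in-place edits
        # are visible to every downstream element.
        unownedmem = cp.cuda.UnownedMemory(c_data_ptr, size, owner=None)
        memptr = cp.cuda.MemoryPointer(unownedmem, 0)
        n_frame_gpu = cp.ndarray(shape=shape, dtype=data_type, memptr=memptr,
                                 strides=strides, order='C')

        # In-place modification on the GPU (the sample adds a blue tint).
        stream = cp.cuda.stream.Stream(null=True)
        with stream:
            n_frame_gpu[:, :, 0] = 0.5 * n_frame_gpu[:, :, 0] + 0.5
        stream.synchronize()

        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```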

If anyone has managed to port this or knows of a workaround to make it work on Jetson Orin, I’d appreciate any pointers or code examples. Thanks in advance!

Currently there is no such sample for Jetson.

What will you do to the image buffers?

The sample pipeline already supports multiple URI sources; you just need to pass the RTSP URL as the uri, as shown below.
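
For the CuPy sample that looks something like this (the RTSP addresses are placeholders; check the sample's README for the exact argument list):

```
python3 deepstream_imagedata-multistream_cupy.py rtsp://<camera-1>/stream rtsp://<camera-2>/stream
```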

I need to access a CuPy-based buffer on the GPU so that I can run asynchronous inference operations.
I don't need Triton Inference Server, since we are developing a more sophisticated system that Triton won't support. All other points are secondary; the main reason is the performance gain.

Are you working with Orin platform?

I use both x86 and Orin.

If you don't use Triton or TensorRT for inference, what will you use?

I use CuPy to extract frames, then run a custom algorithm on top of them and inject overlays using metadata.

So it is just an extra video algorithm after inferencing?

No, the inference is the algorithm.
It has sophisticated steps which won't allow us to use the existing frameworks from DeepStream.

Can you share more details with us?

We use CuPy to read the frames and convert them into a batch manually, then:

  1. use CuPy, TensorFlow, and OpenCV to run the inference algorithm
  2. inject bounding boxes into the pipeline (sketched after this list)
  3. the pipeline draws them using nvdsosd
  4. and we encode the video out to RTSP
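
Concretely, for steps 2 and 3 we attach object metadata that nvdsosd then draws. A minimal sketch using the standard pyds pattern (the class id, coordinates, and label here are placeholders):

```python
import pyds


def inject_bbox(batch_meta, frame_meta, left, top, width, height, label="anomaly"):
    """Attach one detection to the frame; nvdsosd downstream draws it."""
    obj_meta = pyds.nvds_acquire_obj_meta_from_pool(batch_meta)
    obj_meta.class_id = 0
    obj_meta.confidence = 1.0
    obj_meta.obj_label = label

    # Rectangle drawn by nvdsosd.
    rect = obj_meta.rect_params
    rect.left, rect.top = left, top
    rect.width, rect.height = width, height
    rect.border_width = 2
    rect.border_color.set(1.0, 0.0, 0.0, 1.0)  # opaque red
    rect.has_bg_color = 0

    # Label rendered next to the box.
    txt = obj_meta.text_params
    txt.display_text = label
    txt.font_params.font_name = "Serif"
    txt.font_params.font_size = 12
    txt.font_params.font_color.set(1.0, 1.0, 1.0, 1.0)
    txt.set_bg_clr = 1
    txt.text_bg_clr.set(0.0, 0.0, 0.0, 0.5)

    pyds.nvds_add_obj_meta_to_frame(frame_meta, obj_meta, None)
```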

The TensorFlow model can be converted to an ONNX model. What kind of OpenCV algorithms?

It won't work,

we tried it for 3 years. We are not doing just inference with TensorFlow; we need the frames in CuPy.
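
For context, the frames move between CuPy and TensorFlow without leaving the GPU via the usual DLPack route; a minimal sketch, assuming TF 2.x (for tf.experimental.dlpack) and a recent CuPy:

```python
import cupy as cp
import tensorflow as tf


def cupy_to_tf(arr: cp.ndarray) -> tf.Tensor:
    # Zero-copy: expose the CuPy GPU buffer to TensorFlow via DLPack.
    return tf.experimental.dlpack.from_dlpack(arr.toDlpack())


def tf_to_cupy(t: tf.Tensor) -> cp.ndarray:
    # And back, still without a host round trip
    # (newer CuPy versions spell this cp.from_dlpack).
    return cp.fromDlpack(tf.experimental.dlpack.to_dlpack(t))
```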

Hi @Fiona.Chen

We cannot adhere to Triton Inference Server or nvinfer-plugin-based inference.
We are doing inference our own way. We need direct CuPy extraction of surfaces from the DeepStream pipeline. Please help us; we are still waiting for an answer or a new direction.

Which company and which project are you working for?

I am working for a client who sells video analytics solutions, and we are trying to improve surveillance performance. We have a sophisticated, state-of-the-art custom anomaly-detection application, and we are trying to scale the system up using NVIDIA's capabilities.

A suggestion is to use the latest Service Maker Python APIs, which were released with DeepStream 7.1 GA.
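
A minimal pipeline in that API looks roughly like the following. This is adapted from the Service Maker Python hello-world sample as documented for DeepStream 7.1; treat the element names, property keys, and config path as assumptions to verify against the Service Maker docs for your install:

```python
import sys

from pyservicemaker import Pipeline

# Stock nvinfer config shipped with DeepStream; adjust to your install.
CONFIG = "/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.yml"


def main(uri):
    pipeline = Pipeline("sample-pipeline")
    pipeline.add("nvurisrcbin", "src", {"uri": uri})
    pipeline.add("nvstreammux", "mux", {"batch-size": 1, "width": 1280, "height": 720})
    pipeline.add("nvinferbin", "infer", {"config-file-path": CONFIG})
    pipeline.add("nvosdbin", "osd")
    pipeline.add("nveglglessink", "sink")
    # Request a sink pad on the muxer for the source, then link the rest in order.
    pipeline.link(("src", "mux"), ("", "sink_%u")).link("mux", "infer", "osd", "sink")
    pipeline.start().wait()


if __name__ == "__main__":
    main(sys.argv[1])
```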

There has been no update from you for a while, so we are assuming this is no longer an issue and closing this topic. If you need further support, please open a new one. Thanks.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.