DeepStream API Integration

We are using DeepStream’s Python bindings for our implementation and are facing the issues stated below.

  1. We want to modularize the workflow into model loading, inference, and post-processing. However, these steps happen in the backend C++ code, which we believe is deprecated, so we are unable to access the model variable or get frame-by-frame inference results from the Python package.

  2. We need to build a single-function inference API for the DeepStream model in Python, since the current Python pipeline only suits video-based inference demos.

  3. If we can somehow access the model loading and inference parts from the backend code, that should cover everything we are currently trying to achieve.
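To illustrate the requirement, the API shape we are after would look roughly like the sketch below. All names here are hypothetical; the DeepStream Python bindings do not currently expose such an interface, and this is only a mock-up of the desired usage pattern.

```python
# Hypothetical API shape for modular DeepStream inference in Python.
# Nothing here exists in the real DeepStream bindings; it only shows
# the "load once, infer per frame" structure we would like to have.

class DeepStreamModel:
    """Imagined wrapper that loads a model independently of the pipeline."""

    def __init__(self, config_path):
        # In the imagined API this would parse the nvinfer config file
        # and build or deserialize the TensorRT engine.
        self.config_path = config_path
        self.loaded = True

    def infer(self, frame):
        # Would run inference on a single frame and return detections.
        # Here we just return an empty result to show the call shape.
        return {"frame": frame, "detections": []}


model = DeepStreamModel("config_infer_primary.txt")  # placeholder config name
result = model.infer(frame=0)
```

This is the calling convention we already use for other deep-learning frameworks and would like to replicate for DeepStream.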

Please provide more details about your issue. Thanks.

Hey, we don’t know what your exact requirement is here. Could you provide more details?

Hi @shankarjadhav232 ,

I’m also facing the same issues while modularizing and creating the API structure for the DeepStream inference engine in Python.


Can we do the inference part separately, without using GStreamer for the streaming part?

Is it possible to access the loaded inference model variables from the plugins in the Python pipeline? If so, how?

Sorry for the delay; we will check internally and update you soon.

Also, please provide the complete setup information applicable to your system:

• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: the module name, i.e. which plugin or which sample application, and the function description.)

Setup information
• Hardware Platform (Jetson / GPU) - Jetson Nano
• DeepStream Version - 5.1
• JetPack Version - 4.5
• TensorRT Version - CUDA 10.2

Thanks. We checked internally, but it is still not clear what you want to do.

What do you mean by modularize? If any modification is required, the user needs to modify the gstnvinfer plugin.

Incorrect info: the backend C++ code is NOT deprecated. Where is this information coming from? Which backend do you mean?

Can we do the inference part separately, without using GStreamer for the streaming part?

There is no point in using DeepStream without GStreamer. Why do you want to use DeepStream? Also, this may be helpful to you: GitHub - dusty-nv/jetson-inference: Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
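For reference, if the streaming pipeline is not needed, inference can in principle be done directly against TensorRT underneath nvinfer. The sketch below assumes a prebuilt serialized engine file (path is a placeholder) and the TensorRT Python API as shipped with JetPack; the TensorRT import is guarded so the preprocessing part can be read and run anywhere.

```python
import numpy as np

try:
    import tensorrt as trt  # present on Jetson with JetPack; absent elsewhere
except ImportError:
    trt = None


def preprocess(image):
    """Normalize an HWC uint8 image to an NCHW float32 tensor.

    Resizing is omitted here; this sketch assumes the caller already
    provides an image at the network's input resolution.
    """
    x = image.astype(np.float32) / 255.0  # scale pixel values to [0, 1]
    x = np.transpose(x, (2, 0, 1))        # HWC -> CHW
    return x[np.newaxis, ...]             # add batch dimension -> NCHW


def load_engine(engine_path):
    """Deserialize a prebuilt TensorRT engine file (hypothetical path)."""
    if trt is None:
        raise RuntimeError("TensorRT is not installed on this machine")
    logger = trt.Logger(trt.Logger.WARNING)
    with open(engine_path, "rb") as f, trt.Runtime(logger) as runtime:
        return runtime.deserialize_cuda_engine(f.read())
```

Actually executing the engine additionally needs device buffers (e.g. via PyCuda) and an execution context; the jetson-inference project linked above wraps all of that behind a simpler Python API.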

We want to break the current code down into modules, such as the model-loading and inference parts, in Python, so that we can build an API for DeepStream inference in the same way we do for other deep learning models. Is it possible to access the loaded inference model variables from the plugins in the Python pipeline? If so, how?

Can we leverage DeepStream to its full extent only if we use C++? Or is there a paid version that enables us to do so?

How can we get single-image inference (using the C++ or Python bindings) from the DeepStream model?

I didn’t get that. Do you just need to use the low-level library (TensorRT) underneath gst-nvinfer?

Sorry, I really didn’t get what you are trying to do. It seems you want to split the current nvinfer into three parts (preprocess, inference, postprocess), with each part providing an API for your app. Is that right?

Please specify more details.

What is single-image inference? Do you mean feeding one JPEG into the DeepStream pipeline? We currently support that.
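For illustration, feeding a single JPEG through a DeepStream pipeline could look roughly like the gst-launch style description built below. Element names follow DeepStream 5.x on Jetson, but availability varies by platform and version, and the file paths are placeholders; treat this as a sketch, not a verified pipeline.

```python
def build_jpeg_infer_pipeline(jpeg_path, infer_config):
    """Build a gst-launch style description that decodes one JPEG,
    batches it via nvstreammux, and runs it through nvinfer (sketch)."""
    return (
        f"filesrc location={jpeg_path} ! jpegparse ! nvv4l2decoder "
        "! m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 "
        f"! nvinfer config-file-path={infer_config} "
        "! fakesink"
    )


desc = build_jpeg_infer_pipeline("sample.jpg", "config_infer_primary.txt")
# On a DeepStream system this string would be launched with
# Gst.parse_launch(desc) after Gst.init(None).
```

The deepstream-image-decode sample apps shipped with the SDK show the supported way to do this end to end.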

Reply to the first answer:

Yes. How should we do that? Can we proceed with it? Can you give me some more insight into it?

Reply to the second answer:

Skip it.

Reply to the third answer:

Can you guide me on how to do that? Please attach related blog posts, if any.

You can refer to deepstream-segmentation-test.

We will review it internally and update you.

Currently, we don’t support that; you would need to modify the gstnvinfer plugin, which is open source, to achieve it.
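That said, per-frame inference results (though not the model variables themselves) can already be read from the Python pipeline with a pad probe on nvinfer’s source pad, as the deepstream_python_apps samples do. A sketch is below; the pyds and GStreamer imports are guarded so the pure helper can be read and run off-device, and exact metadata fields may differ across DeepStream versions.

```python
from collections import Counter

try:
    import pyds                    # DeepStream Python bindings (on-device only)
    from gi.repository import Gst  # GStreamer
except ImportError:
    pyds = Gst = None


def summarize_detections(dets):
    """Count detections per class_id from (class_id, confidence) pairs."""
    return dict(Counter(class_id for class_id, _conf in dets))


def infer_src_pad_probe(pad, info, u_data):
    """Attach to nvinfer's src pad to read per-frame results (sketch)."""
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(info.get_buffer()))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        dets, l_obj = [], frame_meta.obj_meta_list
        while l_obj is not None:
            obj = pyds.NvDsObjectMeta.cast(l_obj.data)
            dets.append((obj.class_id, obj.confidence))
            l_obj = l_obj.next
        print(f"frame {frame_meta.frame_num}: {summarize_detections(dets)}")
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The probe would be registered with something like `infer.get_static_pad("src").add_probe(Gst.PadProbeType.BUFFER, infer_src_pad_probe, 0)` inside a running pipeline; the deepstream-test1 Python sample shows the full pattern.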