We are using DeepStream’s Python bindings for our implementation, and we are facing the issues stated below.
We want to modularize the pipeline into model loading, inference, and post-processing. However, these steps are handled in the backend C++ code, which appears to be deprecated, so we are unable to access the model variable or obtain frame-by-frame inference results from the package.
We need to build a single function-based inference API for the DeepStream model in Python, as the current Python pipeline is only suitable for video-based inference demos.
If we can somehow access the model loading and inference parts from the backend code, I believe that will cover what we are currently trying to achieve.
Sorry for the delay; we will check internally and update you soon.
Also, please provide the complete setup information below, as applicable to your setup:
• Hardware Platform (Jetson / GPU)
• DeepStream Version
• JetPack Version (valid for Jetson only)
• TensorRT Version
• NVIDIA GPU Driver Version (valid for GPU only)
• Issue Type (questions, new requirements, bugs)
• How to reproduce the issue? (For bugs: include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (For new requirements: include the module name, i.e. which plugin or which sample application, and the function description.)
We want to restructure the current code into modular parts, such as model loading and inference, in Python, so that we can build an API for DeepStream inference in the same way we do for other deep learning models. Is it possible to access the loaded inference model variables inside the plugins from the Python pipeline? If so, how?
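To make the request concrete, this is roughly the API shape we are after. Everything here is a hypothetical sketch: the `DeepStreamModel` class, `infer`, and the stubbed backend are placeholders we invented to show the desired structure, not real DeepStream or pyds APIs; in DeepStream today these steps happen inside the nvinfer plugin's C++ backend.

```python
from dataclasses import dataclass
from typing import Any, List

@dataclass
class Detection:
    label: str
    confidence: float

class DeepStreamModel:
    """Hypothetical modular wrapper; names are placeholders, not real DeepStream APIs."""

    def __init__(self, config_path: str):
        # Model loading would happen here. In DeepStream this is done
        # inside the nvinfer plugin when the pipeline starts.
        self.config_path = config_path
        self.loaded = True

    def infer(self, frame: Any) -> List[Detection]:
        # Single-function inference: forward pass plus post-processing.
        raw = self._backend_forward(frame)
        return self._postprocess(raw)

    def _backend_forward(self, frame: Any):
        # Stub standing in for the TensorRT/nvinfer forward pass.
        return [("person", 0.9)]

    def _postprocess(self, raw):
        # Convert raw (label, score) pairs into typed results.
        return [Detection(label=l, confidence=c) for l, c in raw]

model = DeepStreamModel("config_infer_primary.txt")
detections = model.infer(frame=None)
print(detections[0].label)  # → person
```

The point is that loading, inference, and post-processing would be separately callable, so a single frame can be pushed through `infer()` without running a full video pipeline.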
Can we leverage DeepStream to its full extent only if we use C++? Or is there a paid version that enables us to do so?
How can we run single-image inference (using the C++ or Python bindings) with a DeepStream model?
Sorry, I didn’t quite get what you are trying to do. It seems you want to split the current nvinfer into three parts (preprocess, inference, postprocess), with each part providing an API for your application, right?
Please specify more details.
What do you mean by single image inference? If you mean feeding one JPEG into the DeepStream pipeline, we currently support that.
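For example, a JPEG can be fed into the pipeline in the same way the deepstream-image-decode-test sample does. The pipeline below is a sketch only: the file paths, stream-mux resolution, decoder element, and sink will need adjusting for your platform and DeepStream version.

```shell
# Sketch: run inference on a single JPEG (adjust paths/elements for your setup)
gst-launch-1.0 filesrc location=sample_720p.jpg ! jpegparse ! nvv4l2decoder ! \
  m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! \
  nvinfer config-file-path=config_infer_primary.txt ! \
  nvvideoconvert ! nvdsosd ! nveglglessink
```

The nvinfer results can then be read per frame from the batch metadata attached to the buffer, e.g. via a pad probe in the Python bindings.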