The platform is DeepStream 6.2 on a Xavier AGX running JetPack 5.1. I’m building a parallel inference pipeline in Python, basing it on your C++ example. I was wondering whether the inference engine included with DeepStream can run parallel inference, or whether I have to install and build the C++ example (GitHub - NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel). It is unclear to me.
Thank you for your help
Steve
What do you mean by this?
It is TensorRT that loads and builds the models. The DeepStream SDK is just a framework; it has nothing to do with which programming language you use.
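To illustrate what that means in practice (this is only a hedged sketch, and the model/engine file names are placeholders): the TensorRT engine build is driven by the nvinfer configuration file, so the same config works whether the pipeline is written in C++ or Python.

```ini
# Hypothetical nvinfer config fragment; file names are placeholders.
# TensorRT builds (or reuses) the engine from these settings,
# independent of the application language driving the pipeline.
[property]
gpu-id=0
onnx-file=model_a.onnx
model-engine-file=model_a.onnx_b1_gpu0_fp16.engine
batch-size=1
network-mode=2        # 0=FP32, 1=INT8, 2=FP16
network-type=0        # 0=detector
gie-unique-id=1
```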
My apologies for being unclear, and I guess I was really asking this question.
Does DeepStream 6.2 natively support parallel inferencing?
Thank you for your help
Steve
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
Yes. NVIDIA-AI-IOT/deepstream_parallel_inference_app: A project demonstrating how to use nvmetamux to run multiple models in parallel (github.com) works on DeepStream 6.2.
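In case it helps other readers, below is a minimal, unofficial sketch of the parallel layout in Python: one decoded source is batched by nvstreammux, split with a tee, and fed to two independent nvinfer instances. The input file and config file names are placeholders, and merging the metadata from the branches (what nvdsmetamux does in the sample app) is intentionally left out.

```python
#!/usr/bin/env python3
# Hedged sketch only, not the official parallel inference app:
# one source, two nvinfer branches running in parallel via tee.
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

def make(factory, name):
    elem = Gst.ElementFactory.make(factory, name)
    if not elem:
        sys.exit(f"Failed to create {factory}")
    return elem

Gst.init(None)
pipeline = Gst.Pipeline.new("parallel-inference")

source = make("filesrc", "source")
source.set_property("location", "sample_720p.h264")   # placeholder input
parser = make("h264parse", "parser")
decoder = make("nvv4l2decoder", "decoder")
streammux = make("nvstreammux", "streammux")
streammux.set_property("batch-size", 1)
streammux.set_property("width", 1280)
streammux.set_property("height", 720)
tee = make("tee", "tee")

# Two independent inference branches; each nvinfer config must use its own gie-unique-id.
branches = []
for idx, cfg in enumerate(["model_a.txt", "model_b.txt"]):  # hypothetical config files
    queue = make("queue", f"queue_{idx}")
    infer = make("nvinfer", f"infer_{idx}")
    infer.set_property("config-file-path", cfg)
    sink = make("fakesink", f"sink_{idx}")
    branches.append((queue, infer, sink))

for elem in [source, parser, decoder, streammux, tee] + [e for b in branches for e in b]:
    pipeline.add(elem)

source.link(parser)
parser.link(decoder)
decoder.get_static_pad("src").link(streammux.get_request_pad("sink_0"))
streammux.link(tee)
for queue, infer, sink in branches:
    tee.get_request_pad("src_%u").link(queue.get_static_pad("sink"))
    queue.link(infer)
    infer.link(sink)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()
except KeyboardInterrupt:
    pass
pipeline.set_state(Gst.State.NULL)
```

Each branch gets its own queue so the two models can run asynchronously; combining their metadata back into a single stream is the part the sample app's nvdsmetamux plugin handles.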
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.