Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• JetPack Version (valid for Jetson only):
• TensorRT Version: 8.5.2.2
• NVIDIA GPU Driver Version (valid for GPU only): 525.85.12
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for a new requirement. Include the module name, i.e. which plugin or which sample application, and the function description.)
When I try to parse my custom Triton ensemble model following the deepstream-ssd-parser example, it works fine with a single video stream.
However, when I try to do the same with multiple streams, I get this error.
Could you share the media pipeline? Could you share the configuration file? You might also check the nvinferserver code, since it is open source from DS 6.2.
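For reference only, a minimal nvinferserver configuration that exposes raw output tensors for a Triton ensemble usually looks roughly like the sketch below; the model name, repository path, and batch size are placeholders, not your actual values:

```
infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 4                  # should cover the number of batched streams
  backend {
    triton {
      model_name: "my_ensemble"      # placeholder ensemble name
      version: -1
      model_repo {
        root: "./triton_model_repo"  # placeholder model repository path
        strict_model_config: true
      }
    }
  }
  preprocess {
    network_format: IMAGE_FORMAT_RGB
    tensor_order: TENSOR_ORDER_NONE  # layout taken from the Triton model config
    normalize { scale_factor: 1.0 }
  }
  postprocess {
    other {}                         # no built-in parsing; parsing happens in a probe
  }
}
input_control {
  process_mode: PROCESS_MODE_FULL_FRAME
}
output_control {
  output_tensor_meta: true           # attach raw tensors as NvDsInferTensorMeta
}
```

In particular, max_batch_size needs to be at least the nvstreammux batch-size once you move to multiple streams.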
Previously, with only a single stream, I managed to parse my Triton ensemble output using the output tensor metadata, so I am looking for a similar way to do it with multiple streams. Thanks in advance.
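Roughly, this is the single-stream parsing I mean, following the deepstream-ssd-parser pattern of reading NvDsInferTensorMeta in a pad probe on the nvinferserver src pad; parse_my_ensemble_output() below is just a placeholder for my real parsing code:

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst
import pyds

def parse_my_ensemble_output(layer, frame_meta):
    # Placeholder: convert the raw output layer into detections/labels here.
    pass

def pgie_src_pad_buffer_probe(pad, info, u_data):
    gst_buffer = info.get_buffer()
    if not gst_buffer:
        return Gst.PadProbeReturn.OK

    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # frame_meta.pad_index identifies which input stream the frame
        # belongs to, so the same loop also works on batched buffers.
        l_user = frame_meta.frame_user_meta_list
        while l_user is not None:
            user_meta = pyds.NvDsUserMeta.cast(l_user.data)
            if user_meta.base_meta.meta_type == \
                    pyds.NvDsMetaType.NVDSINFER_TENSOR_OUTPUT_META:
                tensor_meta = pyds.NvDsInferTensorMeta.cast(user_meta.user_meta_data)
                for i in range(tensor_meta.num_output_layers):
                    layer = pyds.get_nvds_LayerInfo(tensor_meta, i)
                    parse_my_ensemble_output(layer, frame_meta)
            try:
                l_user = l_user.next
            except StopIteration:
                break
        try:
            l_frame = l_frame.next
        except StopIteration:
            break
    return Gst.PadProbeReturn.OK
```

This relies on output_tensor_meta being enabled in the nvinferserver configuration so the raw tensors are attached to each frame's user meta.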
There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
There is a lot of custom code in deepstream_pipeline.py; can you simplify it? Please also refer to deepstream_test_3.py, which can accept multiple inputs; you only need to port your code to handle multiple sources, as in the sketch below.
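As a rough illustration only (not the actual deepstream_test_3.py), a stripped-down multi-source skeleton could look like this; the config path, input URIs, and probe stub are placeholders, and it assumes the NVIDIA hardware decoder is selected so buffers land in NVMM memory:

```python
#!/usr/bin/env python3
import sys
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

CONFIG_FILE = "config_triton_ensemble.txt"  # hypothetical nvinferserver config

def pgie_src_pad_buffer_probe(pad, info, u_data):
    # Stand-in for the tensor-metadata parsing probe discussed above.
    return Gst.PadProbeReturn.OK

def cb_newpad(decodebin, pad, sinkpad):
    # Link each decoded video pad to the nvstreammux sink pad reserved for it.
    caps = pad.get_current_caps() or pad.query_caps(None)
    if caps.to_string().startswith("video/"):
        pad.link(sinkpad)

def main(uris):
    Gst.init(None)
    pipeline = Gst.Pipeline.new("multi-stream-pipeline")

    streammux = Gst.ElementFactory.make("nvstreammux", "muxer")
    streammux.set_property("width", 1280)
    streammux.set_property("height", 720)
    streammux.set_property("batch-size", len(uris))      # one slot per source
    streammux.set_property("batched-push-timeout", 40000)
    pipeline.add(streammux)

    # One uridecodebin per input, each linked to its own streammux sink pad.
    for i, uri in enumerate(uris):
        decodebin = Gst.ElementFactory.make("uridecodebin", f"source-{i}")
        decodebin.set_property("uri", uri)
        pipeline.add(decodebin)
        sinkpad = streammux.get_request_pad(f"sink_{i}")
        decodebin.connect("pad-added", cb_newpad, sinkpad)

    pgie = Gst.ElementFactory.make("nvinferserver", "primary-inference")
    pgie.set_property("config-file-path", CONFIG_FILE)
    sink = Gst.ElementFactory.make("fakesink", "sink")
    pipeline.add(pgie)
    pipeline.add(sink)
    streammux.link(pgie)
    pgie.link(sink)

    # The same tensor-meta probe used for a single stream goes here.
    pgie.get_static_pad("src").add_probe(
        Gst.PadProbeType.BUFFER, pgie_src_pad_buffer_probe, None)

    loop = GLib.MainLoop()
    pipeline.set_state(Gst.State.PLAYING)
    try:
        loop.run()
    except KeyboardInterrupt:
        pass
    pipeline.set_state(Gst.State.NULL)

if __name__ == "__main__":
    main(sys.argv[1:])
```

The key points are one nvstreammux sink pad per source and batch-size set to the number of sources; the tensor-meta probe stays attached to the nvinferserver src pad exactly as in the single-stream case. You would run such a script with the stream URIs as arguments, e.g. `python3 multi_stream_sketch.py file:///path/a.mp4 file:///path/b.mp4` (the script name here is also just a placeholder).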