Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU) RTX 4000
• DeepStream Version DS 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5.0
• NVIDIA GPU Driver Version (valid for GPU only) 525.x.x.x
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e., which plugin or which sample application, and the function description.)
Hello,
I have built a C++ BLS backend. BLS works fine when I use a Python application with a gRPC call for inference, but when I run the same flow through the DeepStream app, it crashes.
I see that the input tensor memory type differs between the two scenarios: with the Python app I get memory type 0 for the input tensor, which indicates CPU memory (TRITONSERVER_MEMORY_CPU), but with the DeepStream app I get memory type 2, which indicates GPU memory (TRITONSERVER_MEMORY_GPU).
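For reference, this is roughly how I read the memory type in my backend (a simplified sketch; the `request` handle and the input name "INPUT0" are placeholders for my actual code):

```cpp
#include "triton/core/tritonbackend.h"
#include "triton/core/tritonserver.h"

// Sketch: query the first buffer of an input tensor and see where it lives.
// `request` is the TRITONBACKEND_Request* passed into my execute function.
TRITONBACKEND_Input* input = nullptr;
TRITONBACKEND_RequestInput(request, "INPUT0", &input);

const void* buffer = nullptr;
uint64_t buffer_byte_size = 0;
// On input these act as the preferred memory type; on return they report
// where the buffer actually resides.
TRITONSERVER_MemoryType memory_type = TRITONSERVER_MEMORY_GPU;
int64_t memory_type_id = 0;
TRITONBACKEND_InputBuffer(input, 0 /* buffer index */, &buffer,
                          &buffer_byte_size, &memory_type, &memory_type_id);

// memory_type == TRITONSERVER_MEMORY_CPU (0) from the Python/gRPC client,
// but TRITONSERVER_MEMORY_GPU (2) from the DeepStream app.
```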
How can I access the input image (tensor) from GPU memory in the BLS model? Below is my pipeline structure in Triton Server:
ensemble_model → DALI → TensorRT model → BLS
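I assume the GPU case would need something like the following to bring the tensor into host memory before the CPU-side code touches it (a sketch only; `buffer`, `buffer_byte_size`, `memory_type`, and `memory_type_id` come from the snippet above), but ideally I would like to consume the tensor directly on the GPU instead of copying:

```cpp
#include <cuda_runtime_api.h>
#include <vector>

// Sketch: if the input buffer reports GPU memory, copy it to the host
// before any CPU-side processing; otherwise use it as-is.
std::vector<char> host_copy;
const void* data = buffer;
if (memory_type == TRITONSERVER_MEMORY_GPU) {
  host_copy.resize(buffer_byte_size);
  cudaSetDevice(static_cast<int>(memory_type_id));
  cudaMemcpy(host_copy.data(), buffer, buffer_byte_size,
             cudaMemcpyDeviceToHost);
  data = host_copy.data();
}
// `data` now points to host-accessible bytes in either case.
```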