Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.1.1
• TensorRT Version: 8.5.1
• NVIDIA GPU Driver Version (valid for GPU only): 520.61.05
• Issue Type (questions, new requirements, bugs): question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
I am running an inference pipeline in DeepStream that includes an ensemble model built from:
• Python model - preprocessor
• TensorRT RCNN model
• Python model - postprocessor
It is working as planned except for state management of the RCNN. I read here:
It mentions using the following configuration in the model’s config file:
sequence_batching {
  state [
    {
      input_name: "PreviousState"    # layer name of the network's state input
      output_name: "leaky_re_lu_47"  # layer name of the network's state output
      data_type: TYPE_FP16
      dims: [ -1 ]
      initial_state: {
        data_type: TYPE_FP16
        dims: [ 1 ]
        zero_data: true
        name: "initial state"
      }
    }
  ]
}
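For context, Triton's sequence batcher can also declare explicit control inputs, including one that exposes the correlation ID to the model. A minimal sketch of that part of the config (the tensor name "CORRID" is just a placeholder, and this control is only needed if the model actually consumes the correlation ID as an input tensor):

sequence_batching {
  control_input [
    {
      name: "CORRID"  # placeholder; must match a real input tensor of the model
      control [
        {
          kind: CONTROL_SEQUENCE_CORRID
          data_type: TYPE_UINT64
        }
      ]
    }
  ]
}

The correlation ID value itself is not part of the model config, though; it has to be supplied per request by the client, which is what the error below is about.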
But when I run the pipeline I get the error:
ERROR: infer_trtis_server.cpp:259 Triton: TritonServer response error received., triton_err_str:Invalid argument, err_msg:in ensemble 'ensemble_python_smoke_16', inference request to model 'smoke_16' must specify a non-zero or non-empty correlation ID
Am I missing something in the config to cause this?
PLEASE HELP
I have been researching this for a few days, and I think I can safely say that the problem isn't on the Triton server side or in the model config, but on the DeepStream side, so let me rephrase the issue:
How do I send a correlation ID from the DeepStream (Triton client) side?
Sure. I have tried many iterations and I am now confident that the problem is not in my settings, but in the apparent lack of a way to attach a correlation ID to the request that DeepStream, as the Triton client, sends to Triton.
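For comparison, with a standalone Triton client the correlation ID is just a per-request field. A rough sketch with the Python gRPC client (the input tensor name and shape are placeholders, not taken from my actual model):

import numpy as np
import tritonclient.grpc as grpcclient

client = grpcclient.InferenceServerClient("localhost:8001")

# Placeholder input; the real name, shape and dtype depend on the model
inp = grpcclient.InferInput("INPUT", [1, 3, 224, 224], "FP16")
inp.set_data_from_numpy(np.zeros((1, 3, 224, 224), dtype=np.float16))

# sequence_id is the correlation ID the sequence batcher requires;
# sequence_start / sequence_end mark the boundaries of the stream
result = client.infer(
    model_name="ensemble_python_smoke_16",
    inputs=[inp],
    sequence_id=1234,
    sequence_start=True,
    sequence_end=False,
)

In the DeepStream pipeline, however, the request is built internally by gst-nvinferserver, so the question is how (or whether) that element can be made to set this field.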
Sorry, we are missing some information from you! We don't need the GST_DEBUG=7 log. Please use the following settings to get the log:
export GST_DEBUG=nvinferserver:7
export NVDSINFERSERVER_LOG_LEVEL=5
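For example, assuming the pipeline is launched with deepstream-app (substitute your own launch command and config file), the log can be captured like this:

export GST_DEBUG=nvinferserver:7
export NVDSINFERSERVER_LOG_LEVEL=5
deepstream-app -c <your_pipeline_config.txt> 2>&1 | tee nvinferserver.log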