• Hardware Platform (Jetson / GPU): GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version: 525.85.12
• Issue Type: questions
My custom pipeline includes a preprocess step that resizes the input image.
To reduce computation, I configured "network-mode" as FP16(2) instead of the default FP32.
To match the data format between the preprocess and the pgie, I also set "tensor-data-type" to FP16(5).
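For reference, a minimal sketch of the two settings involved (only the relevant keys are shown; all other keys and paths in my config files are omitted):

```ini
# nvdspreprocess config file (excerpt)
[property]
# data type of the tensor handed to inference: 0=FP32, 5=FP16
tensor-data-type=5
```

```ini
# nvinfer (pgie) config file (excerpt)
[property]
# inference precision: 0=FP32, 1=INT8, 2=FP16
network-mode=2
```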
With this combination, the pipeline failed with the errors below.
Custom Lib: Cuda Stream Synchronization failed
CustomTensorPreparation failed
ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1720 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR
For your information, there was no error when I set “tensor-data-type” as FP32(0).
Can you explain why this error occurs?
Do you mean you configured "network-mode" as FP16 in nvinfer and "tensor-data-type" as FP16 in nvdspreprocess?
Yes, you’re right.
What is your model’s input layer data type?
My model’s input layer data type is FP32.
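For anyone checking the same thing: a minimal sketch of how the engine's input data type can be verified with the TensorRT 8.5 tensor-name API (the path "model.engine" is a placeholder, not from my actual setup):

```cpp
// Minimal sketch: print each I/O tensor's data type from a serialized
// TensorRT engine. Requires TensorRT 8.5 (getNbIOTensors et al.).
#include <NvInfer.h>
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>

struct Logger : public nvinfer1::ILogger {
    void log(Severity sev, const char* msg) noexcept override {
        if (sev <= Severity::kWARNING) std::cout << msg << "\n";
    }
};

int main() {
    // "model.engine" is a placeholder path for the serialized engine.
    std::ifstream file("model.engine", std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    Logger logger;
    auto* runtime = nvinfer1::createInferRuntime(logger);
    auto* engine  = runtime->deserializeCudaEngine(blob.data(), blob.size());
    if (!engine) {
        std::cerr << "engine deserialization failed\n";
        return 1;
    }
    for (int i = 0; i < engine->getNbIOTensors(); ++i) {
        const char* name = engine->getIOTensorName(i);
        // DataType::kFLOAT (0) means this binding expects FP32 buffers,
        // even if the engine was built with FP16 internal precision.
        std::cout << name << ": dtype="
                  << static_cast<int>(engine->getTensorDataType(name)) << "\n";
    }
    delete engine;
    delete runtime;
    return 0;
}
```

My understanding is that the binding data type is fixed by the network definition, so an FP16 build (network-mode=2) does not change an FP32 input binding.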
There was no error when I set FP32 as the tensor data type for inference in the preprocess configuration, even though I set FP16 as the inference precision in the nvinfer configuration.
But there was an error when I set FP16 as the tensor data type in the preprocess configuration, with the same FP16 setting in the nvinfer configuration.
My understanding is as follows.
The system converts the NV12 input buffer into the tensor format needed for inference.
This conversion is governed by the network-mode value in the nvinfer configuration,
and it has no relationship with the tensor-data-type value in the preprocess configuration.
Is this understanding correct?
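For completeness, a sketch of the combination that ran without errors in my tests, consistent with the understanding above (excerpts only; everything else in the config files was unchanged):

```ini
# nvdspreprocess config file (excerpt) - this combination worked
[property]
# keep the tensor passed to nvinfer in FP32 (0), matching the
# model's FP32 input layer
tensor-data-type=0
```

```ini
# nvinfer (pgie) config file (excerpt)
[property]
# FP16 (2) appears to affect only the internal inference precision
network-mode=2
```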
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.