Configuration in preprocess

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU) GPU
• DeepStream Version 6.2
• JetPack Version (valid for Jetson only)
• TensorRT Version 8.5.2-1+cuda11.8
• NVIDIA GPU Driver Version (valid for GPU only) 525.85.12
• Issue Type( questions, new requirements, bugs) questions

• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

My custom pipeline includes nvdspreprocess to change the input image size.
To reduce computation, I changed “network-mode” from the default FP32 to FP16(2).
To match the data format between the preprocess and the pgie, I also set “tensor-data-type” to FP16(5).
However, I got the following error:

Custom Lib: Cuda Stream Synchronization failed
CustomTensorPreparation failed

ERROR: nvdsinfer_backend.cpp:506 Failed to enqueue trt inference batch
ERROR: nvdsinfer_context_impl.cpp:1720 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_TENSORRT_ERROR

For your information, there was no error when I set “tensor-data-type” as FP32(0).
Can you explain why this error occurs?
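
For reference, the relevant parts of my two configuration files look roughly like this (paths, sizes, and the layer name are illustrative, not my exact files):

# nvdspreprocess configuration (tensor-data-type: 0=FP32, 5=FP16)
[property]
enable=1
target-unique-ids=1
processing-width=960
processing-height=544
network-input-shape=1;3;544;960
tensor-data-type=5
tensor-name=input_1
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation

# nvinfer configuration (network-mode: 0=FP32, 1=INT8, 2=FP16)
[property]
network-mode=2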

Do you mean you configure the "network-mode” as FP16 with nvinfer and “tensor-data-type” as FP16 with nvdspreprocess?

What is your model’s input layer data type?

Do you mean you configure the "network-mode” as FP16 with nvinfer and “tensor-data-type” as FP16 with nvdspreprocess ?

Yes, you’re right.

What is your model’s input layer data type?

My model’s input layer data type is FP32.

There was no error when I set FP32 as the data format for inference in the ‘preprocess’ configuration,
even though I used FP16 as the data format for inference in the ‘nvinfer’ configuration.
And there was an error when I set FP16 as the data format for inference in the ‘preprocess’ configuration,
even though I used FP16 as the data format for inference in the ‘nvinfer’ configuration.

“tensor-data-type” in nvdspreprocess should be aligned with your model’s input layer data type; it has nothing to do with the precision the TensorRT engine is built with (“network-mode”).
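
For an FP32 input layer, a combination along these lines (a minimal sketch showing only the two relevant keys) keeps the two plugins consistent:

# nvdspreprocess configuration: tensor-data-type must match the model's input layer, so FP32 here
[property]
tensor-data-type=0

# nvinfer configuration: network-mode only selects the precision the TensorRT engine is built with
[property]
network-mode=2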

If the input layer’s type is FP32 and ‘network-mode’ in nvinfer is FP16,
does nvinfer perform type casting (FP32 tensor to FP16 tensor) automatically?

The “network-mode” in nvinfer just builds the TensorRT plan in FP16; it does not change the model’s input or output. Developer Guide :: NVIDIA Deep Learning TensorRT Documentation

My understanding is as follows.
The system converts the NV12-format input buffer into a tensor for inference.
This conversion is related to the value assigned to network-mode in nvinfer,
and this process has no relationship with tensor-data-type in the preprocess configuration.
Is this understanding correct?

No. The “network-mode” in nvinfer is just a TensorRT parameter used to build the TensorRT plan.

No. There is no relationship between tensor-data-type and the TensorRT plan’s data type. tensor-data-type should be set to the data type of the model’s input layer.

Please study TensorRT first before you try to understand the gst-nvinfer source code.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks
