Can the model be run successfully by a third-party tool?
The nvinfer plugin is open source. You can find where this error is raised in NvDsInferContextImpl::allocateBuffers in /opt/nvidia/deepstream/deepstream-6.2/sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp, and you can add a log there to check whether the size is zero when the buffer is allocated. In particular, please rebuild the code in /opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvinfer/ and replace the old .so at /opt/nvidia/deepstream/deepstream/lib/gst-plugins/libnvdsgst_infer.so.
I can see where that is being caught now. It seems layerInfo.inferDims.numElements is 0. Do you know why this is the case? Inspecting the ONNX model used for TRT conversion shows static sizes.
When converting the ONNX to TRT, I see this in the output:
[07/25/2023-10:44:41] [I] Created input binding for image with dimensions 1x3x2176x3840
[07/25/2023-10:44:41] [I] Using random values for output scores
[07/25/2023-10:44:41] [I] Created output binding for scores with dimensions 4254264
[07/25/2023-10:44:41] [I] Using random values for output boxes
[07/25/2023-10:44:41] [I] Created output binding for boxes with dimensions 4254259x4
[07/25/2023-10:44:41] [I] Using random values for output labels
[07/25/2023-10:44:41] [I] Created output binding for labels with dimensions 4254259
So it looks like the size is correct when the engine is built. But then I see this when running it in DeepStream: