DeepStream 5.0 patches

This topic collects official DeepStream (DS) patches for some known issues.

1. What’s the issue
For some models that use deconvolution in INT8 mode, you may see the following error:

   ERROR: ../nvdsinfer/nvdsinfer_func_utils.cpp:33 [TRT]: ../rtSafe/cuda/caskDeconvolutionRunner.cpp (293) - Cask Error in execute: 2 (Cask Deconvolution execution)

2. How to apply the patch

  • Run git apply ds5_ga_enqueue_full_batch.diff under /opt/nvidia/deepstream/deepstream-5.0/
  • Recompile and reinstall the nvinfer plugin and the nvdsinfer library
  • Add enqueue-full-batch=1 to the nvinfer config file (see the sketch after this list)
    ds5_enqueue_full_batch.zip (4.5 KB)
    Note: ds5_ga_enqueue_full_batch.diff is for DS 5.0 GA; ds5_dp_enqueue_full_batch.diff is for DS 5.0 DP.
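On a default DeepStream 5.0 GA install, the steps above might look like the sketch below. The source paths and the CUDA_VER value are assumptions about a standard setup, not part of the official patch notes, so adjust them to your install:

cd /opt/nvidia/deepstream/deepstream-5.0/
git apply ds5_ga_enqueue_full_batch.diff   # use ds5_dp_enqueue_full_batch.diff on DS 5.0 DP

# rebuild and reinstall the nvdsinfer library
cd sources/libs/nvdsinfer
make CUDA_VER=10.2 && sudo make install CUDA_VER=10.2

# rebuild and reinstall the gst-nvinfer plugin
cd ../../gst-plugins/gst-nvinfer
make CUDA_VER=10.2 && sudo make install CUDA_VER=10.2

The new key then goes in the [property] section of the nvinfer configuration file, next to your existing keys:

[property]
# existing model/network keys stay unchanged
enqueue-full-batch=1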

1. What’s the issue

For ONNX models with fixed input dimensions, you may see the following error. The engine has a fixed batch dimension (4 in the log below), so when the pipeline submits a smaller batch (here 3 frames, e.g. a final partial batch), TensorRT rejects the request as out of the engine's [min, max] profile range:

WARNING: nvdsinfer_backend.cpp:162 Backend context bufferIdx(0) request dims:3x3x320x512 is out of range, [min: 4x3x320x512, max: 4x3x320x512]
ERROR: nvdsinfer_backend.cpp:425 Failed to enqueue buffer in fulldims mode because binding idx: 0 with batchDims: 3x3x320x512 is not supported
ERROR: nvdsinfer_context_impl.cpp:1532 Infer context enqueue buffer failed, nvinfer error:NVDSINFER_INVALID_PARAMS
0:00:19.639248438   184 0x564871e0b140 WARN                 nvinfer gstnvinfer.cpp:1216:gst_nvinfer_input_queue_loop:<primary_gie> error: Failed to queue input batch for inferencing
ERROR from primary_gie: Failed to queue input batch for inferencing
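This only applies when the ONNX model's input has a fixed, non-dynamic batch dimension. Not part of the patch, but a quick way to confirm that before patching is to print the model's input shape, e.g. with the onnx Python package (model.onnx is a placeholder path):

python3 - <<'EOF'
import onnx
m = onnx.load("model.onnx")  # placeholder: path to your ONNX model
for inp in m.graph.input:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # a fixed integer batch dim (e.g. 4) triggers this issue
EOF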

2. How to apply the patch

Apply the following patch, then rebuild and reinstall the nvdsinfer library (a build sketch follows the diff).

diff --git a/nvdsinfer_backend.cpp b/nvdsinfer_backend.cpp
index 029b025..9f8a011 100644
--- a/nvdsinfer_backend.cpp
+++ b/nvdsinfer_backend.cpp
@@ -417,6 +417,9 @@ FullDimTrtBackendContext::enqueueBuffer(
         NvDsInferBatchDims batchDims = buffer->getBatchDims(iL);
         assert(batchDims.batchSize == buffer->getBatchDims(0).batchSize);
+        //fix for onnx model which has fixed input dimensions
+        if (batchDims.batchSize < m_AllLayers[iL].profileDims[kSELECTOR_MIN].batchSize)
+            batchDims.batchSize = m_AllLayers[iL].profileDims[kSELECTOR_MIN].batchSize;
         if (!canSupportBatchDims(iL, batchDims))
         {
             dsInferError(
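For reference, the added check pads a partial batch up to the engine's minimum-profile batch size, so the enqueue always uses the fixed batch the engine was built with; the extra output slots beyond the real frames are presumably just ignored downstream. A sketch of applying it, assuming the nvdsinfer sources sit in the default location of a DeepStream 5.0 install (the patch file name and CUDA_VER are placeholders):

cd /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer
# save the diff above as fixed_dims.patch (example name), then:
git apply fixed_dims.patch
make CUDA_VER=10.2 && sudo make install CUDA_VER=10.2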