NvBufSurfTransform failed with error -3 while converting buffer1

Please provide complete information as applicable to your setup.

• Hardware Platform: GPU
• DeepStream Version: 7.1
• TensorRT Version: 10.3.0.26-1+cuda12.5
• NVIDIA GPU Driver Version: 12.7
• Issue Type: bugs
• How to reproduce the issue? (This is for bugs. Include which sample app is being used, the configuration file contents, the command line used, and other details needed to reproduce the issue.)

I have two nvinferserver components: the first one outputs detection boxes, and the second one performs face recognition on the boxes output by the first. When my face is high enough in the camera image (close to the top of the frame), the error in the title is printed.

• Requirement details (This is for new requirements. Include the module name, i.e. for which plugin or which sample application, and the function description.)

In the following code (where the NvBufSurfTransformConfigParams are set up), I added a debug print and found that the source crop rectangle passed to the transform is invalid:

```cpp
// /opt/nvidia/deepstream/deepstream-7.1/sources/libs/nvdsinferserver/infer_preprocess.cpp, around line 475
NvDsInferStatus CropSurfaceConverter::resizeBatch(SharedBatchBuf &src,
                                                  SharedBatchBuf &dst) {
  assert(m_ConverStream);
  InferDebug("NetworkPreprocessor id:%d resize batch buffer", uniqueId());

  int devId = dst->getBufDesc().devId;
  RETURN_CUDA_ERR(
      cudaSetDevice(devId),
      "CropSurfaceConverter failed to set cuda device(%d) during resize "
      "batch",
      devId);

  NvBufSurfTransformConfigParams configParams{m_ComputeHW, devId,
                                              m_ConverStream->ptr()};
  std::shared_ptr<BatchSurfaceBuffer> srcSurface =
      std::static_pointer_cast<BatchSurfaceBuffer>(src);
  assert(srcSurface);
  uint32_t frameCount = srcSurface->getBatchSize();
  assert(frameCount <= m_MaxBatchSize);
  std::shared_ptr<SurfaceBuffer> dstSurface =
      std::static_pointer_cast<SurfaceBuffer>(dst);
  assert(dstSurface);
  assert(frameCount <= dstSurface->getReservedSize());
  dstSurface->setBatchSize(frameCount);
  NvBufSurface *nvDstBuf = dstSurface->getBufSurface();
  NvBufSurfTransformSyncObj_t syncObj = nullptr;

  for (uint32_t i = 0; i < frameCount; ++i) {
    NvBufSurfTransformRect rect = srcSurface->getCropArea(i);

    // debug print I added to inspect the crop rect
    printf("NvBufSurfTransformRect l: %u, t: %u, w: %u, h: %u\n", rect.left, rect.top,
           rect.width, rect.height);

    int srcL = INFER_ROUND_UP(rect.left, 2);
    int srcT = INFER_ROUND_UP(rect.top, 2);
    int srcW = INFER_ROUND_DOWN(rect.width, 2);
    int srcH = INFER_ROUND_DOWN(rect.height, 2);
    m_TransformParam.src_rect[i].left = srcL;
    m_TransformParam.src_rect[i].top = srcT;
    m_TransformParam.src_rect[i].width = srcW;
    m_TransformParam.src_rect[i].height = srcH;
    int dstW = m_DstWidth, dstH = m_DstHeight;
    if (m_MaintainAspectRatio) {
      double hdest = m_DstWidth * srcH / (double)srcW;
      double wdest = m_DstHeight * srcW / (double)srcH;

      if (hdest <= m_DstHeight) {
        dstH = hdest;
      } else {
        dstW = wdest;
      }
    }
```

Output:

```text
NvBufSurfTransformRect l: 1060, t: 4294967287, w: 62, h: 51
```
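That top value looks like a negative coordinate that has wrapped around: the NvBufSurfTransformRect fields are uint32_t, and 4294967287 is exactly what -9 becomes when stored in an unsigned 32-bit field. A minimal illustration:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
  // A crop top of -9 (9 pixels above the frame) stored into the
  // unsigned NvBufSurfTransformRect::top field wraps around.
  int top = -9;
  uint32_t wrapped = static_cast<uint32_t>(top);
  std::printf("%u\n", wrapped);  // prints 4294967287
  return 0;
}
```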

The gst-nvinferserver plugin is open source, so you can debug with the source code to find out whether the wrong src rect comes from the PGIE bounding boxes.
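One way to check is to attach a buffer probe on the src pad of the first nvinferserver (the PGIE) and print the object rectangles before they reach the second model. This is only a rough sketch; the function name and where you attach the probe are placeholders:

```cpp
#include <gst/gst.h>
#include "gstnvdsmeta.h"

// Prints every detected object's rectangle attached by the PGIE, so that
// boxes with negative or out-of-range coordinates can be spotted.
static GstPadProbeReturn
pgie_src_pad_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  NvDsBatchMeta *batch_meta = gst_buffer_get_nvds_batch_meta (buf);
  if (!batch_meta)
    return GST_PAD_PROBE_OK;

  for (NvDsMetaList *l_frame = batch_meta->frame_meta_list; l_frame;
       l_frame = l_frame->next) {
    NvDsFrameMeta *frame_meta = (NvDsFrameMeta *) l_frame->data;
    for (NvDsMetaList *l_obj = frame_meta->obj_meta_list; l_obj;
         l_obj = l_obj->next) {
      NvDsObjectMeta *obj = (NvDsObjectMeta *) l_obj->data;
      g_print ("frame %d obj: left=%.1f top=%.1f w=%.1f h=%.1f\n",
          frame_meta->frame_num, obj->rect_params.left, obj->rect_params.top,
          obj->rect_params.width, obj->rect_params.height);
    }
  }
  return GST_PAD_PROBE_OK;
}
```

Attach it with gst_pad_add_probe (pgie_src_pad, GST_PAD_PROBE_TYPE_BUFFER, pgie_src_pad_probe, NULL, NULL); any object whose coordinates print as negative, or whose left/top plus width/height exceed the frame size, is already out of range before the transform runs.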

In my first nvinferserver component, I enlarge each detected bounding box by 15% so that the second model has more context for face recognition. However, the custom_parse_bbox_func callback does not give me the original frame size, so I cannot clamp the enlarged box to the image boundaries; when the box extends past the edge of the frame, the coordinates go negative. Is there any other method to achieve this?
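For reference, this is roughly the enlarge-and-clamp I am trying to do inside the custom parser (enlargeAndClamp is just an illustrative helper, not existing code). The only dimensions available there are the network input dimensions from NvDsInferNetworkInfo, and I am not sure whether clamping against those is the right thing to do:

```cpp
#include <algorithm>
#include "nvdsinfer_custom_impl.h"

// Illustrative helper: enlarge a parsed box by `scale` (e.g. 1.15 for +15%)
// around its center and clamp it to the network input dimensions that the
// custom parser receives via NvDsInferNetworkInfo.
static void
enlargeAndClamp (NvDsInferObjectDetectionInfo &obj,
                 const NvDsInferNetworkInfo &netInfo, float scale)
{
  float cx = obj.left + obj.width / 2.0f;
  float cy = obj.top + obj.height / 2.0f;
  float w = obj.width * scale;
  float h = obj.height * scale;

  float left = std::max (0.0f, cx - w / 2.0f);
  float top = std::max (0.0f, cy - h / 2.0f);
  float right = std::min ((float) netInfo.width, cx + w / 2.0f);
  float bottom = std::min ((float) netInfo.height, cy + h / 2.0f);

  obj.left = left;
  obj.top = top;
  obj.width = right - left;
  obj.height = bottom - top;
}
```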

What kind of enlarging? To include more background around the detected face?

There has been no update from you for a while, so we are assuming this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.