Failed with error -3 while converting buffer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): GPU, RTX 3090
• DeepStream Version: 5.1
• JetPack Version (valid for Jetson only):
• TensorRT Version: 7.2.2.3
• NVIDIA GPU Driver Version (valid for GPU only): 470.63, CUDA 11.1
• Issue Type( questions, new requirements, bugs)
• How to reproduce the issue ? (This is for bugs. Including which sample app is using, the configuration files content, the command line used and other details for reproducing)
• Requirement details( This is for new requirement. Including the module name-for which plugin or for which sample application, the function description)

I run deepstream-test3.py. My pipeline has a detector and a classifier: I classify the boxes produced by the detector (first detect, then classify). I get this error when a frame contains two or more objects, but with a single object in the frame it works fine. If I turn off the classifier model, the detector alone works fine with multiple objects (detects them without any problem). The classifier engine and the detector engine have the same batch_size.

0:02:34.202632301 269 0x2fc1050 WARN nvinfer gstnvinfer.cpp:1277:convert_batch_and_push_to_input_thread: error: NvBufSurfTransform failed with error -3 while converting buffer
Error: gst-stream-error-quark: NvBufSurfTransform failed with error -3 while converting buffer (1): gstnvinfer.cpp(1277): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:secondary2-nvinference-engine

Please refer to the deepstream-test2 sample for multiple models (PGIE + SGIEs): deepstream_python_apps/apps/deepstream-test2 at master · NVIDIA-AI-IOT/deepstream_python_apps (github.com)

What if I have two detectors and one classifier? My pipeline: input frame -> first detector -> output (bboxes) -> second detector -> output (bboxes) -> classifier.

Please refer to NVIDIA-AI-IOT/deepstream_lpr_app: Sample app code for LPR deployment on DeepStream (github.com), which uses two detectors and one classifier.

Creating a Real-Time License Plate Detection and Recognition App | NVIDIA Developer Blog

I changed the config file and then ran ./deepstream-lpr-app. I still get the same error when the number of objects increases: with one object in the frame it works fine, but with two or more it fails with the error below.
Error: gst-stream-error-quark: NvBufSurfTransform failed with error -3 while converting buffer (1): gstnvinfer.cpp(1277): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline0/GstNvInfer:secondary2-nvinference-engine

where secondary2-nvinference-engine is the classifier.

Can the original deepstream-lpr-app with Nvidia pretrained models work?

Yes, the original works. I changed the engine paths, the labels, and so on. I think the problem is the shape of the bounding boxes: the second detector (YOLOv5) generates a bounding box that the classifier cannot consume properly (the pretrained EfficientNet-B0 input size is 224x224). The second detector's post-processing does not match the classifier's preprocessing. What if a bounding box is larger than 224x224, or its coordinates are invalid (e.g. take negative values)?
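If invalid coordinates are indeed the cause, one common workaround (not from this thread; a hedged sketch) is to clamp the detector's output rectangles before the SGIE consumes them. The clamping itself is plain arithmetic; `clamp_rect` below is a hypothetical helper. In a real app it would run inside a pad probe using the `pyds` bindings, which is only described in a comment here, not shown.

```python
def clamp_rect(left, top, width, height, frame_w, frame_h):
    """Clamp a bbox so it stays inside the frame with a positive size.

    Negative left/top are snapped to 0, and width/height are shrunk so the
    box never extends past the frame boundary (a frequent cause of
    NvBufSurfTransform failures on object crops).
    """
    left = max(0.0, min(left, frame_w - 1))
    top = max(0.0, min(top, frame_h - 1))
    width = max(1.0, min(width, frame_w - left))
    height = max(1.0, min(height, frame_h - top))
    return left, top, width, height

# In a DeepStream Python app this would run in a sink-pad buffer probe on
# the classifier SGIE: iterate frame_meta.obj_meta_list via pyds and
# rewrite each obj_meta.rect_params with the clamped values (omitted here).
```

This does not change what the detector outputs; it only guarantees the crop rectangle handed to the transform stage is valid.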

I found this in the forum: NvBufSurfTransform failed with error -3 while converting buffer. Someone had the same issue. How do I set limits?

  1. I don’t know whether you have configured your models correctly with nvinfer.
  2. Do you mean your first detector can output wrong bboxes?
  3. When a bbox is larger than 224x224, the nvinfer plugin will resize the crop to the model input size. It is important to configure nvinfer correctly. Please refer to the deepstream-lpr-app config files.
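The "limits" asked about above can be expressed in the SGIE's nvinfer config file: Gst-nvinfer supports `input-object-min-width` / `input-object-min-height` (and matching `max` properties) to skip objects outside a size range, and `operate-on-gie-id` to select which detector's output the classifier consumes. A minimal sketch, assuming the second detector has gie-id 2 (the paths and IDs are placeholders, not taken from this thread):

```ini
[property]
gpu-id=0
process-mode=2            ; 2 = secondary mode, operate on detected objects
operate-on-gie-id=2       ; consume only the second detector's objects
network-type=1            ; 1 = classifier
input-object-min-width=16 ; skip tiny crops that can break scaling
input-object-min-height=16
; model-engine-file=<path to your EfficientNet-B0 classifier engine>
```

Objects smaller than the minimum are simply not sent to the classifier, which avoids transforms on degenerate crops.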

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.