NvBufSurfTransform failed with error -2 while converting buffer

Please provide complete information as applicable to your setup.

• Hardware Platform (Jetson / GPU): Jetson Nano
• DeepStream Version: 6.0
• JetPack Version (valid for Jetson only): 4.6.1
• TensorRT Version: 8.2.1
• NVIDIA GPU Driver Version (valid for GPU only):
• Issue Type (questions, new requirements, bugs): questions
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)

Hi,
I intended to deploy RetinaFace (a face detection model) in DeepStream using deepstream-app.
I built the .engine file and wrote a custom parser for it. I think everything is correct, but when I run the application:
octoaiz@octoaiz-desktop:~/retinaface-v0$ deepstream-app -c retina.txt

Using winsys: x11
0:00:00.958084553 7815 0x7f40001f80 WARN nvinfer gstnvinfer.cpp:635:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Warning from NvDsInferContextImpl::initialize() <nvdsinfer_context_impl.cpp:1161> [UID = 1]: Warning, OpenCV has been deprecated. Using NMS for clustering instead of cv::groupRectangles with topK = 20 and NMS Threshold = 0.5
0:00:12.578942403 7815 0x7f40001f80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1900> [UID = 1]: deserialized trt engine from :/home/octoaiz/retinaface-v0/retina2.engine
INFO: [Implicit Engine Info]: layers num: 4
0 INPUT kFLOAT input 3x1x1
1 OUTPUT kFLOAT bbox 6x4
2 OUTPUT kFLOAT landmark 6x10
3 OUTPUT kFLOAT confidence 6x2

0:00:12.582081549 7815 0x7f40001f80 INFO nvinfer gstnvinfer.cpp:638:gst_nvinfer_logger:<primary_gie> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:2004> [UID = 1]: Use deserialized engine model: /home/octoaiz/retinaface-v0/retina2.engine
0:00:12.621700444 7815 0x7f40001f80 INFO nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary_gie> [UID 1]: Load new model:/home/octoaiz/retinaface-v0/retina_config.txt sucessfully

Runtime commands:
h: Print this help
q: Quit

p: Pause
r: Resume

NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
To go back to the tiled display, right-click anywhere on the window.

**PERF: FPS 0 (Avg)
**PERF: 0.00 (0.00)
** INFO: <bus_callback:194>: Pipeline ready

Opening in BLOCKING MODE
NvMMLiteOpen : Block : BlockType = 261
NVMEDIA: Reading vendor.tegra.display-size : status: 6
NvMMLiteBlockCreate : Block : BlockType = 261
** INFO: <bus_callback:180>: Pipeline running

/dvs/git/dirty/git-master_linux/nvutils/nvbufsurftransform/nvbufsurftransform.cpp:3885: => VIC Configuration failed image scale factor exceeds 16, use GPU for Transformation
0:00:13.995581001 7815 0x30426800 WARN nvinfer gstnvinfer.cpp:1376:convert_batch_and_push_to_input_thread:<primary_gie> error: NvBufSurfTransform failed with error -2 while converting buffer
ERROR from primary_gie: NvBufSurfTransform failed with error -2 while converting buffer
Debug info: /dvs/git/dirty/git-master_linux/deepstream/sdk/src/gst-plugins/gst-nvinfer/gstnvinfer.cpp(1376): convert_batch_and_push_to_input_thread (): /GstPipeline:pipeline/GstBin:primary_gie_bin/GstNvInfer:primary_gie
Quitting
App run failed
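A quick sanity check on the numbers in the log above: the engine reports its input as 3x1x1, while [streammux] outputs 640x640, so nvinfer asks VIC to downscale by a factor of 640 per dimension, far beyond the 16x limit named in the error message. A minimal sketch of that arithmetic (all values taken from the log; the 16x limit is as quoted in the error):

```python
# Values reported in the log above.
mux_w, mux_h = 640, 640   # [streammux] width/height from retina.txt
net_w, net_h = 1, 1       # "0 INPUT kFLOAT input 3x1x1" from the engine info

VIC_MAX_SCALE = 16        # limit quoted in the nvbufsurftransform error

scale_w = mux_w / net_w   # 640.0
scale_h = mux_h / net_h   # 640.0

# The requested downscale factor far exceeds what VIC will accept,
# which is why NvBufSurfTransform fails with error -2.
print(scale_w, scale_h, scale_w > VIC_MAX_SCALE)
```

This points at the likely root cause: a face detector should not have a 1x1 spatial input, so the engine was most probably built from an ONNX model whose input shape was never set.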

My configs are below.
retina.txt, the app config:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=5
#gie-kitti-output-dir=streamscl

[tiled-display]
enable=1
rows=2
columns=2
width=640
height=640
gpu-id=0
#(0): nvbuf-mem-default - Default memory allocated, specific to particular platform
#(1): nvbuf-mem-cuda-pinned - Allocate Pinned/Host cuda memory, applicable for Tesla
#(2): nvbuf-mem-cuda-device - Allocate Device cuda memory, applicable for Tesla
#(3): nvbuf-mem-cuda-unified - Allocate Unified cuda memory, applicable for Tesla
#(4): nvbuf-mem-surface-array - Allocate Surface Array memory, applicable for Jetson
nvbuf-memory-type=0

[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=2
uri=file://medium_0n69.mp4
num-sources=1
drop-frame-interval=1
gpu-id=0

#(0): memtype_device - Memory type Device
#(1): memtype_pinned - Memory type Host Pinned
#(2): memtype_unified - Memory type Unified
cudadec-memtype=0

[sink0]
enable=1
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[sink1]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=4
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000

# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400

[sink2]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming
type=3
#1=h264 2=h265
codec=1
sync=0
bitrate=4000000
container=1
output-file=./output.mp4

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;1
nvbuf-memory-type=0

[streammux]
gpu-id=0
##Boolean property to inform muxer that sources are live
#live-source=0
batch-size=1
##time out in usec, to wait after the first buffer is available
##to push the batch even if the complete batch is not formed
#batched-push-timeout=40000

## Set muxer output width and height

width=640
height=640
#width=640
#height=640
#enable to maintain aspect ratio wrt source, and allow black borders, works
##along with width, height properties
enable-padding=1
#nvbuf-memory-type=0

# config-file property is mandatory for any gie section.
# Other properties are optional and if set will override the properties set in
# the infer config file.

[primary-gie]
enable=1
gpu-id=0
#model-engine-file=model_b1_fp32.engine
#labelfile-path=retinaface/labels.txt
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
nvbuf-memory-type=0

config-file=retina_config.txt

[tests]
file-loop=0

And retina_config.txt, the pgie config:
[property]

gpu-id=0
#0=RGB, 1=BGR
model-color-format=1
net-scale-factor=0.0039215697906911373
#onnx-file=FaceDetector_simplified.onnx
model-engine-file=retina2.engine
labelfile-path=labels.txt

process-mode=1

#0=FP32, 1=INT8, 2=FP16 mode
network-mode=2
gie-unique-id=1
network-type=0
#output-blob-names=prob

#0=Group Rectangles, 1=DBSCAN, 2=NMS, 3=DBSCAN+NMS Hybrid, 4=None (no clustering)
#cluster-mode=2
#maintain-aspect-ratio=1
batch-size=1
num-detected-classes=1
#output-tensor-meta=1
operate-on-gie-id=1

# custom detection parser
parse-bbox-func-name=NvDsInferParseCustomRetinaface
custom-lib-path=///home/octoaiz/retinaface-v0/custom_parser/libnvdsinfer_our_custom_impl_retinaface.so
#offsets=104.0;117.0;123.0
#force-implicit-batch-dim=1

# number of consecutive batches to skip for inference
interval=2
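Given the 1x1 input reported in the engine info, a likely fix is to rebuild the engine from the ONNX model with an explicit input shape. When building through nvinfer, the infer-dims property can pin the input dimensions of a dynamic-shape ONNX; the values below are placeholders, not the model's real input size:

```ini
[property]
# hypothetical values; replace 3;480;640 with the model's actual C;H;W input
onnx-file=FaceDetector_simplified.onnx
infer-dims=3;480;640
```

Separately, the error message itself suggests "use GPU for Transformation"; if this DeepStream version supports it, scaling-compute-hw=1 in the [property] section forces the pre-processing scale onto the GPU instead of VIC, though that would only mask the underlying 1x1 input problem.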

• Requirement details (This is for a new requirement. Include the module name, for which plugin or sample application, and the function description.)

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.

For this error, please refer to this FAQ.

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.