• Hardware Platform: GPU
• DeepStream Version: 6.2
• TensorRT Version: 8.6.1
• Issue Type: question
Hi,
I am observing a visual inconsistency issue when processing image regions using NvBufSurfTransform together with CV-CUDA Resize, and I would like to confirm whether this behavior is expected.
My workflow is:

1. From the source image, crop a rectangular region with NvBufSurfTransform and resize it to 256×256.
2. Resize the processed region back to its original size with cvcuda::Resize.
3. Paste the restored region back into the same rectangle of the original image.
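For clarity, here is a minimal sketch of this round trip. It assumes packed RGBA surfaces, a single-buffer batch, and a hypothetical zero-copy helper wrapSurfaceAsTensor() that exposes an NvBufSurface's device memory as an nvcv::Tensor; session setup and error handling are omitted.

```cpp
#include <cuda_runtime.h>

#include <nvbufsurface.h>
#include <nvbufsurftransform.h>

#include <cvcuda/OpResize.hpp>
#include <nvcv/Tensor.hpp>

// Hypothetical zero-copy helper: exposes the surface's device memory as an
// nvcv::Tensor (the real glue depends on layout/pitch and is omitted here).
nvcv::Tensor wrapSurfaceAsTensor(NvBufSurface *surf);

void roundTrip(NvBufSurface *frame,      // full source frame
               NvBufSurface *crop256,    // pre-allocated 256x256 surface
               NvBufSurface *restored,   // pre-allocated rect-sized surface
               NvBufSurfTransformRect rect,
               cudaStream_t stream)
{
    // Stage 1: crop `rect` out of the frame and scale to 256x256 in one call.
    NvBufSurfTransformRect dst256 = {0, 0, 256, 256};
    NvBufSurfTransformParams params = {};
    params.transform_flag   = NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_FILTER;
    params.transform_filter = NvBufSurfTransformInter_Bilinear;
    params.src_rect = &rect;
    params.dst_rect = &dst256;
    NvBufSurfTransform(frame, crop256, &params);

    // Stage 2: scale the 256x256 region back to rect.width x rect.height
    // with CV-CUDA's Resize operator.
    cvcuda::Resize resize;
    resize(stream, wrapSurfaceAsTensor(crop256), wrapSurfaceAsTensor(restored),
           NVCV_INTERP_LINEAR);

    // Stage 3: paste the restored region into the frame at `rect`.
    // Without NVBUFSURF_TRANSFORM_CROP_SRC the src_rect field is ignored.
    params.transform_flag = NVBUFSURF_TRANSFORM_CROP_DST | NVBUFSURF_TRANSFORM_FILTER;
    params.src_rect = nullptr;
    params.dst_rect = &rect;
    NvBufSurfTransform(restored, frame, &params);
}
```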
What I observe:
When I compare the restored regions produced from two different crop rectangles, the visual difference is noticeable, even when the rectangles differ by only a few pixels.
These small differences produce visible artifacts when the results are encoded into a video stream, making the playback look slightly unstable or “jumpy.”
I would like to know if this kind of output variation is expected when using NvBufSurfTransform together with CV-CUDA resize operations, or if there are known recommendations for this kind of workflow.
For reference, I tested the same two-stage resize process on the CPU with OpenCV's cv::resize (sketch below).
The visual jitter was extremely small, almost unnoticeable.
With NvBufSurfTransform + CV-CUDA Resize, however, the jitter between nearby crop regions is much more visible.
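The CPU reference test is essentially the following sketch; the image path and rectangle are placeholders.

```cpp
// CPU reference: the same two-stage resize round trip with cv::resize.
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat frame = cv::imread("frame.png");   // source frame (placeholder path)
    cv::Rect rect(100, 80, 180, 140);          // example crop rectangle

    // Stage 1: crop and scale the region to the model input size.
    cv::Mat crop256;
    cv::resize(frame(rect), crop256, cv::Size(256, 256), 0, 0, cv::INTER_LINEAR);

    // Stage 2: scale back to the original rectangle size and paste it back.
    cv::Mat restored;
    cv::resize(crop256, restored, rect.size(), 0, 0, cv::INTER_LINEAR);
    restored.copyTo(frame(rect));

    cv::imwrite("restored.png", frame);
    return 0;
}
```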
Yes. The interpolation algorithm inside NvBufSurfTransform is not aligned with OpenCV's algorithm. Why do you need to use NvBufSurfTransform? What is your final purpose?
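For illustration only, since NvBufSurfTransform's internals are not public: bilinear resizers commonly differ in how they map a destination pixel index back to a source coordinate, and that convention alone shifts every sampled value. A small standalone sketch of three common conventions:

```cpp
#include <cstdio>

int main()
{
    // Map destination x-index dx back to a source coordinate for a
    // 256 -> 180 downscale, under three common bilinear conventions.
    const double srcW = 180.0, dstW = 256.0, scale = srcW / dstW;
    const int samples[] = {0, 128, 255};
    for (int dx : samples) {
        double halfPixel    = (dx + 0.5) * scale - 0.5;           // OpenCV cv::resize
        double simpleScale  = dx * scale;                         // naive scaling
        double alignCorners = dx * (srcW - 1.0) / (dstW - 1.0);   // align-corners
        std::printf("dx=%3d  half-pixel=%8.3f  simple=%8.3f  align-corners=%8.3f\n",
                    dx, halfPixel, simpleScale, alignCorners);
    }
    return 0;
}
```

Because the forward and inverse resizes in this workflow come from different libraries, a convention mismatch would not cancel on the round trip, and the residual shift depends on the crop size, which would be consistent with nearby rectangles restoring differently.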
NvBufSurfTransform is being used because it is part of the nvdspreprocess pipeline.
In my post, I simplified the workflow to describe the visual behavior clearly.
The actual processing chain is based on nvdspreprocess; the crop/resize steps above are a conceptual simplification of what it does internally.
Yes, this affects our nvdspreprocess usage. The cropped and resized output from nvdspreprocess was originally intended to be fed into an inference model. For testing, we removed the model stage and simply resized the processed region back to its original rectangle and pasted it back into the frame. Even in this simplified pipeline, the visual inconsistency still appears. When the crop rectangle changes slightly between frames, the restored regions do not match visually, which results in jitter when encoded into a video stream.
Since the implementation of NvBufSurfTransform is not open source, I am not fully aware of its internal processing logic. Is there any information available regarding how the transform handles cropping + resizing internally? For example, whether it applies pixel alignment, padding, hardware-sampling alignment, or any format-dependent adjustments. Understanding this behavior would help us evaluate why the restored output differs when the crop rectangles change slightly.
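One experiment that might narrow this down on my side: sweep the interpolation modes exposed through transform_filter and diff each output against a cv::resize reference for the same rectangle. This is a sketch reusing the surfaces from my earlier snippet; the comments on the Algo modes reflect my reading of the nvbufsurftransform.h header, not documented guarantees.

```cpp
#include <nvbufsurface.h>
#include <nvbufsurftransform.h>

// Sweep the interpolation modes for one crop rectangle so each output can be
// diffed against a cv::resize reference (dumping/diffing is elided).
void probeFilters(NvBufSurface *frame, NvBufSurface *out256,
                  NvBufSurfTransformRect rect)
{
    const NvBufSurfTransformInter filters[] = {
        NvBufSurfTransformInter_Nearest,
        NvBufSurfTransformInter_Bilinear,
        NvBufSurfTransformInter_Algo1,    // header: GPU cubic / VIC 5-tap
        NvBufSurfTransformInter_Algo2,    // header: GPU super / VIC 10-tap
        NvBufSurfTransformInter_Algo3,    // header: GPU Lanczos / VIC smart
        NvBufSurfTransformInter_Algo4,    // header: VIC nicest
        NvBufSurfTransformInter_Default,
    };

    NvBufSurfTransformRect dst256 = {0, 0, 256, 256};
    for (NvBufSurfTransformInter f : filters) {
        NvBufSurfTransformParams params = {};
        params.transform_flag   = NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_FILTER;
        params.transform_filter = f;
        params.src_rect = &rect;
        params.dst_rect = &dst256;
        NvBufSurfTransform(frame, out256, &params);
        // ...write out256 to disk here, tagged with `f`, then compare offline...
    }
}
```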
nvdspreprocess is used to generate the tensor input for inference models. We just want to know whether any inference cases are impacted by the nvdspreprocess algorithm.