NvBufSurfTransform - are VIC interpolation methods implemented?

nvbufsurftransform.h has the following interpolation methods that can be used in NvBufSurfTransform(), by setting a value in NvBufSurfTransformParams:

/**
 * Specifies video interpolation methods.
 */
typedef enum
{
  /** Specifies Nearest Interpolation Method interpolation. */
  NvBufSurfTransformInter_Nearest = 0,
  /** Specifies Bilinear Interpolation Method interpolation. */
  NvBufSurfTransformInter_Bilinear,
  /** Specifies GPU-Cubic, VIC-5 Tap interpolation. */
  NvBufSurfTransformInter_Algo1,
  /** Specifies GPU-Super, VIC-10 Tap interpolation. */
  NvBufSurfTransformInter_Algo2,
  /** Specifies GPU-Lanzos, VIC-Smart interpolation. */
  NvBufSurfTransformInter_Algo3,
  /** Specifies GPU-Ignored, VIC-Nicest interpolation. */
  NvBufSurfTransformInter_Algo4,
  /** Specifies GPU-Nearest, VIC-Nearest interpolation. */
  NvBufSurfTransformInter_Default
} NvBufSurfTransform_Inter;

However, no matter which method I pick, the result is identical - it looks like bilinear (or possibly nearest; it's hard to tell by eye). I even set transform_filter to a random integer (8) that is not in the enum, and no error was returned. This leads me to the question - is this even implemented?

Hi,
The algorithms are different, and you should see deviations when checking the byte values. Are you running upscaling or downscaling in your use case? Please also share your release version:

$ head -1 /etc/nv_tegra_release

R32 (release), REVISION: 6.1, GCID: 27863751, BOARD: t186ref, EABI: aarch64, DATE: Mon Jul 26 19:36:31 UTC 2021

The BSP is from ConnectTech.

My use case is downscaling (2-4x) for feature extraction, where aliasing is detrimental to the algorithm's performance. I don't expect the methods to be flawless, but overlaying the outputs on each other with the difference blend mode in GIMP yields no differences at all.

I run this before images are processed:

// Print a notice to stdout if an NvBufSurf* call returns a non-zero status
#define NVBUF_ERROR_COUT_NOTICE(name, status) { if ((status) != 0) { std::cout << fmt::format(name " failed with: {:d}", status) << std::endl; } }
void configureNvBufSurfTransform() {
	NvBufSurfTransformConfigParams sp;
	memset(&sp, 0, sizeof(NvBufSurfTransformConfigParams));
	sp.compute_mode = NvBufSurfTransformCompute_VIC;

	auto status = NvBufSurfTransformSetSessionParams(&sp);
	NVBUF_ERROR_COUT_NOTICE("NvBufSurfTransformSetSessionParams", status);
}

The transform part (scales contents of inputSurface (1920x1080, GRAY8) to fill intermediateSurface (480x270, GRAY8)):

// Scale down
		{
			NvBufSurfTransformParams tp;

			memset(&tp, 0, sizeof tp);

			NvBufSurfTransformRect tr_input;
			tr_input.height = inputSurface->surfaceList[0].height;
			tr_input.width = inputSurface->surfaceList[0].width;
			tr_input.top = tr_input.left = 0;
			tp.src_rect = &tr_input;

			NvBufSurfTransformRect tr_intermediate;
			tr_intermediate.height = intermediateSurface->surfaceList[0].height;
			tr_intermediate.width = intermediateSurface->surfaceList[0].width;
			tr_intermediate.top = tr_intermediate.left = 0;
			tp.dst_rect = &tr_intermediate;

			tp.transform_flip = NvBufSurfTransform_None;
			tp.transform_filter = (NvBufSurfTransform_Inter)8; // No error here. I'd like to use, for example, NvBufSurfTransformInter_Algo2.

			auto status = NvBufSurfTransform(inputSurface, intermediateSurface, &tp);
			NVBUF_ERROR_COUT_NOTICE("NvBufSurfTransform", status);
		}

I tried setting all bits of transform_flag to 1, but it had no effect. I'm not sure what that field is for.

nvidia_nvbufsurftransform_minimal_working_example.zip (3.1 MB)

I made a minimal sample application that demonstrates the issue (requires OpenCV). A sample image is included.

The results are always the same regardless of image format (tried GRAY8 and RGBA), with both downscaling and upscaling.

Tried the same on CUDA, by modifying session settings:

	NvBufSurfTransformConfigParams sp;
	memset(&sp, 0, sizeof(NvBufSurfTransformConfigParams));
	sp.compute_mode = NvBufSurfTransformCompute_GPU;

The results are correct - the images differ from one another, and some algorithms produce smooth filtering.

Hopefully you can provide a workaround/fix for the VIC method or any other alternative method that doesn’t take up CUDA resources.

Hi,
Please share how to overcome this error:

CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
NVBUFSURFACE
    linked by target "nvbufsurf_example" in directory /home/nvidia/topic_211690
NVBUFSURFACETRANSFORM
    linked by target "nvbufsurf_example" in directory /home/nvidia/topic_211690

-- Configuring incomplete, errors occurred!

In CMakeLists.txt, change the paths after "HINTS" to the directory where libnvbufsurface.so and libnvbufsurftransform.so are located. This should be in the DeepStream installation folder.

find_library(NVBUFSURFACE          nvbufsurface           HINTS /opt/nvidia/deepstream/deepstream-5.1/lib)
find_library(NVBUFSURFACETRANSFORM nvbufsurftransform     HINTS /opt/nvidia/deepstream/deepstream-5.1/lib)

You can also try "/opt/nvidia/deepstream/deepstream/lib" - the unversioned path is a symlink, so it should work regardless of which DeepStream version you have:

find_library(NVBUFSURFACE          nvbufsurface           HINTS /opt/nvidia/deepstream/deepstream/lib)
find_library(NVBUFSURFACETRANSFORM nvbufsurftransform     HINTS /opt/nvidia/deepstream/deepstream/lib)

The commands I used to compile it:

cd <the project folder>
cmake .
make
./nvbufsurf_example

The program reads pattern.png from the current folder and saves scaled images (result0.png, result1.png, … ). You should be able to understand the program just by looking at main() in main.cpp.

Hi,
Please add this line and try again:

tp.transform_flag |= NVBUFSURF_TRANSFORM_FILTER;
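For context, here is where that line fits into the transform setup from earlier in the thread. The flag names come from nvbufsurftransform.h; the comment about the crop flags is an assumption based on the header, not something confirmed in this thread:

```cpp
NvBufSurfTransformParams tp;
memset(&tp, 0, sizeof tp);
// ... src_rect / dst_rect setup as before ...

tp.transform_flip   = NvBufSurfTransform_None;
tp.transform_filter = NvBufSurfTransformInter_Algo2;

// Without this bit in transform_flag, transform_filter is ignored
// and the default filter is used:
tp.transform_flag |= NVBUFSURF_TRANSFORM_FILTER;

// Presumably the same applies to the rectangles - the header also
// defines crop flags for src_rect/dst_rect:
tp.transform_flag |= NVBUFSURF_TRANSFORM_CROP_SRC | NVBUFSURF_TRANSFORM_CROP_DST;
```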
