• Hardware Platform (Jetson / GPU): All of the above.
• DeepStream Version: 5.0
• JetPack Version (valid for Jetson only): 4.4
• TensorRT Version: 7.1
• NVIDIA GPU Driver Version (valid for GPU only):
This is a feature request, not a bug. Could box interpolation possibly be added to the next release of the NvBufSurfTransform API? I am making a perceptual image hashing plugin, and every pixel in the source image needs to contribute to the hash. Linear, nearest, and other methods throw out too many pixels when scaling down to something as small as 8×8, so box interpolation would be very useful.
If this isn’t useful (and I’d understand why), please let me know and I will roll my own thing in CUDA instead.
Sorry for the late response!
Trying to understand its use case and how to implement it. On the dGPU platform, the solution must be on CUDA/GPU; on Jetson, we will check whether it's doable with VIC.
I think the above link is unrelated. What I’m thinking of is this:
So the issue is that if I scale 1920×1080 down to 8×8 with linear/cubic/whatever, it's going to sample only a few of the pixels, and for my purposes I need all of the pixels sampled, or the image hash could be wildly incorrect in some cases.
So the idea is:
Chop the image up into an 8×8 grid of boxes (or whatever size).
For each box, the corresponding output pixel is the quantized mean of all pixels in that box.
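To make the request concrete, here is a minimal pure-Python sketch of the box (area) interpolation described above, assuming a grayscale image stored as a list of rows of integer pixel values. A real implementation would of course live in CUDA (or on the VIC); this is just to pin down the semantics:

```python
# Box-interpolation downsampling: every source pixel contributes to
# exactly one output cell, so no pixel is thrown away.
# Pure-Python illustration only; a production version would run on the GPU.

def box_downsample(image, out_w, out_h):
    """Average each source box into one output pixel (area interpolation)."""
    src_h = len(image)
    src_w = len(image[0])
    out = [[0] * out_w for _ in range(out_h)]
    for oy in range(out_h):
        # Rows of the source covered by this output row.
        y0 = oy * src_h // out_h
        y1 = (oy + 1) * src_h // out_h
        for ox in range(out_w):
            # Columns of the source covered by this output column.
            x0 = ox * src_w // out_w
            x1 = (ox + 1) * src_w // out_w
            total = 0
            for y in range(y0, y1):
                for x in range(x0, x1):
                    total += image[y][x]
            # Integer mean over the whole box.
            out[oy][ox] = total // ((y1 - y0) * (x1 - x0))
    return out
```

For example, scaling a 4×4 image down to 2×2 averages each 2×2 quadrant into one output pixel.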
I don't imagine it's super useful outside of my particular use case. My intent is to use the result to detect changes in portions of the image between frames and use that to optimize the rest of the pipeline. For example, if a previous frame (or a portion of it) is more or less identical to the current one, maybe it's possible to skip some work, and when work does need to be done, maybe it's possible to do it only for that portion of the image (and perhaps its neighbors).
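To sketch what that change detection could look like: quantize the per-box means so small noise doesn't register as a change, then diff the quantized values between consecutive frames to find which regions still need work. Everything here (function names, 16 quantization levels) is a hypothetical illustration, not the plugin's actual API:

```python
# Hypothetical sketch of the frame-change-detection idea built on top
# of the per-box means described above.

def quantize(means, levels=16, max_val=255):
    """Quantize block means so minor noise doesn't count as a change."""
    return [[v * levels // (max_val + 1) for v in row] for row in means]

def changed_blocks(prev_hash, curr_hash):
    """Return (row, col) indices of blocks whose quantized mean differs."""
    return [
        (r, c)
        for r, row in enumerate(curr_hash)
        for c, v in enumerate(row)
        if prev_hash[r][c] != v
    ]
```

A pipeline could then skip downstream work entirely when `changed_blocks` is empty, or restrict it to the changed regions (plus neighbors) otherwise.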