How to customize nvdspreprocess_lib

Hi, I am trying to customize nvdspreprocess_lib following the example of libcustom2d_preprocess.so. After I changed the function in nvdspreprocess_conversion.cu to black out a triangular area, the results were not reflected in the downstream output. __global__ void NvDsPreProcessConvert_CxToP3FloatKernelWithPolygonBlackout is the function that gets called from nvdspreprocess_impl.cpp. I am wondering whether libcustom2d_preprocess.so only modifies the input tensor and does not push the updates downstream.

Below is the code snippet for NvDsPreProcessConvert_CxToP3FloatKernelWithPolygonBlackout:
//start of code
__global__ void NvDsPreProcessConvert_CxToP3FloatKernelWithPolygonBlackout(
    float *outBuffer,
    unsigned char *inBuffer,
    unsigned int width,
    unsigned int height,
    unsigned int pitch,
    unsigned int inputPixelSize,
    float scaleFactor,
    float *meanDataBuffer)
{
    // Define the vertices of the polygon (triangle as an example)
    Point a = {static_cast<int>(width / 4), static_cast<int>(height / 4)};
    Point b = {static_cast<int>(3 * width / 4), static_cast<int>(height / 4)};
    Point c = {static_cast<int>(width / 2), static_cast<int>(3 * height / 4)};

    unsigned int row = blockIdx.y * blockDim.y + threadIdx.y;
    unsigned int col = blockIdx.x * blockDim.x + threadIdx.x;

    if (col < width && row < height)
    {
        Point p = {static_cast<int>(col), static_cast<int>(row)};
        if (isInsideTriangle(p, a, b, c))
        {
            // Blackout the pixel by setting all three planes to zero
            for (unsigned int k = 0; k < 3; k++)
            {
                outBuffer[width * height * k + row * width + col] = 0.0f;
            }
        }
        else
        {
            // Normal processing
            for (unsigned int k = 0; k < 3; k++)
            {
                float pixelValue = static_cast<float>(inBuffer[row * pitch + col * inputPixelSize + k]);
                float meanValue = meanDataBuffer ? meanDataBuffer[row * width * 3 + col * 3 + k] : 0.0f;
                // outBuffer[width * height * k + row * width + col] =
                //     scaleFactor * (pixelValue - meanValue);
                // Temporary test: write a constant instead of the normalized pixel
                // to check whether changes to outBuffer reach the downstream elements.
                outBuffer[width * height * k + row * width + col] = 128;
            }
        }
    }
}
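
For completeness, the Point struct and the isInsideTriangle device helper used above are not part of the snippet; a minimal sketch of such helpers (a standard sign/cross-product point-in-triangle test, shown here only for context, the real definitions may differ) would be:

struct Point
{
    int x;
    int y;
};

// Signed-area test: which side of the edge (p2, p3) does p1 lie on?
__device__ float sign(Point p1, Point p2, Point p3)
{
    return (float)((p1.x - p3.x) * (p2.y - p3.y) - (p2.x - p3.x) * (p1.y - p3.y));
}

// p is inside (or on an edge of) triangle (a, b, c) when the three edge tests
// do not disagree in sign.
__device__ bool isInsideTriangle(Point p, Point a, Point b, Point c)
{
    float d1 = sign(p, a, b);
    float d2 = sign(p, b, c);
    float d3 = sign(p, c, a);
    bool hasNeg = (d1 < 0.0f) || (d2 < 0.0f) || (d3 < 0.0f);
    bool hasPos = (d1 > 0.0f) || (d2 > 0.0f) || (d3 > 0.0f);
    return !(hasNeg && hasPos);
}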

Is your project related to Metropolis Microservices for Jetson?
Here is the doc for nvdspreprocess: Gst-nvdspreprocess (Alpha) — DeepStream 6.4 documentation. The output of nvdspreprocess is tensor meta, which is fed to nvinfer for inference.
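
The custom library is hooked into nvdspreprocess through the plugin's config file. An illustrative excerpt (the path and function name below follow the default libcustom2d_preprocess sample; adjust them to your own build):

[property]
enable=1
# custom preprocessing library and the tensor-preparation function it exports
custom-lib-path=/opt/nvidia/deepstream/deepstream/lib/gst-plugins/libcustom2d_preprocess.so
custom-tensor-preparation-function=CustomTensorPreparation
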
Can you share more on your use case?

Thanks for the reply. Yes, our project is based on Metropolis Microservices for Jetson. Since we can only flash our NVIDIA® Jetson Xavier NX to JetPack 5.0.2, we are currently modifying nvdspreprocess based on DeepStream 6.2. I suppose there shouldn't be too much difference compared to 6.4?

The use case here is to customize the ROI with multiple polygon areas. Therefore we are thinking of modifying libcustom2d_preprocess to black out certain areas, as in the code snippet I shared at the beginning. Even after I set the entire output to outBuffer[width * height * k + row * width + col] = 0 or 128, inference still runs as if nothing had changed, which indicates that the modification of outBuffer isn't reflected downstream.
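
As a rough sketch of where we are heading with multiple polygons (building on the Point / isInsideTriangle helpers above; the constant-memory arrays and the MAX_BLACKOUT_TRIANGLES limit are illustrative names only, not anything from the DeepStream sources):

// Hypothetical constant-memory storage for several blackout triangles
// (3 vertices each), filled from the host with cudaMemcpyToSymbol()
// before the kernel launch.
#define MAX_BLACKOUT_TRIANGLES 8

__constant__ Point gBlackoutTriangles[MAX_BLACKOUT_TRIANGLES * 3];
__constant__ unsigned int gNumBlackoutTriangles;

// Returns true when p falls inside any configured blackout triangle.
__device__ bool isInsideAnyBlackoutTriangle(Point p)
{
    for (unsigned int t = 0; t < gNumBlackoutTriangles; t++)
    {
        if (isInsideTriangle(p,
                             gBlackoutTriangles[3 * t + 0],
                             gBlackoutTriangles[3 * t + 1],
                             gBlackoutTriangles[3 * t + 2]))
        {
            return true;
        }
    }
    return false;
}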

nvinfer will use the input tensor from nvdspreprocess if you set the nvinfer config property input-tensor-from-meta=1.
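
For example, in the nvinfer config file (a minimal excerpt; model paths and the other required keys are omitted):

[property]
# take the preprocessed tensor from the tensor meta attached by nvdspreprocess
# instead of letting nvinfer do its own scaling/normalization
input-tensor-from-meta=1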

Thanks, it seems to be working now. Just to confirm: the input modified via the preprocess library won't be pushed to the output RTSP stream, will it?

That's right. nvdspreprocess only generates the input tensor for nvinfer from the input video; it does not modify the video frames themselves, so the output RTSP stream is not changed.

Yes, all my questions are solved. Really appreciate your help!
