Debug nvinfer input

• Hardware Platform: GPU
• DeepStream Version: 5.0.0
• TensorRT Version: 7.0.0.11
• NVIDIA GPU Driver Version (valid for GPU only): 460.32.03

Hi, I have a pipeline including ... -> pgie -> queue -> sgie -> ..., where pgie is a detector and the cropped detections are fed into sgie. However, the output tensors generated by sgie are not as expected, so I would like to debug what is actually fed into sgie. How can I access and ideally store those input images for debugging purposes?

Did you use the samples to reproduce this problem? This has come up in several topics related to problems with the performance of the secondary engine output. You can find them by searching the forum, but I haven't seen any responses to them yet.

Can you elaborate on that? I don't understand it. What I did so far: I reduced my problem to take a single image as input instead of a video, and I found an image that produces just one detection in the PGIE, so I have a minimal setup for debugging.

Can you share links to some of those threads?

How did you work around or solve that?

[1] Following the samples, you can save the cropped image from the converted buffer with OpenCV in the function get_converted_buffer of gst-nvinfer.

[2] I found this thread.

I wanted to have the crop which is being fed into SGIE. So what I did is the following:

In gstnvinfer.cpp’s gst_nvinfer_process_objects method, right after get_converted_buffer is called (so I could reuse the transformation parameters that were actually applied), I added the following code:

//create temporary buffer
NvBufSurface *nvbuf;
cv::Mat mat;
//create_params used to set up the temporary buffer
NvBufSurfaceCreateParams create_params;
create_params.gpuId = 0;
create_params.width = nvinfer->transform_params.dst_rect[0].width;
create_params.height = nvinfer->transform_params.dst_rect[0].height;
create_params.size = 0;
create_params.colorFormat = NVBUF_COLOR_FORMAT_GRAY8; //my sgie model takes gray input
create_params.layout = NVBUF_LAYOUT_PITCH;
create_params.memType = NVBUF_MEM_CUDA_UNIFIED;

NvBufSurfaceCreate(&nvbuf, 1, &create_params);

//initialize with empty data
NvBufSurfaceMemSet(nvbuf, 0, 0, 0);

//transform using the transform_params computed in get_converted_buffer
auto err = NvBufSurfTransform(&nvinfer->tmp_surf, nvbuf, &nvinfer->transform_params);
if (err != NvBufSurfTransformError_Success) {
GST_ELEMENT_ERROR(nvinfer, STREAM, FAILED, ("NvBufSurfTransform failed with error %d while converting buffer", err), (NULL));
}

//map the surface for CPU access and sync it before reading
if (NvBufSurfaceMap(nvbuf, 0, 0, NVBUF_MAP_READ) != 0) {
g_printerr("NvBufSurfaceMap(nvbuf, 0, 0, NVBUF_MAP_READ) != 0\n");
}

NvBufSurfaceSyncForCpu(nvbuf, 0, 0);

//wrap the mapped plane in a cv::Mat (no copy) and write it to disk
mat = cv::Mat(nvbuf->surfaceList[0].height, nvbuf->surfaceList[0].width, CV_8UC1, nvbuf->surfaceList[0].mappedAddr.addr[0], nvbuf->surfaceList[0].pitch);

char filename[64];
snprintf(filename, 64, "image.jpg");
cv::imwrite(filename, mat);
g_print("wrote image file %s\n", filename);

//unmap and free the temporary surface again, since this code runs once per object
NvBufSurfaceUnMap(nvbuf, 0, 0);
NvBufSurfaceDestroy(nvbuf);

which actually gives me the expected output. Can anyone confirm that this is the right way to do it?

Yes, I did it for debugging too. Then how did it work?

I used this code to create the cropped PGIE output and save it as a file. It looks as I expected. Next, I removed the SGIE from my pipeline, configured the PGIE with the same parameters I previously used for the SGIE, and fed the cropped image in. However, the output is still the same. No idea what's wrong with that.

I can check more if you post your config files for both PGIE and SGIE here.

Well, my PGIE config for the cropped input looks like this:

[property]
gpu-id=0
net-scale-factor=1
gie-unique-id=1
onnx-file=model.onnx
batch-size=200
process-mode=1
## 0=FP32, 1=INT8, 2=FP16 mode
network-mode=0

model-color-format=2

network-type=100
output-tensor-meta=1
output-blob-names=import/0_conv_1x1_parts/BiasAdd:0

Nothing special

Hey, you can refer to DeepStream SDK FAQ - #9 by mchi to dump the input.

I did that, but the rebuilt libnvds_infer is not even linked to my app, so the code I edited there is not reached at all.

Have you rebuilt and reinstalled the lib after modifying the source code?

Yea, did a make && make install

Hey, please make sure the lib is really replaced with your newly compiled lib. I don't think it's hard to debug by yourself.
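A quick way to verify that, assuming a default dGPU DeepStream 5.0 installation under /opt/nvidia/deepstream/deepstream-5.0 with CUDA 10.2 (adjust paths and CUDA_VER for your setup), is to rebuild the library and compare it against the copy the pipeline actually loads:

cd /opt/nvidia/deepstream/deepstream-5.0/sources/libs/nvdsinfer
# rebuild the modified low-level inference library and install it
make CUDA_VER=10.2 && sudo make CUDA_VER=10.2 install
# the checksums should match if the installed lib really is your new build
md5sum libnvds_infer.so /opt/nvidia/deepstream/deepstream-5.0/lib/libnvds_infer.so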

The problem in my case was that the PGIE passed a grayscale image with values ranging from 0 to 255 into the network, while my model expected float input between 0 and 1. So the solution was to scale the pixels on input using net-scale-factor=0.003921568859368563.
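For context, nvinfer preprocesses each pixel as y = net-scale-factor * (x - mean), so with no mean/offsets configured a factor of 1/255 (≈ 0.003921568859368563) maps the 0-255 grayscale values into the 0-1 range the model expects. In the config above, only that one line changes:

[property]
## other settings unchanged from the config posted above
## 1/255: scale 8-bit grayscale pixels into the 0-1 range the model expects
net-scale-factor=0.003921568859368563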

Great work!