Access ROIs from nvdspreprocess in Python

I am using-
• Hardware Platform Jetson
• DeepStream Version 6.2
• JetPack Version 5.1

I am working on a DeepStream inference pipeline in Python. I am able to run inference on an RGB stream with a detection model. My pipeline is: source → nvstreammux → nvdspreprocess → nvinfer → nvdsosd → sink. The nvdspreprocess plugin processes multiple ROIs in a single frame, and nvinfer then runs inference on them as a batch. This is based on the apps available in deepstream_python_apps.

My requirement is to access, in the Python code, the frame's ROIs that are being sent for inference, so that I can visualize them.

For example, in the pipeline below -

gst-launch-1.0 v4l2src ! "video/x-raw, height=1080, width=1920" ! queue ! nvvidconv ! "video/x-raw(memory:NVMM), format=NV12" ! m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream/sources/gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/configs/deepstream-app/config_infer_primary.txt input-tensor-meta=1 batch-size=2 ! nvvideoconvert ! nvdsosd ! nvvideoconvert ! autovideoconvert ! autovideosink sync=false

we can see that there are two ROIs in the frame, and batch-size 2 is used in nvinfer. If I convert this pipeline to Python, can I access the ROIs? I do not want to visualize the detections or grab the full frame (as done in deepstream-imagedata-multistream); I want to inspect the images that are actually being sent to the model. Is there a way to do that?
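For reference, this is roughly how I am constructing the equivalent pipeline in Python (a sketch using the standard GStreamer Python bindings and Gst.parse_launch; the element properties are copied from the gst-launch line above and may need adjusting for your setup):

#!/usr/bin/env python3
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

# Same pipeline as the gst-launch line above, built with parse_launch.
pipeline = Gst.parse_launch(
    "v4l2src ! video/x-raw,height=1080,width=1920 ! queue ! nvvidconv ! "
    "video/x-raw(memory:NVMM),format=NV12 ! m.sink_0 "
    "nvstreammux name=m batch-size=1 width=1920 height=1080 ! nvvideoconvert ! "
    "nvdspreprocess config-file=/opt/nvidia/deepstream/deepstream/sources/"
    "gst-plugins/gst-nvdspreprocess/config_preprocess.txt ! "
    "nvinfer config-file-path=/opt/nvidia/deepstream/deepstream/samples/"
    "configs/deepstream-app/config_infer_primary.txt "
    "input-tensor-meta=1 batch-size=2 ! "
    "nvvideoconvert ! nvdsosd ! nvvideoconvert ! autovideoconvert ! "
    "autovideosink sync=false"
)

pipeline.set_state(Gst.State.PLAYING)
loop = GLib.MainLoop()
try:
    loop.run()  # run until interrupted
finally:
    pipeline.set_state(Gst.State.NULL)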

If you want to get the images that are being sent to the model, you need to dump them from our C/C++ open-source code.
You can refer to the queueInputBatch function in the sources/libs/nvdsinfer/nvdsinfer_context_impl.cpp file.

I tried making changes in queueInputBatch, referring to the following:

But the images were not getting saved. After some debugging, I found that queueInputBatch is not called; queueInputBatchPreprocessed is called instead. So I replicated the same changes from the patch in the latter function. I now get the following error when I run it:

Caught SIGSEGV
#0 0x0000ffff8acc3098 in __GI___poll
#1 0x0000ffff8adf0b38 in () at /lib/aarch64-linux-gnu/libglib-2.0.so.0
#2 0x0000ffff8adf0ef8 in g_main_loop_run ()
#3 0x0000ffff8af96e48 in gst_bus_poll ()
#4 0x0000aaaab3f84980 in event_loop
#5 0x0000aaaab3f83868 in main (argc=, argv=)

After further debugging, I found that the issue is in the line "float scale = m_Preprocessor->getScale();" from the patch.
I do not have enough knowledge of nvinfer and C++ to solve this. Please help.

Yes. If you use the preprocess plugin, you should refer to the queueInputBatchPreprocessed function.
About the patch, did you refer to the 2nd part of "Dump the Inference Input"?

Yes. I used that patch itself to make the changes before the enqueue() function.

OK. For DS 7.0, there is no getScale() method on the m_Preprocessor class. You may consider using the m_Scale member instead.

Also, since you are using nvdspreprocess, you can dump the tensor from the nvdspreprocess plugin itself. You can refer to prepare_tensor in sources/gst-plugins/gst-nvdspreprocess/nvdspreprocess_lib/nvdspreprocess_impl.cpp. Just enable the DEBUG_LIB macro according to your needs.

Okay, but I am using DeepStream 6.2, as mentioned above.

I will check how I can do this from nvdspreprocess. Thank you!

I tried saving the .bin files in nvdspreprocess. I defined DEBUG_LIB as 1; no other change was made.
The .bin files saved this way are 0 bytes in size.

Is there anything else that needs to be done?

Yes. The default buffer is a GPU buffer. If you want to dump it to a file, you need to copy it to a CPU buffer first. You can refer to the code below.

#ifdef DEBUG_LIB
    /* Dump the preprocessed tensor from device memory to a file.
       Open the file in binary mode and copy the GPU buffer to the host
       before writing. The factor of 4 assumes FP32 (4-byte) elements. */
    static int batch_num2 = 0;
    int debug_size = 4 * m_NetworkSize.channels * m_NetworkSize.width * m_NetworkSize.height;
    char *debug_buffer = (char *)malloc(debug_size);
    std::ofstream outfile2("impl_out_batch_" + std::to_string(batch_num2) + ".bin",
                           std::ofstream::binary);
    cudaMemcpy(debug_buffer, outPtr, debug_size, cudaMemcpyDeviceToHost);
    outfile2.write(debug_buffer, debug_size);
    outfile2.close();
    free(debug_buffer);
    batch_num2++;
#endif

Thank you, this worked. Now the .bin files are not empty. But when I tried to read them in Python using

numpy.fromfile("path_to_bin_file.bin")

I get an array of shape (353280,), whereas when I printed debug_size, channels, width and height in the nvdspreprocess code, I got

size = 2826240 channels = 3 width = 640 height = 368

Now how do I reshape the numpy array to see the actual image?

Edit: 353280 is actually 640 x 368 x 1.5

Edit 2: Also, I have observed that all values in the array are zero.

It's just demo code; you need to confirm the debug_size according to your specific configuration, like below:

bytesPerElement(tensorParam.params.data_type) * m_NetworkSize.channels * m_NetworkSize.width * m_NetworkSize.height

The data here is planar: the R, G, and B channels are already separated. You can use OpenCV to view the three channels separately.
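For reference, a minimal Python sketch for reading such a dump, assuming FP32 planar RGB data with the dimensions printed above (channels=3, width=640, height=368); the file name and the normalization factor are assumptions that depend on your configuration. Note that numpy.fromfile defaults to float64, which is why the earlier read returned 2826240 / 8 = 353280 elements instead of the expected 706560 (= 3 x 368 x 640) float32 elements.

import numpy as np
import cv2

# Dimensions printed from the plugin; adjust to your configuration.
C, H, W = 3, 368, 640

# np.fromfile defaults to float64 -- the dump is FP32, so pass the dtype
# explicitly (2826240 bytes / 4 = 706560 float32 elements = 3*368*640).
data = np.fromfile("impl_out_batch_0.bin", dtype=np.float32)

for n in range(data.size // (C * H * W)):        # one image per batch slot
    chw = data[n * C * H * W:(n + 1) * C * H * W].reshape(C, H, W)
    hwc = chw.transpose(1, 2, 0)                 # planar C,H,W -> H,W,C
    # The tensor may be normalized by the preprocess scale factor, so map
    # it back to 0-255 for viewing; the factor depends on your config file.
    img = np.clip(hwc * 255.0, 0, 255).astype(np.uint8)
    cv2.imwrite(f"roi_{n}.png", img[:, :, ::-1]) # RGB -> BGR for OpenCV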

Okay. I am now able to save the images, but there is a height/width mismatch, and I can see a mixture of the images I want.

Yes. As noted in the snippet I attached before, you need to confirm the size parameters for your own configuration.

There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.
