I am using:
• Hardware Platform: Jetson
• DeepStream Version: 6.2
• JetPack Version: 5.1
I am working on a DeepStream inference pipeline in Python. I am able to run inference on an RGB stream with a detection model. My pipeline is: source → nvstreammux → nvdspreprocess → nvinfer → nvdsosd → sink. The nvdspreprocess plugin prepares multiple ROIs from a single frame, and nvinfer then runs inference on them as a batch. This is based on the apps available in deepstream_python_apps.
My requirement is to access, in the Python code, the ROIs of the frame that are sent for inference, so that I can visualize them.
We can see that there are two ROIs in the frame and that a batch size of 2 is used in nvinfer. If I convert this pipeline to Python, can I access the ROIs? I do not want to visualize the detections or get the full frame (as done in deepstream-imagedata-multistream); I want to check the images that are actually being sent to the model. Is there a way to do that?
If you want to get the images that are being sent to the model, you need to dump them from our C/C++ open source code.
You can refer to the queueInputBatch function in the sources\libs\nvdsinfer\nvdsinfer_context_impl.cpp file.
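The patch referenced here essentially copies the batched input tensor from device memory to the host and writes it out as a raw file. Below is a minimal sketch of that idea only, not the actual FAQ patch: the helper name is made up, and the pointer and byte size would come from the variables available inside queueInputBatch().

```cpp
#include <cuda_runtime_api.h>
#include <cstdio>
#include <vector>

// Hypothetical helper (not part of the DeepStream sources): copy a device
// buffer to the host and write it to a raw .bin file for offline inspection.
static bool dumpDeviceBufferToFile(const void* devPtr, size_t numBytes,
                                   const char* path)
{
    std::vector<unsigned char> host(numBytes);
    // The batched inference input lives in GPU memory, so copy it to the CPU
    // first; writing the device pointer directly yields empty/garbage files.
    if (cudaMemcpy(host.data(), devPtr, numBytes,
                   cudaMemcpyDeviceToHost) != cudaSuccess)
        return false;

    FILE* fp = std::fopen(path, "wb");
    if (!fp)
        return false;
    std::fwrite(host.data(), 1, numBytes, fp);
    std::fclose(fp);
    return true;
}
```

Called with the batch's device pointer and its byte size, this produces one raw .bin per invocation.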
But the images were not getting saved. After some debugging I found that the function queueInputBatch is not called; queueInputBatchPreprocessed is called instead. So I replicated the same changes from the patch in the latter function. I am now getting the following error when I run it:
Caught SIGSEGV
#0 0x0000ffff8acc3098 in __GI___poll
#1 0x0000ffff8adf0b38 in () at /lib/aarch64-linux-gnu/libglib-2.0.so.0
#2 0x0000ffff8adf0ef8 in g_main_loop_run ()
#3 0x0000ffff8af96e48 in gst_bus_poll ()
#4 0x0000aaaab3f84980 in event_loop
#5 0x0000aaaab3f83868 in main (argc=, argv=)
After further debugging, I found that the issue is in the line “float scale = m_Preprocessor->getScale();” of the patch.
I do not have enough knowledge of nvinfer and C++ to solve this. Please help.
Yes. If you use the preprocess plugin, you should refer to the queueInputBatchPreprocessed function.
About the patch, did you refer to the 2nd part of “Dump the Inference Input”?
OK. For DS 7.0, there is no getScale() in the m_Preprocessor class. You may consider using the m_Scale member instead.
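As a rough illustration of that substitution (only the commented-out line is quoted from the patch; whether m_Scale is directly accessible at that point is an assumption to verify against your DeepStream sources):

```cpp
// Patch line written against an older NvDsInferContextImpl:
// float scale = m_Preprocessor->getScale();

// Where the preprocessor class no longer exposes getScale(), read the
// scale member suggested above instead:
float scale = m_Scale;
```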
Also, since you are using nvdspreprocess, you can dump the data from the nvdspreprocess plugin instead. You can refer to prepare_tensor in sources\gst-plugins\gst-nvdspreprocess\nvdspreprocess_lib\nvdspreprocess_impl.cpp. Just enable the DEBUG_LIB macro according to your needs.
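For reference, enabling that debug path is just a matter of defining the macro before rebuilding the nvdspreprocess library; the exact placement below is an assumption (defining it via the Makefile's compiler flags works as well):

```cpp
// Near the top of nvdspreprocess_impl.cpp, before any code guarded by the macro:
#define DEBUG_LIB 1
```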
I tried saving the .bin files in nvdspreprocess. I defined DEBUG_LIB as 1; no other change was made.
The .bin files saved this way are 0 bytes in size.
Yes. The default buffer should be a GPU buffer. If you want to dump it to a file, you need to change it to a CPU buffer. You can refer to the code below.
It is just demo code; you need to confirm the debug_size according to your specific configuration, something like:
debug_size = bytesPerElement(tensorParam.params.data_type) * m_NetworkSize.channels * m_NetworkSize.width * m_NetworkSize.height
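The demo code itself is not preserved in this copy of the thread, so the following is only a hedged reconstruction of the idea under the assumptions above: devBuf is a placeholder name for the device pointer to the converted tensor inside prepare_tensor(), and the size expression is the one quoted. It reuses the dumpDeviceBufferToFile() sketch from earlier in this thread and requires <cstdio>.

```cpp
// Sketch only: inside prepare_tensor(), after the format conversion has run.
// devBuf is a placeholder for the device pointer to the converted tensor.
size_t debug_size = bytesPerElement(tensorParam.params.data_type) *
                    m_NetworkSize.channels *
                    m_NetworkSize.width *
                    m_NetworkSize.height;

static int dump_idx = 0;
char name[64];
std::snprintf(name, sizeof(name), "preprocess_input_%04d.bin", dump_idx++);

// Copy GPU -> CPU before writing; fwrite-ing the device pointer directly is
// what produces the 0-byte files. The helper sketched earlier in the thread
// (dumpDeviceBufferToFile) does the cudaMemcpy and the file write.
dumpDeviceBufferToFile(devBuf, debug_size, name);
```

Each call then writes one raw tensor dump that can be inspected offline.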
The data here is already split into separate R, G, and B planes. You can use OpenCV to view the 3 channels separately.
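For example, here is a minimal sketch of reading one dumped .bin back and saving each plane as an image, assuming FP32 data in planar layout; the width, height, channel count, and file name are placeholders to adjust to your nvdspreprocess configuration:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    // Placeholders: set these to the network input configured in nvdspreprocess.
    const int W = 960, H = 544, C = 3;
    const char* path = "preprocess_input_0000.bin";

    std::vector<float> data(static_cast<size_t>(W) * H * C);
    FILE* fp = std::fopen(path, "rb");
    if (!fp)
        return 1;
    size_t n = std::fread(data.data(), sizeof(float), data.size(), fp);
    std::fclose(fp);
    if (n != data.size())
        return 1;

    // Planar layout: all of plane 0, then plane 1, then plane 2.
    for (int c = 0; c < C; ++c) {
        cv::Mat plane(H, W, CV_32FC1, data.data() + static_cast<size_t>(c) * W * H);
        cv::Mat vis;
        cv::normalize(plane, vis, 0, 255, cv::NORM_MINMAX);
        vis.convertTo(vis, CV_8UC1);
        cv::imwrite("plane_" + std::to_string(c) + ".png", vis);
    }
    return 0;
}
```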
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.