Optimal way to save images of objects detected

Line 140 in the deepstream-imagedata-multistream-redaction example (`frame_copy = np.array(n_frame, copy=True, order='C')`) made me wonder why we copy the matrix rather than simply converting it to a NumPy array, e.g. `np.array(n_frame)`. When I tried to crop and save every object detected in each frame, the time spent casting each cropped image to a NumPy array, converting its color space, and writing it to disk was greater than the time it took to run the rest of the pipeline. Is there a solution that could make these steps faster?
I'll include the source code and profiler screenshots below.

I’m running the code here on 8 cameras.
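For context, here is a minimal illustration of what `copy=True, order='C'` buys compared to a plain view. A plain array stands in here for the buffer view that `pyds.get_nvds_buf_surface()` returns in the real pipeline:

```python
import numpy as np

# stand-in for the frame view returned by pyds.get_nvds_buf_surface()
n_frame = np.zeros((1080, 1920, 4), dtype=np.uint8)

view = np.array(n_frame, copy=False)                   # shares memory with the buffer
frame_copy = np.array(n_frame, copy=True, order='C')   # independent, C-contiguous copy

n_frame[0, 0, 0] = 255
print(view[0, 0, 0])                       # follows the underlying buffer
print(frame_copy[0, 0, 0])                 # unaffected by later buffer writes
print(frame_copy.flags['C_CONTIGUOUS'])    # safe to pass to cv2 after unmapping
```

The copy matters because the mapped buffer can be invalidated or overwritten after the probe returns; the view would then be reading stale or unmapped memory, while the copy stays valid.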

Source Code
Screenshot from 2022-08-15 13-58-27



• Hardware Platform (Jetson / GPU): NVIDIA RTX 2070
• DeepStream Version: 6.1
• TensorRT Version: (not specified)

Hi @Ch4ki, could you attach your source code with the implementation of the `crop_image` and `to_numpy` functions?

Screenshot from 2022-08-15 13-56-31

Screenshot from 2022-08-15 13-59-13

I used exactly the same code as the imagedata-multistream-redaction example. The only difference is that I try to save every detected object in every frame.
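One way to keep per-object saving from stalling the probe callback (this is not in the sample, just an option) is to push each copied crop onto a queue drained by a background writer thread. A sketch, with the real `cv2.imwrite` call stubbed out so it runs standalone:

```python
import queue
import threading

save_queue = queue.Queue(maxsize=64)
saved = []  # stands in for files written to disk

def writer():
    """Drain the queue; in the real pipeline this would call cv2.imwrite(path, crop)."""
    while True:
        item = save_queue.get()
        if item is None:        # sentinel tells the worker to exit
            break
        path, crop = item
        saved.append(path)      # stub for cv2.imwrite(path, crop)

worker = threading.Thread(target=writer, daemon=True)
worker.start()

# inside the pad probe you would enqueue instead of writing synchronously
save_queue.put(("stream0_frame0_obj0.jpg", b"fake-pixels"))
save_queue.put(("stream0_frame0_obj1.jpg", b"fake-pixels"))
save_queue.put(None)
worker.join()
print(saved)
```

The probe still has to pay for the NumPy copy (the buffer is only valid inside the callback), but the color conversion and disk write move off the pipeline thread.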

There has been no update from you for a while, so we assume this is no longer an issue and are closing the topic. If you need further support, please open a new one.

OK. The sample only saves an image every 10 frames, so it isn't necessary to do this conversion for every frame of the pipeline. If you save an image on every frame with your method and have verified that it is faster, you can use it for your needs.
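The every-N-frames policy can be expressed as a small guard in the probe before doing any conversion; a sketch (names are illustrative, not from the sample):

```python
SAVE_INTERVAL = 10  # mirrors the sample's "save every 10 frames" policy

def should_save(frame_number, interval=SAVE_INTERVAL):
    """Only pay for the numpy copy / color convert / imwrite on every Nth frame."""
    return frame_number % interval == 0

# which of the first 30 frames would be saved
saved_frames = [n for n in range(30) if should_save(n)]
print(saved_frames)  # [0, 10, 20]
```

Checking the counter first means the expensive `np.array(..., copy=True)` and color-space conversion are skipped entirely on the other frames.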

This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.