Hi,
Line 140 (`frame_copy = np.array(n_frame, copy=True, order='C')`) in the deepstream-imagedata-multistream-redaction example made me wonder why we copy the matrix rather than simply converting the buffer to a NumPy array, e.g. `np.array(n_frame)`. When I tried to crop and save every object detected in each frame, the time it took to cast each cropped image into a NumPy array, convert its color space, and save it was greater than the time it took to run the entire rest of the pipeline. Is there a way to make these steps faster?
I’ll include source code and profiler screenshots below
I used exactly the same code as the imagedata-multistream-redaction example; the only difference was that I tried to save each detected object in every frame.
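For context on why the example copies: the frame array handed to the probe is a view into pipeline-owned memory, so an explicit `copy=True` decouples it from the buffer before any slow processing. A minimal NumPy-only sketch of that view-vs-copy distinction (the `buffer` here is a stand-in for the frame surface, not the actual DeepStream API):

```python
import numpy as np

# Stand-in for a frame buffer owned by the pipeline (in DeepStream this
# would be the array obtained from the batch surface, which is a view
# into memory the pipeline can reuse).
buffer = np.zeros((4, 4, 4), dtype=np.uint8)  # tiny RGBA "frame"

view = np.asarray(buffer)                      # no copy: shares memory
copy = np.array(buffer, copy=True, order='C')  # independent C-contiguous copy

# If the pipeline overwrites the buffer after the probe returns,
# the view changes with it, but the explicit copy does not.
buffer[:] = 255

print(np.shares_memory(buffer, view))  # True
print(np.shares_memory(buffer, copy))  # False
```

This is why dropping `copy=True` is not a safe speedup by itself: the cheaper conversion only returns a reference to memory that may change underneath you.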
There has been no update from you for a while, so we assume this is no longer an issue. Hence we are closing this topic. If you need further support, please open a new one.
Thanks
Ok, in the example we only save the image once every 10 frames, so it isn't necessary to do such a conversion for the entire pipeline. If you save the image in every frame with your method and have verified it is faster, you can use it for your needs.
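The throttling the reply describes can be sketched as a simple modulo check on the frame number, so the expensive copy/convert/save path runs only for a fraction of frames. This is an illustrative sketch, not code from the sample; `should_save` and `SAVE_INTERVAL` are made-up names:

```python
SAVE_INTERVAL = 10  # save one frame out of every 10

def should_save(frame_number, interval=SAVE_INTERVAL):
    """Return True only for frames that should be written to disk."""
    return frame_number % interval == 0

# In a probe callback you would guard the expensive work with this check,
# e.g. (pseudocode, following the sample's structure):
#   if should_save(frame_meta.frame_num):
#       frame_copy = np.array(n_frame, copy=True, order='C')
#       ... color conversion + image write ...

saved = [n for n in range(100) if should_save(n)]
print(len(saved))  # 10 frames saved out of 100
```

Saving every detection in every frame multiplies this cost by the detection count per frame, which is why the per-frame conversion dominated the pipeline time in the profile above.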