I see that it’s possible to access the ‘full frame’ as in the deepstream_imagedata-multistream.py example from NVIDIA-AI-IOT/deepstream_python_apps (commit 2931f6b295b58aed15cb29074d13763c0f8d47be).
However, that frame is at the nvstreammux output resolution, not at the original resolution of the source.
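For reference, this is roughly how that example reads the frame in a pad probe (simplified by me), which is why the array it returns has the nvstreammux width/height rather than the camera’s:

```python
import numpy as np
import pyds
from gi.repository import Gst

def buffer_probe(pad, info, u_data):
    # Simplified version of the probe in deepstream_imagedata-multistream.py.
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        # This surface is the batched buffer produced by nvstreammux,
        # so its size is the nvstreammux width/height, not the source resolution.
        n_frame = pyds.get_nvds_buf_surface(hash(gst_buffer), frame_meta.batch_id)
        frame_copy = np.array(n_frame, copy=True, order='C')
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```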
I have different cameras (FHD, 4K, 5 MP) connected to a single pipeline, and I want to get the best possible quality out of each frame, i.e., I want the original frames rather than the frames resized by nvstreammux.
Usually, nvstreammux is used with FHD resolution, and I see no reason to change that, since the neural networks operate on much smaller resolutions, like 120x120 or 416x416. I’m afraid that if I set nvstreammux to 4K, the pipeline will have to shrink frames by up to roughly 10x in each dimension to feed them to the NNs. That resizing will either be computationally expensive or degrade the frame quality that the NNs depend on.
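(To be concrete, by the nvstreammux resolution I mean its width/height properties, which I currently set roughly like this; the batch-size value is just an example for my setup:)

```python
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# nvstreammux scales every input stream to this output (batched) resolution.
streammux = Gst.ElementFactory.make("nvstreammux", "stream-muxer")
streammux.set_property("width", 1920)    # FHD today; option 1 below would mean 3840
streammux.set_property("height", 1080)   # ... and 2160
streammux.set_property("batch-size", 4)  # example value: number of connected sources
streammux.set_property("batched-push-timeout", 40000)
```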
I’m thinking about two options:
1. Set the maximum resolution (4K) on nvstreammux. I don’t think it’s a good idea; see the comments above. Moreover, this solution gets even worse if I ever have bigger resolutions, like 6K.
2. Attach each original frame to the corresponding GStreamer buffer as NvDsUserMeta before nvstreammux, and retrieve it at the end of the pipeline (a rough sketch of what I mean is below). I haven’t seen this approach in the examples and I’m not sure whether it’s a good idea.
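To illustrate option 2: since batch/frame meta only exists after nvstreammux, I couldn’t find a way to attach NvDsUserMeta before the mux in pyds, so the closest sketch I came up with caches the original frame per source in a probe placed before nvstreammux and matches it again downstream by source id and PTS. Everything here is my own assumption (the system-memory RGBA caps at the tap point, the PTS-based matching, the names), not something taken from the samples:

```python
import numpy as np
import pyds
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

# Hypothetical cache: (source_id, buffer PTS) -> full-resolution RGBA frame.
original_frames = {}

def pre_mux_probe(pad, info, source_id):
    """Attached to a pad on each source branch *before* nvstreammux.
    Assumes the branch carries system-memory video/x-raw RGBA here
    (e.g. via nvvideoconvert + capsfilter), so the buffer can be mapped on the CPU."""
    buf = info.get_buffer()
    caps = pad.get_current_caps()
    s = caps.get_structure(0)
    width, height = s.get_value("width"), s.get_value("height")
    ok, mapinfo = buf.map(Gst.MapFlags.READ)
    if ok:
        # Assumes row stride == width * 4 (no padding); real code should check GstVideoMeta.
        data = np.frombuffer(mapinfo.data, dtype=np.uint8)[: height * width * 4]
        original_frames[(source_id, buf.pts)] = data.reshape(height, width, 4).copy()
        buf.unmap(mapinfo)
    return Gst.PadProbeReturn.OK

def post_infer_probe(pad, info, u_data):
    """At the end of the pipeline: pick up the cached original frame for each batched frame."""
    gst_buffer = info.get_buffer()
    batch_meta = pyds.gst_buffer_get_nvds_batch_meta(hash(gst_buffer))
    l_frame = batch_meta.frame_meta_list
    while l_frame is not None:
        frame_meta = pyds.NvDsFrameMeta.cast(l_frame.data)
        key = (frame_meta.source_id, frame_meta.buf_pts)
        full_res_frame = original_frames.pop(key, None)  # None if the PTS didn’t survive the mux
        # ... use full_res_frame together with frame_meta / object meta here ...
        l_frame = l_frame.next
    return Gst.PadProbeReturn.OK
```

The obvious downside is the extra CPU copy of every full-resolution frame, which is part of why I’m asking whether there is a better approach.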
Are there other ways to access full-resolution frames? Which way is preferable?