Please provide complete information as applicable to your setup.
• Hardware Platform (Jetson / GPU): dGPU
• DeepStream Version: 6.4
• NVIDIA GPU Driver Version (valid for GPU only): 545
• Issue Type (questions, new requirements, bugs): Question
• How to reproduce the issue? (This is for bugs. Include which sample app is used, the configuration file contents, the command line used, and other details for reproducing.)
• Requirement details (This is for new requirements. Include the module name, i.e. which plugin or which sample application, and the function description.)
This is how my pipeline looks. I am consuming an RTSP stream coming from a camera.
After uridecodebin I have an nvvideoconvert element, and I am using its src-crop property to crop the required ROI.
After nvvideoconvert crops the ROI out of the full-FOV image, the cropped ROI goes to the downstream elements in the pipeline. This part works as expected.
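For reference, a minimal sketch of that crop step in C (the 1920x1080 ROI at offset (640, 360) is a placeholder; Gst-nvvideoconvert's src-crop property takes a "left:top:width:height" string in input-frame pixels):

```c
#include <gst/gst.h>

/* Minimal sketch of the crop step: the src-crop property of nvvideoconvert
 * selects a rectangle of the input frame before conversion/scaling.
 * The 1920x1080 ROI at (640, 360) is a placeholder value. */
static GstElement *
make_roi_crop (void)
{
  GstElement *conv = gst_element_factory_make ("nvvideoconvert", "roi-crop");

  g_object_set (G_OBJECT (conv), "src-crop", "640:360:1920:1080", NULL);
  return conv;
}
```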
What I want to do is this: I want to take the image that came from the camera (the full-FOV image) and attach it to the buffer as custom metadata before the nvvideoconvert element, i.e. before it is cropped.
My question is not how to attach the image data to the buffer as custom GStreamer metadata. My question is how to get this full-FOV image that is present inside a buffer.
Full FOV means the image coming from the camera. Also, the pipeline runs normally; I have posted a simplified diagram of the pipeline that leaves out some of the elements.
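For context, reading the frame out of that buffer would look roughly like this: a minimal sketch in C, assuming a buffer probe on the sink pad of the cropping nvvideoconvert and NVMM output from the upstream decoder.

```c
#include <gst/gst.h>
#include "nvbufsurface.h"

/* Sketch of a buffer probe placed on the sink pad of the cropping
 * nvvideoconvert, i.e. before the ROI crop. The upstream decoder outputs
 * NVMM memory, so mapping the buffer yields an NvBufSurface that
 * describes the full-FOV frame. */
static GstPadProbeReturn
full_fov_probe (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstBuffer *buf = GST_PAD_PROBE_INFO_BUFFER (info);
  GstMapInfo map_info;

  if (gst_buffer_map (buf, &map_info, GST_MAP_READ)) {
    NvBufSurface *surface = (NvBufSurface *) map_info.data;
    NvBufSurfaceParams *frame = &surface->surfaceList[0];

    /* frame->width / frame->height are the camera resolution.
     * On dGPU frame->dataPtr is device memory, so copy it out
     * (e.g. NvBufSurfaceCopy or cudaMemcpy) before attaching it as
     * custom metadata or touching it on the CPU. */
    g_print ("full FOV frame: %ux%u, pitch %u\n",
             frame->width, frame->height, frame->pitch);

    gst_buffer_unmap (buf, &map_info);
  }
  return GST_PAD_PROBE_OK;
}
```

The probe would be attached to that sink pad with gst_pad_add_probe() and GST_PAD_PROBE_TYPE_BUFFER; how the copied frame is then attached as custom metadata is outside this sketch.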
This is the actual pipeline graph.
To preserve the full-FOV image, I have added a tee element before the crop (nvvideoconvert) and obtain the full-FOV image from the other branch.
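A sketch of that topology built with gst_parse_launch() (the URI, crop rectangle, and sinks are placeholders; the real pipeline contains more elements):

```c
#include <gst/gst.h>

/* Sketch of the tee-based topology: one branch keeps the full-FOV frames,
 * the other branch is cropped and continues to the rest of the (simplified)
 * pipeline. The URI, crop rectangle, and sinks are placeholders. */
static GstElement *
build_sketch_pipeline (void)
{
  GError *error = NULL;
  GstElement *pipeline = gst_parse_launch (
      "uridecodebin uri=rtsp://camera-url ! tee name=t "
      "t. ! queue ! fakesink name=full-fov-branch "
      "t. ! queue ! nvvideoconvert src-crop=640:360:1920:1080 "
      "   ! fakesink name=cropped-branch",
      &error);

  if (error != NULL)
    g_printerr ("failed to build pipeline: %s\n", error->message);
  return pipeline;
}
```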
The image is very fuzzy; you can compress it into a zip file and then upload it.
Judging from your requirements, this really can only be implemented with a tee. But could you consider using the ROI parameter in nvinfer or nvdspreprocess to meet your requirements?
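For reference, the nvdspreprocess route configures ROIs in its config file; a sketch of only the ROI-related keys, with placeholder values, based on the sample config_preprocess.txt:

```
# ROI-related excerpt of an nvdspreprocess config (placeholder values;
# the [property] section and custom-lib settings are omitted).
[group-0]
src-ids=0
process-on-roi=1
# left;top;width;height, repeated once per ROI
roi-params-src-0=640;360;1920;1080
```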
If I were to use the ROI parameter in nvinfer, I would need to pass the entire FOV image (the image coming from the camera) through nvstreammux, and since I have 8MP cameras and the streammux resolution is set to 720p or 1080p, the entire image would be scaled down.
I want to feed the images to the nvinfer element at a higher resolution so that I can preserve the face recognition accuracy.
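For context, the scaling mentioned above comes from the mux output resolution; a sketch of the 1080p case described in this thread (legacy nvstreammux properties assumed):

```c
#include <gst/gst.h>

/* Sketch of the constraint described above: nvstreammux scales every source
 * to its configured output resolution, so full-FOV 8MP frames pushed through
 * a 1920x1080 mux reach nvinfer already downscaled. */
static GstElement *
make_streammux_1080p (void)
{
  GstElement *mux = gst_element_factory_make ("nvstreammux", "mux");

  g_object_set (G_OBJECT (mux),
                "batch-size", 1,
                "width", 1920,
                "height", 1080,   /* every input frame is scaled to this size */
                NULL);
  return mux;
}
```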
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks.