Please provide complete information as applicable to your setup.
• Jetson Orin NX
• DeepStream Version 6.3
• JetPack Version 5.1
I am working on a GStreamer pipeline that processes video frames for object detection. My input frames have a resolution of 1920x1200. To improve recognition accuracy, I need to pad these frames out to a resolution of 2200x2000. However, the model I'm using with nvinfer is optimized for smaller input sizes, which introduces complications.
Here’s my intended pipeline strategy:
- Enlarge each frame by adding padding, reaching a resolution of 2200x2000.
- Use a tee element to create two branches.
- Branch A: scale the resolution down for compatibility with nvinfer.
- Branch B: keep the padded frame for further processing.
- Run object detection on Branch A, then apply the detection coordinates from Branch A (scaled back up accordingly) to crop the original, padded frame in Branch B.
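As a rough sketch of the layout I have in mind (the element names and the exact padding mechanism are placeholders, not a tested pipeline):

```
source → pad to 2200x2000 → tee ─┬─ queue → scale down → nvinfer → (read detection coords)
                                 └─ queue → (crop the padded frame using those coords)
```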
My challenge is transferring the detection coordinates from Branch A to Branch B without causing frame desynchronization. Specifically, I need a reliable way to guarantee that the coordinates extracted from a scaled-down frame in Branch A are applied to the corresponding padded frame in Branch B for cropping.
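One approach I am considering is matching buffers by PTS, since both tee branches carry the same timestamps. Below is a minimal, GStreamer-free sketch of that pairing logic; the `PtsMatcher` class is my own invention (in a real pipeline its inputs would come from pad probes on each branch), and the 640x640 nvinfer input size is just an assumed example:

```python
class PtsMatcher:
    """Buffers whichever side arrives first (padded frame or detections),
    keyed by PTS, and emits a matched pair once both sides are present."""

    def __init__(self, infer_w=640, infer_h=640, pad_w=2200, pad_h=2000):
        # Assumed sizes: nvinfer input (infer_w x infer_h) and the
        # padded frame (pad_w x pad_h) that Branch B carries.
        self.scale_x = pad_w / infer_w
        self.scale_y = pad_h / infer_h
        self.frames = {}      # pts -> padded frame (Branch B)
        self.detections = {}  # pts -> bboxes in nvinfer coordinates (Branch A)

    def push_frame(self, pts, frame):
        """Called from the Branch B probe; returns a pair if detections
        for this PTS already arrived, else buffers the frame."""
        if pts in self.detections:
            return self._emit(pts, frame, self.detections.pop(pts))
        self.frames[pts] = frame
        return None

    def push_detections(self, pts, boxes):
        """Called from the Branch A probe after nvinfer."""
        if pts in self.frames:
            return self._emit(pts, self.frames.pop(pts), boxes)
        self.detections[pts] = boxes
        return None

    def _emit(self, pts, frame, boxes):
        # Scale boxes from nvinfer coordinates back up to the padded
        # frame before cropping.
        scaled = [(x * self.scale_x, y * self.scale_y,
                   w * self.scale_x, h * self.scale_y)
                  for (x, y, w, h) in boxes]
        return (pts, frame, scaled)
```

In the real pipeline I would drive `push_frame` / `push_detections` from buffer probes on the two branches, but I am not sure this is the idiomatic way to do it with the NVIDIA plugins.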
Could you provide guidance on achieving this synchronization? Are there best practices or specific GStreamer elements/plugins, particularly among the NVIDIA gst plugins, that facilitate this kind of data transfer between branches and ensure that the correct metadata is applied to the appropriate frame?
Thanks for your help…