I have 4 cameras.
My goal is to be able to zoom in and provide different layouts of the cameras.
I suppose that this might be accomplished using NvBufSurfTransform.
How can I use NvBufSurfTransform in deepstream-app?
Is there any example?
Any help will be appreciated!
• Hardware Platform: Jetson AGX Xavier
• DeepStream Version: 6.0
• JetPack Version: 4.6
• TensorRT Version: 8.0.1
• Issue Type: questions
You can use the nvvideoconvert plugin, which calls NvBufSurfTransform internally. Please refer to the Gst-nvvideoconvert — DeepStream 6.1.1 Release documentation.
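If you want to call NvBufSurfTransform directly (for example from a pad probe or a custom plugin), here is a minimal sketch in C of a crop-and-scale "zoom". It is only a sketch, not code from this thread: it assumes a source/destination NvBufSurface pair that has already been allocated (e.g. via NvBufSurfaceCreate), and all coordinates are hypothetical.

```c
/* Minimal sketch: crop a region of interest out of src_surface and
 * scale it to fill dst_surface -- a basic "zoom" operation.
 * Assumes both NvBufSurface buffers are already allocated;
 * all coordinates below are hypothetical example values. */
#include "nvbufsurface.h"
#include "nvbufsurftransform.h"

static int
zoom_region (NvBufSurface *src_surface, NvBufSurface *dst_surface)
{
  /* Source region to zoom into: left=200, top=100, 640x360 pixels. */
  NvBufSurfTransformRect src_rect = { 100, 200, 640, 360 };

  /* Paste the scaled crop over the whole destination frame. */
  NvBufSurfTransformRect dst_rect = { 0, 0,
      dst_surface->surfaceList[0].width,
      dst_surface->surfaceList[0].height };

  NvBufSurfTransformParams params = {
    .transform_flag   = NVBUFSURF_TRANSFORM_CROP_SRC |
                        NVBUFSURF_TRANSFORM_CROP_DST |
                        NVBUFSURF_TRANSFORM_FILTER,
    .transform_filter = NvBufSurfTransformInter_Bilinear,
    .src_rect         = &src_rect,
    .dst_rect         = &dst_rect,
  };

  NvBufSurfTransform_Error err =
      NvBufSurfTransform (src_surface, dst_surface, &params);
  return (err == NvBufSurfTransformError_Success) ? 0 : -1;
}
```

If the defaults do not fit, per-thread session parameters (compute mode, GPU ID, CUDA stream) can be set beforehand with NvBufSurfTransformSetSessionParams.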
Please refer to the DeepStream sample deepstream-appsrc-test. Here is a command line:
gst-launch-1.0 filesrc location=/home/nvidia/Videos/t.mp4 ! qtdemux ! h264parse ! nvv4l2decoder ! nvvideoconvert ! video/x-raw,format=I420,width=1280,height=720 ! xvimagesink
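For the zoom itself, nvvideoconvert also exposes src-crop and dest-crop properties (pixel coordinates given as left:top:width:height per the Gst-nvvideoconvert documentation). For instance, a variant of the command line above with hypothetical crop values, zooming into a 640x360 region at left=200, top=100:

```
gst-launch-1.0 filesrc location=/home/nvidia/Videos/t.mp4 ! qtdemux ! h264parse ! \
  nvv4l2decoder ! nvvideoconvert src-crop=200:100:640:360 ! \
  video/x-raw,format=I420,width=1280,height=720 ! xvimagesink
```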
Thank you for helping!
My goal is to zoom in and out and switch between many different layouts dynamically using deepstream-app.
Is the Gst-nvvideoconvert plugin the way to achieve this?
As for deepstream-appsrc-test, its description says "Demonstrates AppSrc and AppSink usage for consuming and giving data from non DeepStream code respectively.", which sounds like something different.
Excuse my confusion, but I need additional information. Could you help with that?
Is it possible to mix frames from different cameras using Gst-nvvideoconvert? I have 4 cameras running simultaneously, and I want to display all 4 cameras at the same time.
The question might be an easy one, but I need clarification.
There has been no update from you for a while, so we assume this is no longer an issue and are closing this topic. If you need further support, please open a new one. Thanks