I am trying to display video in UHD format (3840x2160, 30 fps) on a Tegra TX1, and I plan to use GStreamer. When I set up a GStreamer pipeline to show videotestsrc, the frame rate was very low (about one frame every 2 seconds).
My task is to capture frames from a USB camera (1920x1536, 60 fps), combine every two frames into a single UHD frame, and send it to HDMI.
What is the best way to do that?
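For reference, the "combine two frames into one UHD frame" step is plain memory copying. Below is a minimal CPU sketch, assuming one byte per pixel on a single plane and side-by-side placement inside the UHD canvas; `stitch_pair` and the layout are illustrative, not part of any NVIDIA API:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Dimensions from the question: two 1920x1536 camera frames placed
 * side by side inside a 3840x2160 UHD canvas. One byte per pixel
 * (e.g. a single grayscale or Y plane) keeps the sketch simple. */
enum { SRC_W = 1920, SRC_H = 1536, DST_W = 3840, DST_H = 2160 };

/* Copy two source frames into the left and right halves of the
 * destination, row by row. Rows below SRC_H are left untouched
 * (padding, since 1536 < 2160). */
static void stitch_pair(const uint8_t *left, const uint8_t *right,
                        uint8_t *dst)
{
    for (int y = 0; y < SRC_H; ++y) {
        memcpy(dst + (size_t)y * DST_W,         left  + (size_t)y * SRC_W, SRC_W);
        memcpy(dst + (size_t)y * DST_W + SRC_W, right + (size_t)y * SRC_W, SRC_W);
    }
}
```

At 60 fps input this runs 30 times per second (one stitched frame per input pair), so on the real device you would want these copies done by hardware (VIC/NvBuffer) rather than memcpy.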
We suggest using tegra_multimedia_api in your case. Please install it via JetPack 3.1 and refer to
The camera vendor provides its own API for capturing video from the camera, so I only need the display part. Can tegra_multimedia_api help me here?
Please check whether xvimagesink gives you acceptable performance:
gst-launch-1.0 v4l2src device=/dev/video1 ! 'video/x-raw,format=I420,width=1920,height=1080' ! xvimagesink
If the performance is not good enough, we suggest capturing frames into NvBuffer so that you can leverage the Tegra hardware engines.
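To get a sense of why a software sink that copies frames on the CPU struggles at UHD, here is a back-of-the-envelope calculation, assuming I420 pixel format; the helper function names are illustrative:

```c
#include <assert.h>

/* I420 stores 1.5 bytes per pixel: a full-resolution Y plane plus
 * quarter-resolution U and V planes. */
static long i420_frame_bytes(long width, long height)
{
    return width * height * 3 / 2;
}

/* Raw data rate the sink must move for a given frame size and rate. */
static long bytes_per_second(long frame_bytes, long fps)
{
    return frame_bytes * fps;
}
```

A 3840x2160 I420 frame is about 12.4 MB, so 30 fps means roughly 370 MB/s of copying, versus roughly 93 MB/s for 1920x1080 at the same rate; a fourfold jump that a CPU-side copy path often cannot sustain, which is why capture into NvBuffer and hardware-accelerated display is recommended.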
xvimagesink works well on a 1920x1080 image, but not on a UHD image. How can I allocate an NvBuffer in my source code so that I can read the image into it directly from the API? Is there an example?
tegra_multimedia_api/samples/12_camera_v4l2_cuda is a sample that captures into NvBuffer via v4l2 controls.
Is it possible to install tegra_multimedia_api on an offline board? I have downloaded JetPack, but I have no internet connection right now. What is the full path to tegra_multimedia_api?
No, it is not possible to install it on an offline board. An internet connection is a must.