Recording two 4K@30 streams into a single video

Hi Guys,

I am working on an application where I need to record two 4K streams on one Jetson TX2.

We have acquired a module from Leopard Imaging with two IMX274 sensors. The module is called LI-JETSON-KIT-IMX274M12-D. Its two IMX274 sensors are interfaced through MIPI.

(https://www.leopardimaging.com/LI-JETSONKIT-IMX274M12-X.html)

The application needs the images to be available as one video stream, and it is essential that the images are in sync. We are therefore examining ways to combine the two images into a single video stream.

  1. Our first tests have shown that we can record with GStreamer from both cameras simultaneously as two separate video streams in an MKV container. This does not join the streams into one, but it proves that the TX2 can encode both.
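Roughly, the pipeline for this first test looked like the following (a sketch rather than our exact command; the element names, caps, and sensor ids match the stacking configuration shown further below, and matroskamux accepts multiple video pads):

```shell
# Sketch: encode both cameras and mux them as two separate video
# streams in one MKV file (untested as written; requires the TX2 hardware).
gst-launch-1.0 -e \
  nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" \
  ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' \
  ! omxh265enc bitrate=16000000 ! queue ! mux. \
  nvcamerasrc sensor-id=2 fpsRange="30.0 30.0" \
  ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' \
  ! omxh265enc bitrate=16000000 ! queue ! mux. \
  matroskamux name=mux ! filesink location=two_streams.mkv
```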

  2. The next thing we tried was to stack the two videos vertically (3840x4320), but this exceeds the hardware buffer's maximum capacity, so that is a no-go.

  3. The third thing we tried was to squeeze the less important parts of the video (the lower half of each image) and stack the resulting images, keeping us just within the hardware buffer capacity (3840x3072). This works, but I have a performance problem: I can only achieve 1 fps with the following configuration.

gst-launch-1.0 -e \
nvcamerasrc sensor-id=0 fpsRange="1.0 1.0" \
! queue ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)1/1' \
! nvvidconv flip-method=0 ! 'video/x-raw, format=I420' ! mix. \
nvcamerasrc sensor-id=2 fpsRange="1.0 1.0" \
! queue ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)1/1' \
! nvvidconv flip-method=0 ! 'video/x-raw, format=I420' ! mix. \
videomixer name=mix \
sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=1 \
sink_1::xpos=0 sink_1::ypos=2160 sink_1::alpha=1 \
! videoconvert \
! video/x-raw,width=3840,height=4320 \
! glupload \
! glshader fragment="$(cat shader.frag)" \
! gldownload \
! videoconvert \
! videocrop top=0 left=0 right=0 bottom=1248 \
! omxh265enc bitrate=16000000 iframeinterval=60 \
! matroskamux \
! filesink location=stack.mkv
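The geometry can be double-checked with simple arithmetic (the constants come from the shader and the videocrop setting):

```shell
# Row budget of the squeezed stack (constants from the shader / pipeline).
FULL=912                            # rows kept at full resolution per image
HALF=624                            # rows each squeezed lower half occupies (1248 -> 624)
STACKED=$((2 * 2160))               # two stacked 4K images: 4320 rows
OUT=$((FULL + HALF + FULL + HALF))  # useful rows after the shader: 3072
CROP=$((STACKED - OUT))             # rows videocrop removes at the bottom: 1248
echo "output height: $OUT, cropped: $CROP"
```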

The shader.frag we are using is pasted at the end of the message.

I think my bottleneck is the stack-upload-shader-download-crop steps.

So with this long intro, here is my question:

Have I made a blunder in my GStreamer configuration, or is there a more efficient way to join two images in a shader?

Or is there perhaps another way to store two 4K@30 streams in a single 4K@60 stream that plays nicely with encoding?

Any ideas are welcome.

Kind regards

Jesper

-------------SHADER BEGIN-------------
#version 100
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texcoord;
uniform sampler2D tex;
uniform float time;
uniform float width;
uniform float height;

void main ()
{
    vec2 p = v_texcoord;

    // Convert normalized coordinates to pixel coordinates.
    p.x *= width;
    p.y *= height;

    // Output bands: two full-resolution bands of 912 rows and two
    // 2x-squeezed bands of 624 rows (each covering 1248 source rows).
    float x = 912.0;
    float y = 624.0;

    if (p.y < x)
    {
        // A: top of image 1, copied unchanged.
    }
    else if (p.y < x + y)
    {
        // B: lower half of image 1, squeezed 2:1.
        p.y = x + (p.y - x) * 2.0;
    }
    else if (p.y < x + x + y)
    {
        // C: top of image 2 (its source rows start at 2160), copied unchanged.
        p.y = 2160.0 + (p.y - (x + y));
    }
    else if (p.y < x + x + y + y)
    {
        // D: lower half of image 2, squeezed 2:1.
        p.y = 2160.0 + 912.0 + (p.y - (x + x + y)) * 2.0;
    }
    else
    {
        // E: below the packed area; sample the top-left texel.
        p *= 0.0;
    }
    // Sample the source at the mapped (half-texel-offset) coordinate.
    gl_FragColor = texture2D(tex, vec2((p.x + 0.5) / width, (p.y + 0.5) / height));
}
-------------SHADER END-------------
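As a sanity check of the shader's piecewise mapping, the source rows sampled by each output band should meet without gaps; evaluating the branch formulas at the region boundaries (constants x=912, y=624, as in the shader):

```shell
# Evaluate the row mapping at each region boundary (integer arithmetic).
X=912; Y=624
B_END=$((X + ((X + Y) - X) * 2))                     # B at p.y=1536 -> source row 2160
C_END=$((2160 + ((2 * X + Y) - (X + Y))))            # C at p.y=2448 -> source row 3072
D_END=$((2160 + X + ((2 * X + 2 * Y) - (2 * X + Y)) * 2))  # D at p.y=3072 -> source row 4320
echo "B ends at $B_END, C ends at $C_END, D ends at $D_END"
```

B hands over exactly where image 2 begins (row 2160), C where image 2's lower half begins (row 3072), and D reaches the bottom of the 4320-row stack, so the two source images are fully covered.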

Looks similar to:
How to pass data between GStreamer and CUDA without memory copying? - Jetson TX1 - NVIDIA Developer Forums

We suggest using tegra_multimedia_api:
Finding the bottleneck in video stitching application - Jetson TX1 - NVIDIA Developer Forums

Also, fpsRange="1.0 1.0" may limit the sensor output to 1 frame per second.
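For example (an untested sketch of a single-camera test based on the pipeline in the post; the caps framerate should match the requested range):

```shell
# Request 30 fps from the sensor instead of 1 fps (sketch, not verified;
# requires the TX2 hardware and cameras).
gst-launch-1.0 -e nvcamerasrc sensor-id=0 fpsRange="30.0 30.0" \
  ! 'video/x-raw(memory:NVMM), width=(int)3840, height=(int)2160, format=(string)I420, framerate=(fraction)30/1' \
  ! nvvidconv ! omxh265enc bitrate=16000000 \
  ! matroskamux ! filesink location=test.mkv
```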