GStreamer: record multiple videos with multiple audio sources

Hi,

I have the following video inputs: one cam (HDMI2USB) and one screen grabber (HDMI2USB as well). These show up as /dev/video0 and /dev/video1. For audio, I have a USB microphone plus the screen grabber's own sound (the laptop).

I want to record two files: one with the cam + USB mic, and one with the laptop's screen and sound.

I can record the two videos with this command:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-raw,width=640, height=480, framerate=30/1, format=YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! qtmux ! filesink location=polycom_video.mp4 v4l2src device=/dev/video1 ! 'video/x-raw,width=1920, height=1080, framerate=30/1, format=MJPG' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! qtmux ! filesink location=logitech_content.mp4 -e

I can record one with audio with this:

gst-launch-1.0 -v v4l2src device=/dev/video1 ! 'video/x-raw,width=1920,height=1080,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mux. alsasrc ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! qtmux name=mux ! filesink location=content_audiotest.mp4 sync=false -e

How can I achieve the goal above? How do I properly mux each audio source into its video file?

Hi rapif,

You may want to check the following reply as reference: https://devtalk.nvidia.com/default/topic/1058494/jetson-agx-xavier/command-for-merging-audio-and-video-in-gstreamer/post/5368510/#5368510. In your case you probably need to replace audiotestsrc with an element that reads audio data from your USB mic; for example, you can use the pulsesrc or alsasrc elements.
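For example, swapping alsasrc into the referenced mux structure could look like this (a sketch only: hw:1 is a placeholder card index, and the caps are taken from your own commands, so adjust both to your hardware):

```shell
# Same named-mux structure as the referenced example, but reading from an ALSA mic.
# hw:1 is a placeholder card index; replace it with your USB mic's index
# (see /proc/asound/cards).
gst-launch-1.0 -e qtmux name=mux ! filesink location=cam_and_mic.mp4 \
  v4l2src device=/dev/video0 ! 'video/x-raw,width=640,height=480,framerate=30/1,format=YUY2' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mux. \
  alsasrc device=hw:1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! mux.
```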

-Jafet

Hi Jafet,

Yes, my second command already merges audio with video, but I cannot make the double merge work.

If I’m correct the final command should look something like this:

gst-launch-1.0 -v v4l2src device=/dev/video0 ! 'video/x-raw,width=640, height=480, framerate=30/1, format=YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mux. alsasrc ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! mux. ! qtmux name=mux ! filesink location=angekis_video.mp4 v4l2src device=/dev/video1 ! 'video/x-raw,width=1920,height=1080,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mux2. alsasrc ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! mux2. ! qtmux name=mux2 ! filesink location=logitech_content.mp4 sync=false -e

However, I'm having trouble with the two muxers. I assume I cannot reuse the same mux name twice, so I need different names, hence mux2 for the second recording.

This command fails with a syntax error. What am I missing here?

Do you have any advice on keeping audio and video in sync within each of the two pipelines? How should I buffer them once it's working?

Thanks in advance,

With debug, I get the following:

0:00:00.363509064 13164   0x55a5b75460 WARN                     omx gstomx.c:2826:plugin_init: Failed to load configuration file: Valid key file could not be found in search dirs (searched in: /home/support/.config:/etc/xdg as per GST_OMX_CONFIG_DIR environment variable, the xdg user config directory (or XDG_CONFIG_HOME) and the system config directory (or XDG_CONFIG_DIRS)
0:00:00.375105335 13164   0x55a5b75460 WARN                 default grammar.y:1137:priv_gst_parse_yyerror: Error during parsing: syntax error, unexpected LINK
0:00:00.375174972 13164   0x55a5b75460 ERROR           GST_PIPELINE grammar.y:1061:priv_gst_parse_yyparse: syntax error
0:00:00.375229869 13164   0x55a5b75460 ERROR           GST_PIPELINE grammar.y:1061:priv_gst_parse_yyparse: syntax error
0:00:00.380034704 13164   0x55a5b75460 WARN                 default grammar.y:1137:priv_gst_parse_yyerror: Error during parsing: syntax error, unexpected LINK
0:00:00.380096580 13164   0x55a5b75460 ERROR           GST_PIPELINE grammar.y:1061:priv_gst_parse_yyparse: syntax error
0:00:00.380137831 13164   0x55a5b75460 ERROR           GST_PIPELINE grammar.y:1061:priv_gst_parse_yyparse: syntax error
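For reference, the "unexpected LINK" appears to come from the `mux. ! qtmux name=mux` fragments: `mux.` refers back to an already-named element's pad, so it cannot be followed by another `!` into a new qtmux. A minimal toy version with test sources (a sketch, not the real devices) shows the working shape: instantiate the muxer once and terminate each branch at `mux.`:

```shell
# Broken shape (what the parser rejects):  ... ! aacparse ! mux. ! qtmux name=mux ! ...
# Working shape: declare qtmux once, then end both branches at 'mux.'.
gst-launch-1.0 -e qtmux name=mux ! filesink location=toy.mp4 \
  videotestsrc num-buffers=60 ! x264enc ! h264parse ! queue ! mux. \
  audiotestsrc num-buffers=60 ! voaacenc ! aacparse ! queue ! mux.
```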

video0 is my usb cam, and video1 is my hdmi screen. So the problem is with my encoding formats.

For mapping the correct audio sources, I have the following information:

root@JetsonNano:~# cat /proc/asound/cards
 0 [tegrahda       ]: tegra-hda - tegra-hda
                      tegra-hda at 0x70038000 irq 83
 1 [tegrasndt210ref]: tegra-snd-t210r - tegra-snd-t210ref-mobile-rt565x
                      tegra-snd-t210ref-mobile-rt565x
 2 [SpkUAC20       ]: USB-Audio - miniDSP VocalFusion Spk (UAC2.0
                      miniDSP miniDSP VocalFusion Spk (UAC2.0 at usb-70090000.xusb-2.4, high speed
 3 [Share          ]: USB-Audio - Logitech Screen Share
                      Alpha Imaging Tech. Corp. Logitech Screen Share at usb-70090000.xusb-1.2, super

So I should use alsasrc device=hw:2 and alsasrc device=hw:3, respectively?
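To double-check that mapping before wiring it into the pipelines, the card indices can be pulled out of the listing by name (a quick sketch; the sample data below is the abbreviated listing from above, and on the Nano you would read /proc/asound/cards directly):

```shell
# Pull the ALSA card indices out of /proc/asound/cards by card name.
# Sample data from the listing above (abbreviated); on the device use:
#   cards=$(cat /proc/asound/cards)
cards=' 0 [tegrahda       ]: tegra-hda - tegra-hda
 1 [tegrasndt210ref]: tegra-snd-t210r - tegra-snd-t210ref-mobile-rt565x
 2 [SpkUAC20       ]: USB-Audio - miniDSP VocalFusion Spk (UAC2.0
 3 [Share          ]: USB-Audio - Logitech Screen Share'
mic_idx=$(printf '%s\n' "$cards" | awk '/\[SpkUAC20/ {print $1}')
share_idx=$(printf '%s\n' "$cards" | awk '/\[Share/ {print $1}')
echo "usb mic -> alsasrc device=hw:${mic_idx}"
echo "screen  -> alsasrc device=hw:${share_idx}"
```

If the indices match, device=hw:2 for the mic and device=hw:3 for the screen share should be right; plughw:N instead of hw:N lets ALSA convert sample rates if the caps don't negotiate.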

Hi,

Can you please test these pipelines?

gst-launch-1.0 -e v4l2src device=/dev/video0 ! 'video/x-raw,width=640, height=480, framerate=30/1, format=YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mp4mux name=mux ! filesink location="angekis_video.mp4" alsasrc ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! mux.
gst-launch-1.0 -e v4l2src device=/dev/video1 ! 'video/x-raw,width=1920,height=1080,framerate=30/1' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mp4mux name=mux2 ! filesink location="logitech_content.mp4" alsasrc ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! mux2.

-Jafet

Hi,

I've swapped the camera for one with a higher resolution, but the results were the same. The only part of the command I modified is the resolution.

I’m testing the video0 input with this:

gst-launch-1.0 -e v4l2src device=/dev/video0 ! 'video/x-raw,width=1920, height=1080, framerate=30/1, format=YUY2' ! nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mp4mux name=mux ! filesink location="angekis_video_sound.mp4" alsasrc ! audio/x-raw,width=16,depth=16,rate=44100,channel=1 ! queue ! audioconvert ! audioresample ! voaacenc ! aacparse ! mux.

So, about your pipelines, with the first one I can record, but get the following errors:

(gst-plugin-scanner:8930): GLib-GObject-WARNING **: 09:15:06.470: cannot register existing type 'GstInterpolationMethod'

(gst-plugin-scanner:8930): GLib-GObject-CRITICAL **: 09:15:06.470: g_param_spec_enum: assertion 'G_TYPE_IS_ENUM (enum_type)' failed

(gst-plugin-scanner:8930): GLib-GObject-CRITICAL **: 09:15:06.470: validate_pspec_to_install: assertion 'G_IS_PARAM_SPEC (pspec)' failed

It does record, but without audio; the second one behaves the same.

It could be because of the wrong audio device selection, don’t you think?
Anyway, I feel this is the smaller problem, since my pipeline above is already capable of recording audio and video. Advice is of course welcome on this matter too.
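One way to rule out a wrong device: capture a few seconds from each ALSA device into a plain WAV and listen to it (hw:2 per the card listing earlier; a sketch, adjust the index):

```shell
# Audio-only sanity check: a few seconds from the USB mic
# (card index per /proc/asound/cards).
# If the resulting WAV is silent, the device/index is wrong; try plughw:N as well.
gst-launch-1.0 -e alsasrc device=hw:2 num-buffers=300 ! audioconvert ! wavenc ! filesink location=mic_test.wav
```

Also worth noting: in GStreamer 1.0, raw-audio caps use format=S16LE and channels=1; width, depth, and channel are 0.10-era fields, which could be why the audio branch produces nothing.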

The main question is how to merge the two pipelines.

Hi rapif,

If I understand your use case correctly, you can run both pipelines (one for the USB cam + mic, the other for the video capture device) in the same gst-launch command. For example, a command like the one below is totally feasible:

gst-launch-1.0 -e videotestsrc is-live=true ! x264enc ! mp4mux name=mux ! filesink location="video_and_audio.mp4"  audiotestsrc ! lamemp3enc ! mux. videotestsrc is-live=true ! x264enc ! mp4mux name=mux2 ! filesink location="video_and_audio2.mp4"  audiotestsrc ! lamemp3enc ! mux2.

However, I recommend you first make sure each pipeline works separately; it's easier to debug. Then you can combine both pipelines in the same command by simply putting one after the other.
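Applied to the actual sources discussed in this thread, the combined command might look like the sketch below. The device paths, ALSA indices (hw:2 / hw:3 from the card listing earlier), and caps are taken from the previous posts and should be treated as placeholders to adjust, not a verified command; note channels=1 rather than channel=1, and format=S16LE in place of the old width/depth fields:

```shell
# Combined sketch: cam + USB mic -> file 1, screen grabber + its audio -> file 2.
# Assumes the Jetson's nvvidconv/omxh264enc elements and the ALSA indices
# from the /proc/asound/cards listing earlier; adjust devices and caps as needed.
gst-launch-1.0 -e \
  qtmux name=mux ! filesink location=angekis_video.mp4 \
  v4l2src device=/dev/video0 ! 'video/x-raw,width=640,height=480,framerate=30/1,format=YUY2' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mux. \
  alsasrc device=hw:2 ! 'audio/x-raw,format=S16LE,rate=44100,channels=1' ! queue ! \
    audioconvert ! audioresample ! voaacenc ! aacparse ! mux. \
  qtmux name=mux2 ! filesink location=logitech_content.mp4 \
  v4l2src device=/dev/video1 ! 'video/x-raw,width=1920,height=1080,framerate=30/1' ! \
    nvvidconv ! 'video/x-raw(memory:NVMM),format=NV12' ! omxh264enc ! queue ! mux2. \
  alsasrc device=hw:3 ! 'audio/x-raw,format=S16LE,rate=44100,channels=1' ! queue ! \
    audioconvert ! audioresample ! voaacenc ! aacparse ! mux2.
```

The queue before each muxer input gives every branch its own streaming thread, which helps the two muxers interleave audio and video without one branch starving the other.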