Implementing RTSP streaming with audio from a USB microphone

sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev
wget https://gstreamer.freedesktop.org/src/gst-rtsp/gst-rtsp-server-1.14.1.tar.xz
tar -xvf gst-rtsp-server-1.14.1.tar.xz
cd gst-rtsp-server-1.14.1/examples
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1280, height=720, framerate=120/1 ! omxh265enc ! rtph265pay name=pay0 pt=96 config-interval=1"

Given that arecord -l lists the USB microphone as card 2, device 0,
the question is how to modify the command above to also include audio.
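As a side note, the hw:CARD,DEV string that alsasrc expects maps directly onto the card and device numbers printed by arecord -l. A small shell sketch, using a hypothetical one-line sample of that output for a USB microphone on card 2:

```shell
# Hypothetical single line of `arecord -l` output for a USB mic on card 2:
line='card 2: Device [USB Audio Device], device 0: USB Audio [USB Audio]'

# Pull out the card and device numbers and build the ALSA device string
card=$(echo "$line" | sed -n 's/^card \([0-9]*\):.*/\1/p')
dev=$(echo "$line" | sed -n 's/.*device \([0-9]*\):.*/\1/p')
echo "hw:${card},${dev}"   # prints hw:2,0
```

The resulting string is what goes into the device= property of alsasrc in the pipelines below.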
Trial 1:
adding the component
"queue ! muxer.video_0 alsasrc device=hw:2,0 ! voaacenc ! queue ! muxer.audio_0 qtmux name=muxer !"

./test-launch "nvarguscamerasrc ! video/x-raw(memory:NVMM), format=NV12, width=1280, height=720, framerate=120/1 ! omxh265enc ! rtph265pay  muxer.video_0 alsasrc device="hw:2,0" ! voaacenc ! queue ! muxer.audio_0 qtmux name=muxer name=pay0 pt=96 config-interval=1"

obviously failed;
Trial 2:

gst-launch-1.0 -e nvarguscamerasrc ! 'video/x-raw(memory:NVMM),width=1920, height=1080,format=NV12,framerate=30/1' ! omxh264enc ! 'video/x-h264,stream-format=byte-stream' ! h264parse ! queue ! muxer.video_0 alsasrc device="hw:2,0" ! voaacenc ! queue ! muxer.audio_0 qtmux name=muxer ! udpsink host=127.0.0.1 port=3001

failed [though on an AGX]; likely because qtmux writes a non-streamable MP4 by default (the moov header is only written at end-of-stream), which does not suit live udpsink output

v4l2-ctl --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
	Index       : 0
	Type        : Video Capture
	Pixel Format: 'BG10'
	Name        : 10-bit Bayer BGBG/GRGR
		Size: Discrete 2592x1944
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 2592x1458
			Interval: Discrete 0.033s (30.000 fps)
		Size: Discrete 1280x720
			Interval: Discrete 0.008s (120.000 fps)
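For reference, the discrete intervals reported above are just rounded reciprocals of the frame rate (0.033 s ≈ 1/30, 0.008 s ≈ 1/120), which is why 1280x720 is the only mode that can feed the framerate=120/1 pipeline above. A quick check:

```shell
# Frame interval -> fps: the reported 0.033 s and 0.008 s are rounded
# values of 1/30 s and 1/120 s respectively.
awk 'BEGIN { printf "%.0f %.0f\n", 1/0.033333, 1/0.008333 }'   # prints 30 120
```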

Trial 3:
[to be continued]

Hi,
You may refer to this pipeline:


and replace audiotestsrc with alsasrc.

thanks

 ./test-launch 'nvarguscamerasrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! queue ! rtph264pay name=pay0 pt=96 alsasrc device="hw:2,0" ! voaacenc ! queue ! rtpmp4apay pt=97 name=pay1'
stream ready at rtsp://127.0.0.1:8554/test

worked

Will it work with DeepStream?
To what extent does DeepStream support audio for input and redirection?
Can it take a combined RTSP stream [audio+video], process it, then output RTSP [audio+video] again?
Probably I should post the question in a separate thread?

Hi,
The default deepstream-app demonstrates video processing. For audio, you may need to refer to the source code and do the integration yourself. We don't have an existing sample for it and would need other users to share experience/guidance.

How to add latency for the audio, like 4-5 seconds? The default PulseAudio sound volume control (sudo apt install pulseaudio) seems to be limited to 2 seconds and has no effect on the microphone as long as other sound services are running, as far as I can tell.
The context is that the video comes from an AI processor that introduces a delay, so the audio needs to be delayed to match.

e.g. alsasrc buffer-time=32000 latency-time=16000
This gives the source a maximum latency of 32 ms and a minimum of 16 ms (both properties are specified in microseconds)

source http://gstreamer-devel.966125.n4.nabble.com/delay-between-speaker-and-microphone-td4675499.html
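Since both properties are in microseconds, the desired 4.7-second microphone delay (with, say, a 5-second buffer) translates as follows; a quick shell sanity check:

```shell
# alsasrc buffer-time / latency-time are in microseconds,
# so 4.7 s for latency-time and 5.0 s for buffer-time become:
latency_us=$(( 4700 * 1000 ))   # 4.7 s in microseconds
buffer_us=$(( 5000 * 1000 ))    # 5.0 s in microseconds
echo "$latency_us $buffer_us"   # prints 4700000 5000000
```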

So if I need a latency of 4.7 seconds for the microphone, I could probably use the following pipeline?

 ./test-launch 'nvarguscamerasrc ! nvvidconv ! nvv4l2h264enc ! h264parse ! queue ! rtph264pay name=pay0 pt=96 alsasrc buffer-time=5000000 latency-time=4700000 device="hw:2,0" ! voaacenc ! queue ! rtpmp4apay pt=97 name=pay1'
stream ready at rtsp://127.0.0.1:8554/test

Hi,
There are several properties in alsasrc. Not sure, but configuring certain properties might help. Since we don't have much experience in advanced control of this plugin, please go to the GStreamer forum for suggestions/guidance.