How to play a wav file from a Docker container and output it from an HDMI-connected monitor


Here is my environment:
NVIDIA Jetson AGX Orin Developer Kit (64GB)

I have the Jetson Orin connected to a monitor through a DisplayPort-to-HDMI conversion cable: the Jetson side is DisplayPort and the monitor side is HDMI.

I executed the Docker run command below.

sudo docker run -it --runtime nvidia --network host --name wav --device=/dev/snd:/dev/snd --device=/dev/bus/usb:/dev/bus/usb -v /home/Desktop/wav:/wav dustynv/l4t-pytorch:r35.3.1 bash

I have a USB microphone connected to the Jetson, which is why I am passing --device=/dev/snd:/dev/snd --device=/dev/bus/usb:/dev/bus/usb.

Is there any simple sample program that plays a wav file in Python from inside this container and outputs it through the monitor’s speakers?

Any library that can play wav files is fine.
(I tried playsound and pygame, but neither worked.)

Additional information:

I selected Sound in Settings on the GUI and changed Output to HDMI / DisplayPort-Built-in Audio.
A test sound was played from the monitor.


After testing the sound, I changed the Output Device setting back to Analog Output-Built-in Audio.

Help me!

Please refer to this post for setting up Docker:
Gst-launch-1.0 -vv alsasrc ! alsasink does not work in docker

And try the gst-launch-1.0 command:
Audio playback while recording - #3 by DaneLLL

For playing a wav file, the command will look like:

$ gst-launch-1.0 filesrc location=test.wav ! wavparse ! alsasink
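Since the question asked for Python, the same pipeline can also be launched from a short script via subprocess. This is a minimal sketch, assuming gst-launch-1.0 is available inside the container; the wav path is just an example:

```python
import shutil
import subprocess

def build_play_cmd(wav_path):
    """Build the gst-launch-1.0 argument list for playing a wav file."""
    # Equivalent to: gst-launch-1.0 filesrc location=<wav> ! wavparse ! alsasink
    return ["gst-launch-1.0",
            "filesrc", "location=" + wav_path,
            "!", "wavparse", "!", "alsasink"]

def play_wav(wav_path):
    """Play a wav file through ALSA; fail clearly if GStreamer is missing."""
    if shutil.which("gst-launch-1.0") is None:
        raise RuntimeError("gst-launch-1.0 not found; install gstreamer1.0-tools")
    subprocess.run(build_play_cmd(wav_path), check=True)
```

Inside the container this would be called as, for example, play_wav("output.wav").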


Thank you for your advice.
I’ll try it next week and report back with the results.



I followed your advice to create a container and successfully played wav files.

$ export DISPLAY=:0
$ xhost +
$ sudo docker run -it --rm --net=host --runtime nvidia --device /dev/snd -e PULSE_SERVER=unix:${XDG_RUNTIME_DIR}/pulse/native -v ${XDG_RUNTIME_DIR}/pulse/native:${XDG_RUNTIME_DIR}/pulse/native -v ~/.config/pulse/cookie:/root/.config/pulse/cookie --group-add $(getent group audio | cut -d: -f3) -e DISPLAY=$DISPLAY -v /tmp/.X11-unix/:/tmp/.X11-unix dustynv/l4t-pytorch:r35.3.1 bash
$ apt update
$ apt install alsa-base alsa-utils pulseaudio

Then I played the wav file:

root@ubuntu:example/python# gst-launch-1.0 filesrc location=output.wav ! wavparse ! alsasink
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
Redistribute latency...
Pipeline is PREROLLED ...
Setting pipeline to PLAYING ...
New clock: GstAudioSinkClock
Got EOS from element "pipeline0".
Execution ended after 0:00:05.935090013
Setting pipeline to NULL ...
Freeing pipeline ...

One more point: if you have a solution, please let me know.
When playing, the beginning of the wav file is slightly cut off.

There seems to be a delay until the wav file is first output from the monitor’s speakers.

As a countermeasure, is there an easy way with gst-launch-1.0 to play about 200 ms of dummy silence first and then play the wav file?
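One way to approximate the workaround described above without a second pipeline is to pad the wav file itself with about 200 ms of silence using Python’s standard wave module, so the clipped portion falls inside the padding. A minimal sketch (file names are examples; it assumes PCM data where zero bytes are silence, which holds for the common 16-bit signed format):

```python
import wave

def prepend_silence(src_path, dst_path, ms=200):
    """Write a copy of src_path with `ms` milliseconds of silence prepended."""
    with wave.open(src_path, "rb") as r:
        params = r.getparams()
        frames = r.readframes(r.getnframes())
    # One frame = sampwidth bytes per channel; zero bytes are silence
    # for 16-bit signed PCM, the common wav format.
    n_silent_frames = int(params.framerate * ms / 1000)
    silence = b"\x00" * (n_silent_frames * params.nchannels * params.sampwidth)
    with wave.open(dst_path, "wb") as w:
        w.setparams(params)
        w.writeframes(silence + frames)
```

The padded file can then be played with the same gst-launch-1.0 pipeline as before.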

Thank you!

Please run gst-inspect-1.0 alsasink and you can see the properties. You can configure some of them and give it a try. The properties related to latency and timestamps may help.

We don’t have much experience in setting these properties, so other users would need to share their experience.
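As one concrete way to experiment with those properties: alsasink inherits buffer-time and latency-time (both in microseconds) from GstAudioBaseSink. A hedged sketch that builds such a command in Python; the property values are starting points for experimentation, not recommendations:

```python
import shutil
import subprocess

def build_tuned_cmd(wav_path, buffer_us=200000, latency_us=10000):
    """Build a gst-launch-1.0 command with alsasink latency properties set.

    buffer-time and latency-time are standard GstAudioBaseSink properties,
    given in microseconds; the defaults here are guesses to tune by ear.
    """
    return ["gst-launch-1.0",
            "filesrc", "location=" + wav_path, "!",
            "wavparse", "!",
            "alsasink",
            "buffer-time=%d" % buffer_us,
            "latency-time=%d" % latency_us]

def play_tuned(wav_path):
    """Run the tuned pipeline; fail clearly if GStreamer is missing."""
    if shutil.which("gst-launch-1.0") is None:
        raise RuntimeError("gst-launch-1.0 not found")
    subprocess.run(build_tuned_cmd(wav_path), check=True)
```

Whether this removes the initial clipping depends on the sink and driver, so it would need to be verified on the actual Jetson setup.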



Thank you for your advice.
I’ll do some research from now on.

To play a wav file through the speakers of the monitor connected with an HDMI cable, the following setting change was required:

I selected Sound in Settings on the GUI and changed Output to HDMI / DisplayPort-Built-in Audio.
