Jetson AI Lab - Home Assistant Integration

I had been doing it that way, but unfortunately the multi-stage builds with a separate builder container aren’t scalable or fully automated: they require me to manually push each builder image to DockerHub so that other users can pull them instead of rebuilding everything themselves, and those users then have to pull the large builder containers just to copy out a few hundred MB of binaries.

I wish an apt PPA were also an option, but that would require each package to already be set up to build Debian packages. So maybe the simple tarball FTP site is just easiest 🤷‍♂️
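For context, the multi-stage pattern being described looks roughly like this (image names and paths below are placeholders, not the actual jetson-containers setup). The final image keeps only the copied binaries, but anyone rebuilding it locally still has to pull the large builder image:

```shell
# Write a minimal multi-stage Dockerfile sketch (placeholder image names).
cat > Dockerfile.sketch <<'EOF'
# Large builder image with the CUDA toolchain, used only at build time
FROM example/piper-builder:r36 AS builder

# Slim runtime image: copy out just the few hundred MB of built binaries
FROM example/l4t-runtime:r36
COPY --from=builder /opt/piper /opt/piper
EOF

# Build with: docker build -f Dockerfile.sketch -t example/piper:r36 .
```

Pushing the pre-built final image to a registry is what spares users the builder pull, which is exactly the manual step that doesn't scale.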


OK, the container images for piper are up, thanks @michael_gruner for the Dockerfile!


The test script is working and sounds good. Here are the numbers I get after discarding the warmup run:

Piper TTS model:    en_US-lessac-high
Output saved to:    /data/audio/tts/piper-en_US-lessac-high.wav
Inference duration: 1.119 sec
Audio duration:     40.537 sec
Realtime factor:    0.028
Inverse RTF (RTFX): 36.228

Piper TTS model:    en_US-lessac-medium
Output saved to:    /data/audio/tts/piper-en_US-lessac-medium.wav
Inference duration: 0.789 sec
Audio duration:     39.934 sec
Realtime factor:    0.020
Inverse RTF (RTFX): 50.627

Piper TTS model:    en_US-lessac-low
Output saved to:    /data/audio/tts/piper-en_US-lessac-low.wav
Inference duration: 0.649 sec
Audio duration:     40.904 sec
Realtime factor:    0.016
Inverse RTF (RTFX): 63.030
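For anyone reproducing these metrics: the realtime factor is just inference time divided by audio duration, and RTFX is its inverse. A small sketch, using the en_US-lessac-high numbers from above (the last digit can differ slightly from the table, since the printed durations are already rounded):

```shell
# Compute realtime factor (RTF) and inverse RTF (RTFX) from an
# inference duration and the duration of the generated audio.
rtf_report() {
  awk -v inf="$1" -v aud="$2" 'BEGIN {
    printf "Realtime factor:    %.3f\n", inf / aud  # < 1.0 means faster than realtime
    printf "Inverse RTF (RTFX): %.3f\n", aud / inf  # seconds of audio per second of compute
  }'
}

# en_US-lessac-high numbers from the run above:
rtf_report 1.119 40.537
```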

The memory usage for the high model appears to be only 384 MB 👍
Can the Wyoming container for piper use this as the base?


It should be easy to add the Wyoming server into this container. You won’t even need to install the Python library for piper since wyoming-piper uses the compiled piper directly 🙂
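For reference, the server can be pointed at the compiled binary when it is launched. A launch sketch, where the binary path, voice, port, and data dir are all assumptions for illustration (check the wyoming-piper README for the actual flags):

```shell
# Launch the Wyoming protocol server, pointing it at the compiled
# piper binary instead of the Python library (paths/port are assumed,
# adjust to the container's actual layout).
python3 -m wyoming_piper \
    --piper /usr/local/bin/piper \
    --voice en_US-lessac-high \
    --uri tcp://0.0.0.0:10200 \
    --data-dir /data
```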

I can try and add it, but I don’t have a Jetson to test it on. Will this container run on a regular Linux machine?

That error sounds like you need to run both python3 -m script.hassfest and python3 -m script.translations develop --all after installing dependencies.


Thanks for the tip @hansen.mike! :) I will try this one soon.

Just a small WIP status update from my side – I am currently working on a few jetson-containers:

  • homeassistant-base-image - jetson-containers compatible base container for Home Assistant & HA Addon containers with S6 & rootfs (autodiscovery WIP). This one could be used as a dependency for other HA related containers or add-ons.
  • homeassistant-core - Improved Dockerfile, uses homeassistant-base-image in depends
  • wyoming-openwakeword - uses homeassistant-base-image in depends, installs & runs openWakeWord

So far I have working:

  • Home Assistant Core container
  • openWakeWord container
  • openWakeWord add-on manually connected over the Wyoming protocol to the HA instance running on Jetson (auto-discovery still fails for some reason?)

Everything is currently in PR draft: Add `wyoming-openwakeword` addon for `homeassistant-core` container by ms1design · Pull Request #471 · dusty-nv/jetson-containers · GitHub

That’s exciting progress @narandill! I built homeassistant-core, was able to run it here on Jetson, and added a test integration (AccuWeather). I have a Z-Wave stick coming to try with it next. Here are the images:



To run it, either pull or clone the latest jetson-containers and run the updated install first:

git clone
bash jetson-containers/
jetson-containers run $(autotag homeassistant-core)

@dusty_nv I’m using Zigbee instead of Z-Wave, but I can still assume it will require some kind of add-on to support this protocol. I think we could get it working using the homeassistant-base-image container I am working on right now.

I’m happy to announce that wyoming-openwakeword is working together with homeassistant-core on the same Jetson Orin: PR

@hansen.mike thank you for your support with the translations! 🙏 Now, after spending some time on this, I wonder how we could possibly decouple the add-on discovery process from the HA Supervisor while still using bashio?
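On the Supervisor/discovery question, my understanding (an assumption worth verifying against the bashio sources) is that bashio's discovery helper just POSTs the service info to the Supervisor's REST API, roughly:

```shell
# Roughly what bashio::discovery does under the hood (the endpoint and
# payload shape here are assumptions, not verified). In standalone
# Docker there is no "supervisor" host and no SUPERVISOR_TOKEN, so this
# call can never succeed, which would explain the failing auto-discovery.
curl -sSL -X POST "http://supervisor/discovery" \
  -H "Authorization: Bearer ${SUPERVISOR_TOKEN}" \
  -H "Content-Type: application/json" \
  -d '{"service": "wyoming", "config": {"uri": "tcp://wyoming-openwakeword:10400"}}'
```

If that is right, decoupling would mean skipping the discovery step entirely and registering the Wyoming services in HA manually by host and port.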

One could also potentially see some value in my work here if there were interest in adding native aarch64 builds of the homeassistant-core or add-on Docker images later in the future.

Here’s the output from the add-on after pairing with the Home Assistant instance:

s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service openwakeword: starting
s6-rc: info: service openwakeword successfully started
s6-rc: info: service discovery: starting
DEBUG:root:Namespace(uri='tcp://', models_dir=PosixPath('/usr/local/lib/python3.11/dist-packages/wyoming_openwakeword/models'), custom_model_dir=['/share/openwakeword'], preload_model=['ok_nabu'], threshold=0.5, trigger_level=1, output_dir=None, debug=True, log_format='%(levelname)s:%(name)s:%(message)s', debug_probability=False, version=False, model=[])
DEBUG:root:Loading ok_nabu_v0.1 from /usr/local/lib/python3.11/dist-packages/wyoming_openwakeword/models/ok_nabu_v0.1.tflite
DEBUG:wyoming_openwakeword.handler:Started thread for ok_nabu_v0.1
DEBUG:root:Loading /usr/local/lib/python3.11/dist-packages/wyoming_openwakeword/models/melspectrogram.tflite
DEBUG:root:Loading /usr/local/lib/python3.11/dist-packages/wyoming_openwakeword/models/embedding_model.tflite
DEBUG:wyoming_openwakeword.handler:Client connected: 1264218602132616
DEBUG:wyoming_openwakeword.handler:Client disconnected: 1264218602132616
s6-rc: info: service discovery successfully started
s6-rc: info: service legacy-services: starting
s6-rc: info: service legacy-services successfully started
DEBUG:wyoming_openwakeword.handler:Client connected: 1264222638735951
DEBUG:wyoming_openwakeword.handler:Sent info to client: 1264222638735951
DEBUG:wyoming_openwakeword.handler:Client disconnected: 1264222638735951
DEBUG:wyoming_openwakeword.handler:Client connected: 1264271132755228
DEBUG:wyoming_openwakeword.handler:Sent info to client: 1264271132755228
DEBUG:wyoming_openwakeword.handler:Client disconnected: 1264271132755228
DEBUG:wyoming_openwakeword.handler:Client connected: 1264431284147158
DEBUG:wyoming_openwakeword.handler:Sent info to client: 1264431284147158
DEBUG:wyoming_openwakeword.handler:Client disconnected: 1264431284147158

Later, probably after our second Jetson AI Research Group online meeting, I plan to add more add-ons to connect everything into a working Assist Pipeline:

  • homeassistant-core
  • wyoming-openwakeword
  • wyoming-whisper
  • etc…

I totally forgot to mention that this PR also introduces the new homeassistant-base-image container, which installs & configures some common dependencies of:

  • homeassistant-core
  • wyoming-openwakeword
  • and in the future – to the other supported add-ons :)

Have fun everyone!

Awesome @narandill, so Piper and openWakeWord are in place. We are only missing whisper.cpp and/or faster-whisper, right? I might give whisper.cpp a go later this week, if anyone hasn’t already.

@michael_gruner we already have CUDA-enabled containers for whisper, whisperX, and faster-whisper on JetPack 5/6 (and @cyato has been looking at whisper_streaming)

(these aren’t the wyoming versions, just the base, so if you could look into the wyoming build for whisper that would be great)

@narandill looking at your updated PR and additional containers now! 👏

@michael_gruner @dusty_nv Small update - some puzzle pieces are in the right places already:

Everything running on Orin:

  • homeassistant-core
  • wyoming-faster-whisper (using faster-whisper container as dependency)
  • wyoming-openwakeword
  • wyoming-assist-microphone

Unfortunately wyoming-piper, based on the piper-tts container, is still WIP - I’m stuck on uploading wheels to the pip repository. (Anyway, thanks for covering the base piper-tts container @michael_gruner 👏 )

Next in line is connecting my secondary Anker USB mic/speaker device to the Orin, and then… the less fun stage - testing, fixing, testing 👀


Awesome @narandill, that’s great! Don’t worry about the pip wheel uploading - I have that set up so that wheels only upload when building on my local network (for some modicum of security). So when you issue the PR, I merge it, and I build the containers for DockerHub, the wheels will be populated to the pip server then. That is why the twine upload commands in the build scripts fail gracefully:
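A sketch of that graceful-failure pattern (this is an illustration of the idea, not the actual build-script code; PIP_UPLOAD_URL is a placeholder for the local pip server's URL):

```shell
# Try the upload, but let the build continue if the pip server isn't
# reachable (e.g. when building outside the local network).
upload_wheels() {
  twine upload --repository-url "$PIP_UPLOAD_URL" dist/*.whl \
    || echo "wheel upload failed (not on the build network?), continuing"
}
```

The `|| echo` keeps the container build's exit status clean even when `twine` errors out, so outside builders just see a warning.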


Hey @dusty_nv! Looking for a task here. You mentioned something about Zephyr 3B + Whisper Tiny during the call, is that what’s next? Anything else that needs a hand?

Hi @michael_gruner, thanks for reaching out! I’m not sure if you would be interested in these, but here is the list of supported Wyoming voice pipeline projects used in Home Assistant. We have already ported some Wyoming-enabled containers that you can check here.

We’re also looking for people to help test the new Home Assistant containers on Jetson and share their experiences with solving technical challenges, like audio inside Docker when using piper-tts or assist-microphone.

Additionally, if you have any new ideas or suggestions, feel free to propose them. We’re planning to create our own Home Assistant plugin to query LLM APIs, and later also VLMs, which will give us more flexibility and the potential to integrate new features into this project.

@dusty_nv @michael_gruner (and everyone interested) - I think we could create a roadmap to help guide others in their contributions. Since we share a similar vision for AI in smart homes, this process should be straightforward ;)

@narandill hope that audio issue is now resolved and chalked up to audio device configuration.

Agreed that the next action is further testing and debugging of the HA and Wyoming containers @michael_gruner

When PR #481 is ready to merge, I will build those and push them to dockerhub, so then a wider audience can try them. At that point, assuming everything is working as expected, we should be compatible with the existing HA voice system, and we can move onto the next phase of integrating more advanced agents and models like the VLMs.

I am currently working on function calling which should open the gate to more dynamic agents that can hook into real-world systems (like HomeAssistant), do self-learning, and generate pipelines on-the-fly. @jaybdub is currently looking at optimizing Whisper variants with TensorRT so that we can get it in the footprint of Orin Nano, while running alongside everything else (@michael_gruner if that’s something you’re interested in as well, let us know)

Hello everyone,

I’m an HA user and an IT pro in cloud infrastructure.

I bought a Jetson a month ago and installed HA in Docker, hoping to leverage the GPU for a few ideas of mine like facial recognition, voice, etc… but I realised that it wasn’t yet integrated.

I saw this post this morning and would love to test.

I also see that add-ons are disabled on the container, and I wonder if that is something that will be included at some point.

Since you are looking for testers, I’m happy to deploy and test.

I have an Orin 3011.

Let me know where to start.


Thanks @notben, welcome to the team! Great to have folks like yourself onboard who already have working HA setups and know HA. Keep an eye on @narandill’s PR #481 for the latest status on when the Wyoming containers will be ready to test.

If you haven’t already, you can also start trying the homeassistant-core container on your Jetson. If you have devices or sensors that aren’t working with it, that would be useful to know.

My understanding is that add-ons appear disabled because we are not yet able to run the HA Supervisor, and need to manually build the add-on containers (like Wyoming, etc.)


These new Wyoming container images have been pushed to DockerHub - thanks again to @narandill for doing the heavy lifting!


See here for the docker-compose.yml that Mieszko is using:

@narandill has authored some great docs to help guide us all in setting up the Wyoming containers and Home Assistant voice agent!

These have also all been added to the new Smart Home category on jetson-containers homepage. Awesome progress!

Hey everyone! I’ve been getting into Piper TTS recently and found the innovations in this thread really interesting. I want to see how fast I can get my Piper running, so I tried the “TensorRT ExecutionProvider”, since it seems the fastest based on @michael_gruner’s table, but I couldn’t get a proper conversion from the ONNX model to TensorRT.

@michael_gruner @shahizat do you have any tips or reference to documentation about getting Piper running with TensorRT?
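One common route for this kind of ONNX-to-TensorRT conversion is trtexec with explicit dynamic-shape ranges, since Piper's VITS export takes a variable-length phoneme sequence. A sketch only: the tensor names and shape bounds below are assumptions about the export, not verified, so inspect the model with Netron or polygraphy first:

```shell
# Hypothetical trtexec invocation for a Piper voice model. The input
# tensor names (input, input_lengths, scales) and the phoneme-length
# bounds are guesses about the VITS ONNX export; adjust to your model.
trtexec --onnx=en_US-lessac-high.onnx \
        --saveEngine=en_US-lessac-high.engine \
        --minShapes=input:1x1,input_lengths:1,scales:3 \
        --optShapes=input:1x128,input_lengths:1,scales:3 \
        --maxShapes=input:1x512,input_lengths:1,scales:3
```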


Dear all, I have the setup all configured with my Jetson, and all the containers seem to run fine.
I’ve also configured the HA assistant config.

It almost seems to work: the “ok nabu” wake word is recognised, but I see in the log that it does not understand what I’m saying, and the spoken answer is either super fast or not understandable.
In jtop I see nothing running on the GPU…

Not sure if I did something wrong.
It could be linked to my volume config, as I see nothing in the folder I’ve set up in the compose file:

Below is my HA config:


I’ll continue to run some tests… and let you know my progress.