Voice Assistants

Future-proofing HA with local LLMs: Best compact, low-power hardware?
I don’t think you need HAOS to support an NPU for this use case. Honestly, if you’re doing local inference, it’s NOT running as an add-on in HAOS (which is where it would have to live). Instead, you run HAOS as a guest OS on the same iron that’s running inference - not the other way around.
·community.home-assistant.io·
What LLM are you using for Home Assistant? : homeassistant
There is a lot of variety in the LLMs being used with Home Assistant, as well as voice pipelines. Because this tech moves pretty fast, I would be...
·reddit.com·
VoiceBM - LLM Voice Biometrics
Local voice biometrics for Home Assistant. 60-162x faster speaker identification with Sherpa-ONNX. Intelligence over convenience.
·cybericebyte.github.io·