Hacker News | armcat's comments

Not on the same extreme level, but I know that some coffee machines use a tiny CNN based model locally/embedded. There is a small super cheap camera integrated in the coffee machine, and the model does three things: (1) classifies the container type in order to select type of coffee, (2) image segmentation - to determine where the cup/hole is placed, (3) regression - to determine the volume and regulate how much coffee to pour.
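Conceptually that's just a shared backbone feeding three task heads. A toy sketch of the idea (all shapes and weights are made up for illustration, with plain numpy matrices standing in for the tiny CNN):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up sizes: 32x32 grayscale input, 64-dim shared feature, 4 container types.
W_backbone = rng.normal(size=(32 * 32, 64)) * 0.01   # stand-in for the CNN
W_cls = rng.normal(size=(64, 4))                     # head 1: classification
W_seg = rng.normal(size=(64, 32 * 32))               # head 2: segmentation
W_reg = rng.normal(size=(64,))                       # head 3: regression

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def forward(img):
    feat = img.reshape(-1) @ W_backbone              # shared features
    container_probs = softmax(feat @ W_cls)          # which coffee to make
    seg = 1 / (1 + np.exp(-(feat @ W_seg)))          # per-pixel cup mask
    volume = float(feat @ W_reg)                     # how much to pour
    return container_probs, seg.reshape(32, 32), volume

probs, mask, vol = forward(rng.random((32, 32)))
```

The appeal of sharing one backbone is that the expensive feature extraction runs once per frame, which matters a lot on embedded hardware.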

Very cool, expensive machines?

This is beautifully written and visualised, well done! The KL divergence comparisons between the original and the different quantisation levels are on point. I'm not sure people realise how powerful quantisation methods are and what they've done for democratising local AI. And there are some great players out there like Unsloth and Pruna.

Thank you! I was really surprised how robust models are to losing information. It seems wrong that they can be compressed so much and still function at all, never mind perform so close to the original.

Think we're only going to keep seeing more progress in this area on the research side, too.


You can even train in 4 & 8 bits with newer microscaled formats! From https://arxiv.org/pdf/2310.10537 to gpt-oss being trained (partially) natively in MXFP4 - https://huggingface.co/blog/RakshitAralimatti/learn-ai-with-...

To Nemotron 3 Super, which had 25T tokens of NVFP4-native pretraining! https://docs.nvidia.com/nemotron/0.1.0/nemotron/super3/pretr...


Newer quantization approaches are even better; 4 bits gets you no meaningful loss relative to FP16: https://github.com/z-lab/paroquant

Hopefully Microsoft keeps pushing BitNet too, so only "1.58" bits are needed.

I think fractional representations are only relevant for training at this point, and bf16 is sufficient, no need for fp4 and such.
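For intuition on why 4 bits loses so little, here's a toy group-wise symmetric INT4 round trip in plain numpy. This is only the baseline idea; real methods like ParoQuant add learned rotations and much more on top:

```python
import numpy as np

def quantize_int4(w: np.ndarray, group: int = 32):
    """Group-wise symmetric quantization to the INT4 range [-8, 7]."""
    g = w.reshape(-1, group)
    scale = np.abs(g).max(axis=1, keepdims=True) / 7.0  # one scale per group
    q = np.clip(np.round(g / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return (q * scale).reshape(-1)

rng = np.random.default_rng(0)
w = rng.normal(size=4096).astype(np.float32)
q, scale = quantize_int4(w)
err = np.abs(dequantize(q, scale) - w).mean()  # small relative to unit variance
```

Per-group scales are what keep the error low: one outlier only inflates the quantization step for its own group of 32 weights, not the whole tensor.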


Learned rotations for INT4 are cool! Seems similar to SpinQuant? https://arxiv.org/abs/2405.16406

In my personal opinion I don’t think the 1.58 bit work is going to make it into the mainstream.

Not sure why you think fractional representations are only useful for training? Being able to natively compute in lower precisions can be a huge performance boost at inference time.


> Learned rotations for INT4 are cool! Seems similar to SpinQuant? https://arxiv.org/abs/2405.16406

Indeed, but much better! More accurate, less time and space overhead, beats AWQ on almost every bench. I hope it becomes the standard.

> In my personal opinion I don’t think the 1.58 bit work is going to make it into the mainstream.

I hope you're wrong! I'm more optimistic. Definitely a bit more work to be done, but still very promising.

> Being able to natively compute in lower precisions can be a huge performance boost at inference time.

ParoQuant is barely worse than FP16. Any less precise fractional representation is going to be worse than just using that IMO.


This is awesome, well done. I've been doing a lot of work with voice assistants; if you can replicate Qwen3-TTS-style voice cloning at this small form factor, you will be absolute legends!


Thanks a lot! Our voice cloning model will be out by May. We're experimenting with some very cool ways of doing voice cloning at 15M, but will have a range of models going up to 500M.


That's sick, looking forward to it! You have my email in the profile, please let me know when you do!


Great work! Kind of reminds me of ell (https://github.com/MadcowD/ell), which had this concept of treating prompts as small individual programs and you can pipe them together. Not sure if that particular tool is being maintained anymore, but your Axe tool caters to that audience of small short-lived composable AI agents.


Thanks for checking it out! And yes, the tool is indeed catering to that crowd. It's a need I have, and I thought others could use it as well.


I really like this, and have actually tried (unsuccessfully) to get PersonaPlex to run on my Blackwell device - I will try this on Mac now as well.

There are a few caveats here, for those of you venturing into this, since I've spent considerable time looking at these voice agents. First, a VAD->ASR->LLM->TTS pipeline can still feel real-time with sub-second RTT. For example, see my project https://github.com/acatovic/ova and also a few others here on HN (e.g. https://www.ntik.me/posts/voice-agent and https://github.com/Frikallo/parakeet.cpp).

Another aspect, after talking to peeps working on PersonaPlex, is that this full-duplex architecture is still a bit off in terms of giving you good accuracy/performance, and it's quite difficult to train. On the other hand, ASR->LLM->TTS gives you a composable pipeline where you can swap parts out and have a mixture of tiny and large LLMs, as well as local and API-based endpoints.
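The composability point can be shown in a few lines. A sketch of the pipeline's wiring with stub stages (the real thing would wrap actual models such as Silero VAD, Parakeet, a local or API LLM, and Kokoro or Piper; none of these stubs are real APIs):

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class VoicePipeline:
    vad: Callable[[bytes], bool]   # chunk -> is someone speaking?
    asr: Callable[[bytes], str]    # audio -> transcript
    llm: Callable[[str], str]      # transcript -> reply (the swappable part)
    tts: Callable[[str], bytes]    # reply -> audio

    def handle(self, chunk: bytes) -> Optional[bytes]:
        if not self.vad(chunk):
            return None            # silence: do nothing
        return self.tts(self.llm(self.asr(chunk)))

# Stub stages just to show the wiring:
pipe = VoicePipeline(
    vad=lambda a: len(a) > 0,
    asr=lambda a: "hello",
    llm=lambda t: f"you said: {t}",
    tts=lambda t: t.encode(),
)
print(pipe.handle(b"\x01"))  # b'you said: hello'
```

Because each stage is just a callable with a fixed interface, swapping the tiny local LLM for an API-hosted one is a one-line change, which is exactly what the full-duplex models give up.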


I've been working on building my own voice agent for a while as well and would love to talk to you and swap notes if you have the time. There are many things I'd like to discuss, but mainly right now I'm trying to figure out how a full-duplex pipeline like this could fit into an agentic framework. I've had no issues with the traditional STT > LLM > TTS route, as that naturally lends itself to agentic behaviour like tool use, advanced context management systems, RAG, etc. I separate the human-facing agent from the subagent to reduce latency and context bloat, and it works well.

While I'm happy with the current pipeline, I always keep an eye out for full-duplex solutions. They look interesting and feel naturally more dynamic because of the architecture, but every time I visit them I can't wrap my head around how you would even begin to implement one as part of a voice agent. Sure, some of these things have text input and output channels, but even then, with their own context limitations, it feels like they could never be anything more than a fancy mouthpiece. Then again, maybe I'm looking at this from ignorance. Anyway, would love to talk on Discord with a like-minded fella. Cheers.


For my framework, since I am using it for outgoing calls, what I'm thinking is I'll add a tool command call_full_duplex(number, persona_name) that will get PersonaPlex warmed up and connected, pause the streams, then connect the SIP, attach the I/O audio streams to the call and return to the agent. Then send the Deepgram and PersonaPlex text in as messages during the conversation and tell it to call a hangup() command when PersonaPlex says goodbye or gets off track, otherwise just wait(). It could also use speak() commands to take over with TTS if necessary, maybe with a shutup() command first. You'd need a very fast and smart model for the agent monitoring the call.


+1

what's your use case and what specific LLMs are you using?

I'm using stt > post-trained models > tts for the education tool I'm building, but full STS would be the end-game. e-mail and discord username are in my profile if you want to connect!


sent!


Sure, feel free to reach out, just check my profile!


I got PersonaPlex to run on my laptop (a beefy one) just by following the step by step instruction on their github repo.

The uncanny thing is that it reacts to speech faster than a person would. It doesn't say useful stuff and there's no clear path to plugging it into smarter models, but it's worth experiencing.


+1 on this pipeline! You can use a super small model to perform an immediate response and a structured output that pipes into a tool call (which may be a call to a "more intelligent" model) or initiates skill execution. Having this async function with a fast response (TTS) to the user + tool call simultaneously is awesome.
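The fast-ack-plus-tool-call pattern above is easy to sketch with asyncio. Here quick_ack and slow_tool are stand-ins (the ack would really be a tiny local model feeding TTS, the tool a bigger model or an actual API/skill call; the names and timings are made up):

```python
import asyncio

async def quick_ack(query: str) -> str:
    await asyncio.sleep(0.05)          # tiny model responds fast
    return f"Sure, let me check {query}..."

async def slow_tool(query: str) -> str:
    await asyncio.sleep(0.3)           # big model / API call is slow
    return f"result for {query!r}"

async def respond(query: str):
    # Launch both at once: the user hears the ack almost immediately
    # while the tool call is still in flight.
    return await asyncio.gather(quick_ack(query), slow_tool(query))

ack, result = asyncio.run(respond("weather"))
```

The total wall time is max(ack, tool) rather than their sum, which is why the conversation keeps feeling responsive even when the smart model is slow.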


The framing in this thread is full-duplex vs composable pipeline, but I think the real architecture is both running simultaneously — and this library is already halfway there.

The fact that qwen3-asr-swift bundles ASR, TTS, and PersonaPlex in one Swift package means you already have all the pieces. PersonaPlex handles the "mouth" — low-latency backchanneling, natural turn-taking, filler responses at RTF 0.87. Meanwhile a separate LLM with tool calling operates as the "brain", and when it returns a result you can fall back to the ASR+LLM+TTS path for the factual answer. taf2's fork (running a parallel LLM to infer when to call tools) already demonstrates this pattern. It's basically how humans work — we say "hmm, let me think about that" while our brain is actually retrieving the answer. We don't go silent for 2 seconds.

The hard unsolved part is the orchestration between the two. When does the brain override the mouth? How do you prevent PersonaPlex from confidently answering something the reasoning model hasn't verified? How do you handle the moment a tool result contradicts what the fast model already started saying?


LLM slop.


Don't be so hard on yourself :P


Which part specifically?


The part where it's in all your comments.


You are wrong but I am not going to keep going back and forth.


+1, agreed - I still prefer the composable pipeline architecture for voice agents. The flexibility of switching LLMs for cost optimisation or quality is great for scaled use cases.


Do you know if any of these multi-stage approaches can run on an 8gb M1 Air?


They should! If you take Parakeet (ASR), add Qwen 3.5 0.8B (LLM) and Kokoro 82M (TTS), that's about 1.2G + 1.6G + 164M, so ~3.5GB with overhead at FP16. If you use INT8 or 4-bit versions then you're getting down to 1.5-2GB RAM.

And you can always for example swap out the LLM for GPT-5 or Claude.
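A back-of-envelope check of those numbers. The sizes are the rough FP16 footprints quoted above (in GB), and the overhead factor is a guess, not a measured value; real deployments land higher than the 4-bit figure because activations and caches don't shrink with weight precision:

```python
# Approximate FP16 weight footprints in GB, per the comment above.
sizes_gb = {"parakeet_asr": 1.2, "qwen_llm": 1.6, "kokoro_tts": 0.164}

def footprint(models: dict, bits: int = 16, overhead: float = 1.15) -> float:
    # Weight memory scales roughly linearly with bit width.
    return sum(models.values()) * (bits / 16) * overhead

fp16 = footprint(sizes_gb)           # ~3.4 GB
int4 = footprint(sizes_gb, bits=4)   # ~0.85 GB (weights only)
```

Either way it fits comfortably in 8GB alongside the OS, as long as you keep context lengths modest.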


This is an outstanding write up, thank you! Regarding LLM latency, OpenAI introduced web sockets in their Responses client recently so it should be a bit faster. An alternative is to have a super small LLM running locally on your device. I built my own pipeline fully local and it was sub second RTT, with no streaming nor optimisations https://github.com/acatovic/ova


Very cool! Starred and on my reading list. Would love to chat and share notes, if you'd like.


Also consider using Cerebras' inference APIs. They released a voice demo a while back and the latency of their model inference is insane.


I tried to use Cerebras and it was unbeatable at first, but the client didn't want to pay $1300 a month and the $50/month or pay as you go was just not reliable. It would give service unavailable errors or falsely claim we were over our rate limit.

Also Groq is very fast, but the latency wasn't always consistent and I saw some very strange responses on a few calls that I had to attribute to quantization.


You may be interested in gemini-2.5-flash-preview-tts

Text in, audio out, so you can merge in a single step LLM+TTS (streamable)

https://ai.google.dev/gemini-api/docs/models/gemini-2.5-flas...


I've used itch.io before, it's great! I even made a game using some free assets and AI: https://github.com/acatovic/gothicvania-codex-demo


This is awesome, well done guys, I’m gonna try it as my ASR component on the local voice assistant I’ve been building https://github.com/acatovic/ova. The tiny streaming latencies you show look insane


I think you are spot on there, and I am not sure such things exist (yet), but I may be wrong. Some random thoughts:

1. Using the skills frontmatter to implement a more complex YAML structure, so e.g.

  requires:
    python:
      - pandas>=2.1
    skills:
      - huggingface-cli@1.x

2. Using a skills lock file ;-)

  skills.lock
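To make the lock-file idea concrete, a hypothetical skills.lock could pin resolved versions and content hashes, much like uv.lock or package-lock.json do for packages (every field name below is made up):

```yaml
# skills.lock - hypothetical format
version: 1
skills:
  huggingface-cli:
    version: 1.4.2            # resolved from the "1.x" constraint
    sha256: "3f6b..."         # hash of the skill bundle, for reproducibility
    python:
      pandas: 2.1.4           # resolved from "pandas>=2.1"
```

The point of the lock file is the same as in package managers: the frontmatter states ranges, the lock records the exact resolution, so two machines run the same skill bytes.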


uvx is probably the way to go here (a fully self-contained environment for each skill), using stdout as the I/O bridge between skills.


This is such an insane rabbit hole. AI labs distill weights from the entirety of the internet's knowledge, (mostly) without anyone's consent, which (technically) amounts to theft. However, the Chinchilla law dictates that you need to expend X amount of energy to make this knowledge useful. Then the data law dictates that you need to shift the weights into a more useful latent space by paying maths, coding and domain experts lots of money. So you have "stolen" the data, but then paid billions to make it useful. And useful it is!

Then another lab comes and "steals" from you - that beautiful, refined data-oil - by distilling your weights, using inferior equipment but with a toolbox of ingenuity and low-level hacking tricks. They reach 90% of your performance at a 20x cost reduction.

What happens when another lab distills from the distilled lab?

Who is the thief? And how far down the rabbit hole does Alice go?


What would you define as 'distillation' versus 'learning'? How do you know that what a LLM is doing is 'distillation' vs a process closer to a human reading a book?

From my perspective, pretraining is pretty clearly not 'distilling', as the goal is not to replicate the pretraining data but to generalize. But what these companies are doing is clearly 'distilling' in that they want their models to exactly emulate Claude's behavior.


That's a soft distinction (distilling vs learning). If I read a chapter in a textbook, I am distilling the knowledge from that chapter into my own latent space - one would hope I learn something. Flipping it the other way, you could say that the model from Lab Y is ALSO learning from the model from Lab X, not just "distilling" it. Hence my original comment - how deep does this go?


And yet nearly every machine learning engineer would disagree with you, which is a giveaway that your argument is rooted in ideology.


> And yet nearly every machine learning engineer would disagree with you, which is a giveaway that your argument is rooted in ideology.

That's a bold statement! Of course I know the difference: in one case you are learning from correct/wrong answers, and in the other from a probability distribution. But in both cases you are using some X to move the weights. We can get into the nitty-gritty of KL divergence vs cross-entropy, but the whole topic is about "theft", which is perhaps in the eye of the beholder.


> which (technically) amounts to theft

Why bother writing so many words when you lack the discipline to choose the words with correct semantics?

