Yeah, Trainium and Inferentia. They're just not nearly as well supported at the software level. Google has already made sure this new generation will be supported by vLLM, SGLang, etc. Amazon's chips barely support those, and only on versions multiple releases back. Super under-invested in (at least on the open-source side)
That seems odd. I'd figure if they're going to sell it as a product in AWS, they'd have some sort of off-the-shelf tooling available.
Yeah, I’ve been beta testing the Streamyfin tvOS app. It’s now about on par with the official tvOS client, but feels like it’s making progress more regularly
I wanna do this but with a locally running LLM. They’re getting better and better. Can’t wait for something like Taalas to ship custom LLM hardware for personal use
Thanks, really fair feedback. Honestly we're still landing on exact tiers as we work with our early customers. The model is per GPU under management, lower at scale. Happy to chat if you want to learn more — founders@usechamber.com
I'm no expert in this area, but I doubt the longer-term plan is to build a viable business by selling the product. In a prediction market I'd bet that, coming out of AWS, they saw this problem first-hand and acquisition (really an acquihire) is the intended outcome. Based on the way Amazon operates, they should also know the endgame is either a. getting hired by someone else, or b. having your product copied/cloned by AWS.