Seconded. Graphene has spoiled me. Here's to hoping graphene's future collaboration with an OEM results in a small physical keyboard device! Not holding my breath, and will choose graphene over any other feature.
Bravo! This (and coreboot) are why I own a Starbook from Starlabs.
Worth noting that the Starlabs machines are just as repairable as the Frameworks. But I'm extremely happy to have multiple options in the niche, and will survey again when I upgrade.
I just wish their StarFighter was out already, it has been in a weird state of limbo forever. The FW16 seems easier to beat than the FW13; the 4K screen alone would be so much nicer for 2x scaling.
Yes, their releases are slow, but in exchange you get extremely good support from a small, devoted team. My StarBook took a year to arrive after preorder, but I've been so content since then that I won't hesitate to preorder again.
I can only recommend Starlabs in that case. It took me tons of research before discovering that only Starlabs has everything: repairability, firmware updates, coreboot, Linux compatibility, custom hardware.
yah, the problem is I'm pretty heavily invested in Framework and already bought the newest 16" before I realized Starlabs wasn't selling rebadged, suitcase-sized Compal laptops anymore, so it'll be 3 years at least.
Yes, fair enough. I was on the fence between Framework and Starlabs, since they seem to be the only custom-hardware Linux machines, and I'm lucky I wasn't in a rush and could wait for the Starlabs, or I would have pulled the trigger on a Framework. The machines will hopefully only be better when the time comes!
yah, it's a small gripe really, I just really want to try out coreboot; it looks like the future. I wish Windows wasn't holding back hardware development everywhere so badly.
My StarBook's coreboot lets me set a charge limit (60%/80%/100%) as well as 3 different charging speed settings, which is incredible for a computer that remains largely docked.
~25 years from conception to maturity, millions and billions of years of brute force development... There is a lot of energy involved in typing this sentence to you. I am not sure LLMs use more.
Yes, but the inference cost of humans is extremely low. We're constantly making decisions and generating thoughts, most of them subconscious, while using very little energy. It's remarkable how energy efficient the human body and mind, and animals in general, are.
But LLMs don't get a pass on the billions of years of front-loading either. I mean, I see what you're saying, and I get your point for sure, but my point is that we should be able to find a more clever way to accomplish this that builds on our billions of years of front-loading. LLMs seem like the lazy way out, in a way.