What laptop has that much VRAM and RAM for $3500 with good/okay-ish Linux support? I was looking to upgrade my Asus Zephyrus G14 from 2021 and things were looking very expensive. Decided to just keep it chugging along for another year.
Then again, I was looking in the UK, maybe prices are extra inflated there.
Who decides what a "win" is in these cases? I get everything apart from that part. Because I would take that bet, but I'm worried what the definition of "Jesus Christ" is.
You're not wrong that it does that, but that's kinda what I'd expect. Maybe because I'm used to it, but if there's a potential turn it'll say "keep right" or "keep left". So it makes sense to me that it says "second exit".
"Straight" can be ambiguous, second exit isn't. Maybe it's because I'm terrible with directions and hate driving, but I like the constant feedback that I'm going the right way.
I suspect the real issue is that they just change stuff "randomly" and the experience gets worse or better, cheaper or more expensive.
Since you have no way of knowing when they change stuff, you can't really know if they did change something or if it's just bias.
I've experienced that so many times in the last month that I switched to codex. The worst part is, it could be entirely in my head. It's so hard to quantify these changes, and the effort it takes isn't worth it to me. I just go by "feeling".
They don't even need to do anything. LLMs are effectively random anyway. Even ignoring temperature and inadvertent nondeterminism in inference, the change in outputs from a change in inputs is unpredictable and basically pseudorandom. That's not to say they aren't useful, just that Anthropic could make zero changes and people would still see variations that they'd attribute to malice.
The issue is business and transparency. Transparency is often in the customer's interest at the individual business's expense.
There are very, very few things that can be completely transparent without giving competitors an advantage. The nice solution to this is to be better and faster than your competitors, but sometimes it's easier just to remove transparency.
I expect "model transparency" to become the new "SSO" enterprise feature differentiator.
Enterprise use cases have to have it (or else pawn the YOLO off on their users), so it will be a key way to bucket customers into non-enterprise vs enterprise pricing.
I bet if you could make it interesting, YouTube/TikTok/Instagram/Whatever could make it possible to get paid to dig holes in your backyard.
You could argue that the value is in the entertaining filming/acting/story telling etc, but if the videos are about digging holes then I think it's valid to say someone is paying you to dig holes.
Yeah, I hate what you are suggesting, because soon there will be uninteresting people chasing every subject trying to convert it into a career. Just leave some things alone, ok? Quit strangling my hobby with both hands.
FWIW I have an Asus Zephyrus G14 and the dual graphics setup works pretty well in Linux in hybrid mode. It's pretty cool: certain things (games) run on the dedicated Nvidia GPU, and everything else runs on the built-in AMD GPU.
I'm guessing it's because the laptops are popular enough that there's a dedicated group of people that make it work [0].
I'm still on X11, dunno what the story is like with Wayland though.
Pretty much. The article assumes people didn't build the wrong thing before AI. Except that did happen all the time; it just happened slower, took longer to realize it was the wrong thing, and then building the right thing took longer too.
It's funny, because you could actually take that story and use it to market AI.
> I once watched a team spend six weeks building a feature based on a Slack message from a sales rep who paraphrased what a prospect maybe said on a call. Six weeks.
Except now with AI it takes one engineer 6 hours, people realize it's the wrong thing and move on. If anything, I would say it helps prove the point that typing faster _does_ help.
Sometimes being involved in the construction process allows you to discover all the (many, overlapping) ways it's the "wrong thing" sooner.
In the long term, some of the most expensive wrong-things are the ones where the prototype gets a "looks good to me" from users, and it turns out what they were asking for was not what they needed or what could work, for reasons that aren't visually apparent.
In other words, it's important to have many people look at it from many perspectives, and optimizing for the end-user/tester perspective at the expense of the inner-working/developer perspective might backfire. Especially when the first group knows something is wrong, but the second group doesn't have a clue why it's happening or how to fix it. Worse still if every day feels like learning a new external codebase (re-)written by (LLM) strangers.
Centralized MCP server over HTTP that enables standardized doc lookup across the org, standardized skills (as MCP prompts), MCP resources (virtual indexes of the docs, similar to how Vercel formatted their `AGENTS.md`), and a small set of tools.
We emit OTEL from the server and build dashboards to see how the agents and devs are using context and tools, and which documents are "high signal" (meaning they get hit frequently), so we know that tuning those docs will yield more consistent output.
OAuth lets us see the users because every call has identity attached.
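The hit-counting idea is simple enough to sketch. This is a hedged, stdlib-only toy, not the actual MCP server or real OTEL instrumentation; `DocLookup`, `lookup`, and `high_signal` are invented names, with a plain counter standing in for an OTEL metric and a passed-in user standing in for OAuth identity:

```python
# Toy illustration of per-document hit counting to find "high signal" docs.
# A Counter stands in for an OTEL counter; `user` stands in for the identity
# that OAuth would attach to every call. Not a real MCP server.
from collections import Counter

class DocLookup:
    def __init__(self, docs):
        self.docs = docs        # doc name -> content
        self.hits = Counter()   # doc name -> lookup count (OTEL stand-in)

    def lookup(self, name, user):
        # In the real setup, `user` would come from the OAuth token on
        # the request; here it's just an argument for illustration.
        self.hits[name] += 1
        return self.docs.get(name)

    def high_signal(self, n=3):
        # The most frequently hit docs are the ones worth tuning first.
        return [name for name, _ in self.hits.most_common(n)]
```

A dashboard over the real counters would surface the same ranking that `high_signal` returns here.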
You jest, but I was flabbergasted when working on an AI-backed feature that the fix was adding "The result you send back MUST be accurate." to an already pretty clear prompt.