Just from my subjective view and observation, I'd say yes. It feels like far more people under roughly 30 smoke than in my own peer group (mid 30s).
I could be totally wrong though, but that's what it feels like. It feels like "all of them" smoke, either vapes or real cigarettes, and quite a few of them actual cigarettes.
I just subscribed again this month because I wanted to have some fun with my projects.
Tried out Opus 4.6 a bit and it is really, really bad. Why do people say it's so good? It cannot come up with any half-decent VHDL, no matter the prompt. I'm very disappointed. I was told it's a good model.
And yet another "AI doesn't work" comment without any meaningful information. What were your exact prompts? What was the output?
This is like a user of conventional software complaining that "it crashes", without a single bit of detail, like what they did before the crash, if there was any error message, whether the program froze or completely disappeared, etc.
This is quite hostile. Yes, criticism is valid without an accompanying essay detailing every aspect of the associated environment, because these tools are still quite flawed.
I've seen a similar psychological phenomenon where people like something a lot, and then they get unreasonably angry and vocal about changes to that thing.
Usage limits are necessary but I guess people expect more subsidized inference than the company can afford. So they make very angry comments online.
> Usage limits are necessary but I guess people expect more subsidized inference than the company can afford. So they make very angry comments online
This is reductive. You're calling people unreasonably angry while at the same time acknowledging that there's a limit on compute that is a practical reality for Anthropic. This isn't that hard: they have two choices, rate limit or silently degrade to save compute.
I have never hit a rate limit, but I have seen it get noticeably stupider. It doesn't make me angry, but comments like these are a bit annoying to read, because you are trying to make people sound delusional while, at the same time, confirming everything they're saying.
I don't think they have turned a big knob that makes it stupider for everyone. I think they can see when a user is overtapping their $20 plan and silently degrade them, because there's no alert for that, which is why AI benchmark sites are irrelevant here.
just my perspective: i pay $20/month and i hit usage limits regularly. have never experienced performance degradation. in fact i have been very happy with performance lately. my experience has never matched that of those saying the model has been intentionally degraded. have been using claude a long time now (3 years).
i do find usage limits frustrating. should prob fork out more...
To be honest, I don't have that much fun playing around with the recipe, but I found this one and it just works for me: https://github.com/xil-se/FreeMate
But nowadays I just drink plain yerba mate with a splash of lemon juice, no added sugar. I make the FreeMate a bit in summer.
Edit: Btw, if any of you bought yerba mate before and thought it didn't taste great: for me personally, there are huge differences between brands. I like the milder ones that don't have much powder a lot more. If you've been disappointed before, maybe try again with a different brand, and don't forget the splash of lemon juice; that makes a crazy big difference.
I'd assume that a good portion of the people working on things like that know what is going on. My (very, very subjective) feeling is that the models just spit out WAY more tokens than needed, so that users hit the limit as fast as possible and buy more. And the people responsible for that are probably the evil, evil PMs.
> I never once felt that any AI integration in IDE's was actually worth using over a cli
That's right. I started with Claude in the CLI, but then I found a way to hook it up to my active, running instance of Emacs. And since Emacs is neither an editor nor an IDE (in the typical sense) but rather a Lisp REPL with an embedded editor, I can now control virtually any aspect of my editor via the LLM, running in the very same editor.
I can't even describe the feeling; it's absolutely insane. For example, I can give the LLM an incident ticket number and ask it to search for all the relevant Slack conversations, PRs, and Splunk logs, build an investigation audit log, and dump it all into an Org-mode buffer. Then it reduces that mass of data into a sparse tree where the entire document is folded as much as possible but the selected information stays visible. As I continue the investigation, the LLM adds and removes data from that buffer dynamically (or adjusts the sparse tree), depending on how the investigation proceeds. It's crazy, because I can ask it to programmatically change virtually any behavior of my editor on the fly.
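For anyone curious how this kind of hookup can work without a dedicated plugin: a running Emacs that has called `(server-start)` can be driven from outside via `emacsclient --eval`, so an LLM tool only needs the ability to shell out. A minimal sketch, assuming that mechanism; the `show_in_org_buffer` helper and the `*investigation*` buffer name are my own illustration, not the commenter's actual setup:

```python
import subprocess  # used in the usage example below


def build_eval_command(elisp: str) -> list[str]:
    # emacsclient --eval sends a Lisp expression to the running Emacs server
    # (the Emacs instance must have started the server via (server-start)).
    return ["emacsclient", "--eval", elisp]


def show_in_org_buffer(text: str) -> list[str]:
    # Hypothetical helper: append a line of LLM output to an Org-mode buffer.
    # Escape backslashes and quotes so the text survives as an elisp string.
    escaped = text.replace("\\", "\\\\").replace('"', '\\"')
    elisp = (
        '(with-current-buffer (get-buffer-create "*investigation*")'
        " (org-mode)"
        f' (goto-char (point-max)) (insert "{escaped}\\n"))'
    )
    return build_eval_command(elisp)


# A tool-using LLM that can shell out would then run something like:
# subprocess.run(show_in_org_buffer("* TODO check Splunk logs"), check=True)
```

The nice part of this design is that the LLM never needs an Emacs-specific API: any elisp it can write, it can inject into the live session.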
I mainly use (neo)vim because it has a less distracting interface than any full-blown IDE, not because of some fancy tooling. And because it is faster, though that might be a negligible reason by now.
I purposely try to keep my extension count as low as possible. It's just too distracting for me personally.
If I really want to use AI tools or something else, then I don't mind opening a full suite, but as of right now, I still spend most of my time in vim and use AI mainly in chat mode.
This. While I doubt there will be a good (whatever that means) desktop RISC-V CPU anytime soon, I do think it will eventually catch up in embedded systems and special applications, maybe even high-core-count servers.
It just takes time, people who believe in it, and tons of money. We'll see where the journey goes, but I am a big RISC-V believer.
If I just want to get a working product, I only need the basic algorithm, but understanding "all" of it is never wrong.