hackerInnen's comments

Because if you have the time and opportunity to study something in depth, then you should take it, imo.

If I just want to get a working product, I only need the basic algorithm, but understanding "all" of it is never wrong.


What do you mean by this? Having the DOS features inside of TempleOS?

I am thinking about how hard it'd be to port this to RISC-V and then run it on an FPGA dev board as a real computer. That could be fun.


I mean that someone could take the DOS source code and develop it into a TempleOS-like OS, with a REPL and every component editable in real time.

But I find it kinda discouraging that even Terry Davis ran TempleOS in an emulator.


A ban for people born before 2010 would be reasonable.


Just from my subjective view and observation, I'd say yes. It feels like a lot more people (roughly under 30) smoke than people in my peer group (mid-30s).

I could be totally wrong tho, but at least that's what it feels like. It feels like "all of them" smoke, either vapes or real cigarettes, and quite a few of them use cigarettes.


I just subscribed again this month because I wanted to have some fun with my projects.

Tried out Opus 4.6 a bit and it is really, really bad. Why do people say it's so good? It cannot come up with any half-decent VHDL, no matter the prompt. I'm very disappointed. I was told it's a good model.


Because they're using it for different things, where it works well, and that's all they know?


And yet another "AI doesn't work" comment without any meaningful information. What were your exact prompts? What was the output?

This is like a user of conventional software complaining that "it crashes", without a single bit of detail, like what they did before the crash, if there was any error message, whether the program froze or completely disappeared, etc.


This is quite hostile. Yes, criticism is valid without an accompanying essay detailing every aspect of the associated environment, because these tools are still quite flawed.


Because it was good until January 2026; then it deteriorated into an Opus 3.1. Probably given a much smaller context window or less RAM.


It was released in February 2026.


I don't think I've ever seen otherwise reasonable people go completely unhinged over anything like they do with Opus.


I've seen a similar psychological phenomenon where people like something a lot, and then they get unreasonably angry and vocal about changes to that thing.

Usage limits are necessary but I guess people expect more subsidized inference than the company can afford. So they make very angry comments online.

For example, there is no evidence that 4.6 ever degraded in quality: https://marginlab.ai/trackers/claude-code-historical-perform...


> Usage limits are necessary but I guess people expect more subsidized inference than the company can afford. So they make very angry comments online

This is reductive. You're calling people unreasonably angry but then acknowledging there's a limit on compute that is a practical reality for Anthropic. This isn't that hard. They have two choices: rate limit, or silently degrade to save compute.

I have never hit a rate limit, but I have seen it get noticeably stupider. It doesn't make me angry, but comments like these are a bit annoying to read, because you are trying to make people sound delusional while, at the same time, confirming everything they're saying.

I don't think they have turned a big knob that makes it stupider for everyone. I think they can see when a user is overtapping their $20 plan and silently degrade them, because there's no alert for that. Which is why AI benchmark sites are irrelevant.


Just my perspective: I pay $20/month and I hit usage limits regularly. I have never experienced performance degradation; in fact, I have been very happy with performance lately. My experience has never matched that of those saying the model has been intentionally degraded. I have been using Claude a long time now (3 years).

I do find usage limits frustrating. Should prob fork out more...


That's what I thought today reading the comments in the Mozilla Thunderbolt thread. Something about Mozilla absolutely sets people off.


[flagged]


I recognize the sarcasm. The data I can find says it's performing at baseline, however:

https://marginlab.ai/trackers/claude-code/


Yeah, that's my point. Humans are not reliable LLM evaluators. "Secret model nerfs" happen in "vibes" far more often than they do in any reality.


This but unironically.

"I reject your reality, and substitute my own".

It worked for cheeto in chief, and it worked for Elon, so why not do it in our normal daily lives?


To be honest, I don't have that much fun playing around with the recipe, but I found this one and it just works for me: https://github.com/xil-se/FreeMate

But nowadays I just drink plain yerba mate with a splash of lemon juice, no added sugar. I do the FreeMate a bit in summer.

Edit: Btw, if any of you have bought yerba mate before and thought it didn't taste great: for me personally, there are huge differences between brands. I much prefer the milder ones that don't have much powder. If you have been disappointed before, maybe try again with a different brand, and don't forget the splash of lemon juice; that makes a crazy big difference.


Tereré is also a milder alternative: https://en.wikipedia.org/wiki/Terer%C3%A9


I love tereré - note you will get less caffeine and antioxidants from cold brewing.


You can also buy mate with mint mixed in. I accidentally bought a kilo of it a while ago and I've been working through it slowly.


You are absolutely right! That is exactly the reason why more lines of code always produce a better program. Straight on, m8!


This might not be so far from the truth, if you count the total LOC written and rewritten during the development cycle, not just the final number.

Not everybody is Dijkstra.


> They don't know what the * is going on

I'd assume that a good portion of people working on things like that know what is going on. My (very, very subjective) feeling is that they just spit out WAY more tokens than needed, so that users hit the limit as fast as possible and buy more. And the people responsible for that are probably the evil, evil PMs.


I don't know. I never once felt that any AI integration in IDEs was actually worth using over a CLI, and that's a very low bar compared to a TUI.


> I never once felt that any AI integration in IDE's was actually worth using over a cli

That's right. I started with Claude in the CLI. But then I found a way to hook it up to my active, running instance of Emacs. And since Emacs is neither an editor nor an IDE (in the typical sense), but rather a Lisp REPL with an embedded editor, I can now fully control virtually any aspect of my editor via the LLM, running in the very same editor.

I can't even describe the feeling - it's absolutely insane. I can, for example, give the LLM some incident ticket number, then ask it to search for all the relevant Slack conversations, PRs, and Splunk logs, build me an investigation audit log and dump it all into an Org-mode buffer with massive amounts of data, and then reduce it into a sparse tree where the entire document is folded as much as possible, but the selected information is made visible. Then I can continue the investigation, while the LLM adds and removes data from that buffer dynamically (or adjusts the sparse tree), depending on how the investigation proceeds. It's crazy, because I can ask it to programmatically change virtually any behavior of my editor on the fly.
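To make it concrete, here's a minimal sketch of the plumbing (not my exact setup; eval_in_emacs and the buffer name are just illustrative). The whole integration surface can be a single tool the agent calls, which shells out to emacsclient to evaluate elisp in the live session, assuming the Emacs server is running (M-x server-start) and emacsclient is on PATH:

    # One agent tool that evaluates Emacs Lisp in the running Emacs via
    # emacsclient; assumes the server was started with M-x server-start.
    import subprocess

    def eval_in_emacs(elisp: str) -> str:
        """Evaluate an elisp form in the live Emacs session and return
        the printed result. Through this single call, the agent can
        drive or reconfigure any part of the editor."""
        out = subprocess.run(["emacsclient", "--eval", elisp],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    # E.g. dump findings into a fresh Org buffer (name is illustrative):
    eval_in_emacs('(with-current-buffer (get-buffer-create "*incident*")'
                  ' (org-mode) (insert "* findings\\n") (buffer-name))')

Everything else (Slack search, PR lookups, sparse-tree folding) is just the agent generating different elisp through that same entry point.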


I mainly use (neo)vim because it has a less distracting interface than any other full-blown IDE, not because of some fancy tooling. Also because it is faster, but that might be a negligible reason by now.

I purposely try to keep my extension count as low as possible. It's just too distracting for me personally.

If I really want to use AI tools or something else, then I don't mind opening a full suite, but as of right now, I still spend most of my time in vim and use AI mainly in chat mode.


This. While I doubt that there will be a good (whatever that means) desktop RISC-V CPU anytime soon, I do think that it will eventually catch up in embedded systems and special applications. Maybe even high-core-count servers.

It just takes time, people who believe in it, and tons of money. We'll see where the journey goes, but I am a big RISC-V believer.


Why? They have yet to show anything to believe in except perhaps the embedded space.


You think Meta bought Rivos to work on embedded?

You think the Alibaba C930 CPU is for embedded? 15 SPECint2006 / GHz

Or that the Tenstorrent Ascalon will be? 18 SPECint2006 / GHz

Even the SpacemiT K3 has better AI performance than an Apple Silicon M4.

And RISC-V chips released this year are 2-4 times faster than last year's. RISC-V is not the fastest ISA, but it is improving the fastest.

With so many companies backing RISC-V, why would I bet against it?

