Hacker News | beej71's comments

Good job, and everything, but this ranks really, really low on my list of threats to the United States.

Do you think this is an isolated incident?

I guess I haven't seen neighborhoods get particularly poorer for some time, unless we're counting the tent cities.


> I'm getting old and I value my remaining time on the planet.

It's an interesting sentiment. I, too, am getting old and value my remaining time on the planet, and so I code by hand every chance I get. :) Luckily I'm in a position to be able to do that.


/usr/bin/xfce4-terminal 64176

I like xfce4-terminal as a compromise. Good bang for the memory buck.
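
If you want to check your own terminal, the number above looks like resident memory in kB, which on Linux you can total per process name from /proc. A rough sketch (the helper is mine, not a standard tool):

    import os

    def rss_kb_by_name(name: str) -> int:
        """Sum VmRSS (kB) across all processes whose comm matches `name`."""
        total = 0
        for pid in filter(str.isdigit, os.listdir("/proc")):
            try:
                with open(f"/proc/{pid}/comm") as f:
                    if f.read().strip() != name:
                        continue
                with open(f"/proc/{pid}/status") as f:
                    for line in f:
                        if line.startswith("VmRSS:"):
                            total += int(line.split()[1])  # reported in kB
            except (FileNotFoundError, PermissionError):
                continue  # process exited or is inaccessible
        return total

    print(rss_kb_by_name("xfce4-terminal"), "kB")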


Speaking for myself, I feel a difference if I stop using AI for a week and just rely on regular web searches. And I have a fair amount of professional experience.

Also, speaking for academia, AI is basically all we talk about now when it comes to curriculum and instruction. That's not to say we rely solely on AI, but we talk a lot about how to get basically anything done now. It's the biggest learning experience we've ever had as instructors, and I suspect we'll be trying to figure it out for a long time to come.


Help me think through and analyze this [ctrl-v]

> I've never liked or trusted Canvas's gradebook, and so although I do upload grades to Canvas so students can see them, my primary gradebook is always a spreadsheet I maintain locally.

That makes you one better than me. :( One thing's for sure--I'm never trusting it again.

I already had almost all my materials outside of Canvas and just used their API to upload them. So at least that's safe. But the grades... dang. Luckily we're only halfway through our quarter and it's not finals week.
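
(For the curious, the upload side is just the regular Canvas REST API; a grade is one authenticated PUT per submission. A minimal sketch of what I mean, where the instance URL, token, and IDs are all placeholders:)

    import requests

    CANVAS = "https://canvas.example.edu"  # placeholder instance URL
    TOKEN = "..."                          # personal access token

    def post_grade(course_id, assignment_id, user_id, grade):
        r = requests.put(
            f"{CANVAS}/api/v1/courses/{course_id}/assignments/{assignment_id}"
            f"/submissions/{user_id}",
            headers={"Authorization": f"Bearer {TOKEN}"},
            data={"submission[posted_grade]": str(grade)},
        )
        r.raise_for_status()
        return r.json()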

Our instance is still down, but your update gives me hope.


It'd be interesting to see how lobste.rs fares with all this.

Believe me when I say that I want to run local models, and I do. But in my testing, 24 GB doesn't get you much brainpower.
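
The back-of-the-envelope is simple: the weights alone eat most of the card at any given quantization, before you even budget for KV cache and runtime overhead. A rough sketch, approximate numbers:

    def weight_gb(params_billion: float, bits_per_weight: float) -> float:
        """Approximate size of the model weights in GB."""
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    for params, bits in [(27, 6), (35, 4), (70, 4)]:
        print(f"{params}B @ {bits}-bit ~= {weight_gb(params, bits):.0f} GB of weights")
    # 27B @ 6-bit ~= 20 GB; 35B @ 4-bit ~= 18 GB; 70B @ 4-bit ~= 35 GB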

Have you tried the latest Qwen3.6 models?

For most of my questions, an 8-9B model works great. The upshot is not having ChatGPT/Meta sell my data or target me with random thoughts later.


We're in the same boat. I would rather have NO LLM than an LLM that collects my data (which you should assume is all of them, unless you've been asleep for the last 20 years).

Fortunately, I don't have to pick one or the other; instead I run Qwen 3.6 35B A3B. It's a bit slow with my 8 GB GPU (I'm in the process of getting a bigger one), but again, to me the choice isn't "what's the best I can get", it's "what's the best local I can get".
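
And "running local" day to day is nothing exotic: the same OpenAI-style client, just pointed at a local server. A sketch, assuming an Ollama-style endpoint on its default port (the model tag is a placeholder and will vary with your setup):

    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="unused",                      # local servers ignore the key
    )

    resp = client.chat.completions.create(
        model="qwen3:30b-a3b",  # placeholder tag; use whatever you've pulled
        messages=[{"role": "user", "content": "Explain mmap in two sentences."}],
    )
    print(resp.choices[0].message.content)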


I let Qwen3.6-27B chew on a bug all last night. It choked at some point and stopped responding (probably a context overflow before pi-coding-agent could compact it). Claude Sonnet 4.6 found and fixed the bug in under 10 minutes.

Qwen3.6 is pretty amazing for a 27B model, but it's not hard to run into its limits. With a Radeon R9700 and Unsloth's 6-bit quantization, I get ~20 TPS and 110k context, so it can do a fair bit quickly.
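
(If you want a rough TPS number of your own, wall-clocking the decode is enough. A sketch using the llama-cpp-python bindings; the model filename, context size, and offload settings are placeholders from a setup like mine:)

    import time
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen3.6-27b-q6_k.gguf",  # placeholder filename for a 6-bit quant
        n_ctx=110_000,                       # context window that fits in VRAM
        n_gpu_layers=-1,                     # offload all layers to the GPU
    )

    t0 = time.time()
    out = llm("Explain TCP slow start in one paragraph.", max_tokens=256)
    n = out["usage"]["completion_tokens"]
    print(f"{n / (time.time() - t0):.1f} tokens/sec")  # rough: includes prompt processing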


You definitely need to watch it more than a model 100 times larger. But the fact that it runs on one GPU and does what it does is insane. Imagine what a 30B model will look like in 6 months or a year.

Inference speed is still slow in a meaningfully different way. The models are good, but not great, and much slower, which for coding means a 2-3 minute task with Claude Code and Opus takes an hour and has a higher chance of being wrong.

It's only slow if you can't afford to run it properly. A lot of people are getting 70-100 tokens per second on one GPU.

Not sure what Claude Opus or Sonnet run at. I know that when they go offline it's 0 tokens per second.


I agree as well. And now they can make slipshod products at 10x speed.
