achierius's comments

We aren't building dozens of new datacenters to host more webapps.

> Assembly to Python creates a lot of Intent & Cognitive debt by his definition, because you didn't think through how to manipulate the bits on the hardware, you just allowed the interpreter to do it

I agree! You often see this realized when projects slowly migrate to using more and more ctypes code to try and back out of that pit.

At a previous job, a project was spun up in Python because it was easier and the performance requirements weren't understood at the time. A year or two later it had become a bottleneck for tapeout, and when it was rewritten most of the abstract architecture was thrown out with it, since it was all Pythonic in a way that called for a different approach in C++.
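For concreteness, here's a minimal sketch of the kind of incremental "back out of Python" step described above: pushing a hot comparison loop into C via ctypes. The use of libc's memcmp is purely illustrative (not from the comment), and the library lookup assumes a standard glibc-style system.

```python
# Sketch: moving a hot inner loop from the interpreter into C via ctypes.
# Here we call libc's memcmp instead of comparing buffers byte-by-byte in
# Python; the function choice and names are illustrative assumptions.
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.memcmp.restype = ctypes.c_int
libc.memcmp.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]

def buffers_equal(a: bytes, b: bytes) -> bool:
    """Compare two buffers in C rather than in the interpreter."""
    return len(a) == len(b) and libc.memcmp(a, b, len(a)) == 0

print(buffers_equal(b"tapeout", b"tapeout"))   # True
print(buffers_equal(b"tapeout", b"tapeoutX"))  # False
```

This buys speed in one spot, but it also shows why the pit is hard to climb out of: the Pythonic architecture around the hot loop stays Pythonic, which is how projects end up with a wholesale C++ rewrite instead.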


GP meant moving the driver into userspace, which is much less painful due to the stable userspace APIs.

I’m not sure the GP did mean that, but I agree it’s a much better solution than maintaining an out-of-tree kernel module, which is generally a really bad idea.
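As a sketch of why userspace drivers sidestep the out-of-tree-module problem: with the Linux UIO framework, the whole driver surface is just open/read/mmap on a device node, all stable userspace APIs. Everything here is an assumption for illustration -- the /dev/uio0 path, the 4 KiB register window -- and no such device may exist on your machine, so the sketch degrades gracefully.

```python
# Minimal userspace-driver sketch using Linux UIO (illustrative only).
# Assumed: a device bound to /dev/uio0 with a 4 KiB register region.
# The only kernel interfaces touched are open/read/mmap, which are stable
# across kernel upgrades that routinely break out-of-tree modules.
import mmap
import os
import struct

UIO_DEV = "/dev/uio0"   # assumed device node
MAP_SIZE = 4096         # assumed size of the device's register window

def probe_uio(path: str = UIO_DEV) -> str:
    """Open a UIO device, map its registers, and wait for one interrupt."""
    try:
        fd = os.open(path, os.O_RDWR)
    except OSError:
        return "uio: no device at " + path
    try:
        # Map the device's first memory region: register access from userspace.
        regs = mmap.mmap(fd, MAP_SIZE, offset=0)
        # A blocking 4-byte read returns the running interrupt count as a u32.
        (count,) = struct.unpack("<I", os.read(fd, 4))
        regs.close()
        return "uio: interrupt count %d" % count
    finally:
        os.close(fd)

print(probe_uio())
```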

> And the fact that having outline calls to methods of value objects is so expensive

Is this tied to unions? Or otherwise, when does this happen? I don't see the connection with InvisiCaps, etc.


Clearly they don't. They don't tend to occupy other countries outside of immediate territorial claims like Tibet (if you think that constitutes an "other" country).

> whether ... the concept of "bad actors coming in and ruining it for everybody by taking without giving back" even makes sense.

This is pretty clearly answered by the GPL: yes, it does, and this concept has been around since the very beginning.

> The information is still there

True

> as is the community that you've built

Untrue. At this point it's well understood that AI substitutes for many of the services that would once have afforded people a way to monetize their production for the community. Without the ability to make a living by doing so, even a small one, people will be limited to doing only what they can in the little free time they get outside of work.

That's the whole problem -- that AI, as it exists today, is taking away from the public, and hurting it at the same time. That's closer to robbery than it is to "sharing in the community".


> The last time a property class was removed was _slaves_.

Easy counterexample: titles of nobility. Also perpetual bonds, delegated taxation rights, the ability to mint currency. The list goes on.

If you're going to use history to support your AI-bull agenda, you should at least run it by the AI first -- it would have pointed this out.

> Arguing that copyright is good because a subset of big tech doesn't want it around is as stupid as arguing that slavery is good because the robber barons don't like it.

Sorry, who's saying it's good? You are, actually, insofar as you're willing to support the right of AI companies to take people's information and use it to create copyrighted model weights. Why do you care less about the intellectual property of billionaires than that of the common man? Do you really think they're on your side?


Are you serious? In what world did we agree "someone may train incredibly important systems on our every utterance, without any compensation, and we will do what we can to not impede them"?

Can you not see how there's a difference?


I can tell you that it's not that he's setting speed aside -- the fact that it's as fast as it is is an achievement. But there is a degree of unavoidable overhead -- IIRC his goal is to get it down to 20-30% for most workloads, but beyond that you're running into the realities of runtime bounds checks, materializing the flight pointers, etc.

In my limited understanding, 20% to 30% slower would be amazing given all the extra runtime work required. That would be good enough for a whole lot of serious applications.
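To make the "extra runtime work" concrete, here is a toy model (in Python, and nothing like Fil-C's actual compiled-C implementation) of what a runtime bounds check means: every access pays a compare-and-branch before the load, which is where part of that overhead budget goes.

```python
# Toy model of a runtime bounds check (illustrative only -- Fil-C does
# this in compiled C with capability metadata, not like this). The point
# is just that each access carries an extra compare-and-branch that a
# raw C load skips, and an out-of-bounds access traps instead of
# silently corrupting memory.
def checked_get(buf: bytes, i: int) -> int:
    if not 0 <= i < len(buf):            # the check a raw load skips
        raise IndexError("out-of-bounds access at index %d" % i)
    return buf[i]

data = b"hello"
print(checked_get(data, 1))   # in-bounds: returns the byte
try:
    checked_get(data, 99)     # out-of-bounds: trapped, not UB
except IndexError as e:
    print("trapped:", e)
```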

Are they? I'm sure they're vulnerable to certain jailbreaks, but many common ones were demonstrably fixed.

I retract that.

I think what I meant to say was, they're as simple to jailbreak as they were three years ago.

Different methods, still simple. I'm working with researchers who are able to get very explicit things out of them. Again, it feels much worse than before, given the capability of these models.

There are basically guardrails encoded into the fine-tuned layers that you can essentially weave through via prompting. These 'guardrails' are where the labs work hard for benevolent alignment, yet where it falls short (even as it enables exceptional capability alignment). Again, nothing really different from how it was three years ago.

