Exactly. This happens in every aspect of life. Something convenient comes along and people will accommodate it despite it being worse, because people don’t care.
Not to mention the impact these tools and technologies have on children. Future generations will be made into intellectual invalids before they have a chance to think.
For me, the whole goal is to achieve Understanding: understanding a complex system, which is the computer and how it works. The beauty of this Understanding is what drives me.
When I write a program, I understand the architecture of the computer, I understand the assembly, I understand the compiler, and I understand the code. There are things that I don't understand, and as I push to understand them, I am rewarded by being able to do more things. In other words, Understanding is both beautiful and incentivized.
When making something with an LLM, I am disincentivized from actually understanding what is going on, because understanding is very slow, and the whole point of using AI is speed. The only time when I need to really understand something is when something goes wrong, and as the tool improves, this need will shrink. In the normal and intended usage, I only need to express a desire to achieve a result. Now, I can push against the incentives of the system. But for one, most people will not do that at all; and for two, the tools we use inevitably shape us. I don't like the shape into which these tools are forming me - the shape of an incurious, dull, impotent person who can only ask for someone else to make something happen for me. Remember, The Medium Is The Message, and the Medium here is, Ask, and ye shall receive.
That AI use leads to a reduction in Understanding is not only obvious; studies have shown the same. People who can't see this are refusing to acknowledge the obvious, in my opinion. They wouldn't disagree that having someone else do your homework for you means you didn't learn anything. But somehow when an LLM tool enters the picture, it's different. They're a manager now instead of a lowly worker. The problem with this thinking is that, in your example, moving from, say, Assembly to C automates tedium so we can reason at a higher level. But LLMs are automating reasoning itself. There is no higher level to move to. The reasoning you still do while using AI is merely a temporary deficiency in the tool. It's not likely that you or I are among the .01% of people who can create something truly novel that isn't already sufficiently compressed into the model. So enjoy that bit of reasoning while you can, o thou Man of the Gaps.
They say that writing is God's way of showing you how sloppy your thinking is. AI tools discourage one from writing. They encourage us to prompt, read, and critique. But this does not result in the same Understanding as writing does. And so our thinking will be, become, and remain vapid, sloppy, inarticulate, invalid, impotent. Welcome to the future.
Thank you. I don't understand how people don't see that this is the universe's most perfect gift to corporations, and what a disaster it is for labor. There won't be a middle class. Future generations will be intellectual invalids. Baffling to see people celebrating.
even if you can be a prompt engineer (or whatever it's called this week) today
well, with the feedback you're providing: you're training it to do that too
you are LITERALLY training the newly hired outsourced personnel to do your job
but this time you won't be able to get a job anywhere else, because your fellow class traitors are doing exactly the same thing at every other company in the world
You're replying to me, but I don't agree with your take - if you simulate the universe precisely enough, presumably it must be indistinguishable from our experienced reality (otherwise what... magic?).
My objection was:
1. I don't personally think anything similar is happening right now with LLMs.
2. I object to the OP's implication that it is obvious such a phenomenon is occurring.
Your response is at the level of a thought-terminating cliché. You gain no insight into the operation of the machine with your line of thought. You can't predict future behavior. You can't make sense of past responses.
It's even funnier in the sense of humans and feeling wetness... you don't. You only feel temperature change.
Gemini is very paranoid in its reasoning chain, that much I can say for sure. That's a direct consequence of the nature of its training. However, the reasoning chain is not entirely in human language.
None of the studies of this kind are valid unless backed by mechinterp, and even then interpreting transformer hidden states as human emotions is pretty dubious as there's no objective reference point. Labeling this state as that emotion doesn't mean the shoggoth really feels that way. It's just too alien and incompatible with our state, even with a huge smiley face on top.
I'm genuinely ignorant of how those red-teaming attempts are incorporated into training, but I'd guess that this kind of dialogue is fed in as something like normal training data? Which is interesting to think about: it might not even be red-team dialogue from the model under training, but it would still be useful as an example or counter-example of what abusive attempts look like and how to handle them.
Not the point of the article, but just responding to the headline: LLMs are a space to think in the same way that watching television is a space to think. Some people may think while watching TV, maybe many of the people reading this watch TV this way, but most will not. In fact, television mostly acts as a soporific, killing intelligent thought and numbing the mind against any sort of complex cognition. Same with LLMs. They're a space to sleepily watch someone else think.
> The Everdeck is designed with a ruthless combinatorial efficiency. Beneath its minimalist pen-and-ink design lies layers of mathematical and linguistic patterns. This isn’t just a deck with haphazardly placed extra glyphs; rather, it aims to be both beautiful and practical.
I think it's to make a clean break. The true, full syntax of C is horrendous and has 1000 edge cases that nobody is aware of until they have to write a C compiler. If you're making a new language, you're not going to support all that. But which subset do you support? Whichever subset you choose, you'll violate someone's expectations, because they thought it was like C.
By breaking with C syntax completely, you can start without expectations.
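A concrete taste of those edge cases (the classic textbook one, nothing Tomo-specific): whether "a * b;" is a declaration or a multiplication depends on whether "a" happens to name a typedef, so a C parser has to drag the symbol table along with it just to decide how to parse a statement (the so-called lexer hack).

    /* The same tokens mean two different things depending on context. */
    typedef int a;

    void demo(void)
    {
        a * b;      /* because `a` is a typedef here, this declares `b` as int* */
        b = 0;
        (void)b;
        /* If `a` were an ordinary variable instead, the identical line
           above would parse as a multiplication whose result is discarded. */
    }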
I'm just talking about the basic grammar of C. Scoping with curly braces, statements delimited using semicolons, the basic syntax for defining a function or a struct.
I'm just talking about the same level of C familiarity that Java or JavaScript went with.
> would likely be more amiable to creating shared libraries.
Why's that? There's a gc/no-gc barrier to cross, and also being able to use other features in an implementation doesn't make creating a C interface harder.
I was thinking more along the lines of compiling Tomo code, then being able to link against that pre-compiled binary from other Tomo code. Basically being able to swap in a binary module wherever a source-file module would go.
I don't know if Tomo supports anything like that, but not having generics would make it easier/simpler to implement (e.g. no need to mangle symbol names). Note I said "easier/simpler": Nim can also "precompile Nim libraries for Nim code", but the resulting libraries are brittle (API-wise) and only really useful for specific use cases, like creating binary Nim plugins to import into another Nim program, where you'll want to split out the standard runtime library [0] and share it across multiple images. It's not useful for e.g. precompiling the standard library to save time.
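To illustrate why generics complicate this (a hand-rolled C sketch of monomorphization, not how Tomo or Nim actually implement it): every instantiation needs its own symbol, and a pre-compiled binary can only contain the instantiations that existed when it was built, not whichever types downstream code asks for later.

    /* Poor man's generics in C: each instantiation of the "generic" pair
       gets its own mangled-by-convention type and function. A precompiled
       library would have to ship pair_int_make, pair_double_make, ... for
       every T a future caller might want, which it can't know ahead of time. */
    #define DEFINE_PAIR(T)                                    \
        typedef struct { T first, second; } pair_##T;         \
        pair_##T pair_##T##_make(T a, T b)                     \
        {                                                      \
            pair_##T p = { a, b };                             \
            return p;                                          \
        }

    DEFINE_PAIR(int)      /* emits the symbol pair_int_make    */
    DEFINE_PAIR(double)   /* emits the symbol pair_double_make */

    int main(void)
    {
        pair_int    pi = pair_int_make(1, 2);
        pair_double pd = pair_double_make(1.5, 2.5);
        return pi.first + (int)pd.second;
    }

Without generics, a module's exported symbols are just a fixed, flat list of plain C names, which is presumably what would make linking other Tomo code against a pre-compiled module straightforward.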
I know Nim has been working on incremental compilation, too, but I don't know what the state of that is. I think it might have been punted to the rewritten compiler/runtime "Nim 3".