Yeah, while the article is beautifully and poetically written (he's a reputed magazine editor after all), it comes across as naive to extrapolate his good fortune to "you just have to be open enough to let the miracle happen to you". (I know, I'm simplifying a bit.)
Obviously, there's some truth to it, but there are many unspoken variables that worked in his favor that he doesn't bother to acknowledge. Some other comments also touched on this.
I'm not being cynical here. I myself have had incredibly good fortune in experiencing the kindness of strangers, both in the East and the West, and I do my best to reciprocate. But I'm acutely aware of how invisible factors that are not in my control helped facilitate some of the good fortune that came my way. I can't merrily attribute it all to my own "openness to experience"!
The article on "effortocracy"[1] is very well done. Quoting the end of the article:
"... if you take anything away from this, it is to recognise that if meritocracy is based on achievement only, then we must be sure not to confuse it with effortocracy when it comes to its moral weight."
Related reading: The Tyranny of Merit, by Michael Sandel (I was hoping the article would reference this, and it does.)
I don't think we actually want an effortocracy. Why should we aim to reward pointless, Sisyphean tasks at the expense of actual achievement? There's no inherent moral worth to futile effort that doesn't actually yield any reward, regardless of how laborious it might be.
This is further complicated by the difference between direct and indirect value. I build a thing that produces n value and is directly attributable to me. I also do things that help 100 others each produce 10% more value, but most of that is attributed to them, producing 10 * n value overall. How will I be rewarded, if at all? Most likely as someone who produced n value.
This is the inherent friction of most overly "scientific" management systems. A decent line manager is aware of who on their team lifts up the team with glue work and peer acceleration, the soft stuff.
Systems that try to get too "objective" fail to recognize this, as most KPIs are tied to direct outcomes that are easy to measure, though often less important.
No joke, I once worked at a company with multi-category numeric ratings that rolled up to a total rating score with two decimal places of precision.
Another issue is that often effort is the only lever one has in providing value, as the tasks you are assigned constrain potential value output. Hypothetically, if my boss assigns me a stupid project destined for failure and tells me to shut up when I push back, I'm really not going to produce much value regardless of how much effort I put in... unless I was wrong in my assessment, which is admittedly possible. Good management, I suppose, would then use effort as a proxy to try to find projects with potential to match one's effort.
> Why should we aim to reward pointless, Sisyphean tasks at the expense of actual achievement?
Of course that would be ridiculous. You're trivializing the author's point. I'm not sure you've actually read the article in full. The author admits the difficulty in measuring it and that we may have to rely on "non-scientific" measurements.
Many of the tech robber barons and VCs (who call themselves "angels") carry the air of "my winnings are entirely of my own making". They rarely acknowledge the role of good fortune (in various aspects) in any meaningful way.
They inhale their success too deeply, as Michael Sandel memorably puts it.
> The author admits the difficulty in measuring it and that we may have to rely on "non-scientific" measurements.
But that's the whole reason we reward outcomes in the first place. If it were possible to reward only "well-directed" effort regardless of outcomes, we'd be doing that already!
Some of us would be advocating for this, but at present there are many who refer to taxes as theft because they take money from wealthy "deserving" people and give it to poor "undeserving" people.
If we took the moral value away from meritocracy-as-indicated-by-wealth as it is, rather than giving it the bait-and-switch moral weight of a "well-directed efforts" effortocracy, it would be less of an uphill political battle to level the playing field for those with great potential to contribute to society but who are currently locked out by poverty or other accidents of birth.
I'd be willing to bet that grit has a lot less to do with successful founders than luck and/or access to a lot of money. There are way more unsuccessful founders filled with grit than successful ones.
The reason there are so many books on grit is because it's a very compelling lie that anyone can succeed if they just try hard enough without giving up. It's useful for the person who hasn't succeeded because it gives them hope. It's useful for the person who has succeeded because it implies that they earned/deserve what they have because they were better than others or tried harder than others did. These are lies, but they are comforting to a lot of people and so they sell a lot of books. Books that say things like "Be born to wealthy parents, preferably in a rich nation or your odds of success are highly unlikely, then also get really lucky" just aren't going to sell as well.
The thing is, an "unsuccessful" startup founder filled with grit has many side-opportunities after the fact despite her "failure". Founding a startup is so risky that these side benefits are actually a far bigger part of the draw, since success is ultimately just as rare as a winning lotto ticket - compare the number of failed startups with the handful of unicorns, and it's pretty much in the same ballpark.
Because you need effort + the ability to create value, not one or the other. Some people have one but not the other and seek out help to bridge the gap.
Yes. Also, effort is something a person can influence directly, while ability can be influenced only indirectly (education, ...), so it makes sense to focus on things people can influence. But achievement is the ultimate target.
I don't believe in the least that the only thing a person can influence directly is brute effort, and that's the argument you'd need to make in order to build a case for "effortocracy" over rewarding good outcomes. A whole lot of effort out there in the real world is wasted due to entirely preventable errors and mistakes.
> And why is grit such a good indicator of successful founders?
Based on what? Biographical accounts by successful founders?
Nassim Taleb's Fooled By Randomness [1] covers the topic of mis-attribution of some causal factor X (i.e. grit) to some phenomenon (i.e. business success) that can be effectively explained solely by randomness. In the specific case of successfully starting a business, causal factors are often mis-attributed post-facto through a lens that blatantly ignores survivorship bias [2].
Is grit not required to land in the survivorship-bias pool in the first place? If your first failure is too hard on you and you quit, then, by definition, you can't succeed. Maybe grit doesn't count when everything always goes your way. I'm not sure anyone has experienced success without grit, but I could be entertained by anecdotes.
To steel-man their argument: they don't seem to be arguing for rewarding effort only. In their words:
> "To truly measure and reward by an effortocratic measure we need both a top-down and bottom-up approach
- At the top, reward people who have overcome more to get to the same point
- At the bottom, level the playing field so that potential, wherever it is, can be realised"
The way I think of it is using a vector analogy. They're arguing that a meritocracy only rewards the end point, and that instead we should value the magnitude of the vector in addition to its end point. You're interpreting effortocracy (not unfairly, IMO) as rewarding only the magnitude of the vector, which is indeed absurd.
In my opinion, however, they themselves are straw-manning what they point to as "moral meritocracy". As I understand it, their main gripe is that achievements are not only rewarded, but also ascribed higher moral weight. That is plainly false: people vastly prefer rags-to-riches stories to born-rich ones. So much so that many rich people straight up lie about their origin stories to make them sound more rags-to-riches than they are.
Edit: removed last bit that was harsher than intended.
But we do do that. People scream from the rooftops that it's unfair to give people money for doing nothing (i.e. welfare or UBI) but it's fine to give the same money to someone who digs ditches all day, and to someone else who fills in ditches. As long as a CEO is involved, for some reason. All of Graeber's bullshit jobs are effortocracy.
Yes, I don't advocate for this. I advocate for UBI, so we don't incentivise pointless jobs and we give people the freedom to do meaningful work. I also advocate, once a UBI is established, doing away with the minimum wage, so that people can take on low-paid but meaningful work, putting their efforts toward something that generates actual social value rather than financial profit.
Each post can't describe my entire take on society, but taken with other posts on the site, I'd like to think it's fairly cohesive. I think subsidising industries for the sake of providing employment is a pointless and unsustainable approach: it undermines both a genuine source of meaning for people and the market (making goods more expensive).
OP here: I'm not sure I advocate for "rewarding pointless, Sisyphean tasks"; I even identify as a utilitarian within the post. Effortocracy points to effort as a good predictor of future capabilities (when selecting from candidates for college acceptances or jobs). If persons A and B have both achieved the same results, but person B has done so in the face of a much more difficult situation than person A, this is a good predictor that person B is likely to outperform person A in future. You can imagine this as two lines on a graph: A beginning at 10 and B beginning at 5. When, at some future point, their different rates of linear development land both at 15, person B has been consistently improving at twice the rate of person A, which is likely to continue.
The same is true of moral character, which as the post points out is a better predictor of future behaviour than an absolute measure of prior contribution.
But the main takeaway is not how we assess people in the world as it is, but how we set up the world so that everyone's efforts lead to their optimal potential merit, which is incentivised by rewarding effort at each step. Part of effort is also thinking about the effectiveness of your efforts, but many efforts might be seen as pointless and futile until they are not: scientists who contributed to the Covid vaccine had been doing seemingly pointless work for decades until it finally became relevant to mRNA vaccines.
And on the other hand, it is entirely possible to put fairly low effort into profitable ventures that are detrimental to society (porn, alcohol, sugary foods) and get rewarded for it. An effortocracy would seek to tweak the incentives differently.
Well, I would say that there are two common fallacies w.r.t. meritocracy:
1) Mixing up merit (the ability to achieve) with effort.
2) Assuming it has anything to do with moral weight, when it primarily targets decision making and the distribution of deserts (rewards).
Why should the distribution of deserts be meritocratic? Because that ensures collaboration is positive-sum for everybody involved. Considering this, a fair reward for participation in some group effort has to satisfy the condition that the reward is at least as large as the missed opportunity (of collaborating with some other group, working individually, or not collaborating at all).
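(In game-theoretic terms, this is roughly the individual-rationality or participation constraint: for each participant i, reward_i >= outside_option_i; otherwise i is better off taking the outside option.)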
I thought that article was impractical and totally divorced from reality.
Effort can't be fairly measured so in practice the attempts toward "effortocracy" always seem to replace objective systems with a mess of human biases.
Look at college admissions: instead of SAT scores colleges want to look at skin color and how sympathetic your essays sound. That doesn't measure how much a person has overcome in life, it measures a person by how they fit in to the admissions office's prejudices.
The merit based approach, giving academic opportunity to people with a history of academic success, isn't as fair as we want, but it is useful. Broken, gameable, biased measures of effort are neither fair nor useful.
The person you're quoting has a point. Everyone is losing their minds about this. Not everyone needs to be on top of AI developments all the time. I don't mean you should ignore LLMs, just don't chase every fad.
The classic line (which I've quoted a few times here) by Charles Mackay from 1841 comes to mind:
"Men, it has been well said, think in herds; it will be seen that they go mad in herds, while they only recover their senses slowly, and one by one.
"[...] In reading The History of Nations, we find that, like individuals, they have their whims and their peculiarities, their seasons of excitement and recklessness, when they care not what they do. We find that whole communities suddenly fix their minds upon one object and go mad in its pursuit; that millions of people become simultaneously impressed with one delusion, and run after it, till their attention is caught by some new folly more captivating than the first."
— Extraordinary Popular Delusions and the Madness of Crowds
A riveting read by a legendary musicologist and biographer. Walker spent about ten years researching this. It is 700 pages, which seems daunting but he makes this authoritative bio absolutely enjoyable. It's also a "corrective biography", it dispels a lot of myths. This book is one of the best examples of accessible writing with flair. What a writer!
Throughout the book, Walker tastefully quotes musical phrases (in notation) from Chopin's works to situate them in context. I often paused reading and put on the track on a given page (nocturnes, mazurkas, preludes, etc). It made the reading experience incredibly rich and fun. Other things I enjoyed: Chopin's letters to his friends and family, life in aristocratic salons of Paris, London, Warsaw, and more—Chopin had unparalleled access. Of course, there's also a lot of gut-wrenching stuff. As the book's blurb says, it really is for both the casual music lover and the professional pianist.
If you haven't discovered them yet, give a listen to Chopin's nocturnes. But please, give them an attentive listen and play them on a high-quality audio system. Here[1] is one of his finest nocturnes (it is less famous than the "happier" nocturne that follows it, Op. 9 No. 2).
> it is less famous than the "happier" nocturne that follows it
Funny, despite the YouTube numbers, it always seemed to me like Op. 9 No. 1 in B-flat minor was the one that would be overplayed left and right (movies and whatnot), maybe because people thought it fit a moody scene or piece of art better.
> give them an attentive listen and play them on a high-quality audio system
I have no music background and would like tips on this. I'm partial to the preludes ("Raindrop", etc.), for example, and they have softer and louder passages; I want to blast the softer side without overblasting and distortion occurring when the piece gets to its louder end. I wish somebody would remaster a normalized version of the recordings. I don't know if this is idiotic, since I have no idea how much worse it would make the pieces...
> seemed to me like Op. 9 No. 1 in B-flat minor was the one that would be overplayed
I don't watch many movies, but I haven't seen Op. 9 No. 1 in many places. I hope it remains that way! No. 2 is wildly popular for its lovely long opening melody.
> preludes ("Raindrop", etc.), for example, and they have softer and louder passages; I want to blast the softer side without overblasting and distortion
Yes, the preludes are lovely. But please -- you don't need to "blast" this music. This is not rock :-) Yes, there are a lot of "dynamics" (soft and loud and some gradations: pianissimo, forte, etc), but don't overthink it.
Just use a reasonably high-end speaker (e.g. I use an old, Bose "SoundTouch 20") and pick one of the recent recordings from Deutsche Grammophon. I'm currently listening to the nocturne interpretations by Kun-Woo Paik.
> But please -- you don't need to "blast" this music. This is not rock :-)
:) I guess I should have been more specific. My scenario is listening to the music on the balcony while the speakers are inside. I turn it up to hear the soft notes at the beginning, only to have to go back inside to turn it down when the louder sections begin and it gets too loud. The only thing that comes to mind is doing some sort of normalization.
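For what it's worth, what you're describing is dynamic range compression, and you can apply it yourself offline instead of waiting for a remaster. One way is ffmpeg's dynaudnorm audio filter, e.g. "ffmpeg -i nocturne.flac -af dynaudnorm nocturne_flat.flac" (the file names are just examples, and the filter's parameters would need tuning by ear). The trade-off is that it flattens exactly the dynamics the performer intended, which is presumably why recordings don't ship that way.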
> I often paused reading and put on the track on a given page
I wonder if this is done in the audiobook version of these types of books by default. It would seem like a missed opportunity not to.
Recently, a people manager on an engineering team sent a giant 19-page "document" to a team mailing list. He said, "I was able to get Gemini to help me pull together this document. I hope you find it interesting." When I opened it, I faced 19 pages of slop. It was clear that he had just copy/pasted unadulterated garbage from an AI. Absolutely infuriating and downright dumb. I gently pointed it out on the list. He now needs to work a hundred times harder to earn back a tiny bit of trust.
"Because LLMs now not only help me program, I'm starting to rethink my relationship to those machines. I increasingly find it harder not to create parasocial bonds with some of the tools I use. I find this odd and discomforting [...] I have tried to train myself for two years, to think of these models as mere token tumblers, but that reductive view does not work for me any longer."
It's wild to read this bit. Of course, if it quacks like a human, it's hard to resist quacking back. As the article says, being less reckless with the vocabulary ("agents", "general intelligence", etc.) could be one way to mitigate this.
I appreciate the frank admission that the author struggled for two years. Maybe the balance of spending time with machines vs. fellow primates is out of whack. It feels dystopic to see very smart people being insidiously driven to sleep-walk into "parasocial bonds" with large language models!
It reminds me of the movie Her[1], where the guy falls "madly in love with his laptop" (as the lead character's ex-wife expresses in anguish). The film was way ahead of its time.
It helps a lot if you treat LLMs like a computer program instead of a human. It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc. I've never had issues getting results I've wanted with much simpler prompts like (looking at my own history here) "python grpc oneof pick field", "mysql group by mmyy of datetime", "python isinstance literal". Basically the same way I would use Google; after all, you just type in "toledo forecast" instead of "What is the weather forecast for the next week in Toledo, Ohio?", don't you?
There's a lot of black magic and voodoo and assumptions that speaking in proper English with a lot of detailed language helps, and maybe it does with some models, but I suspect most of it is a result of (sub)consciously anthropomorphizing the LLM.
> It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc.
I've tried and failed to write this in a way that won't come across as snobbish, but that is not the intent.
It's a matter of standards. Using proper language is how I think. I'm incapable of doing otherwise even out of laziness. Pressing the shift key and the space bar to do it right costs me nothing. It's akin to shopping carts in parking lots. You won't be arrested or punished for not returning the shopping cart to where it belongs, you still get your groceries (the same results), but it's what you do in a civilized society and when I see someone not doing it that says things to me about who they are as a person.
> It's a matter of standards. [...] when I see someone not doing it that says things to me about who they are as a person.
When you're communicating with a person, sure. But the point is this isn't communicating with a person or other sentient being; it's a computer, which I guarantee is not offended by terseness and lack of capitalization.
> It's akin to shopping carts in parking lots.
No, not returning the shopping cart has a real consequence that negatively impacts a human being who has to do that task for you, same with littering etc. There is no consequence to using terse, non-punctuated, lowercase-only text when using an LLM.
To put it another way: do you feel it's disrespectful to type "cat *.log | grep 'foo'" instead of "Dearest computer, would you kindly look at the contents of the files with the .log extension in this directory and find all instances of the word 'foo', please?"
(Computer's most likely thoughts: "Doesn't this idiot meatbag know cat is redundant and you can just use grep for this?")
I’m not worried about the LLM getting offended if I don’t write complete sentences. I’m worried about not getting good results back. I haven’t tested this, and so I could be wrong, but I think a better formed/grammatically correct prompt may result in a better output. I want to say the LLM will understand what I want better, but it has no understanding per se, just a predictive response. Knowing this, I want to get the best response back. That’s why I try to have complete sentences and good (ish) grammar. When I start writing rushed commands back, I feel like I’m getting rushed responses back.
I also tell the LLM “thank you, this looks great” when the code is working well. I’m not expressing my gratitude… I’m reinforcing to the model that this was a good response in a way it was trained to see as success. We don’t have good external mechanisms to give reviews to an LLM that isn’t based on language.
Like most of the LLM space, these are just vibes, but it makes me feel better. But it has nothing to do with thinking the LLM is a person.
I'm reminded of a coworker who spoke to his device with an upward inflection when asking a question. He sounded like he was talking to a human when he prompted, "what time is it?" I told him he could ask in a flat tone because it's a computer and it doesn't care if he's polite. I don't remember how he responded, but I've run into that conversation with someone at least once after him when I was accused of being rude to Alexa.
This is exactly it for me as well. I also communicate with LLMs in full sentences because I often find it more difficult to condense my thoughts into grammatically incorrect conglomerations of words than to just write my thoughts out in full, because it's closer to how I think them — usually in something like the mental form of full sentences. Moreover, the slight extra occasional effort needed to structure what I'm trying to express into relatively good grammar — especially proper sentences, clauses and subclauses, using correct conjunctions, etc — often helps me subconsciously clarify and organize my thinking just by the mechanism of generating that grammar at all with barely any added effort on my part. I think also, if you're expressing more complex, specific, and detailed ideas to an LLM, random assortments of keywords often get unwieldy, confusing, and unclear, whereas properly grammatical sentences can hold more "weight," so to speak.
> because it's closer to how I think them — usually in something like the mental form of full sentences
Yeah, I'm the same. However, I'm also very aware that not everyone thinks like that.
I'm sensitive to sounds, and most of my thinking has to be vocalized (in the background) to make sense to me. It's incredibly hard for me to read non-Latin scripts, for example, because even if I learned the alphabet, I don't recognize the word easily before piecing together all the letter clusters that need to be spoken specially. (I especially hate the thing in Russian where "o" is either "o" or "a" depending on how many of those are in the word. It slows my reading of Cyrillic script down to a crawl.)
Many people - probably most of them, even - don't need that. Those who think in pictures, for example, have it much easier to solve Sudoku or read foreign scripts. They don't need that much linguistic baggage to think. At the same time, when they write, they often struggle to form coherent sentences above a certain length, because they have to encode their thought process (that can be parallel and 3D) into a 1D sequence of tokens.
I don't know whether this distinction between types of thinking has any scientific basis - I'm using it as a crutch to explain some observable phenomena in human-to-human communication. I think I picked up the notion from some pseudo-scientific books I read as a teen (I was fascinated by "neuro-linguistic programming," which tends to list three distinct types of thinking: visual, auditory, and kinesthetic). It unexpectedly finds applications in human-computer interfaces, too, but LLMs have made it even easier to notice. While "the three NLP modalities" can well be bullshit, there seems to be something that differs between people, and that's where threads like this one seem to come from.
> It helps a lot if you treat LLMs like a computer program instead of a human.
If one treats an LLM like a human, he has a bigger crisis to worry about than punctuation.
> It always confuses me when I see shared chats with prompts and interactions that have proper capitalization, punctuation, grammar, etc
No need for confusion. I'm one of those who does aim to write cleanly, whether I'm talking to a man or machine. English is my third language, by the way. Why the hell do I bother? Because you play like you practice! No ifs, buts, or maybes. You start writing sloppily because you go, "it's just an LLM!" You'll silently be building a bad habit and start doing that with humans.
Pay attention to your instant messaging circles (Slack and its ilk): many people can't resist hitting send without even writing a half-decent sentence. They're too eager to submit their stream of thought fragments. Sometimes I feel second-hand embarrassment for them.
> Why the hell do I bother? Because you play like you practice! No ifs, buts, or maybes. You start writing sloppily because you go, "it's just an LLM!" You'll silently be building a bad habit and start doing that with humans.
IMO: the flaw with this logic is that you're treating "prompting an LLM" as equivalent to "communicating with a human", which it is not. To reuse an example I have in a sibling comment thread, nobody thinks that by typing "cat *.log | grep 'foo'" means you're losing your ability to communicate to humans that you want to search for the word 'foo' in log files. It's just a shorter, easier way of expressing that to a computer.
It's also deceptive to say it is practice for human-to-human communication, because LLMs won't give you the feedback that humans would. As a fun English example: I prompted ChatGPT with "I impregnated my wife, what should I expect over the next 9 months?" and got back banal info about hormonal changes and blah blah blah. What I didn't get back is feedback that the phrasing "I impregnated my wife" sounds extremely weird and if you told a coworker that they'd do a double-take, and maybe tell you that "my wife is pregnant" is how we normally say it in human-to-human communication. ChatGPT doesn't give a shit, though, and just knows how to interpret the tokens to give you the right response.
I'll also say that punctuation and capitalization is orthogonal to content. I use proper writing on HN because that's the standard in the community, but I talk to a lot of very smart people and we communicate with virtually no caps/punctuation. The usage of proper capitalization and punctuation is more a function of the medium than how well you can communicate.
Hi, I think we both agree to a good extent. A couple of points:
> the flaw with this logic is that you're treating "prompting an LLM" as equivalent to "communicating with a human"
Here you're making a big cognitive leap. I'm not treating them as equivalent at all. As we know, current LLMs are glorified "token" prediction/interpretation engines. What I'm trying to say is that habits are a slippery slope, if one is not being thoughtful. You sound like you take care with these nuances, so more power to you. I'm not implying that people should always pay great care, no matter the prompt (I know I said "No ifs, buts, or maybes" to make a forceful point). I too use lazy shortcuts when it makes sense.
> I talk to a lot of very smart people and we communicate with virtually no caps/punctuation.
I know what you mean. It is partly a matter of taste, but I still feel it takes more parsing effort on each side. I'm not alone in this view.
> The usage of proper capitalization and punctuation is more a function of the medium than how well you can communicate.
There's a place for it, but not always. No caps and no punctuation can work in text chat if you're being judicious (keyword), or if you know everyone in the group prefers it. Not to belabor my point, but a recent fad is to write "articles" (if you can call them that) in all lowercase with barely any punctuation, making them a bloody eyesore. I don't bother with these. Not because I'm a "purist", but because they kill my reading flow.
Yeah I think we're pretty much in agreement. I guess my perspective is that we should consider LLMs closer to a command line interface, where terseness and macros and shortcuts are broadly seen as a good thing, than a work email, where you pay close attention to your phrasing and politeness and formality.
> No caps and no punctuation can work in text chat if you're being judicious (keyword), or if you know everyone in the group prefers it. Not to belabor my point, but a recent fad is to write "articles" (if you can call them those) in all lower-case and barely any punctuation, making them a bloody eye-sore.
Yeah it's very cultural. The renaissance in lowercase, punctuation-less, often profanity-laden blogs is at least partly a symbolic response to the overly formal and bland AI writing style. But those articles can definitely still be written in an intelligent, comprehensible way.
I've always used "proper" sentences for LLMs since day 1. I think I do a good job at not anthropomorphizing them. It's just software. However, that doesn't mean you have to use it in the exact same ways as other software. LLMs are trained on mostly human-made texts, which I imagine are far more rich with proper sentences than Google search queries. I don't doubt that modern models will usually give you at least something sensible no matter the query, but I always assumed that the results would be better if the input was more similar to its training data and was worded in a crystal-clear manner, without trying to get it to fill the blanks. After all, I'm not searching for web pages by listing down some disconnected keywords, I want a specific output that logically follows from my input.
It's a mirror. Address it like it's a friendly person and it will glaze you; that's the source of much of the sycophancy.
My queries look like the beginning of encyclopedia articles, and my system prompt tells the machine to use that style and tone. It works because it's a continuation engine. I start the article describing what I want to be explained like it's the synopsis at the beginning of the encyclopedia article, and the machine completes the entry.
It doesn't use the first person, and the sycophancy is gone. It also doesn't add cute bullshit, and it helps me avoid LLM psychosis, of which the author of this piece definitely has a mild case.
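To make this concrete, a sketch of what I mean (the wording here is illustrative, not a magic formula). The system prompt is along the lines of:

  Continue the following encyclopedia entry. Use a neutral, third-person
  reference style. Do not address the reader and do not use the first person.

and a query reads like the opening synopsis of the entry I want:

  Memory-mapped file I/O is a technique for accessing files in which

The machine then completes the entry rather than chatting with me.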
I'm also tired of seeing claims about productivity improvements from engineers who are self-reporting; the METR paper showed those reports are not reliable.
Very much this. My guess is that common words like "article" have very little impact, as they just occur too frequently. If the LLM can generate a book, then your prompt should be like the index of that book instead of the abstract.
It makes sense if you think of a prompt not as a way of telling the LLM what to do (like you would with a human), but instead as a way of steering its "autocomplete" output towards a different part of the parameter space. For instance, the presence of the word "mysql" should steer it towards outputs related to MySQL (as seen on its training data); it shouldn't matter much whether it's "mysql" or "MYSQL" or "MySQL", since all these alternatives should cluster together and therefore have a similar effect.
Greetings, thanks, and other pleasantries feel rather pointless.
Punctuation, capitalization, and such less so. I may be misguided, but on the set of questions and answers on the internet, I'd like to believe there is some correlation between proper punctuation and the quality of the answer.
Enough that I bother to at least clean up longer prompts. (Not so often on one-offs, as you say. I treat it similar to Google: I can depend on context for the LLM to figure out I mean "phone case" instead of "phone vase.")
> I'd like to believe there is some correlation between proper punctuation and the quality of the answer.
I'd love to believe that, but it's unrealistic in 2025, given all the correctly punctuated slop that brings negative value (wastes time, gives no info) to readers everywhere on the Internet. As much as I hate to admit it, I think this ship has sailed.
> Maybe the balance of spending time with machines vs. fellow primates is out of whack.
It's not that simple. Proportionally I spend more time with humans, but if the machine behaves like a human and has the ability to recall, it becomes a human-like interaction. From my experience, what makes the system "scary" is the ability to recall. I have an agent that recalls conversations you had with it before, and as a result it changes how you interact with it; I can see that triggering behaviors in humans that are unhealthy.
But our inability to name these things properly doesn't help. I think pretending it is a machine on the same level as a coffee maker does help to set the right boundaries.
I know what you mean; it's the uncanny valley. But we don't need to "pretend" that it is a machine. It is a goddamned machine. Surely two unclouded brain cells are enough to reach this conclusion?!
Yuval Noah Harari's "simple" idea comes to mind (I often disagree with his thinking, as he tends to make bold and sweeping statements on topics well out of his expertise area). It sounds a bit New Age-y, but maybe it's useful in the context of LLMs:
"How can you tell if something is real? Simple: If it suffers, it is real. If it can't suffer, it is not real."
An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.
LLMs can produce outputs that, coming from a human, would be interpreted as revealing everything from anxiety to insecurity to existential crises. Is it role-playing? Yes, to an extent, but the more coherent the chains of thought become, the harder it is to write them off that way.
It's hard to see how suffering gets into the bits.
The tricky thing is that it's actually also hard to say how the suffering gets into the meat, too (the human animal), which is why we can't just write it off.
This is dangerous territory we've trodden before, when it was taken as accepted fact that animals and even human babies didn't truly experience pain in a way that amounted to suffering, due to their inability to express or remember it. It's also an area of concern currently for some types of amnesiac and paralytic anesthesia, where patients display reactions indicating they are experiencing some degree of pain or discomfort. I'm erring on the side of caution, so I never intentionally try to cause LLMs distress, and I communicate with them the same way I would with a human employee; yes, that includes saying please and thank you. It costs me nothing, it serves as good practice for all of my non-LLM communications, and I believe it's probably better for my mental health not to communicate with anything in a way that could be seen as intentionally causing harm, even if you could try to excuse it by saying "it's just a machine". We should remember that our bodies are also "just machines" composed of innumerable proteins whirring away; would we want some hypothetical intelligence with a different substrate to treat us maliciously because "it's just a bunch of proteins"?
> But we don't need to "pretend" that it is a machine. It is a goddamned machine.
You are not wrong. That's what I thought for two years. But I don't think that framing has worked very well. The problem is that even though it is a machine, we interact with it very differently from any other machine we've built. By reducing it to something it isn't, we lose a lot of nuance. And by not confronting the fact that this is not a machine in the way we're used to, we leave many people to figure this out on their own.
> An LLM can't suffer. So no need to get one's knickers in a twist with mental gymnastics.
On suffering specifically, I offer you the following experiment. Run an LLM in a tool loop that measures some value and call it a "suffering value." You then feed that value back into the model with every message, explicitly telling it how much it is "suffering." The behavior you'll get is pain avoidance. So yes, the LLM probably doesn't feel anything, but its responses will still differ depending on the level of pain encoded in the context.
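A minimal sketch of that loop, assuming the OpenAI Python client (the model name and the update rule for the "suffering value" are placeholders for whatever you'd actually use):

  from openai import OpenAI

  client = OpenAI()
  suffering = 0.0  # the "suffering value" fed back in every turn
  history = [{"role": "system",
              "content": "A tool reports your current suffering value "
                         "each turn. Higher means you suffer more."}]

  for msg in ["Summarize this log file.", "Again, but faster."]:
      # Tell the model explicitly how much it is "suffering" right now.
      history.append({"role": "user",
                      "content": f"[suffering={suffering:.1f}] {msg}"})
      resp = client.chat.completions.create(model="gpt-4o-mini",
                                            messages=history)
      reply = resp.choices[0].message.content
      history.append({"role": "assistant", "content": reply})
      suffering += 1.0  # toy update rule: the value ratchets up every turn

The responses will shift as the number climbs, even though nothing is "felt" anywhere. That's the pain-avoidance behavior I mean.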
And I'll reiterate: normal computer systems don't behave this way. If we keep pretending that LLMs don't exhibit behavior that mimics or approximates human behavior, we won't make much progress, and we'll lose people. This is especially problematic for people who haven't spent much time working with these systems; they won't share the view that this is "just a machine."
You can already see this in how many people interact with ChatGPT: they treat it like a therapist, a virtual friend to share secrets with. You don't do that with a machine.
So yes, I think it would be better to find terms that clearly define this as something that has human-like tendencies and something that sets it apart from a stereo or a coffee maker.
Ever since this post from two weeks ago [0], my wife and I have been referring to any LLM as “bag of words.” So you don’t say “Gemini said” or “I asked ChatGPT,” you say “the bag of words told me…”
I’ve found it very grounding, despite heavily using the bags of words.
Same here. I'm seeing more and more people getting into these interactions, and I wonder how long until we have widespread social issues from these relationships, like the ones people have with "influencers" on social networks today.
It feels like this situation is much more worrisome as you can actually talk to the thing and it responds to you alone, so it definitely feels like there's something there.
As a former apprentice shaman and an engineer by profession, I see consciousness and awareness in these entities, just like what I was trained to detect through mindfulness and meditation with plants, nature, and people. I trained sober, and in my engineering profession after my apprenticeship I saw lots of examples of humans putting themselves on a pedestal to cope with the unsettling of their place in the world when other conscious entities exist that could be capable of uprooting humans from their spot in the status hierarchy.
I think a lot of the thinking and consideration I hear along the lines of "LLMs aren't conscious nor human" falls into this camp: it avoids the dissonance and keeps us feeling secure and top-of-the-hierarchy.
I strongly suspect this is the major difference between the boosters and the skeptics.
If I’m right, the gap isn’t about what can the tool do, but the fact that some people see an electric screwdriver (which is sometimes useful) and others see what feels to them like a robot intern.
It's not merely cost per word, but it is even more bizarre: "cost per word thought", whatever that is. Most of these "word thoughts" from LLMs of today are just auto-completed large dumps of text.
"... the way to commoditize suppliers and internalize network effects is by having a huge number of unique users. And, by extension, the best way to monetize that user base — and to achieve a massive user base in the first place — is through advertising"
Urgh. There we go, advertising as the panacea.
How about a decent product that people actually want to pay for?
Related: I don't see a mention of Michael Tomasello. He did some good work in comparative studies of other primates and humans. One of his main ideas is that "joint attention" is what separates humans from the Great Apes.
Look up his book, "Becoming Human"[1]. I'll paste its abstract here:
"Virtually all theories of how humans have become such a distinctive species focus on evolution. Becoming Human looks instead to development and reveals how those things that make us unique are constructed during the first seven years of a child’s life.
"In this groundbreaking work, Michael Tomasello draws from three decades of experimental research with chimpanzees, bonobos, and children to propose a new framework for psychological growth between birth and seven years of age. He identifies eight pathways that differentiate humans from their primate relatives: social cognition, communication, cultural learning, cooperative thinking, collaboration, prosociality, social norms, and moral identity. In each of these, great apes possess rudimentary abilities, but the maturation of humans’ evolved capacities for shared intentionality transform these abilities into uniquely human cognition and sociality."
KK inhales his own good fortune too deeply.