"This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
-------------
Who is doing the research matters. What is presented here is not the product of academia. It's the product of a company that produces AI agents. The picture this web page paints may appear rosy and have just enough thorns to be convincing, but it's the equivalent of a tobacco company telling you that their product is neither addictive nor carcinogenic.
I fully expect actual research will be done on the impact of AI and our hopes for it. This page, however, is marketing.
Anthropic are masters of marketing themselves as a company that's here to do good. A few weeks ago, they got great visibility on HN by promising Claude Max 20x accounts to people who are active in open source repositories with at least 5k stars on GitHub [1]. My main project [2] has more than double the minimum requirements, and I'm still waiting.
I just checked out your project; it looks like exactly what I was looking for. And I hope in a few weeks the folks from Anthropic will give you what they promised.
However, since we're being frank here, I'd say I'll download the most recent release and be very careful about upgrading, because I don't put much trust in projects co-created with LLMs. I know there is a full spectrum, but I've seen enough, and I don't have the resources to check where on the spectrum your project ends up. LLMs are a powerful drug and terribly hard to stop once you start.
Humans are complex. It's possible for someone to want to do good and at the same time want to promote/market their product and make a profit. I don't see a contradiction there.
What do you call a marketing campaign that does not deliver on what it promised? I have no problem with Anthropic trying to create good will around their products, but this particular campaign, aimed at building good will among people doing open source, was an outright lie that did not deliver what it promised, and it was all done on HN.
When a company lies about something that trivial, it does not inspire trust.
It's an outright lie because they haven't greenlit your personal project after two weeks? Did it occur to you that maybe they just got a lot of applications and are prioritizing other projects or still working through a backlog?
> "This is a new form of social science. It is qualitative research at a massive scale, and we’re in the early stages of learning how to do it. Surveys and usage analysis tell us what people are doing with AI, but the open-ended interview format helps us get at why. "
Also AI-written, but I suppose that's expected. The big AI companies seem to want all their blog posts and communications to have the AI tells, so you know they didn't actually bother writing them.
I'd love to be able to actually articulate what makes AI writing read like AI writing. A few of the common tells come to mind (contrast constructions, hyperbole, overused or wrongly used em-dashes, etc.). The above quote doesn't have any of that, and yet it certainly feels AI. The first sentence (both what it says and where it's placed) suggests AI to me. But I couldn't quite tell you why.
Before AI this style of prose was called "thank you for coming to my TED talk", with a little bit of "LinkedIn broetry". Confident assertions and pat explanations about truths that will make you a better person upon internalization; a pop psychologist convincing you of an unintuitive and surprising new idea about how the universe works that catches you off guard but then turns your perception on its head and revolutionizes the way you see the world. Contemporary marketing speak of a particular "coolly subverting your expectations and injecting the truth straight into your veins" flavor.
It is a style that AI (intentionally?) emulates for sure, though the "regression to the mean" and general vagueness seems to be what really separates the classic TED talk/puffy blog from AI. Humans like specific examples and anecdotes, AI fails at making those.
I think the main tell is that it says basically nothing; it reads like it was written by a human paid per word. Humans prefer easy-to-read articles that don't hide the point behind such fluff, so there is no reason to do it except just to pad with words.
That's essentially it. But not only that: we learned to distinguish things written by humans for humans from things written by humans (paid by the word) for SEO. LLMs tend to produce text that would be great for SEO, so it stands out as not written for humans.
Wikipedia has an excellent article about exactly this [1], in their editor information section. There's a section called "Undue emphasis on significance, legacy, and broader trends" that provides some examples:
>Words to watch: stands/serves as, is a testament/reminder, a vital/significant/crucial/pivotal/key role/moment, underscores/highlights its importance/significance, reflects broader, symbolizing its ongoing/enduring/lasting, contributing to the, setting the stage for, marking/shaping the, represents/marks a shift, key turning point, evolving landscape, focal point, indelible mark, deeply rooted, ...
Once I read this, it started sticking out to me all the time.
I like the take on "undue emphasis on significance." To me, that's such an obvious tell. That's actually an old pre-LLM tell, we just used to call it "pretension." Once we get into long lists of specific words, it feels like we're getting into rules. You can't use this or that word cuz LLMs do. That's crazy problematic. It has to be about the way the emphasis and the overuse of certain words in a single piece reflects inauthenticity. But, eff if I'm gonna stop using "significance" cuz some LLM does.
I can't stand that I'm expected to adjust my use of em-dashes because LLMs use them (incorrectly, typically). It brings up all these feelings from my younger punk / indie days, when normies would get into a band we were into, and then we were expected to not like that band anymore. Since then I've tried to abide by what I call the Farting Billionaire Principle. People shouldn't have to change their ways every time a billionaire farts.
> The big AI companies seem to want to make all their blog posts and communications have the AI tells so you know they didn't actually bother writing them
Investors want to see you use your own product; if the company itself doesn't consider the product good enough to write its own announcements, investors would worry about its future.
And AI is still a product primarily aimed at investors and not consumers.
I think it's still nice that they do this kind of research on the side. Hopefully people will take it for what it is: research done by a company with a clear conflict of interest regarding the subject.