Hacker News: chunkmonke99's comments

I don't understand this line of reasoning. Like genuinely. So with AI coding (let's just limit ourselves to coding); are you saying that the Agent is going to prompt itself? Like it exists only to read your mind and create precisely the code you wanted or didn't even know you wanted? Or will you have to explain and verify that it did what you asked? At some point we run into magical thinking and absurdities.

Programming or math are not like Chess or Go. There is no endgame to win. And the human/input/judgement/whatever and where that begins or ends isn't a technical issue but a political one.

So my question: are you expecting that at some time N that models are so good that they can read your mind? Or are you saying that you will just be able to "speak" into existence any type of software? And how are you going to specify this if you can't already point to something similar?


Isn't that what a well run company does when creating a process? Bureaucracy and process reduce the penalty of weak domain context; in fact, they are designed to obviate that need. They "diffuse" domain knowledge into a set of specifications, documents, and processes. AI may accelerate that bureaucracy, or subsume it. But since when has the limiting factor been "finding someone locally who knows the process"? Once you document a process, the power of computing means you can outsource whatever parts of it you want, no? Again, AI may subsume all the back-office or bureaucratic work. Perhaps it will totally restructure the way humans organize labor, run companies, and coordinate. But that system will have to select for a different set of skills than "filling out n forms quickly and accurately." The wage stagnation etc. predates AI and might be due to other structural factors.


> Isn't that what a well run company does

How many of those do you see around?


I bet we're about to see a lot of 10-person $100M+ ARR companies emerge. That's a scale where teams can be tight and excel.


If you can build that with AI, then 9 people with AI can probably wipe out that company, only to be wiped out by 8 people with AI…and so on.


Not necessarily. That's the old "I made Twitter in a weekend" joke.

Being able to technically replicate a product doesn't mean your company will be successful. What makes a company successful are sales forces, internal processes, and luck. All three are extremely difficult to replicate: sales forces are based on a human network you have to build, internal processes are either organic or kept secret, and luck can only be provoked by staying alive long enough, which means you need money.


massively underrated comment detected.


when.

people have been saying that since 2022.

when and how. hmm??

show your work.

or is this just more hype being spewed...


I think something around that scale (say maybe 20 employees, but definitely not hundreds) was possible even before LLMs got popular, but the people involved needed to be talented and focused. I'm not sure if AI will really change that, though.


In 2014, Facebook acquired WhatsApp for $19B, and they had 55 employees.


Correction: 55 grossly underpaid employees!


Good stuff. I hope Noah is ok, couldn't read the rest of the article ... I really don't know what to say anymore tbh.


I don't disagree; writing code will probably no longer be a thing in the near future. This is probably also true for all knowledge work (math, design, etc.), which is literally anything that can be "reduced" to mechanical transformations on symbols, including music gear design, design of plumbing fixtures, tooling jigs (CAD work), and so on. It is all basically transforming a specific set of discrete symbols into other ones, stringing them together, or recombining them. I wouldn't call that a doomer take either. But yes, the "Claude 4.5 still makes mistakes" thing is played-out "remainder humanism" / "John Henry vs. the machine". I fully expect the value of software products to go to zero, with a whole bunch of money being funneled into one of the AI companies. It is a scary time to be around. I would stop learning coding and/or any framework or specific technology.


Wait you used Claude Code to recreate patents and schematics? Are the schematics for this easily available somewhere? Was Claude just able to one-shot this?


I use Claude more as a learning tool in this context. It's kind of funny, actually: I got the idea because I heard that in China they're basically replacing teachers with AI, whereas we're trying to get AI out of our school systems in the United States. So I went into it with that mindset: instead of having Claude do the whole thing, I asked it to teach me how to do it so I understand it. I'm still learning, trying to recreate things with Max so I can have a lot more control and really play with it. I'm learning that reverb creation is a real craft.

It's not able to one-shot it yet, but I'm sure that's coming this year sometime. I did the UI a hundred percent by myself, and I went in there and tweaked it, tried to rebuild it, and just tried to understand how reverb works. I also did a lot of the software licensing myself, just because I have experience with that.


I am not seeing any evidence of China "replacing teachers with AI" anywhere (did some googling/Gemini-ing). Are there any sources on this? It seems like they are trying to introduce students to GenAI/ML principles and creating "AI literacy guidelines" without "replacing teachers with AI". Their current guidelines outright prohibit the use of AI to replace teachers' responsibilities.

What is the point of asking it to teach you something so you "understand it" if Claude can just do it for you? That is the real question everyone should be asking, beyond just employment (employment will definitely change in the coming months, no doubt). I would pivot away from programming, personally.


More pointedly: the commenter presumes that the friends are unhappy with their lives, and also that some of them would be better served performing back-breaking, menial, low-wage labor while otherwise being illiterate. Any PhD (even one specializing in plankton, and especially in nuclear physics/engineering) would equip you with a bunch of transferable skills that would normally be valued in a modern society: 1) public speaking, 2) initiative, 3) resourcefulness, 4) analysis and communication, etc. If I were being uncharitable, I would say finance and law are actually worse for society, at least the subset of those fields that get paid the most relative to their impact on the broader society (but that is debatable).


Would you think it would be better for "society" to allow more people to go into finance and law? Or that advanced knowledge should be gatekept by only the select cognitive elite who are most adept at playing the "glass bead game" by age 18? Would you change any of your opinions if AI renders most high-IQ practical/technical tracks obsolete? Perhaps a saner society would be one where curious people could develop themselves in whichever way they choose: if they want to study the mating habits of marmots on the Central Asian steppe, then so be it.


No one really knows. But here are a few things I tell myself.

1) There are many, many people who could probably already write more lines of code than me and work for much, much cheaper (in India or wherever). The same is true for you (probably). Yet you still have a job.

2) I have a friend who works as a software and systems engineer on a complicated product that interfaces with the real world. He has to use natural language to create requirements that get turned into code by natural agents down in the "supply chain". There are also integration engineers who work with the naturally intelligent agents that create the prompts/requirements, to make sure things don't fail (and to triage and root-cause when they do).

3) Why not diversify your skills beyond code into hardware, systems, soft skills, business, etc.?

No one knows the future, sadly.


I don't think there is anything more than the standard advice: stay curious, make friends/build a community, keep learning, stay healthy. Why not get AoE? You can also check out "Practical Electronics for Inventors"; AoE assumes you have some electronics background, imo. But seriously, I don't get the doom and gloom: things are going to be rough ... but maybe they won't be? Many things I learned, I learned for their own sake! Things have always been uncertain and absurd; I guess we might as well embrace it!


(also get age of empires)


Hahah absolutely!! Man that brings back memories.


Are we sure that is what is happening? Can you really do any meaningful "science" when the subject under study is a black box under a shroud of secrecy? What has been learned from LLMs regarding human cognition, and is there broad convergence on that view?


It's not the main driver of what's happening, but it's an aspect of it that goes back a long way. For example, Turing, writing in 1946:

>I am more interested in the possibility of producing models of the action of the brain than in the applications to practical computing...although the brain may in fact operate by changing its neuron circuits by the growth of axons and dendrites, we could nevertheless make a model... https://en.wikipedia.org/wiki/Unorganized_machine


Oh man, I was not aware of this aspect of Turing's work. Thank you for sharing!

Honestly, trying to reverse engineer something to understand how it works is interesting and potentially worthwhile! To me it's obvious that "broadly mechanistic" or causal explanations of specific cognitive functions can be created. I am not doubting that a "machine" can mimic human cognitive abilities, insofar as we can state or "tokenize" them precisely. I am pretty sure that is the whole basis of cognitive science.

But just because we can mimic those capacities, does that imply that the same mechanisms exist in nature? Herbert Simon made a distinction between "natural" and "artificial" systems: an LLM's function is to model language (and they do a damn good job of that!), but does the brain have one function, and what is it? If you build a submarine, does that tell you something about how fish swim? Even if it swims faster than any of the fish?


Building models can help you understand things. Maybe not so much submarines, but building model aircraft and studying aerodynamics definitely helps us understand how birds fly.

Artificial neural networks are already contributing to our understanding of brains. For example, there was a lot of debate about "universal grammar":

>humans possess an innate, biological predisposition for language acquisition, including a "Language Acquisition Device"...

and it now seems to be demonstrated that LLM-like neural networks are quite good at picking up language without an "acquisition device" beyond the general network.


That is a fair point. I do not disagree that building (tenuous at best) models of neurons can help inform science and engineering, and vice versa. Much of "classic" digital signal processing and image processing was an interplay between psychologists, engineers, neuroscientists, etc. So that is very useful! But what we have here is mistaking the airplane for the bird! My pet parrot doesn't have an engine! The map is not the territory, as it is said.

The point of this thread and the paper isn't that cognition is not an important goal to understand, nor that it isn't computational (computation seems to be the best model we currently have), but that AGI is (as the previous comment mentioned) a marketing term of little scientific value. It is too vague and carries more of the baggage of religious belief than of cold, hard scientific inquiry. It used to just be called "AI", or, as was debated at the infancy of the field, just "complex information processing". The current for-profit companies (let's be clear, OpenAI is not really a charity) don't really care about understanding anything; to an outsider they appear to maximize hype to drum up investment so that they can build a God, while some people get very, very rich. To many in these communities, intelligence is some magical quantity that can "solve everything!" I am not sure which part of those beliefs is scientific. Why are we earmarking hundreds of billions of dollars (some of it public money) to benefit these companies?

>humans possess an innate, biological predisposition for language acquisition, including a "Language Acquisition Device"...

Would you say that one day someone just happened to find an LLM chilling under the sun, we spoke some words to it for a few years while pointing at things, and one day it was speaking full sentences and asking about the world? Or is it that a lot of engineering work was put into designing something specifically for the purpose of generating text? Do you think humans were designed to speak or to be intelligent, and by whom? Can dolphins, gorillas, and elephants also speak language? They have complex brains with a lot of neurons. Chomsky's point was just "if human, then can acquire language," so "something non-human can produce language" doesn't refute the central point. I am no expert on Chomsky; you may know much more about that. But again, it doesn't seem relevant to the actual thread.


So TL;DR: I am not sure we learned a lot about how humans learn language from LLMs; all we learned is that it can be done by "something", but we already knew that. These specific technologies are products designed to sell things, and they need the hype for that. But it doesn't take away from the fact that they are freaking cool!

https://leon.bottou.org/news/two_lessons_from_iclr_2025


I'm not sure that there haven't been things we've learned about cognition, or about (some) cognition-having entities in general. Whether or not LLMs' inner workings overlap with how humans do it, we now know more about the subject itself.


>Can you really do any meaningful "science" when the subject understudy is a black box that is under a shroud of secrecy?

Are you saying it's impossible to understand human brains?


No. I am saying that the broader scientific community probably cannot run experiments on ChatGPT, Claude, or Gemini the way they can on, say, a mouse's brain, or even on human subjects, with carefully controlled experiments that can be replicated by third parties.

As for "understanding", you have to be more precise about what you mean: we created LLMs and Transformer-based ANNs (and ANNs themselves), and yet it appears we are all mystified by what they can do ... as though they are magic ... and will lead to super-intelligence (an even more poorly defined term than regular-ass intelligence).

I'm not trying to be difficult, but I sometimes wonder if all of us should take a step back and really try to understand this tech before jumping to conclusions! "The thing that was designed to be a universal function approximator approximates the function we trained it to approximate! HOLY CRAP, WE MAY HAVE MADE GOD!" It's clear that the technologies we currently have are miraculous and do amazing things! But are they really doing exactly what humans do? Is it possible to converge at similar destinations without taking the same route? Are we even at the exact same destination?
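To make the "universal function approximator" quip concrete, here is a minimal from-scratch sketch (my own toy example; the network size, learning rate, and step count are arbitrary choices, not anything from the thread) of a tiny tanh network trained by gradient descent to approximate sin(x). It does exactly what it was built to do: fit the function it was trained on.

```python
import numpy as np

# Target: samples of sin(x) on [-pi, pi]
rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

# A tiny 1 -> 32 -> 1 network with tanh hidden units
W1 = rng.normal(0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.1, (32, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

lr = 0.05
_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)   # error before training

for _ in range(2000):               # plain full-batch gradient descent
    h, pred = forward(X)
    err = pred - y                  # gradient of MSE w.r.t. prediction (up to a constant)
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)
print(loss0, loss)  # training drives the fit error down
```

The network "learns" sin(x), but nothing about this says the mechanism resembles whatever brains do; it was simply optimized to reproduce its training signal.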


People are trying to run experiments on Claude - see https://news.ycombinator.com/item?id=43495617


Yes, I know of this "study". AFAIK it has not been subjected to peer review, and it uses a lot of suggestive language. Other studies have shown that these things use large bags of heuristics, which isn't surprising given that they are trained on unimaginably large amounts of tokens.

I am not an expert ... but to me, anything associated with these companies is marketing. I understand that makes me a "stick in the mud", but it's not a crime to be skeptical! THAT SHOULD BE THE DEFAULT ... we used to believe in gods, demons, and monsters. Given that Anthropic is very, very closely tied to EA and longtermism, and given that this is the "slickest" paper I have ever read ...

If I had the mental capacity to read a good chunk of the internet and millions of pirated books ... I wouldn't be confused by perturbations of questions I had already seen.

I am sure there are lots of cogent rebuttals to what I am saying, and hey, maybe I'm just a sack of meat that is miffed about being replaced by a "superior intelligence" that is "more evolved". But that isn't how evolution works either, and it's troubling to see that sentiment becoming so prevalent.

