Hacker News | naveen99's comments

What’s your favorite school of mental gymnastics?

I mean, how did you get to be an expert programmer before? Surely it can’t be harder to learn to program with AI than without AI. It’s written in the book of ResNet.

You could swap out ai with google or stackoverflow or documentation or unix…


Yes, people underwrite their own debt with their future labor. Economists don’t count this leverage on future labor as wealth for the poor but for some reason count it for the rich after renaming it to bonds.


It’s not like humans are standing still. Humans are still improving faster than ai.


Without usa the way it is, Australia would be much less prosperous. From the perspective of employers and consumers, labor costs are the same. It’s just that in Europe and Australia, taxes are a larger percentage of cost of labor.


CDC also says 74% of Americans are overweight. https://www.cdc.gov/nchs/fastats/obesity-overweight.htm

I guess it’s not as bad as women rating 80% of men as unattractive.

Some people just don’t believe in normal distributions or binary search. I don’t believe disabilities, obesity, or attractiveness follow a power law.
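To make the normal-distribution point concrete: under a normal model, a "74% overweight" headline just tells you where the cutoff falls relative to the mean, not that the tail is heavy. A quick sketch — the BMI parameters (mean 28, sd 5) are illustrative assumptions, not CDC numbers:

```python
from statistics import NormalDist

# Hypothetical BMI distribution: mean 28, sd 5 (made up for illustration).
bmi = NormalDist(mu=28, sigma=5)

# Share of people above the standard overweight cutoff of BMI 25:
share_overweight = 1 - bmi.cdf(25)
print(round(share_overweight, 2))  # ~0.73 under these made-up parameters

# The cutoff sits near the 27th percentile -- well inside the bulk of a
# normal curve. No power-law tail is needed to produce a "74%" headline.
print(round(bmi.cdf(25), 2))  # ~0.27
```

The point being: a large fraction above a fixed threshold is perfectly compatible with an ordinary bell curve whose mean has drifted past the threshold.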


No, because the only thing keeping the fed from lowering interest rates and juicing real estate and everything else is a strong labor market.


You have it backwards. Layoffs these days increase stock value because everyone is betting that bad job numbers will force the Fed to lower interest rates, something Powell has hesitated to do in order to keep inflation in check.

It's a very screwed up incentive to be rewarded for breaking the system, but that's 2025 in a nutshell.


> Layoffs these days increase stock value because everyone is betting that bad job numbers will force the Fed to lower interest rates.

Layoffs generally cause stock prices to go up because of anticipated cost reduction/efficiency: https://doi.org/10.1093/ser/mwab046

If you have a source making the case that the layoff/stock-price correlation has a different cause these days, it would be interesting to read. But I doubt anything has changed.


>If you have a source making the case that the layoff/stock-price correlation has a different cause these days, it would be interesting to read.

The phenomenon is pretty recent, so there won't be any studies on it for a while. But look up "Jobless Boom". Here's a piece of what I'm talking about:

https://www.cbsnews.com/news/jobless-boom-ai-economy-labor-m...

>For much of 2025, the job market was described by economists as "no hire, no fire," meaning an environment where workers could count on job security even as hiring around the U.S. cooled. But conditions have changed, and the Federal Reserve cut its benchmark interest rate in both September and October, citing increasing risks to employment growth and with Fed Chair Jerome Powell noting that policymakers are closely watching layoff announcements by big employers.

Personally, I think the AI efficiencies are a smokescreen, but the point that this job contraction is forcing the Fed's hand is hard to ignore after some two years of holding rates steady.


And the fact that everybody knows real inflation is still high.


How can inflation not be high in the US, given increased tariffs, deportations and uncertainty?


The Fed has cut rates five times in the past 14 months.


I don’t understand why Intel doesn’t build a fab in Taiwan or another lower-cost location?


No state infrastructure available to protect them, among many other considerations.


Intel's most advanced production fab is in Ireland, where incomes are higher than in Taiwan but much lower than in the US.


Intel has announced that Intel 18A manufacturing will take place in Arizona. Salaries are a relatively small amount of the total costs of running a fab.

https://newsroom.intel.com/client-computing/intel-unveils-pa...


lack of skilled labor, probably


Taiwan surely has some poachable labor.


When it comes to machine learning, research has consistently shown that pretty much the only thing that matters is scaling.

Ilya should just enjoy his billions raised with no strings.


> When it comes to machine learning, research has consistently shown that pretty much the only thing that matters is scaling.

Yes, indeed, that is why all we have done since the 90s is scale up the 'expert systems' we invented ...

That's such an ahistorical take it's crazy.

* 1966: failure of machine translation

* 1969: criticism of perceptrons (early, single-layer artificial neural networks)

* 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University

* 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report

* 1973–74: DARPA's cutbacks to academic AI research in general

* 1987: collapse of the LISP machine market

* 1988: cancellation of new spending on AI by the Strategic Computing Initiative

* 1990s: many expert systems were abandoned

* 1990s: end of the Fifth Generation computer project's original goals

Time and time again, we have seen that each wave of academic research begets a degree of progress, improved by the application of hardware and money, but ultimately only a step towards AGI, ending with the realisation that there's a missing cognitive ability that can't be overcome by absurd amounts of compute.

LLMs are not the final step.


Well, expert systems aren’t machine learning; they’re symbolic. You mention perceptrons, but that timeline is proof of the power of scaling, not against it: they didn’t start to really work until we built giant computers in the ~90s, and they have been revolutionizing the field ever since.
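For context, the 1969 perceptron criticism was that a single-layer network cannot represent non-linearly-separable functions like XOR; one hidden layer removes the limitation. A minimal NumPy sketch — the layer sizes and hyperparameters are arbitrary toy choices, not anything from the historical papers:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR: not linearly separable

# One hidden layer is the whole fix; a single-layer perceptron
# cannot represent XOR at all (the 1969 criticism).
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    h = np.tanh(X @ W1 + b1)     # hidden layer
    out = sigmoid(h @ W2 + b2)   # output probability
    d_out = out - y              # cross-entropy gradient through sigmoid
    d_h = (d_out @ W2.T) * (1 - h ** 2)  # backprop through tanh
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print((out > 0.5).astype(int).ravel().tolist())  # [0, 1, 1, 0] once trained
```

Same data, one extra layer, problem solved — which is roughly the story of why multi-layer networks plus compute won out.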


If you think scaling is all that matters, you need to learn more about ML.

Read about the No Free Lunch Theorem. Basically, the reason we need to "scale" so hard is that we're building models we want to be good at everything. We could build models that are as good as LLMs at a narrow fraction of the tasks we ask them to do, at probably 1/10th the parameters.
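The No Free Lunch point can be shown exactly at toy scale: averaged over every possible boolean target function on two inputs, any fixed learner predicting a held-out point scores exactly 50%. A small self-contained sketch — the two "learners" are illustrative stand-ins, not real algorithms:

```python
from itertools import product

inputs = list(product([0, 1], repeat=2))         # the 4 points of {0,1}^2
all_functions = list(product([0, 1], repeat=4))  # all 16 boolean labelings

def always_zero(train):
    # Trivial learner: ignores the data entirely.
    return 0

def majority(train):
    # Slightly less trivial: predicts the majority training label.
    labels = [y for _, y in train]
    return int(sum(labels) * 2 >= len(labels))

def avg_accuracy(learner):
    correct = total = 0
    for f in all_functions:        # every possible target function
        for held_out in range(4):  # every choice of unseen test point
            train = [(inputs[i], f[i]) for i in range(4) if i != held_out]
            correct += (learner(train) == f[held_out])
            total += 1
    return correct / total

print(avg_accuracy(always_zero), avg_accuracy(majority))  # 0.5 0.5
```

Averaged over all targets, no learner beats any other; real-world gains come from matching a model's inductive bias to the narrow slice of problems we actually care about — which is the argument for smaller specialized models.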


Are reranker models an example of this? Do they still underperform compared to LLMs?


Indeed. This is the "bitter lesson".

https://en.wikipedia.org/wiki/Bitter_lesson


Didn’t OpenAI themselves publish a paper years ago showing that scaling parameters has diminishing returns?


Time limit on patents is supportive of GPL in the limit.

Public trading of most trade secrets along with their owner corporations is also GPLish.

