No, but this is irrelevant. Of course people don't believe that the billionaire class is aligned with the working class, which is why billionaires buy up media outlets to align public opinion with their goals. I think a comparison with historical fascism is indeed appropriate here. Historical fascism saw itself as a workers' movement, but only insofar as workers were more easily exploited for the goals of a counter-cultural elite than the educated middle classes were. Karp and Thiel may well have come to the same conclusion by observing current US politics. And just as historical fascists recognized the disruptive force of the new mass media of their time, so do Karp and Thiel.
> It's laziness because they have little CS fundamentals to base such claims on
So, what CS fundamentals do you need to evaluate whether AI is the real thing or will disappoint in the future? Until a few months ago, coding agents were met with skepticism; then Anthropic introduced their new model and, with it, a hype train that cannot be rationally justified. Look, SOTA LLMs, and coding agents in particular, are impressive. However, current predictions about the future of software development (and the world in general) are speculative. There is little to no data showing whether AI can deliver on its promises. How could there be in this short time frame? No one knows what the future will hold, no one knows how coding agents will be integrated into our work and everyday lives in the long run, or what hard limitations they will reveal. No one can tell you how professions will change in the coming years; every prediction is purely speculative, and anyone making prophecies is either trying to cope with the uncertainty themselves or has some stakes in the AI bet. It would be nice if people were actually humble enough to admit that they have no idea what will happen in the future, instead of writing the hundredth doom-and-gloom post.
> However, current predictions about the future of software development (and the world in general) are speculative.
It's amazing to me how those willing to seize on the speculative nature of ANY uncertainty cannot recognize the inherent uncertainty of the inverse.
And a lot of exposure to deductive reasoning, vague ideas of automated theorem proving and formalization.
I won't pretend it's easy, but let's be clear: a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things, who just go around beating their chests, and who will continue doing so until the train hits them.
There are 2-3 minor architectural changes between now and what I would identify as a completely unbounded AGI with clearly discernible dynamic, self-defined objective functions and self-defined procedures for training and inference. It can be done in megabytes. Oh god. Get me out of this forum. I wish to return to my code editor.
What exactly are you claiming here? That a handful of theorems about the limits of mathematics and provability somehow combine to show that current LLM-based AI developments will inevitably live up to what is expected of them? And that this is obvious to a select few? All of that seems unlikely, to say the least.
> a small fraction of people who know things are being forced to entertain the hysteria of a vast majority who are unwilling to know things and just go around beating their chests and will continue doing so
> Most people treat higher education as a pass to good paying job and I think it's unrealistic to think otherwise.
Yes, and that's a problem. If the advent of coding agents leads to people who are only in it for the money staying away from higher education - good. Those people are the reason higher education turned to shit anyway, and maybe it will be a nice change when people go into higher ed out of curiosity rather than because they smell money.
On the one hand, I agree, since I see way too many dispassionate people working in this profession; on the other hand, this requires businesses to understand that a software developer is more than a code monkey. I am not sure we are heading there. Currently, it seems more like many non-IT people think that these monkey imitations are the same as what software developers have been doing for years, and that they therefore no longer need any good developers. For some CRUD businesses this might even be true.
A lot of reforms happened post-World War(s). After beating Jerry, thousands of soldiers demanded better healthcare, for example, and promptly voted Churchill out.
It just wasn't a civil war, but boy howdy was there bloodshed in WW2.
Given that comparative advantage offers an off-ramp for a lot of what we currently understand as "economics", if the author is positing that we will be beyond this, then your response is missing the forest for the trees.
There is no indication that the surplus extracted by automated labour will be distributed to the advantage of the population. If we look at how things are going at the moment, there will be a further concentration of power and capital. And I don't see any reason why the billionaire class should give this up. You could, of course, give an argument for why things will be different this time.
If comparative advantage does not hold, then that's really something: no one understands what happens in that future, and proposing some random solution at this point is unbelievably premature.
I think Nick Land will suffer the same fate as Friedrich Nietzsche. People vibe with the aesthetics of his philosophy without having read or understood his works. People will take ideas and quotes out of context in order to better rationalize their own actions, which can already be observed in Silicon Valley celebrities like Andreessen.
Nick Land's works are nuanced, and he may be one of the most important thinkers of this century. I can highly recommend some of his interviews, because they are less obfuscated than his written works.