>But one thing the world can do for the Big 5 is to try to provide them with military stability. While Pakistan and Tanzania are largely stable (despite Pakistan’s frequent coups), the DRC, Nigeria, and Ethiopia are plagued by near-continuous warfare between fragmented ethnic groups, with the occasional religious movement thrown in.
This is the mindset that caused the rise of the far right. People like this have their heads so far up their asses that they can't see the problems in their own land and need to meddle in others.
He calls them “basket case countries” that “can’t manage themselves”, immediately points fingers at Global South countries, and then goes on to talk about everyone providing them with military stability…
I would love to be so confident as to write this kind of weird stuff with a straight face.
Exactly, this seems like a fear-mongering hit piece bashing mostly African countries.
India already has far more people than the countries he mentioned, and I fail to see how it can be described as a well-run country; it has more poor people than the whole of Africa combined.
Initially the Malthusian panic was about a world running out of standing room. When that did not pan out, it morphed into a fear of brown and black people multiplying with abandon and overrunning supposedly well-run countries. The majority of people who have migrated to, or sought asylum in, Europe in recent years (since the upheavals of the Arab Spring) have been Middle Eastern, not African.
Yet people persist in promoting a narrative of a supposed invasion of the West by Africans, so much so that it’s become the main geopolitical issue in the minds of ordinary people there, fanning the flames of extremist far-right sentiment.
He does have a point about trade, though. More trade and less "aid" will help. Aid is a form of control, and the amounts given are a pittance anyway.
Almost no rebuttals on the internet are intellectually honest these days. Take the exact same action by a President of the other party, and it's considered "decisive" and "shows our enemies we mean business". But since it's not coming from your political party, it's "oh no, what is this guy doing? He's going to get us all unalived."
People like Eliezer and Nick Bostrom are living proof that if you say enough and sound smart enough, people will listen to you and think you have credibility.
Meanwhile, you won't find anyone on here who is an author of "Attention Is All You Need". You know, the paper that's actually the driving force behind LLMs.
The context is that rwaksmunski implied that people have been saying "AGI is 10 years away" for ages, and I was pointing out that the sort of people who say "AGI is X years away" have not in fact been setting X=10 until very recently.
I wasn't claiming that the people on that list are the smartest or best-informed people thinking about artificial intelligence.
But, FWIW: from about 13:20 in https://www.youtube.com/watch?v=_sbFi5gGdRA, Ashish Vaswani (lead author on that paper) is asked what will happen in 3-5 years, and if I'm understanding him right he thinks AI systems might be solving some of the Millennium Prize Problems in mathematics by then; from about 17:10 he's asked how scientists will work ~5 years in the future, and he says AI systems will be apprentices or collaborators; at any rate, he's not denying that human-level AI is likely to come in the near future.

From about 1:12:40 in https://www.youtube.com/watch?v=v0gjI__RyCY, Noam Shazeer (second author on that paper), in response to a question about "fast takeoff", says that he does expect a very rapid improvement in AI capabilities; he's not explicit about when he expects that to happen or how far he expects it to go, but my impression from the other bits of that discussion I watched is that he too isn't claiming that AI systems won't be at or beyond human level in the near future.

From about 49:00 in https://www.youtube.com/watch?v=v0beJQZQIGA, he's asked: if hardware progress stopped, would we still get to AGI? He says he thinks yes, which suggests that he does consider AGI to be in the foreseeable future, though it doesn't say much about when.
That's all fairly vague, but I very much don't get the impression that either of these people thinks that AI systems are just dumb stochastic parrots or that genuinely human-level AI systems are terribly far off.
You know LLMs are regurgitating when they'll contradict their own statements just from clicking 'redo' on a prompt. I doubt that if you were asked the same question twice, you would suddenly say the complete opposite of what you just said.
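To be fair, the 'redo' variability is mostly a property of sampled decoding rather than the model "changing its mind": at nonzero temperature each token is drawn from a probability distribution, so regenerating the same prompt can follow a different path. A minimal sketch of the mechanism in Python (the tokens and probabilities here are made up for illustration, not taken from any real model):

```python
import random

# Toy next-token distribution: made-up numbers, not any real model's output.
probs = {"yes": 0.55, "no": 0.40, "maybe": 0.05}

def sample(probs, temperature=1.0):
    """Draw one token. Rescaling p -> p**(1/T) is equivalent to dividing
    the logits by T before the softmax, i.e. standard temperature sampling."""
    tokens = list(probs)
    weights = [probs[t] ** (1.0 / temperature) for t in tokens]
    return random.choices(tokens, weights=weights)[0]

# Two "redo" clicks on the same prompt:
print(sample(probs))  # might print "yes"
print(sample(probs))  # might print "no": same prompt, opposite answer
```

Run it a few times and the two prints will disagree fairly often; lowering the temperature toward 0 makes the output nearly deterministic. That's the whole 'redo' effect, separate from the question of whether the model is regurgitating.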
Comparing LLMs trained on Reddit comments to people who learn to speak as a byproduct of actually interacting with people and the world is nuts.
Computers being good/fast at automating/calculating things that people find difficult is not a new phenomenon. By your standard, we've had general intelligence for decades.
1. Sees a problem
2. Does something
3. (Un)intended consequences of doing something (problem is now worse or different)
4. Back to number 1