My site has both subscription and one-time donations. The subscriptions bring in 90% of the revenue, even though more people have PayPal accounts than accounts with specific crowdfunding services.
A lot of people expect social media to serve them things to read, rather than following specific sites, and bloggers have a much keener sense of what will be rewarded by subscribers. In the old days, you could make a bit of money just from views, and there were many more places to make money from writing and speaking offline. There were also more long-form musings about academic life which today would be snarky posts on Bluesky. As posting on microblog sites became sometimes professionally useful, academics put their energy into that and let their longform blogs fade (or just got older and busier and were not replaced by younger academic bloggers).
It's worse than that. Someone wants to set him up with a lab in Austin, TX. It's the CCP that thinks "maybe we should not let the mad scientist out where someone will let him continue his experiments." (A later story says that he will direct assistants in Texas over the Internet.) https://www.scmp.com/news/china/science/article/3271952/chin...
You didn't have to punish athletes to make them wear Nike and Adidas shoes, because they were obviously better than plain sneakers. You didn't have to punish graphic artists to make them use tablets, because tablets are so convenient for digital art. But a lot of bosses are convinced that if their staff don't find these tools useful for their tasks, it's the line workers who are wrong.
Sure. But there are probably also plenty of examples where the opposite is true, with people hesitant to adopt newer, better technologies: not everyone wanted to use computers early on ("the old lady in accounting," etc.), people distrusted new medications, farmers were slow to adopt tractors, people were afraid of electricity (yes!), and so on. Change is hard, and people generally don't want to change. It is even harder if you fear, as roughly 25% of people do depending on where you are in the world, that AI could take your job (or a large part of it) in the future.
I use AI and it makes me a lot more productive. I have coworkers who don't use AI and are still productive and valued. I also have coworkers who use AI and are useless. Using AI use as a criterion for layoffs seems dumb, unless you have no other way to measure productivity.
AI also helps most with low-value tasks. The really valuable problems are the ones that can't be solved easily, and AI is usually much less help with those (e.g., system design, kernel optimisation, making business decisions). I've seen many people say how AI helps them complete more low-value tasks in less time, which is great but not as meaningful as the other work that AI is not that good at yet.
You have to get quite sophisticated to use AI for most higher-value tasks, and the ROI is much less clear than for just helping you write boilerplate. For example, using AI to help optimise GPU kernels by having it try lots of options autonomously is interesting to me, but not trivial to actually implement. Copilot is not gonna cut it.
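To make the "try lots of options autonomously" idea concrete, here is a minimal sketch of an autotuning loop, not anyone's actual setup. `run_kernel` is a stand-in pure-Python workload and the candidate block sizes are illustrative; in a real version, each candidate would compile and launch an actual GPU kernel (or an AI-generated variant) and be timed the same way.

```python
import time

def run_kernel(block_size: int, n: int = 100_000) -> int:
    # Stand-in for launching a real kernel: cost varies with the
    # tuning parameter, which is all the search loop cares about.
    total = 0
    for start in range(0, n, block_size):
        total += sum(range(start, min(start + block_size, n)))
    return total

def autotune(candidates):
    # Time each candidate configuration and keep the fastest one.
    best, best_time = None, float("inf")
    for block_size in candidates:
        t0 = time.perf_counter()
        run_kernel(block_size)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best, best_time = block_size, elapsed
    return best

best = autotune([64, 256, 1024, 4096])
print(best)
```

The harness is trivial; the hard parts the comment alludes to are generating plausible kernel variants in the first place and benchmarking them reliably, which is why Copilot-style completion doesn't get you there.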
If something is really clearly better, people come around. Some people never will but their children and apprentices adopt the new ways. A whole community of practice experimenting is very powerful. Everyone does not move at once, but people on this site know how often the cool new thing turns out to be a time bomb.
People wouldn't keep using old shoes, and I am old enough to remember graphic artists who wouldn't use computers. It takes time. At some point, it will be a no-brainer. Yet, it will not be simply because method A is so much better than method B. It will be because people using method B change, retire, or are fired.
On the other hand, if you have ever worked in a corporation, you may have noticed that some people absolutely refuse to learn how to use Excel. Even simple column filters are beyond the capacity of most Excel users.
For some reason, big companies often tolerate people being horribly inefficient doing their job. Maybe it is starting to change?
If people found this useful for putting out "good" work instead of slop, they would use it. I promise you it's the employees who are right: the output is the same AI slop we see everywhere. Wanting to turn your company into an AI slop farm is questionable logic.
Writers move to sites like Substack (or, 15 years ago, Blogspot) funded by other people's money, the way a software developer gets into an AI startup (or, 5 years ago, crypto). You can make bank in the short term even if you should know it will not last. Substack subsidizes individual creators and markets their blogs as cooler than old blogs; Google subsidized web ads and upranked blogs in search results. Yes, it is no fun if you like stability, and it's not a game I play.
The party line has shifted, comrade. This year, with posts on Richard Lynn etc., Scott Alexander is saying in public the same thing he said in a private email: that he thinks race pseudoscientists and neoreactionaries are brilliant and precious, and that as many people as possible need to read the best 1% of their ideas. He is no longer pretending that he thinks they have nothing to offer and just has them in his blogroll because... why, exactly?
The private email from 2014 explained how he hoped people would respond to the anti-neoreactionary FAQ, and his posts this year are 100% consistent with that.
GiveWell is an example of the short-termist end of EA. At the long-termist end, people pay their friends to fantasize about Skynet at "independent research institutes" like MIRI and Apollo Research. At the "trendy way to get rich people to donate" end, you get the purchase of a retreat center in Berkeley, a stately home in England, and a castle in Czechia so Effective Altruists can relax and network.
It's important to know which type of EA organization you are supporting before you donate, because the movement includes all three.
I assume that GiveWell is the most popular of them. I mean, if you donate to MIRI, it is because you know about MIRI and because you specifically believe in their cause. But if you are just "hey, I have some money I want to donate, show me a list of effective charities", then GiveWell is that list.
(And I assume that GiveWell top charities receive orders of magnitude more money, but I haven't actually checked the numbers.)
Even GiveWell partnered with the long-termist/hypothetical-risk type of EA by funding something called Open Philanthropy. And there are EA organizations that talk about "animal welfare" and mean "what if we replaced the biosphere with something where nothing with a spinal cord ever gets eaten?" So you can't trust that "if it calls itself EA, it must be highly efficient at turning donations into measurable good." EA orgs have literally hired personal assistants and bought stately homes for the use of the people running the orgs!
In that essay, Scott Alexander more or less says "so Richard Lynn made up numbers about how stupid black and brown people are, but we all know he was right, if only those mean scientists would let us collect the data to prove it." That is the level of thinking most of us moved past in high school, and he is an MD who sees himself as a Public Intellectual! More evidence that thinking too much about IQ makes people stupid.