I agree, we should play fair and form opinions based on principles more, but I think there is a caveat to that: if what you are defending actually causes a considerable amount of harm or violence, then you need to start thinking in a more nuanced way and weighing the pros and cons.
That's true, but now everything depends on "what is economically feasible", and unless we are experts ourselves we can't really know.
We need to rely on experts to tell us what is economically feasible, but those experts are the ones under pressure from lobbyists to say one thing or the other.
Some parties say that it's economically feasible and that it will actually save money; other parties say that it's not feasible and it would cost too much.
Oil companies and countries that sell oil will say it's not feasible, and companies that produce panels will say that it is.
We cannot rely on "what is economically feasible", because unless you are an expert you will have to get that info from one side or the other, and even independent bodies will be under lobbying pressure.
>> The creator/perpetrator of the unethical process should be held accountable and all benefits taken back as to kill any perceived incentive to perform the actions, but once the damage is done why let it happen in vain?
That's very similar to other unethical processes (for example, child labour), where we see that governments are often either too slow to move or just not interested, and that's why people try to influence the market by changing what they buy.
It's similar for AI: some people don't use it so that they don't pay the creators (in money or in personal data) to train the next model, and at the same time they signal to the companies that they wouldn't be future customers for that next model.
(I'm not necessarily in the group of people who avoid using AI, but I can see their point.)
I get that, it's just not the proper way to deliver justice, and imo it is unethical if the item you are boycotting has the ability to improve people's lives. Whether AI is at the level where we should consider not using it to be unethical is a different discussion (I don't think so yet, but I can see a future where it could be the case). My point is more that I don't think it is unethical to continue using it even if the boycott method is effective at its goals. It is vigilante justice.
In my experience I've seen the complete opposite of "juniors looking like savants": there are a few pieces of code made by some junior and some mid-level engineers in my company (one also involving a senior) that were clearly made with AI, and they are such a mess that they haven't been touched since, because they're just impossible to understand. This wasn't caught in the PR because it was so large that people didn't actually bother reading it.
I did see a few good senior engineers using AI and producing good code, but from junior and mid-level engineers I have witnessed the complete opposite.
Platforms that have the useful stuff from social media without the addictive part already exist:
Forums, micro-blogging, blogs, news aggregators, messaging apps, platforms for image portfolios, video sharing platforms.
And most of them existed before the social media boom, but they just don't get as huge because they are not addictive.
The useful part of a social media platform is so small that if you put it on its own you don't get a famous app; you get something that people use for a small part of their day and otherwise carry on with their lives.
A social media platform essentially leverages the huge and constant need that humans have to socialize, and claims that you can do it better and more through the platform than in real life; it does so by making sure that enough people in your social circle prioritise the platform over getting together in person.
And I believe this is also the main harmful part of them: people not getting actual real social time with their peers and then struggling with their mental health.
Independently of the hypothesis/conspiracy that the big investors and big tech don't actually believe in AI, the measurable outcome the OP is pointing at remains the same: very few people will end up owning a big chunk of the natural resources, and it will not matter whether those are used to power AI or for anything else.
Perhaps governments should add clauses to the contracts they make to prevent this big power imbalance from happening.
> In a normal market there would be no incentive to side load because legitimate app owners would have no incentive not to have users load apps outside of the secure channel of the official app store, and users would have no incentive to go outside of it.
> Solve the platform tax, solve the side loading issue.
I think for a large part of legitimate app owners there would indeed be no incentive, but there are other reasons/incentives for legitimate app owners to go outside the official app store even with no tax; a few that come to mind are:
- open source devs might prefer to publish their app on a community-led store.
- users might be trying to keep an old phone functioning with an unofficial custom Android build that has no support for the store.
- developers creating apps for themselves and their friends don't need to publish the app publicly.
- companies creating apps just for work phones might want to keep them private, outside of any store.
- a company providing a "build-your-app-with-AI" service might prefer to just deliver a final APK file.
I think it's important to remember that there are loads of other reasons outside the financial one to keep the ability to install what you want on your phone.
If Google dropped any tax it puts on its store today, the problem with these new changes would still be there.
I might be wrong, but I guess it might also be easier for leadership to pressure and influence personal communications than to avoid processing official reports made through their own website.
An article reading "they ignored emails from Amnesty International" sounds different from "they are not acting on this report made on their official website".
I wonder if this could also cause atrophy of the ability to understand long, detailed text in a large part of the population, probably all those who don't work in an "information heavy" field.