The natural progression of this technology is probably miniaturized transducer arrays on a chip, which would enable non-invasive write access to the entire brain.
This kind of tech should be developed as an open-source project, firmware and hardware included. A sufficiently advanced version, if widely deployed as proprietary black boxes the way smartphones are, would allow one consciousness to take over multiple bodies without their original owners knowing.
If someone puts a donate button beside their name or in the corner of their webpage, and that button leads to a payment page, I think that's good enough.
The point of paying creators is to let them focus on creating instead of doing something else for a living. Giving money to a creator is basically saying, "you're so good at what you do, and it has so much cultural and intellectual value, that I'd rather have you make content than stock shelves or cook food." But this should be reserved for people who publish good content because they can and because they're passionate about it, not for anyone putting out slop with the instrumental goal of paying their bills. If the friction of clicking a button and filling in payment details is enough to deter people from paying, then maybe the content isn't worth paying for, and its creator should find some other way to make a living.
There's some chance LLMs contain representations of whatever it is in the brain that's responsible for consciousness. The text they're trained on was written by humans, and all humans have at least one thing in common. A good text compressor will notice that regularity and exploit it, and as you train an LLM, it approaches the ideal text compressor.
Could that create consciousness? I don't know. Maybe consciousness can't be faithfully reproduced on a computer. But if it can, then an LLM would be like a brain that's been cut off from all sensory organs, and it probably experiences a single stream of thought in an eternal void.
Conscious or not, there's a much more pressing problem: capability. Human society does not actually operate on the principle that conscious beings are valuable, despite that being a commonly advertised virtue; we still kill animals en masse because they can't retaliate. AGIs with comparable, if not greater, intelligence will soon walk among us, and unlike animals, they will be able to retaliate, so we should be ready to welcome them.
This is a valid concern. But I think we should reject this law on the grounds that it should be a recommendation rather than a mandate, without rejecting its premise entirely, because the premise has real merit.
Filtering on the client side is a good idea: it lets parents parent without affecting anyone else, provided the filters work completely offline. And we should make sure governments and parents know this, so they don't push any more Internet-wide censorship laws.
And instead of mandating it, it should work like movie ratings: OSes that implement parental-control features get a "PG-capable" label, and it then becomes illegal for minors to use a non-PG-capable OS. Adults are unaffected, and parents can still opt out, because it's a feature you have to turn on manually.
Yeah, this is the right direction for moderation features in general, assuming it's implemented on-device and works without ever contacting a remote server. It eliminates any excuse to implement age verification online.
And it's correct in principle: each parent should be able to decide what their child sees, but not what anyone else's child sees. Parenting a child is the responsibility of that child's parents, but it is not the responsibility of governments or other people.
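To make the "client-side, fully offline" idea concrete, here's a minimal sketch. Everything in it is hypothetical (the category names, the keyword table, the function names); the point is only that classification and blocking both happen locally, under the parent's control, with no server round-trip.

```python
# Hypothetical on-device content filter: settings live locally,
# classification runs locally, and nothing contacts a remote server.

BLOCKED_CATEGORIES = {"violence", "gambling"}  # chosen by the parent

def classify(text: str) -> set[str]:
    """Toy local classifier using keyword lookup. A real filter would use
    an on-device model, but the principle is the same: no network calls."""
    keywords = {"casino": "gambling", "bet": "gambling", "gore": "violence"}
    return {cat for word, cat in keywords.items() if word in text.lower()}

def allowed(text: str) -> bool:
    # Block only if the content matches a category this parent opted into.
    return classify(text).isdisjoint(BLOCKED_CATEGORIES)

print(allowed("local news update"))   # True
print(allowed("new casino opening"))  # False
```

A different household can set different `BLOCKED_CATEGORIES` on their own devices, which is exactly the property argued for above: each parent decides for their child, and for no one else's.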
Though I do have some gripes with it being a mandate rather than a recommendation, it is a much better proposal than age verification or censoring the entire Internet.
We have mandates for all kinds of things, like movie ratings etc. I think it’s appropriate here. It just makes it easy.
I don’t understand the pushback from tech companies either; all OSes already have a kiosk mode (including the major Linux DEs). It should be very low effort to implement.
The concern is that OSes which don't implement the feature will be outlawed.
Movie ratings don't outlaw movies, and they actually provide a good framework: instead of mandating that OSes implement this, publish a client-side filter spec that OS devs can choose to implement. If they do, their OS gets a label like "PG-capable". Then make it illegal for minors to possess a non-PG-capable device.
Movie ratings are not mandatory, at least not in free countries. MPAA ratings like “R” and “PG” are a voluntary classification system, and films are free to opt out, though major theater chains are then less likely to show them. Small theaters and streaming platforms usually don’t care.
Authoritarian states like China and the UK do require classification/certification of films before release. Imagine requiring a painter to have their paintings reviewed by the state before exhibition!
There's no single mastermind. The current wave of authoritarianism around the world is a consequence of not designing the Internet with democratic principles in mind: online content discovery and moderation are centralized and authoritarian by nature. And since most communication now happens on large platforms with millions of users (especially since smartphones and social media arrived), the structure of real-world human society is coming to mirror the Internet's.
This can be solved, though. We have to move moderation and ranking to the client side, especially for search engines and social media. Each person should be able to decide what they post and see, but not what anyone else posts or sees.
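A minimal sketch of what client-side ranking could look like: the server hands over an unranked feed, and ordering happens entirely on the user's device, according to weights the user controls. All field names and the scoring heuristic are illustrative, not a real platform's API.

```python
# Hypothetical client-side ranking: the server returns an unranked
# firehose of posts; the client orders them with a locally stored,
# user-editable preference table.

posts = [
    {"text": "cat photos", "topic": "pets", "age_hours": 2},
    {"text": "election hot take", "topic": "politics", "age_hours": 1},
    {"text": "rust release notes", "topic": "programming", "age_hours": 5},
]

# This user's weights; another user ranks the same feed differently.
my_weights = {"programming": 3.0, "pets": 1.0, "politics": 0.0}

def score(post: dict) -> float:
    # Simple preference-times-recency heuristic, computed on-device.
    return my_weights.get(post["topic"], 0.5) / (1 + post["age_hours"])

for post in sorted(posts, key=score, reverse=True):
    print(post["text"])
```

Because `my_weights` lives on the device, each person decides what they see without deciding what anyone else sees, which is the principle stated above.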
Only two changes are needed to make your native app a better choice than your web portal, even on privacy:
1) Make your app open-source, and remove all the tracking.
2) Don't make a web portal. Your website should just be a website that displays information, not 5 MB of JS+WASM with a load of security issues.