Does OpenAI have an incentive to get age prediction "wrong" so that more people "verify" their ages by uploading an ID or scanning their face, allowing OpenAI to collect more demographic data just in time to enable ads?
I have worked in this space, and my experience is that age/identity verification is usually driven by regulatory or fraud requirements, typically externally imposed.
Product managers hate this: they want _minimum_ clicks between onboarding and getting value, and any benefit that could be derived from the data is minuscule compared to the detrimental effect on signups and retention when this stuff is put in place. It's also surprisingly expensive per verification and wastes a lot of development and support bandwidth. Unless you successfully outsource the risk, you end up with additional audit and security requirements from handling radioactive data. The whole thing is usually an unwanted tarpit.
Depends on what product they manage, at least if they're good at their job. A product manager at a social media company knows it's not just about "least clicks to X", but about a lot of other things along the way.
Surely the product managers at OpenAI are briefed on the potential upsides of having a concrete ID for all users.
Making someone produce an identity document or turn on their camera for a selfie absolutely tanks your funnel. It's dire.
The effect is strong enough that a service which doesn't require it will outcompete a service which does, which leads to nobody doing it in competitive industries unless a regulator forces it for everybody.
Companies that must verify will resort to every possible dark pattern to get you over this massive "hump" in the funnel: making you complete the rest of signup before demanding the docs, promising you free stuff or credit on successful completion, etc. There is a lot of alpha in figuring out ways to defer it, reduce its impact, or make the process simpler.
There is usually a fair bit of ceremony and regulation around how verification data is used, and audits of what happens to it are always a possibility. Sensible companies keep IDV data segregated from product data.
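For illustration, here's a minimal Python sketch of that segregation; the names (`IdvVault`, `ProductDB`) and the retention policy are assumptions for illustration, not any real system. The point is the shape: raw documents live only in an access-controlled vault with a purge deadline, while the product side sees nothing but a pass/fail flag.

```python
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class VerificationResult:
    user_id: str
    passed: bool
    checked_on: date  # coarse metadata only; no document contents


class IdvVault:
    """Segregated, access-controlled store for raw verification documents."""

    def __init__(self):
        self._docs = {}

    def store(self, user_id: str, document_blob: bytes, retention_days: int = 30):
        # Raw ID images / selfies live only here, each with a purge deadline.
        expires = date.today() + timedelta(days=retention_days)
        self._docs[user_id] = (document_blob, expires)

    def purge_expired(self):
        # Run periodically so radioactive data doesn't accumulate.
        today = date.today()
        self._docs = {u: (b, d) for u, (b, d) in self._docs.items() if d > today}


class ProductDB:
    """The product side sees only a pass/fail flag, never the documents."""

    def __init__(self):
        self._flags = {}

    def record(self, result: VerificationResult):
        self._flags[result.user_id] = result.passed

    def is_verified(self, user_id: str) -> bool:
        return self._flags.get(user_id, False)
```

Keeping the two stores apart is what makes the audits survivable: the blast radius of a product-database leak excludes the documents entirely.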
> Making someone produce an identity document or turn on their camera for a selfie absolutely tanks your funnel. It's dire.
Yes, but again, a good product manager wouldn't just eyeball the success percentage of a specific funnel and call it a day.
If your platform makes money by subtly hinting at which products to prefer, and you have the benefit of being the current market leader, then forcing people to upload IDs as part of the signup process might be a sacrifice worth making.
> No one wants to upload an ID and instead is moving to a competitor!
Comments on the internet are rarely proof of anything, and that holds here too.
If no one wanted to upload an ID, we'd see ChatGPT shutting down within a couple of weeks, or they'd remove the ID verification. Personally, I don't see either of those happening, but let's wait and see if you're right or not. Email is in my profile if you want to brag about being right later; I'll be happy to be corrected then :)
The average HN user, maybe, but elsewhere I see people uploading their IDs without a second thought, especially those in the "chromebooks and google docs in school" generation, who've been conditioned against personal data privacy their whole lives.
There is no way that the likes of OpenAI can make a credible case for this. What fraud angle would there be? If they were a bank, I could see the point.
Regulatory risk around child safety: DSA Article 28 and the like. Age prediction is actually the "soft" version; i.e., try not to bother most users with verification, but do enough to reasonably claim you meet the requirements. They also get to control how sensitive it is in response to the political/regulatory environment.
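To make the "soft" idea concrete, here's a hedged Python sketch of how such a gate might work; the function, thresholds, and outcomes are assumptions for illustration, not anything OpenAI has published. Most users pass silently on a predicted-age score, and only uncertain cases get escalated to expensive hard verification.

```python
def gate(predicted_age: float, confidence: float,
         age_floor: float = 18.0, confidence_floor: float = 0.8) -> str:
    """Return the action for a user given a model's age estimate."""
    if confidence >= confidence_floor:
        if predicted_age >= age_floor:
            return "allow"                  # the common case: no friction
        return "apply_teen_safeguards"
    return "request_id_verification"        # uncertain cases escalate to hard IDV


print(gate(predicted_age=24.0, confidence=0.95))  # allow
print(gate(predicted_age=16.0, confidence=0.90))  # apply_teen_safeguards
print(gate(predicted_age=20.0, confidence=0.50))  # request_id_verification
```

The `confidence_floor` is the policy knob: tighten it in response to regulatory pressure and more borderline users land in the verification funnel; loosen it and more pass through untouched.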
Do you expect the data collected for age verification will be completely separate from the advertising apparatus? I would expect the incentives would align for this to enhance their advertising options.
It'll almost certainly get used for both, but I believe "adult content" is the primary motivator. If it were just for ads they wouldn't even bother announcing it as a "feature"; they'd just do it.
Also:
"Users can check if safeguards have been added to their account and start this process at any time by going to Settings > Account."
It would be a pretty massive GDPR breach if it wasn't, wouldn't it? All the biometric data is "special category" data which you can't play fast and loose with.
Interesting. Do you believe OpenAI has earned user trust and will be good stewards of the enhanced data (biometric, demographic, etc) they are collecting?
To me, this feels nefarious given the recent push into advertising. Not only are people dating these chatbots, they are more trusting of these AI systems than of people in their own lives. Now OpenAI is using this "relationship" to influence users' buying behavior.
The unfortunate reality is that this isn't just corporations acting against users' interests; governments around the world are pushing for these surveillance systems as well. It's all about centralizing power and control.
Facebook made journalists a lot less relevant, so anything that hurts Meta (and hence anything that hurts tech in general) is a good story that helps journalists get revenge and revenue.
"Think of the children", as much as it is hated on HN, is a great way to get the population riled up. If there's something that happens which involves a tech company and a child, even if this is an anecdote that should have no bearing on policy, the media goes into a frenzy. As we all know, the media (and their consumers) love anecdotes and hate statistics, and because of how many users most tech products have, there are plenty of anecdotes to go around, no matter how good the company's intentions.
Politicians still read the NY Times, who had reporters admit on record that they were explicitly asked to put tech in an unfavorable light[1], so if the NYT says something is a problem, legislators will try to legislate that problem away, no matter how harebrained, ineffective and ultimately harmful the solution is.
For context, though, people have lately been screaming at OpenAI and other AI companies for not doing enough to protect the children. Almost like there's no winning, and one should just make everything 18+ to actually make people happy.
What a coincidence: the "protect the children" narrative got amplified right about when profiling became necessary for OpenAI's profits. Pure magic.
I get why you're questioning motives, I'm sure it's convenient for them at this time.
But age verification is all over the place. Entire countries (see Australia) have either passed laws or have laws moving through legislative bodies.
Many platforms have voluntarily complied. I expect that by 2030, everywhere on Earth will require not just age verification but identity verification to access online platforms. If it weren't for all the massive attempts to subvert our democracies by state actors, and even by political movements within democratic societies, it wouldn't be pushed so hard.
But with AI-generated videos, chats, audio, and images, I don't think anyone will be able to post anything on major platforms without their ID being verified. Not a chat, not an upload, nothing.
I think consumption will be age vetted, not ID vetted.
But any form of publishing, linked to ID. Posting on X. Anything.
I've fought for freedom on the Internet; I grew up when IRC was a thing and knew more freedom on the net than most using it today. But when 95% of what is posted on the net is placed there with the aim to harm? To harm our societies, our peoples?
Well, something's got to give.
Then combine that with the great mental harm that smartphones and social media do to youth, and... well, anonymity on the net is over. Like I said at the start, likely by 2030.
(Note: having your ID known doesn't mean it's public. You can be registered, with ID, on X, on youtube, so the platform knows who you are. You can still be MrDude as an alias...)
Every "protect the children" measure that involves increased surveillance is balanced by an equal and opposing "now the criminal pedophiles in positions of power have more information on targets".
Exactly. The more data they can collect, the better.
Is it not in OpenAI's best interest to accidentally flag adults as teens so they have to "verify" their age by handing over their biometric data (Persona face scan) or their government ID? Certainly that level of granularity will enhance the product they offer to their advertisers.
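A back-of-the-envelope Python calculation shows why even modest misclassification pushes mostly adults into the verification funnel; every number below is an assumption for illustration, not an OpenAI figure.

```python
total_users = 100_000_000
minor_share = 0.05            # assume 5% of users are actually under 18
true_positive_rate = 0.90     # assumed: minors correctly flagged
false_positive_rate = 0.10    # assumed: adults incorrectly flagged as teens

minors = total_users * minor_share
adults = total_users - minors

flagged_minors = minors * true_positive_rate
flagged_adults = adults * false_positive_rate

print(f"Flagged minors: {flagged_minors:,.0f}")   # 4,500,000
print(f"Flagged adults: {flagged_adults:,.0f}")   # 9,500,000
print(f"Share of flagged users who are adults: "
      f"{flagged_adults / (flagged_adults + flagged_minors):.0%}")  # 68%
```

With these rates, roughly two-thirds of flagged accounts belong to adults, each of whom is then nudged toward uploading an ID or a face scan. Because minors are a small minority of users, the false-positive rate, not the true-positive rate, dominates who actually gets "verified".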