Hacker News | buzer's comments

No one is doing it that way, though. Also, to be truly privacy-preserving you cannot rely on anything that requires a specific OS (especially Android or iOS), as every single OS requires some compromises to privacy.

The only privacy-preserving (effective) age verification is asking the user if they are over 18 and requiring that they answer truthfully under penalty of perjury, then prosecuting the kids who claim they are over 18. For one reason or another, no one seems to be pushing for that option.


> No one is doing it that way though

Well, it exists in Privacy Pass, which is deployed in production. And there are countries that are actively looking into privacy-preserving age verification. I don't think that "I keep saying that age verification fundamentally leaks your ID, which is wrong, but it's still valid because nobody will notice" is a good argument.

> The only privacy-preserving (effective) age verification is asking user if they are over 18 and requiring that they answer truthfully under penalty of perjury.

I disagree; I think there could be a sane debate around ZK age verification, if we could elevate the discussion to that level.
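To illustrate the unlinkability property these schemes aim for, here is a toy sketch of a blind-signature age token. This is not Privacy Pass itself (which uses VOPRFs rather than textbook RSA), and every parameter below is illustrative; the RSA key is far too small for real use.

```python
# Toy blind-signature age token: the issuer attests "over 18" without
# being able to link the resulting token back to the signing request.
import hashlib

# Issuer's toy RSA key (illustrative only; real keys are 2048+ bits).
p, q = 1000003, 1000033
n = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)  # private exponent (Python 3.8+)

def h(msg: bytes) -> int:
    """Hash a message into the RSA group."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# 1. The user picks a random token and blinds it before sending it.
token = h(b"random-client-token")
r = 123457  # random blinding factor, coprime to n
blinded = (token * pow(r, e, n)) % n

# 2. The issuer checks the user's age out of band, then signs the
#    blinded value -- it never sees the token itself.
blind_sig = pow(blinded, d, n)

# 3. The user unblinds; the result is a valid signature on `token`.
sig = (blind_sig * pow(r, -1, n)) % n

# 4. Any site can verify the token proves "over 18" without learning
#    who the user is, and the issuer cannot correlate it either.
assert pow(sig, e, n) == token
```

The key step is that `blinded^d = token^d * r^(e*d) = token^d * r (mod n)`, so multiplying by `r^-1` leaves a signature on `token` that the issuer has never seen.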


> Illegal immigration is a misdemeanor not a felony.

My understanding is that entering without being inspected is a misdemeanor (or a felony in some cases), but that's often not what happens. Usually people just overstay, and that's a civil matter. And because it's treated as a civil matter, a lot of constitutional protections do not apply (to clarify: some still do).


It's both your data and that person's data.

(copied from my earlier comment) I think it's very close to C-579/21, which was about audit logs. In that case the CJEU ruled that audit logs are personal data of both you and the person who performed the action. They did allow censoring the person's name (and the exact timestamp), but given that in this case LI is selling this information to the same person, "protecting others" rings pretty hollow.


Personal data is data that relates to you, and the list of users who viewed your profile relates to you.

I think it's very close to C-579/21, which was about audit logs. In that case the CJEU ruled that audit logs are personal data of both you and the person who performed the action. They did allow censoring the person's name (and the exact timestamp), but given that in this case LI is selling this information to the same person, "protecting others" rings pretty hollow.


GPC must be honored in California. https://oag.ca.gov/privacy/ccpa/gpc

According to https://www.didomi.io/blog/global-privacy-control-gpc-2026 it must also be honored in 11 other states, but I'm not familiar enough with the specifics of those.
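Mechanically, honoring GPC is simple: per the GPC specification, a browser with the signal enabled sends a `Sec-GPC: 1` header on every request (and exposes `navigator.globalPrivacyControl` to scripts). A minimal server-side check might look like this; the function name is my own, not from any spec.

```python
def gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control opt-out.

    Per the GPC spec, browsers with GPC enabled send `Sec-GPC: 1`;
    under CCPA this must be treated as a valid opt-out of the
    sale/sharing of the visitor's personal data.
    """
    return headers.get("Sec-GPC") == "1"

# Example: decide whether to suppress data sale for this request.
assert gpc_opt_out({"Sec-GPC": "1"})
assert not gpc_opt_out({})
```

In a real application you would read the header from your framework's request object (case-insensitively) and plumb the result into your consent/opt-out handling.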


How exactly do you read the article that way?

Article 50(1) states that AI systems which interact directly with the public must inform users that they are interacting with an AI system.

Article 50(2) states that AI-generated synthetic audio, image, video or text content must be marked as such. However, this requirement applies to "providers" of AI systems, and according to Article 3(3) that means:

> ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;

So it sounds like it would apply to e.g. Anthropic via Claude Code, not to users of Claude Code.

It's also unclear whether this would apply to the compiled output or not.


> It would be excellent to see some progress, in expanding & respecting our human rights to privacy.

There are many laws in place in the EU which forbid many kinds of practices that infringe on privacy, but the issue is that governments don't really enforce them proactively. And in cases where governments are the ones breaking them (e.g. by enacting a law that is not compatible with the EU Charter or the ECHR), it can take a long time to get a judgment that forbids the practice.

Often the path is: you complain to the DPA, you appeal to a court, you appeal to a higher court (repeat the last step X times), during the court appeals you may need to wait for a CJEU ruling, and finally you might be able to file an appeal with the ECtHR.

In one "recent" case from Finland, the original DPA decision was issued in 8/2020 (I'm not sure how long that exact case took to decide, but there are some recent decisions which took 5 years to issue). It was appealed to the administrative court, and the court made a request to the CJEU in 11/2021. The CJEU gave its ruling in 6/2023, and the administrative court gave its ruling in 12/2023. That was appealed, and the higher administrative court gave its ruling in 6/2025.

So it could take 10 years to annul an illegal law or practice.


But there was a period of time when using Openclaw via Claude Code (via -p) was not allowed, and it even gave an error message in that case. That's why people find the constantly changing messaging confusing.

https://x.com/steipete/status/2040811558427648357


What new requirements can be set by the board? As far as I understand, the EDPB can only issue guidelines, recommendations and best practices, all of which are just guidance on how to interpret the GDPR. Courts are the ones who ultimately decide whether you are complying with the GDPR. Your local DPA likely won't harshly punish you for following the EDPB's recommendations even if those recommendations end up getting overturned by a court.

The DPA won't punish you for not following the EDPB's recommendations; they will punish you for breaking the GDPR. You are free to ignore the EDPB if you think your legal position is strong, but you carry the risk if you are wrong.


Google has specifically said that certain API keys, like Firebase's, are not secrets (since people will find them anyway)... though Gemini then ended up changing things. https://news.ycombinator.com/item?id=47156925

