Hacker News | HelloImSteven's comments

But this is for apps outside the Play store, so the DSA isn’t at play here insofar as Google needs to be concerned. I don’t think there’s any solid decision on whether third-party app distribution is subject to the trader requirements, but if/when there is, it’d presumably be on the alternative distribution platform to enforce, not Google. Plus, Google already adjusted its policies to comply with the DSA.

For the record, Apple notes that the DSA requirements only impact developers distributing through the App Store, not through alternative distribution [1].

[1]: https://developer.apple.com/help/app-store-connect/manage-co...


But this isn't a problem on one system; it's potentially a problem on any system with Copilot enabled. It's akin to a vulnerability in a software library (which often means a separate CVE for every affected product, not just one for the library). CVEs also aren't limited to issues impacting multiple systems; even if a vulnerability only affects one product, ideally a CVE should still be filed. The 'common' aspect is the shared reporting standard. See my other comment on this thread for more on that, or Red Hat's explanation here: https://www.redhat.com/en/topics/security/what-is-cve


CVEs aren’t just for common dependencies. The “Common” part of the name is about having standardized reporting that over time helps reveal common issues occurring across multiple CVEs. Individually they’re just a way to catalog known vulnerabilities and indicate their severity to anyone impacted, whether that’s a hundred people or billions. There are high severity CVEs for individual niche IoT thermostats and light strips with obscure weaknesses.

Technically, CVEs are meant to only affect one codebase, so a vulnerability in a shared library often means a separate CVE for each affected product. It’s only when there’s no way to use the library without being vulnerable that they’d generally make just one CVE covering all affected products. [1]

Even ignoring all that, people are incorporating Copilot into their development process, which makes it a common dependency.

[1]: https://www.redhat.com/en/topics/security/what-is-cve


I’ve lived on both sides of this in different areas of the US. Overall I’d say there are a lot of places that have what you’ve described, but there are many that don’t, even in more urban locations. Sometimes roads lack sidewalks, parks/skateparks/etc close for repairs but never reopen, local events stop getting funded for one reason or another, or high crime rates make people wary of leaving patio furniture out. All of those contribute to a lack of stable third spaces and the associated connections with people.

Other countries have similar issues, of course, but often (not always) they have more cultural factors keeping third spaces alive. In my experience traveling Europe and Africa, community and familial ties generally have a more active role, so there’s just more opportunities for stable third places to develop. It’s not that the spaces are different, imo, but they do seem more common.


Lambda has 1mil free requests per month, so there’s a chance it would be free depending on your usage. But still, it’s not straightforward at all, so I get it.

Perhaps requiring support for bill capping is the right way to go, but honestly I don’t see why providers don’t compete at all here. Customers would flock to any platform with something like “You set a budget and uptime requirements, we’ll figure out what needs to be done”, with some sort of managed auto-adjustment and a guarantee of no overage charges.
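To make the idea concrete, here's a minimal sketch of the kind of budget-driven auto-adjustment I mean. This is purely hypothetical — no provider exposes an API like this, and the names and thresholds are made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class BudgetGuard:
    """Hypothetical provider-side budget cap: given a monthly budget
    and spend so far, decide whether to keep serving at full capacity,
    throttle, or stop entirely (guaranteeing no overage charges)."""
    monthly_budget: float      # dollars per month
    days_in_month: int = 30

    def action(self, spend_so_far: float, day_of_month: int) -> str:
        # Naive linear projection of end-of-month spend.
        projected = spend_so_far / day_of_month * self.days_in_month
        if spend_so_far >= self.monthly_budget:
            return "stop"       # hard cap reached: no overage charges
        if projected > self.monthly_budget:
            return "throttle"   # on pace to exceed budget: slow down
        return "serve"

guard = BudgetGuard(monthly_budget=100.0)
# $20 spent by day 5 projects to $120/month, over the $100 budget.
print(guard.action(spend_so_far=20.0, day_of_month=5))  # "throttle"
```

A real implementation would obviously need smarter forecasting and graceful degradation rather than a blunt throttle, but the decision logic itself isn't the hard part.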

Ah well, one can only dream.


> but honestly I don’t see why providers don’t compete at all here

Because the types of customers that make them the most money don't care about any of this stuff. They'll happily pay whatever AWS (or other cloud provider) charges them, either because "scale" or because the decision makers don't realize there are better options for them. (And depending on the use case, sometimes there aren't.)


I also don’t know for certain, but I’d assume they only cache AI responses at an (at most) regional level, and only for a fairly short timeframe depending on the kind of site. They already had mechanisms for detecting changes and updating their global search index quickly. The AI stuff likely relies mostly on that existing system.

This seems more like a model-specific issue, where it’s consistently generating flawed output every time the cache gets invalidated. If that’s the case, there’s not much Google can do on a case-by-case basis, but we should see improvements over time as the model gets incrementally better / it becomes more financially viable to run better models at this scale.


In the U.S. at least (obviously not the same everywhere), fair use doesn’t necessarily require your work to be transformative. It’s one of several aspects that gets considered, albeit a fairly significant one in many cases. Downloading books/research articles/pirated works in general wouldn’t be fair use, as the purpose of the act (obtaining a book to read) directly impacts the market for the work (selling books). There could still be exceptions in some cases, mostly related to teaching I’d imagine.


WebScript is trademarked by Apple [1], but I’m not sure how enforceable it is at this point.

[1]: https://www.apple.com/legal/intellectual-property/trademark/...



I took their meaning to be that we should keep looking into the whole matter since, either way, there might be more evidence to find. I don’t think they were dismissing this theory or its implications for political/ideological reasons—since they mentioned it seems plausible—but I could be naive.

In any case, clearly the prevailing understanding is wrong in one way or another, and that should be reflected in curriculums alongside this new evidence.


Yeah, although again, the error bars are too large to say with certainty this was pre-Columbus.

But it really wouldn’t surprise me. As others pointed out, the Inuit traveled across the Bering Strait into what is now Russia many times pre-Columbus, so the idea they may have brought beads back with them is plausible.


They used the official MIT-licensed dataset published by Y Combinator on BigQuery, so it’s not necessarily fair to blame OP here.


“This is not relevant to your point but I want to say that's an entirely third party project and we didn't even know about it for a long time. We don't publish data to them except in the sense that we publish it to everybody: https://github.com/HackerNews/API. I think their page gives a misleading impression that the project is somehow official, when it's not (https://news.ycombinator.com/item?id=43850991).” --dang at https://news.ycombinator.com/item?id=44022318
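For anyone wanting the official route dang mentions: the linked repo documents the Firebase-backed HN API. A minimal sketch of fetching an item by id, assuming the endpoint format described in that repo:

```python
import json
from urllib.request import urlopen

BASE = "https://hacker-news.firebaseio.com/v0"

def item_url(item_id: int) -> str:
    """Build the endpoint URL for a single item (story, comment, etc.)."""
    return f"{BASE}/item/{item_id}.json"

def fetch_item(item_id: int) -> dict:
    """Fetch one item as a dict; requires network access."""
    with urlopen(item_url(item_id)) as resp:
        return json.load(resp)

# Example (performs a live request):
# fetch_item(8863)["title"]
```

The API is read-only and unauthenticated, so nothing beyond the standard library is needed.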

There is no such thing as an official Y Combinator dataset.


Thanks, good to know. Their page on BigQuery is very misleading.

