Hacker News | tedivm's comments

I think opt-out is stupid, but the notice is on every page of GitHub via their banner display right now. They've also blasted out emails.

At least they are being very upfront about it (I guess?); most companies just quietly slip the clause into a routine TOS update.

If they were being honest they would ask explicitly for permission instead of advertising opt-out. Now you might ask: who will explicitly give Microsoft permission to train on their private works? No one will -- and that's the point: this is a form of theft.

And how many people who use git on GitHub actually go to the website? I only do when my token has expired and I need to grab a new one to push again, which is every 90 days. GitHub.com is mostly invisible infrastructure to me.

This problem is solved by not having a token at all. GitHub and PyPI both support OIDC-based workflows. Grant only the publish job access to the OIDC endpoint, and the Trivy job has nothing it can steal.
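As a rough sketch of what that looks like with PyPI's trusted publishing in GitHub Actions (workflow name and build step are illustrative, not from the original comment):

```yaml
# .github/workflows/publish.yml (sketch)
name: publish
on:
  release:
    types: [published]

jobs:
  publish:
    runs-on: ubuntu-latest
    # Only this job can mint OIDC tokens; a scanning job in the
    # same workflow gets no id-token access and nothing to steal.
    permissions:
      id-token: write
      contents: read
    steps:
      - uses: actions/checkout@v4
      - run: pipx run build
      - uses: pypa/gh-action-pypi-publish@release/v1  # trusted publishing, no stored API token
```

The key point is the `permissions` block scoped to the single publish job: there is no long-lived PyPI token anywhere in the repository for a compromised dependency or action to exfiltrate.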

You should be using build artifacts, not relying on `uv run` to install packages on the fly. Besides the massive security risk, it also means that you're dependent on a bunch of external infrastructure every time you launch. PyPI going down should not bring down your systems.
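A minimal sketch of the build-artifact approach (the `myapp` name and tooling choices here are illustrative): resolve and build dependencies once at image-build time, so nothing is fetched from PyPI when the service launches.

```dockerfile
# Build stage: resolve and build the wheel once, during CI.
FROM python:3.12-slim AS build
COPY . /src
RUN pip install build && python -m build --wheel --outdir /wheels /src

# Runtime stage: install only from the local wheel directory.
# --no-index means a PyPI outage cannot affect deploys or restarts.
FROM python:3.12-slim
COPY --from=build /wheels /wheels
RUN pip install --no-index --find-links /wheels myapp
CMD ["python", "-m", "myapp"]
```

Launching the resulting image never touches external package infrastructure; the external dependency exists only at artifact-build time, where failures are cheap and retryable.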

This is the right answer. Unfortunately, this is very rarely practiced.

More strangely (to me), this is often addressed by adding loads of fallible, partial caching for package managers (in e.g. CI/CD or deployment infrastructure) rather than by building and publishing ephemeral per-user or per-feature packages for dev/testing to an internal registry. Since the latter is usually less complex and more reliable, it's odd that it's so rarely practiced.


There are so many advantages to deployable artifacts, including auditability and fast rollback. You can also block many risky endpoints from your compute's outbound networks, which means that even if you are compromised, it doesn't do the attacker any good if their C&C server is not allowlisted.
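One way to sketch that egress allowlisting, assuming a Kubernetes environment (the policy name and registry CIDR are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-egress-except-registry
spec:
  podSelector: {}            # applies to every pod in the namespace
  policyTypes: [Egress]
  egress:
    - to:                    # allow DNS lookups
        - namespaceSelector: {}
      ports:
        - protocol: UDP
          port: 53
    - to:                    # allow only the internal artifact registry
        - ipBlock:
            cidr: 10.0.5.0/24   # illustrative registry address range
```

With a default-deny egress posture like this, compromised workloads can't reach an attacker's C&C endpoint, and since deploys pull from the internal registry rather than the public internet, nothing legitimate breaks.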

The LEGO Mindstorms robotics kit that powered the whole thing was discontinued in 2022. Since they're no longer making the kits, they have nothing to donate to the competition (or to run the competition on).

The LEGO Education version of MINDSTORMS Robot Inventor, SPIKE Prime, is still available and a new robot kit, Computer Science and AI, is being released this year. After next season, LEGO will be continuing on with their own K-8 robotics program (as will FIRST).

> The LEGO Education version of MINDSTORMS Robot Inventor, SPIKE Prime, is still available

Well, the Spike line is being discontinued also: https://education.lego.com/en-us/spike-update-2026/

But you’re right in that they’ll have another new line—“Lego Education Computer Science & AI”, which is different in a way I don’t really understand and doesn’t fill me with a ton of confidence.


Which is a shame in itself

This is embarrassing. Trivy is a product I've recommended to a lot of people, and have even included in my book on Terraform, but it's going to be very difficult to keep recommending it if they continue to fail to protect their own artifacts and distribution chains.

I don't expect my security tools to introduce back doors to my own build processes, and I especially don't expect to see it twice in three weeks.


I interviewed with Clockwise years ago and was offered a position, but ultimately I decided to pass for this exact reason. This system is great, and I actually used it and loved it, but it was a feature rather than a product.

While I understand why people want to skip code reviews, I think it is an absolute mistake at this point in time. I think AI coding assistants are great, but I've seen them fail or go down the wrong path enough times (even with things like spec-driven development) that I don't think it's reasonable to skip reviewing code. Everything from development paths in production code to improper implementations to security risks: all of those are just as likely to happen with an AI as with a human, and any team that lets humans push to production without a review would absolutely be ridiculed for it.

Again, I'm not opposed to AI coding. I know a lot of people are. I have multiple open source projects that were 100% created with AI assistants, and I wrote a blog post about it that you can see in my post history. I'm not anti-AI, but I do think that developers have some responsibility for the code they create with those tools.


I agree that it would be a mistake to use something like this where people depend on specific behaviour of the software. But the only way we get to the point where we can is by building things that don't quite work and then fixing the problems. Like AI models themselves: where they fail now is on problems they couldn't even begin to attempt a short time ago. Dismissing that loses track of the fact that we are still developing this technology. We will always be fighting premature deployment by people seeking a first-mover advantage; people need to stay aware of that without criticising the field itself.

There is a subset of things for which it would be OK to do this right now: instances where the cost of utter failure is relatively low. For visual results the benchmark is often "does it look right?" rather than "is it strictly accurate?"


Well, guess I was wrong about that.


That is ... not at all how that works. RAM is a separate chip that is placed on top of the substrate holding the main dies. It is bought from normal RAM manufacturers like Micron. It is not "embedded in the chip" by any possible meaning of those words.


No, that's not how it works at all. They still source all their RAM from Samsung, Hynix, Micron, etc.


Half the time they literally say it in the email. I just looked in my spam folder and just a few hours ago got an email titled "Your profile: Github", that started with:

> I came across your profile on GitHub. Given you're based in the US, I thought it might be relevant to reach out.
>
> Profile: https://github.com/tedivm

They aren't doing anything to hide it.


But hold on.

They could have git cloned your repo, used or otherwise analyzed your code (which follows the TOS), then used the local git repo to pull your email address.

How is GitHub responsible here?


They could have, but it seems unlikely they targeted one or two repos and probably cloned thousands or more.


That identifies the company that sent the email, not the GitHub account that scraped it.


Hard to talk about what models are doing without comparing them to what other models are doing. There are only a handful of groups in the frontier model space, much less who also open source their models, so eventually some conversations are going to head in this direction.

I also think it is interesting that the models in China are censored but openly admit it, while the US has companies like xAI who try to hide their censorship and biases as being the real truth.

