
That would require stealing both the model weights and the code, since OpenAI has been hiding what they are doing. Running models properly is still something of an art.

Meanwhile, they have access to Meta models and Qwen. And Meta models are very easy to run and there's plenty of published work on them. Occam's Razor.



How hard is that, if you have someone inside with access to the code? If you have hundreds of people with full access, it's not hard to find someone willing to sell it or engage in industrial espionage...


Lots of ifs here. They'd need specific contacts among US employees at a quickly growing company, and one of those contacts would need to be willing to breach their contract to share it. That contact would also need to trust that DeepSeek could properly utilize such code and completely undercut their own work.

That's a lot of hoops when there are simply other models available publicly.


How big are the weights for the full model? If it's on the scale of a large operating system image then it might be easy to sneak, but if it's an entire data lake, not so much.
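For rough scale, a checkpoint's size is just parameter count times bytes per parameter. The parameter counts in this sketch are illustrative assumptions (OpenAI hasn't published theirs), but they suggest frontier weights land in the hundreds-of-GB to few-TB range at fp16, closer to a large OS image than a data lake:

```python
def checkpoint_size_gb(params: float, bytes_per_param: int = 2) -> float:
    """Approximate checkpoint size in GB; fp16 weights are 2 bytes each."""
    return params * bytes_per_param / 1e9

# A 175B-parameter model (GPT-3 scale) at fp16:
print(checkpoint_size_gb(175e9))   # 350.0 GB

# A hypothetical 1.8T-parameter model at fp16 (assumed size, not a known figure):
print(checkpoint_size_gb(1.8e12))  # 3600.0 GB, i.e. ~3.6 TB
```

Either way it fits on a handful of commodity drives, so the weights themselves wouldn't be the hard part to move.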


Devil's advocate: we know that foreign (and even domestic) intelligence services try to infiltrate agents by having them become employees at any company they're interested in. So the idea isn't pulled from thin air as a concept. I do agree it's a big if, with no corroborating evidence for the specific claim.


I doubt that many people have full access to OpenAI's code. Their team is pretty small.



