
I kind of get the sentiment about openness but I think it's way more nuanced than you are making out.

There are very good reasons for withholding SOTA models: primarily the info hazard angle, and avoiding an escalation of the capabilities race, which is basically the biggest risk we face right now.

Google / Deepmind have actually made some good decisions to try to slow down the race (such as waiting to publish).



They're not slowing down anything. The cat's out of the bag.

What good does a few months lag do when nobody is bracing for impact?


I'm not saying they are doing a good enough job, but that doesn't mean their approach is entirely without merit.

Even ignoring the info hazard angle, if they published everything immediately, that would escalate the race. By sitting on their capabilities and waiting for others to publish first (e.g. PaLM and Imagen after GPT-3 and DALL-E), they are at least only playing catch-up.


Capabilities race, seriously? This is not nuclear warfare my guy. It's mathematics.


Nuclear warfare is much less concerning than misaligned AI.

Take a look into scaling laws and alignment concerns. This is a very real challenge and an existential risk, not some crackpot theory.


In the same sense that deep learning is just linear regression with a steroid problem.


Information warfare is pretty dangerous too!




