Hacker News

s/predicts/attempts to predict


AlphaFold has been widely validated; it's now appreciated that its predictions are pretty damn good, with a few important exceptions, instances of which are addressed in the newer implementation.


"pretty damn good"

So... what percentage of the time? If you made an AI to pilot an airplane, how would you verify its edge conditions, you know, like plummeting out of the sky because it thought it had to nosedive?

Because these AIs are black box neural networks, how do you know they are predicting things correctly for things that aren't in the training dataset?

AI has so many weasel words.


As mentioned elsewhere in this thread, and as is trivially determinable by reading, AF2 is constantly being evaluated in blind predictions where the known structure is withheld until after the prediction is made. There's no weasel here; the process is well understood and accepted by the larger community.


A prediction is a prediction; it's not necessarily a correct prediction.

The weatherman predicts the weather; even if he's sometimes wrong, we don't say he "attempts to predict" the weather.


The title OP gave accurately reflects the title of Google's blog post. Title should not be editorialized.


Unless the title is clickbait, which it appears this is…


Syntax error


Legal without the trailing slash in vi!
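For anyone following the joke: the proposed edit is a substitute command, and whether the trailing delimiter is required depends on the tool. A quick sketch of the difference (behavior shown here is GNU sed; the exact error message varies by implementation):

```shell
# In sed, the closing delimiter of the s command is required.
# This fails with a syntax error (unterminated `s' command):
echo "AlphaFold predicts structures" | sed 's/predicts/attempts to predict'

# With the trailing slash it succeeds:
echo "AlphaFold predicts structures" | sed 's/predicts/attempts to predict/'
# -> AlphaFold attempts to predict structures

# In vi/ex, the trailing delimiter may be omitted when the substitution
# ends the command line, e.g. inside vi:
#   :s/predicts/attempts to predict
```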




