I wish the article would explain the criteria for winning the competition. If it's simply the percentage of actual earthquakes that were predicted, I could easily write an algorithm that would score 100% by always returning the value `true`.
For me, it's quite clear from the article that always returning `true` is not what they did:
"[the ai] predicted 14 earthquakes within about 200 miles of where it estimated they would happen and at almost exactly the calculated strength. It missed one earthquake and gave eight false warnings."
So that's:
14 true positives,
8 false positives and
1 false negative
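Those counts are enough to compute standard classification metrics. A minimal sketch (the function name and scoring are my own illustration, not the competition's actual criteria): an "always true" strategy would drive recall to 100% while destroying precision, which is presumably why the scoring can't be recall alone.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) from confusion counts."""
    precision = tp / (tp + fp)  # fraction of issued warnings that were real
    recall = tp / (tp + fn)     # fraction of real quakes that were caught
    return precision, recall

# Counts quoted from the article: 14 hits, 8 false warnings, 1 miss.
p, r = precision_recall(tp=14, fp=8, fn=1)
print(f"precision={p:.3f}, recall={r:.3f}")  # precision=0.636, recall=0.933
```

So the AI caught about 93% of the quakes while roughly 64% of its warnings were genuine, whereas a trivial always-`true` predictor would have near-zero precision.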