You should reread the first section of TFA, titled "Communication safety in Messages." This goes beyond the scope of CSAM: they're scanning all Messages photos sent to or from a minor's account for any possible nudity, not just CSAM hash-matching.
It sure seems like there are two different techniques/technologies at work here. There is a CSAM detector that matches against a known database, and then, it looks like, a separate model in Messages for detecting pornographic content without any known database.
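To make the distinction concrete, here's a minimal sketch of the two approaches, not Apple's actual implementation. The function names, the threshold, and the placeholder hash set are all hypothetical; Apple's CSAM detection reportedly uses a perceptual hash ("NeuralHash") rather than the exact hash shown here, and the Messages feature uses an on-device ML classifier whose internals aren't public.

```python
import hashlib

# Technique 1: matching against a known database of fingerprints.
# The image's fingerprint is checked for membership in a fixed set of
# known-bad hashes; nothing about the image's content is "recognized."
KNOWN_CSAM_HASHES = {"<fingerprint-1>", "<fingerprint-2>"}  # placeholder entries

def matches_known_database(image_bytes: bytes) -> bool:
    # sha256 stands in for a perceptual hash purely for illustration
    fingerprint = hashlib.sha256(image_bytes).hexdigest()
    return fingerprint in KNOWN_CSAM_HASHES

# Technique 2: a classifier scoring arbitrary, never-before-seen images.
# There is no database at all; a model outputs a score and a threshold decides.
def nudity_score(image_bytes: bytes) -> float:
    """Hypothetical stand-in for an on-device ML model's output."""
    return 0.0  # a real model would return a learned probability

def flag_for_parental_warning(image_bytes: bytes, threshold: float = 0.9) -> bool:
    return nudity_score(image_bytes) > threshold
```

The first approach can only ever flag images already in the database; the second can flag anything the model scores highly, which is why the two features raise different concerns.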
It doesn't sound like it can "detect pornographic content," since the distinction between pornography and mere nudity is not going to be reducible to a coefficient matrix.