You seem to be raising this as a "just so" argument ad absurdum, but we have extant examples of databases and information technology enabling villainy like oppression and genocide by making correlations easier to draw and making tracking more efficient and less cost-prohibitive.
Honestly, to me that is starting to sound like a very good idea. Regulate what you can store, how you store it, how you modify it, who can access it, how access is controlled, what sort of trail must be left, and how mistakes can be corrected, and require that those whose information is stored can get a full log of actions taken on data relating to them.
That will sound like over-regulation to many. But it is pretty clear that companies and developers have failed, so maybe strict regulation is needed.
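To make the audit-trail part of that concrete, here is a minimal sketch of what it could look like: every action on a subject's data is appended to an immutable log, and the subject can request their full history. All names and fields here are hypothetical illustrations, not any real regulation's schema.

```python
# Sketch of an append-only audit trail for actions on personal data.
# Hypothetical schema; real regulations would dictate the actual fields.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    subject_id: str   # whose data was touched
    actor: str        # who performed the action
    action: str       # e.g. "read", "modify", "correct", "delete"
    detail: str       # what changed, or why access was granted
    timestamp: str    # UTC time the action was recorded

class AuditLog:
    """Append-only log; entries are never edited or removed."""
    def __init__(self):
        self._entries: list[AuditEntry] = []

    def record(self, subject_id: str, actor: str, action: str, detail: str) -> None:
        self._entries.append(AuditEntry(
            subject_id, actor, action, detail,
            datetime.now(timezone.utc).isoformat()))

    def history_for(self, subject_id: str) -> list[AuditEntry]:
        # The data subject's right: a full log of actions on their data.
        return [e for e in self._entries if e.subject_id == subject_id]

log = AuditLog()
log.record("user-42", "support-agent-7", "read", "billing inquiry")
log.record("user-42", "user-42", "correct", "fixed misspelled name")
print(len(log.history_for("user-42")))  # 2
```

The key design choice is that the log is append-only and queryable per subject, which is what makes "a full log of actions done on data relating to them" enforceable rather than aspirational.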
We absolutely should; some companies can never be trusted with certain information. There is no reason companies like Meta or Google should be entrusted with so much user data. The government should force them to divest it and either allow the public to own it (which should include public job guarantees so the public can maintain that data) or allow smaller companies to be the handlers of such data.
Google, Meta, and the rest of big tech have proven they should never be trusted.
It is a pity we never regulated the consumer surveillance industry out of existence.
See, the original question isn't really about the technology per se. Rather, it's about how it will be used. Do you have confidence in the track record (and trajectory) of our current regulatory approach when it comes to reining in the scaling-up of novel types of harm?
The way I see it, the American approach has been to simply write off those who end up on the business end of the technological chainsaws as losers, and tell them they should have tried harder to be on the other side doing the damage. So why would we think "AI" will be any different?