As I said, you can't patch over culture problems with a re-org.
If product people don't care, putting a person on the team whose role works against their interests will not make things better. The same goes for security, diversity, code quality, and reliability: if the team doesn't care, adding an enforcer will not fix the situation.
There is an implicit assumption that the teams are wrong. What about the wisdom of the crowds? If most teams think it's not needed, why do we think they are wrong?
I don't see it as an implicit assumption that the teams are wrong, but a recognition that the teams have a fairly narrow goal that can easily blind them to larger implications of design decisions as they "move fast and break things".
Having some sort of outside check on that seems wise to me.
Google, Microsoft, and Facebook had those teams. All they achieved was to open the door to a new competitor: they slowed the natural pace of getting things into production by dumbing systems down completely, instead of finding technical solutions to ethical problems that preserve most of the power of the AI systems.
Perhaps so, but that doesn't negate the idea or the need. It only implies they were doing it wrong (and, I speculate, they were doing it wrong because they weren't really on board with the idea; they were just after improving their PR a bit).