No doubt. The question is: is that the explicit goal, and if so, why? And if not, do they consider the effects of their actions, and if they don't, why not?
You actually don’t. Technologists have more leverage than most workers. There’s no shortage of jobs that don’t require building surveillance states or engagement addiction engines.
At this point, the path from what these teams are building to dystopian outcomes is well-mapped. Whether it's an explicit goal is irrelevant: if you can reasonably foresee the harm and proceed anyway, you're making a conscious choice to enable it.