Strawman on the WYSIWYG editor vs. text editor question. A text editor is not an "automated decision-making system":
> Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.
Note this 1998 NASA paper (https://ntrs.nasa.gov/citations/19980048379), which further refutes the IDE/WYSIWYG editor claims:
> This study clarified that automation bias is something unique to automated decision making contexts, and is not the result of a general tendency toward complacency.
The problems with automation bias have been known for decades, and the research in the human-factors field is quite robust.
While we are still far too early in the code-assistant world to have much data IMHO, even studies that trend toward positive results for coding assistants call out issues with complacency and automation bias (https://arxiv.org/abs/2208.14613):
> On the other hand, our eye tracking results of RQ2 suggest that programmers make fewer fixations and spend less time reading code during the Copilot trial. This might be an indicator of less inspection or over-reliance on AI (automation bias), as we have observed some participants accept Copilot suggestions with little to no inspection. This has been reported by another paper that studied Copilot [24].
Some decisions, like “how do I mock a default export in Jest again?”, are low stakes, while other decisions, like “how should I modify our legacy codebase to use the new grant type?”, are high stakes.
Deciding which parts of your workflow to automate is what's important.