I think perhaps you are looking at a different part of the funnel; disparate impact seems to be about the sort of requirements you are allowed to put in a job description, like “must have a college degree”.
However, the sort of insidious discrimination at the margin I was imagining is things like “equally good resumes (both meet all requirements), but one has a female or stereotypically Black name”. Interpreting resumes is not a science; humans apply judgement to pick the ones that feel good, which leaves a lot of room for hidden bias to creep in.
My point was that I think algorithmic processes are more testable for these sorts of bias; do you feel that existing disparate impact regulations are good at catching/preventing this kind of thing? (I’m aware of some large-scale research on name bias in resumes, but that kind of study seems hard to run within a single company.)
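To make “testable” concrete: with an algorithmic screen you can run a counterfactual audit, scoring the same resume twice with only the name changed. Here is a minimal sketch, where `score_resume` is a hypothetical stand-in for whatever model a company actually uses, and the example names just echo the well-known resume audit studies:

```python
# Hypothetical counterfactual audit: the two inputs differ only in the
# name, so any score difference is attributable to the name alone.
def score_resume(text: str) -> float:
    # Stand-in scorer; in practice this would be the company's actual model.
    return 0.5

def name_swap_gap(resume_body: str, name_a: str, name_b: str) -> float:
    """Score the same resume under two names and return the difference."""
    return score_resume(f"{name_a}\n{resume_body}") - score_resume(f"{name_b}\n{resume_body}")

# Run the probe over a batch of resumes and inspect the distribution of
# gaps; gaps consistently different from zero indicate name bias.
resumes = ["10 years of Python experience...", "BSc in accounting..."]
gaps = [name_swap_gap(r, "Emily Walsh", "Lakisha Washington") for r in resumes]
print(sum(gaps) / len(gaps))  # mean gap; an unbiased model should score ~0
```

You cannot run that experiment on a human reviewer without them noticing, which is the asymmetry I was getting at.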
>disparate impact seems to be about the sort of requirements you are allowed to put in a job description.
That is a common example, but the doctrine is much broader than what goes in a job ad. For example, I have heard occasional rumblings that whiteboard interviews are a hiring practice that would not stand up to these laws (IANAL).
>My point was that I think algorithmic processes are more testable for these sorts of bias
Yes, this is true, but it doesn't really matter. If there is consistent discrimination happening at the margins, it will be evident in the aggregate outcomes, and if it is evident in the aggregate with no job-related justification, that is all we need. We don't need to run resumes through an algorithm to show that discrimination is happening at an individual level. We just need to show that a policy negatively impacts a protected group and that the policy is not related to job performance.
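For concreteness, the standard first-pass screen here is the EEOC's four-fifths rule: if a protected group's selection rate is less than 80% of the highest group's rate, that is generally treated as evidence of adverse impact. A minimal sketch, with made-up counts:

```python
# Four-fifths (80%) rule from the EEOC Uniform Guidelines: compare each
# group's selection rate to the highest group's rate. Counts are made up.
applicants = {"group_a": 200, "group_b": 150}  # applicants per group
hired = {"group_a": 50, "group_b": 20}         # hires per group

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    verdict = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({verdict})")
```

In practice, agencies and courts look beyond this single ratio (statistical significance, sample size, and so on), but notice that the test only needs hiring outcomes, not access to the decision process itself.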
>do you feel that existing disparate impact regulations are good at catching/preventing this kind of thing?
I think the bigger problem than the regulations is that there is an inherent bias against these types of cases actually being pursued. First, it is difficult for an individual to identify this kind of discrimination, so people often don't know when it is happening. Second, people fear the retribution that would come from pursuing it legally: nobody wants to be viewed as a pariah by future employers, so they will often simply move on even when their accusations are valid.