This sort of reminds me of the story about an HR algorithm that ended up being discriminatory because it was trained on existing/past hiring data, so it was biased toward white men.
Was it Amazon?
Anyway, this feels different to me. IIRC you can't ask disability-related questions in hiring aside from the "self-identify" types at the end? So how would an ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?
Or is that the advisory? "Don't do this?"
> how would an ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?
A few off the top of my head:
(1) Signals gained from ways that a CV is formatted or written (e.g. indicating dyslexia or other neurological variances, especially those comorbid with other physiological disabilities)
(2) If a CV reports short tenure at companies with long breaks in between (e.g. chronic illnesses or flare-ups leading to burnout or medical leave). See the sketch after this list.
(3) There are probably many unintuitive correlates w.r.t. interests, roles acquired, and skillsets. Consider which experiences, institutions, skillsets and roles are more or less accessible to disabled folk than to others.
(4) Most importantly: Disability is associated with lower education and lower economic opportunity, therefore supposed markers of success ("merit") in CVs may only reflect existing societal inequities. *
* This is one of the reasons meritocratic "blind" hiring processes are not as equitable as they might seem; they can reflect + re-entrench the current inequitable distribution of "merit".
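To make (2) and (4) concrete, here's a minimal sketch in Python. Everything is synthetic and the setup is invented for illustration: a model fit on past hiring outcomes ends up penalizing a disability-correlated feature even though no disability field exists anywhere in the training data.

    # All data below is synthetic; the correlation strengths are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000
    disabled = rng.random(n) < 0.15                      # ground truth; never a model input
    gap_months = rng.poisson(np.where(disabled, 10, 2))  # gaps correlate with disability
    skill = rng.normal(size=n)                           # independent of disability

    # Historical labels encode past human bias against employment gaps.
    hired = (skill - 0.2 * gap_months + rng.normal(size=n)) > 0

    X = np.column_stack([skill, gap_months])             # note: no disability column
    model = LogisticRegression().fit(X, hired)
    print("weight on gap_months:", model.coef_[0][1])    # comes out negative

    scores = model.predict_proba(X)[:, 1]
    print("mean score, disabled:    ", scores[disabled].mean())
    print("mean score, not disabled:", scores[~disabled].mean())

The model never sees the disability column; the gap feature carries its signal. Adding more features tends to make the proxy more diffuse, not weaker.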
>If a CV reports short tenure at companies with long breaks in between (e.g. chronic illnesses or flare-ups leading to burnout or medical leave)
This is a case where it may benefit a candidate to disclose any disabilities leading to such an erratic employment pattern. I don't proceed with candidates who cannot explain excessively frequent job hops, because it signals that they can't hold a job for reasons I'd want to avoid in a hire, like incompetence or a difficult personality. It's a totally different matter if the candidate can explain their erratic employment as due to past medical issues that have since been treated.
And what if they haven't been? Disability isn't usually a temporary thing, or even necessarily medical in nature (crucial to see disability as a distinct axis from illness!). Hiring with biases against career fluctuations is, I'm afraid to point out, inherently ableist. And it should not be incumbent on the individual to map their experienced inequities and difficulties across to every single employer.
I think the point of this guidance is that "hiring AI" is not actually intelligent and will not be able to read and understand a note about disability on a resume. It will just dumbly match date ranges to an ideal profile and throw out resumes that are too far off.
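For intuition, here's a hypothetical sketch of such a screener (names and thresholds are invented). Nothing in it ever reads free text, so a disclosed medical explanation cannot change the outcome:

    # Hypothetical date-range matcher of the kind described above.
    from datetime import date

    IDEAL_MIN_TENURE_MONTHS = 24   # invented "ideal profile" thresholds
    MAX_GAP_MONTHS = 6

    def months_between(a: date, b: date) -> int:
        return (b.year - a.year) * 12 + (b.month - a.month)

    def screen(jobs: list[tuple[date, date]]) -> bool:
        """jobs: [(start, end), ...] in chronological order. True = keep resume."""
        for start, end in jobs:
            if months_between(start, end) < IDEAL_MIN_TENURE_MONTHS:
                return False                             # flagged as a "job hopper"
        for (_, prev_end), (next_start, _) in zip(jobs, jobs[1:]):
            if months_between(prev_end, next_start) > MAX_GAP_MONTHS:
                return False                             # flagged as an "unexplained gap"
        return True

    # An 8-month stint and a 9-month break are rejected regardless of any
    # note like "medical leave, since resolved" elsewhere on the resume.
    print(screen([(date(2018, 1, 1), date(2018, 9, 1)),
                  (date(2019, 6, 1), date(2022, 6, 1))]))  # False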
>* This is one of the reasons meritocratic "blind" hiring processes are not as equitable as they might seem; they can reflect + re-entrench the current inequitable distribution of "merit".
They are not meant to be "equitable". They're meant to provide equality of opportunity, not equality of outcome.
Oh agreed! Sorry about the mixed terminology. Though they don't really provide "equality of opportunity" either :/ People w/ more privilege at the starting line will have more supposed 'merit', and therefore the CV-blindness only reflects existing inequalities from wider society. A different approach might be quotas and affirmative action.
I think the poster is arguing that the things we call merit reflect the ability to do the job well. Any system of hiring has to consider its ability to pick the best person for the job. Quotas are an open admission that we can no longer do this. Affirmative action is trickier, as some affirmative action can be useful in correcting bias and can actually improve hiring. Too much once again steers us away from the best person for the job.
This is important and tricky: if we have across-the-board decreases in hiring the best person for the job, we end up with a less productive economy. This means our hiring practices directly compete against other aims, like solving poverty.
> So how would an ML model find applicants with any kind of disability unless it was freely volunteered in a resume/CV?
In machine learning this happens all the time! Stopping models from picking up protected attributes from the most surprising sources is an active area of research. Models are far more creative at finding these patterns than we are.
It can learn that people with disabilities tend to also work with accessibility teams. It can learn that you're more likely to have a disability if you went to certain schools (like a school for the blind, even if you and I wouldn't recognize the name). Or if you work at certain companies or colleges that specialize in this. Or if you publish an article and put it on your CV. Or if you link to your GitHub and the software looks there as well for some keywords. Or if, among the keywords and skills you list, something is more likely to be related to accessibility. I'm sure these days software also looks at your LinkedIn; if you are connected with people who are disability advocates, you are far more likely to have a disability.
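A toy version of the keyword case, with invented resumes and labels: if the historical "successful hire" pool happens to contain little accessibility-related experience, those tokens pick up negative weight and quietly act as a proxy.

    # Six invented resumes; labels mimic a historically biased hire pool.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    resumes = [
        ("python backend microservices",               1),
        ("java distributed systems",                   1),
        ("golang backend kubernetes",                  1),
        ("python accessibility screen reader testing", 0),
        ("frontend aria accessibility audits",         0),
        ("screen reader tooling python",               0),
    ]
    texts, hired = zip(*resumes)

    vec = CountVectorizer()
    X = vec.fit_transform(texts)
    model = LogisticRegression().fit(X, hired)

    weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
    print(weights["accessibility"], weights["screen"])  # both come out negative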
> Or is that the advisory? "Don't do this?"
Not so easy. Algorithms learn this information internally and then use it in subtle ways. Like they might decide someone isn't a good fit and that decision may in part be correlated with disability. Disability need not exist anywhere in the system, but the system has still learned to discriminate against disabled people.
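One way this gets caught after the fact is by auditing selection rates per group, even though group membership was never a model input. A rough sketch with synthetic numbers, using the 0.8 "four-fifths" adverse-impact threshold from US employment guidelines:

    # Synthetic audit: scores come from some deployed model, group labels
    # from a separate voluntary survey (never used for training).
    import numpy as np

    rng = np.random.default_rng(1)
    disabled = rng.random(2000) < 0.15
    scores = rng.normal(loc=np.where(disabled, -0.5, 0.0))  # the learned proxy bias
    selected = scores > np.median(scores)                   # top half advances

    def adverse_impact_ratio(selected, group):
        """Selection rate within the group over the rate outside it."""
        return selected[group].mean() / selected[~group].mean()

    print(adverse_impact_ratio(selected, disabled))  # well under 0.8: a red flag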
> For example, some hiring technologies try to predict who will be a good employee by comparing applicants to current successful employees. Because people with disabilities have historically been excluded from many jobs and may not be a part of the employer’s current staff, this may result in discrimination.
> For example, if a county government uses facial and voice analysis technologies to evaluate applicants’ skills and abilities, people with disabilities like autism or speech impairments may be screened out, even if they are qualified for the job.
> For example, an applicant to a school district with a vision impairment may get passed over for a staff assistant job because they do poorly on a computer-based test that requires them to see, even though that applicant is able to do the job.
> For example, if a city government uses an online interview program that does not work with a blind applicant’s computer screen-reader program, the government must provide a reasonable accommodation for the interview, such as an accessible version of the program, unless it would create an undue hardship for the city government.