Recommending content, powering chatbots, trading stocks, detecting medical conditions, and driving cars. These are only a small handful of the most well-known uses of artificial intelligence, yet there is one that, despite being on the margins for much of AI’s recent history, is now threatening to grow significantly in prominence. This is AI’s ability to classify and rank people, to separate them according to whether they’re “good” or “bad” in relation to certain purposes.
This risk was highlighted at the end of September, when it emerged that an AI-powered system was being used to screen job candidates in the U.K. for the first time. Developed by the U.S.-based HireVue, it harnesses machine learning to evaluate the facial expressions, language and tone of voice of job applicants, who are filmed via smartphone or laptop and quizzed with an identical set of interview questions. HireVue’s platform then filters out the “best” applicants by comparing the 25,000 pieces of data taken from each applicant’s video against those collected from the interviews of existing “model” employees.
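HireVue doesn't publish how its model actually works, but the ranking step described here – reducing each applicant's video to thousands of data points and comparing them against those of existing "model" employees – can be illustrated with a toy sketch. The Python below is purely hypothetical and is not HireVue's algorithm: the feature vectors, the cosine-similarity metric, and the function name rank_candidates are all assumptions made for illustration.

```python
import numpy as np

def rank_candidates(candidate_features, model_employee_features):
    """Rank candidates by cosine similarity to the average
    "model employee" feature vector (a hypothetical stand-in for
    whatever proprietary scoring a vendor actually uses).

    candidate_features: dict of candidate name -> 1-D feature vector,
        e.g. numbers derived from facial expression, word choice, tone.
    model_employee_features: 2-D array, one row per existing employee.
    """
    # Average profile of the existing "model" employees, normalised.
    target = model_employee_features.mean(axis=0)
    target = target / np.linalg.norm(target)

    scores = {}
    for name, vec in candidate_features.items():
        vec = np.asarray(vec, dtype=float)
        scores[name] = float(vec @ target / np.linalg.norm(vec))

    # Highest similarity first -- the "best" applicants under this metric.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)


if __name__ == "__main__":
    # Toy vectors standing in for the ~25,000 data points per video.
    employees = np.array([[0.9, 0.8, 0.7],
                          [0.8, 0.9, 0.6]])
    candidates = {"A": [0.85, 0.80, 0.65],
                  "B": [0.20, 0.30, 0.90]}
    print(rank_candidates(candidates, employees))
```

The point the sketch makes is the one the article worries about: whoever most resembles the existing employees scores highest, so any bias already baked into that "model" group is simply reproduced.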
So this was a curious quote from the piece – "It is going to favour people who are good at doing interviews on video and any data set will have biases in it which will rule out people who actually would have been great at the job." I believe that is a real-world bias too: examinations and tests select the people who are good at taking examinations and tests, and won't necessarily surface people who would actually be good at the vocation. The same goes for interviews; nervous, stumbling interviewees don't present well, even though they might have been the perfect candidate. I think we might be going too far in our fear of bias in A.I.
Artificial Intelligence Has Become A Tool For Classifying And Ranking People by Simon Chandler at Forbes