The data, numbers, and strings of complex code that make up algorithms are indecipherable to those not fluent in coding languages. It is easy to forget, however, that algorithms are not immaculately conceived.
Layered within the complex code that fuels algorithms is human bias.
A nationally representative Pew Research survey published Thursday on how the public perceives computer algorithms found that “58% of Americans feel that computer programs will always reflect some level of human bias – although 40% think these programs can be designed in a way that is bias-free.”
Aaron Smith, the report’s author and associate director of research on internet and technology issues at Pew, said that the degree of skepticism varies largely depending on the context.
For instance, the report finds that 50% of Americans think it is fair to use an algorithm to determine whether a convicted felon should be eligible for parole. Meanwhile, when it comes to calculating an individual’s personal finance score, which can be used to set loan interest rates, only 37% of Americans deem the use of algorithms fair.
“If you dig beyond the skepticism, there is a nuance that affects people’s attitudes toward algorithms,” Smith said. “Particularly with fairness, it really depends on the specific context.” He added that with a personal finance score, more people “can picture themselves in that process. Contrast that with different experiences: not everyone has been on parole; that’s something that people might not even think of having.”
On top of algorithms’ ability to screen potential applicants out, they can also be used to gauge the behavior of recruiters reviewing job applications, said Miranda Bogen, a senior policy analyst at Upturn, a nonprofit organization based in Washington, D.C., that advocates for policies promoting equal opportunity. “Some tools are looking for whether a recruiter gives a thumbs up or a thumbs down, and they’ll show more candidates like the ones they gave the thumbs up to,” she said.
“The danger behind relying on algorithms in hiring is they can reflect human biases.”
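The feedback loop Bogen describes can be sketched in a few lines. This is an illustrative toy, not any vendor's actual tool: the candidates, features, and similarity measure are all invented for the example.

```python
# Sketch of a recruiter-feedback loop: the tool learns from thumbs-up
# decisions and surfaces similar candidates next. All data is invented.

def similarity(a, b):
    """Jaccard similarity between two candidates' feature sets."""
    return len(a & b) / len(a | b)

candidates = {
    "cand_1": {"java", "big_ten_school", "lacrosse"},
    "cand_2": {"python", "state_school", "chess"},
    "cand_3": {"java", "big_ten_school", "rowing"},
}

# Profile the recruiter gave a thumbs up to:
thumbs_up = [{"java", "big_ten_school", "football"}]

# Rank remaining candidates by similarity to liked profiles. The tool
# amplifies whatever pattern the recruiter (consciously or not) rewarded.
ranked = sorted(
    candidates.items(),
    key=lambda kv: max(similarity(kv[1], liked) for liked in thumbs_up),
    reverse=True,
)
print([name for name, _ in ranked])  # cand_1 and cand_3 outrank cand_2
```

If the recruiter's thumbs-up choices reflect a bias, the ranking quietly encodes it: cand_2, who shares no features with the liked profile, sinks to the bottom regardless of qualifications.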
To mitigate the potential for human biases to influence algorithms, Bogen encourages more companies to look into the tools they are using and to test them to see whether they treat candidates differently. “It really requires careful attention to how these algorithms are built and deployed,” she said. “I think we are missing external validation to ensure whether there is bias.”
Recently, Reuters reported that an experimental recruiting algorithm Amazon AMZN created favored male candidates over female candidates. Because the algorithm penalized resumes that included the word “women’s,” men’s resumes had a better chance of being selected for an interview.
Amazon said the experimental algorithm was never actually used to screen candidates, according to the report. The company declined to respond to a request for comment.
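The mechanism Reuters described can be illustrated with a toy resume scorer. To be clear, this is not Amazon's actual model: the penalty value, base score, and resumes are invented for the sketch.

```python
# Toy illustration of how a single penalized keyword skews screening.
# The weights and resumes below are invented; this is not any real model.

PENALIZED_TERMS = {"women's": -2.0}   # a pattern learned from
                                      # male-dominated training data
BASE_SCORE = 5.0

def score_resume(text):
    """Score a resume, applying penalties for flagged terms."""
    score = BASE_SCORE
    for term, penalty in PENALIZED_TERMS.items():
        if term in text.lower():
            score += penalty
    return score

resume_a = "Captain of chess club; software internship"
resume_b = "Captain of women's chess club; software internship"

print(score_resume(resume_a))  # 5.0
print(score_resume(resume_b))  # 3.0: identical experience, lower score
```

Two resumes describing the same experience end up with different scores, which is exactly the kind of disparity that external testing of the sort Bogen advocates is meant to surface.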