On gender-neutrality of machine learning recruiting engines

On October 8th, 2018, Reuters published an article about Amazon scrapping a secret AI recruiting tool that showed bias against women. The story quickly trended on Hacker News and pushed the issue into the spotlight.

Reuters reported that Amazon’s machine learning models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most of these submissions came from men, reflecting the male dominance of the tech industry. “The system taught itself that men were preferable and penalized resumes that included the word ‘women’s,’ as in ‘women’s chess club captain.’” Though the programs were edited to be neutral to these gender-specific terms, there was no guarantee that the learning algorithms would not devise other discriminatory proxies. The project was later cancelled.
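To make the mechanism concrete, here is a minimal sketch (not Amazon’s system) of how a linear text classifier picks up a biased signal: in this fabricated toy dataset, resumes containing the token “womens” were historically rejected, so the model learns a negative weight for that token. The resumes, labels, and the choice of logistic regression are all illustrative assumptions.

```python
# Toy illustration: a classifier trained on biased hiring outcomes
# learns to penalize a gendered token. Data is fabricated.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical resumes; in this biased sample, every resume that
# mentions "womens" happened to be rejected (label 0).
resumes = [
    "software engineer chess club captain",          # hired
    "backend developer hackathon winner",            # hired
    "machine learning intern robotics club",         # hired
    "software engineer womens chess club captain",   # rejected
    "frontend developer womens coding society",      # rejected
    "data analyst womens hackathon organizer",       # rejected
]
labels = [1, 1, 1, 0, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, labels)

# The learned coefficient for "womens" is negative: the model
# penalizes any resume containing that word.
weight = clf.coef_[0][vec.vocabulary_["womens"]]
print(f"learned weight for 'womens': {weight:.3f}")
```

Deleting the word “womens” from the vocabulary would not fix this: the model would simply shift weight onto correlated tokens (here, “society” or “organizer”), which is exactly the “other discriminatory ways” problem Reuters described.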

In retrospect, the male-preferring patterns learned by Amazon’s algorithm are a testament to how our world (subtly) shows bias against women. Yet this bias is not entirely unmotivated. During my senior year at McGill, some of my graduate-level Computer Science classes just felt odd for a reason I couldn’t place. It took me a week or two to realize it was because the classroom was 100% men. The scarcity of women taking graduate-level Computer Science courses may be one factor behind their being less preferred.

In a world where emphasizing gender-neutrality is politically correct, having this bias pointed out by a machine is an irony that Amazon just wasn’t ready to accept.

Today, gender bias is present in major NLP systems (Rudinger et al., 2018); after all, they are trained on gender-biased inputs. It will be very interesting to see how we tackle this problem in the next few years.