With the growing application of Artificial Intelligence (AI), there is a risk that existing biases and equity gaps will be magnified, as algorithms often reflect the implicit biases of their creators.
The quality and quantity of the inputs we provide to AI directly determine its outputs, and those inputs are constrained by human choices about what data to include. Without a robust and inclusive data set, AI can exacerbate inequities as it becomes more prevalent.
This is exemplified by Amazon’s experimental AI resume-screening tool. The software was trained on the resumes of Amazon’s existing engineers, who were predominantly male, so it recruited candidates who replicated the existing workforce. It penalized resumes containing words such as “women,” as in “women’s soccer team,” while awarding additional points for words found predominantly on the resumes of men, such as “captured” and “executed.” This example illustrates that AI is only as robust as the data provided, and failing to supply an inclusive data set can have major repercussions for the inclusion of diverse genders, races, and sexual orientations. Amazon’s programmers made many failed attempts to correct the bias, and the software was ultimately discarded. Recruiting tools with similar fundamental flaws are currently in use at hundreds of organizations.
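The mechanism described above can be sketched in a few lines of code. This is a hypothetical toy model, not Amazon’s actual system: it assumes a naive scorer that weights each resume word by how often it appears among past hires versus past rejections. Because the historical hires are skewed male, words correlated with women end up with negative weights.

```python
# Toy illustration (NOT Amazon's actual algorithm): a naive word-weight
# scorer trained on a skewed historical data set learns to penalize
# words correlated with the underrepresented group.
from collections import Counter

# Hypothetical training data: past hires were predominantly male.
hired = [
    "executed project captured market chess club",
    "executed strategy captured clients football team",
    "executed roadmap captured revenue",
]
rejected = [
    "led outreach women's soccer team",
    "organized women's chess club volunteer",
]

def word_weights(pos_docs, neg_docs):
    """Weight each word by: (count among hires) - (count among rejections)."""
    pos = Counter(w for d in pos_docs for w in d.split())
    neg = Counter(w for d in neg_docs for w in d.split())
    return {w: pos[w] - neg[w] for w in pos.keys() | neg.keys()}

def score(resume, weights):
    """Sum the learned weights of the words in a new resume."""
    return sum(weights.get(w, 0) for w in resume.split())

weights = word_weights(hired, rejected)
# "executed" and "captured" earn positive weight; "women's" is penalized.
print(score("executed captured analytics", weights))   # positive score
print(score("women's soccer team captain", weights))   # negative score
```

The scorer never sees a gender label; the bias enters purely through which resumes the historical data contains, which is exactly why curating inputs matters.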
AI algorithms will replicate and accentuate the biases prevalent in society; if their inputs are not carefully curated and monitored, prejudices will be magnified, leading to a lack of opportunity for, and the marginalization of, minority groups.
By: Jasleen Grewal