Moving beyond “algorithmic bias is a data problem”
A surprisingly sticky belief is that a machine learning model merely reflects bias already
present in its training data and does not itself contribute to harm. Why, despite
clear evidence to the contrary, does the myth of the impartial model still hold allure
for so many within our research community? Algorithms are not impartial, and some
design choices are better than others. Recognizing how model design shapes harm opens
up mitigation techniques that are far less burdensome than comprehensive data collection.