These systems have the potential to make government more efficient by rapidly processing large volumes of information — like, say, DNA samples from crime scenes. But if these systems are implemented poorly, they can also introduce bias along racial, gender, and class lines, exacerbating societal inequalities. And while researchers have shown that AI can be biased at an aggregate level, the victims of these biases rarely know when it is happening to them.
How do we even begin to imagine alternative conceptions of AI that are geared toward reparation rather than bias?
Read more in “New York City wanted to make sure the algorithms are fair” on Recode.