Bias in AI and New York City’s governance

Stanley Zimmy:

Though many people might not realize it, complicated algorithms, including AI in some cases, help cities like New York make decisions every day about everything from where kids should go to school to who should receive extra screening from police and which neighborhoods should have more fire stations.

These systems have the potential to make government more efficient by rapidly processing large volumes of information — say, DNA samples from crime scenes. But if they are implemented poorly, they can also introduce bias along racial, gender, and class lines, exacerbating existing societal inequalities. And while researchers have shown that AI can be biased at an aggregate level, the individual victims of these biases often don't know when it is happening to them.

How do we even begin to imagine alternative conceptions of AI that are geared towards reparation, as opposed to bias?

Read more in “New York City wanted to make sure the algorithms are fair” on Recode