Are AI investors shorting Black lives?

Artificial intelligence often doesn’t work the same for Black people as it does for white people. Sometimes it’s a matter of vastly different user experiences, as when voice assistants struggle to understand Black voices. Other times, such as when cancer detection systems don’t account for race, it’s a matter of life and death. So whose fault is it? Setting aside intentionally malicious uses of AI software, such as facial recognition and crime prediction systems for law enforcement, we can assume the problem is bias. When we think about bias in AI, we’re usually reminded of incidents such as…

This story continues at The Next Web
