Solutions to Biased Artificial Intelligence May Include Establishing a New Social Network

by Vins

During a November 2021 episode of the Electronic Frontier Foundation podcast, hosts Cindy Cohn and Danny O’Brien spoke with the director of Stanford’s Program on Platform Regulation, Daphne Keller. Keller shared how faulty automated content moderation is shaping platforms like Facebook, Twitter, and Google. For instance, artificial intelligence may flag counter-speech, speech that directly opposes hate speech, as potentially harmful content while allowing misinformation campaigns to flourish.

“The sheer scale of moderation on Facebook for example means that they have to adopt the most reductive, non-nuanced rules they can in order to communicate them to a distributed global workforce,” said Keller.

Increasingly, researchers are seeing content moderation become an issue not only of censorship but also of inequality. According to a 2019 study highlighted by Shirin Ghaffary for Vox, tweets written by Black people were 1.5 times more likely than other tweets to be identified as “hateful” or “offensive” by the top hate-speech detection technologies powered by artificial intelligence (AI). Tweets written in African-American Vernacular English were 2.2 times more likely to be flagged.

Moreover, while artificial intelligence has made remarkable strides, bots still cannot understand context. Many of the errors made by these content moderation systems, Keller said, can be attributed to duplicate detection. For example, an image that appears in an internet space for terrorist recruitment might get quickly flagged. But if that same image later surfaces in a community dedicated to counter-speech, it will still get flagged, and so, too, will the user. There is simply no way for the AI to see the difference.
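To make the failure concrete, here is a minimal sketch of hash-based duplicate detection, assuming a simple fingerprint blocklist. The function names are illustrative, and a plain SHA-256 digest stands in for the perceptual hashes real systems use; this is not any platform's actual pipeline. The point is structural: the lookup never consults where the image appears.

```python
import hashlib

# Hypothetical blocklist of fingerprints of previously flagged images.
# Real systems use perceptual hashing; a plain SHA-256 digest stands in
# here to keep the sketch self-contained.
FLAGGED_FINGERPRINTS: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Reduce an image to a fixed fingerprint for duplicate lookups."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_image(image_bytes: bytes) -> None:
    """Add an image's fingerprint to the blocklist."""
    FLAGGED_FINGERPRINTS.add(fingerprint(image_bytes))

def moderate(image_bytes: bytes, context: str) -> bool:
    """Return True if the image is blocked.

    Note: `context` is accepted but never consulted -- the decision is a
    pure duplicate match, which is the failure mode Keller describes.
    """
    return fingerprint(image_bytes) in FLAGGED_FINGERPRINTS

# A recruiting image is flagged once...
recruiting_image = b"<recruitment image bytes>"
flag_image(recruiting_image)

# ...and is then blocked everywhere, even when re-shared as counter-speech.
print(moderate(recruiting_image, context="terrorist recruitment forum"))  # True
print(moderate(recruiting_image, context="counter-speech community"))     # True
```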

Cohn believes one potential way to bypass these machine errors entirely is to develop something called adversarial interoperability, or competitive compatibility (ComCom).

“(ComCom) is the idea that users can have systems that operate across platforms,” Cohn said. “So for example you could use a social network of your choosing to communicate with your friends on Facebook without you having to join Facebook yourself.”
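As a rough sketch of what that could look like in practice, the toy code below bridges a user's chosen network to a second platform through a hypothetical minimal interface. The `SocialPlatform` protocol, its methods, and `InteropClient` are all assumptions for illustration, not any real platform's API.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Message:
    author: str
    text: str

class SocialPlatform(Protocol):
    """Hypothetical minimal surface a platform would need to expose."""
    def fetch_messages(self, user: str) -> list[Message]: ...
    def post_message(self, user: str, text: str) -> None: ...

class InteropClient:
    """A third-party client that bridges a user's home network to another
    platform, so they never have to join the second platform directly."""

    def __init__(self, home: SocialPlatform, remote: SocialPlatform):
        self.home = home
        self.remote = remote

    def sync(self, user: str) -> None:
        # Pull friends' posts from the remote platform into the user's
        # chosen network; replies could be pushed back the same way.
        for msg in self.remote.fetch_messages(user):
            self.home.post_message(user, f"[via remote] {msg.author}: {msg.text}")
```

Any real bridge would also have to handle authentication, rate limits, and moderation, which is where the concerns raised below come in.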

This unfettered access to online discourse is reminiscent of what the internet used to look like. However, with great freedom online comes great responsibility and even greater risk. Keller wonders whether this could simply cause existing issues, like misinformation and hate speech, to spiral out of control. There are other concerns to consider as well, such as labor issues, content moderation costs, and online privacy. But beyond that, Keller said, there are questions about the basic capabilities of application programming interfaces (APIs), which may not be able to handle the massive amount of data that must be processed and organized almost instantly.

Cohn maintains there is still hope for a middleware experiment, in which third-party services would handle content moderation between users and the big platforms. Platforms like Reddit have navigated accountability while still supporting freedom of expression.

“I think that one of the keys to this is to move away from this idea that five big platforms make this tremendous amount of money,” said Cohn. “Let’s spread that money around by giving other people a chance to offer services.”

Corporate media outlets like the New York Times, the Washington Post, and USA Today have certainly begun to cover the ways in which AI can be biased against marginalized groups on major platforms like Facebook. But none of this coverage discusses potential solutions to these systemic issues, such as ComCom or middleware services.

Source: Cindy Cohn and Danny O’Brien, interview with Daphne Keller, Electronic Frontier Foundation, podcast audio, November 30, 2021.

Student Researcher: Erick Duran (Diablo Valley College)

Faculty Evaluator: Mickey Huff (Diablo Valley College)
