Algorithmic Bias

When we think about computer programs, it is important to acknowledge the ongoing debates about how algorithms shape user experience. Some would argue that computers are excellent tools to counter prejudice in research, as machines do not inherently possess the same kinds of bias as humans do. However, those same machines are programmed by humans. Whatever bias a human programmer holds can shape how software is created and can be replicated in the data the machine produces, a problem scholars refer to as algorithmic bias.

It is possible for algorithmic bias to be intentional, but it can also be an unintentional result of developers inadequately questioning the assumptions they use when creating computer programs. Humans can have blind spots about the consequences of certain design decisions, and the data that machines produce as a result can harm certain populations. For instance, a program might try to gather data on ethnicity and race and offer three options: White, Black, or Hispanic. Such options would fail to acknowledge other ethnicities or other ways to self-identify, and these limited options also fail to allow someone to identify in multiple ways, for example as both Black and Hispanic or as both White and Hispanic. Similar problems occur with binary gender questions, allowing only male or female as a response. These limited response choices erase other identities, and they reveal ignorance and thoughtlessness on the part of the developers.3 That thoughtlessness can cause real harm to groups of people.
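To make the design choice concrete, here is a minimal sketch, not drawn from any particular project, contrasting a restrictive demographic schema with one that allows multiple selections and self-description. All field names and option lists are illustrative assumptions, not a recommended standard.

from dataclasses import dataclass, field
from typing import List, Optional

# Restrictive design: a single forced choice from a short, fixed list erases
# anyone who identifies with several categories or with none of them.
RESTRICTIVE_RACE_OPTIONS = ["White", "Black", "Hispanic"]

@dataclass
class RestrictiveRespondent:
    race: str    # one choice only; cannot record multiple identities
    gender: str  # "male" or "female" only

# More inclusive design: any number of selections, an open-ended
# self-description, and the option to decline to answer.
@dataclass
class InclusiveRespondent:
    races_ethnicities: List[str] = field(default_factory=list)
    self_described_identity: Optional[str] = None  # free-text self-identification
    gender: Optional[str] = None                   # open field, not a binary
    prefers_not_to_say: bool = False

# A respondent who identifies as both Black and Hispanic can only be
# recorded accurately in the second schema.
respondent = InclusiveRespondent(races_ethnicities=["Black", "Hispanic"], gender="non-binary")
print(respondent)

The point of the sketch is not the particular field names but the structural decision: whether the data model itself permits people to describe who they are.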

Sometimes examples of bias are straightforward and easy to identify from both user and programmer perspectives, but often bias is far more insidious. It can be difficult for a developer or a tester to see until one notices that the results produced by a complex algorithm seem to favor one group over another. Safiya Umoja Noble explored the way search engines produce results that discriminate against Black women in Algorithms of Oppression: How Search Engines Reinforce Racism.4 Noble discussed how search results and automatic suggestions reproduce stereotypical attitudes toward groups of people. Similarly, Ruha Benjamin has compared this kind of bias to the Jim Crow laws that once enforced racial segregation in the United States, calling it “the new Jim Code.”5

Machine learning and Artificial Intelligence (AI) are also places to watch for algorithmic bias. Programs that try to learn about human behavior do not always replicate humans’ best qualities. For instance, programs that try to target ads at online users often miss the mark, sometimes offensively. A notable past failure was Microsoft’s TayTweets AI chatbot, which took only 24 hours from its launch on Twitter in 2016 to begin tweeting racist and offensive comments to users as it learned from others how to interact online.6

As you employ technology in your digital humanities projects, it is important to think about how predesigned programs may already contain bias. Consider how you might adapt such programs to do better or whether you need to use a different tool. If you are designing your own program, take time to interrogate what bias your team may inadvertently be introducing, and consider discussing your work with outside consultants or beta testers to help you think through issues of bias.
