When thinking about machine learning, many librarians cast their minds to what they have read in dystopian literature. While titles like I, Robot and 2001: A Space Odyssey can paint a terrifying picture of sentient computers, the reality is much more mundane. Artificial intelligence is a way of programming machines so that, rather than performing fixed, repetitive tasks, they adapt what they do to new situations based on data.
For example, Siri will learn that when you say, "directions home," you mean your primary residence, not a restaurant up the street named "Home Cooking." It will learn which songs you prefer based on the ones you skip. Amazon can predict which books you'll like based on the books you've purchased or rented previously. These training sets help the machine predict the user's subsequent behaviors and needs. This is machine learning. It cannot empathize, have feelings, or make moral judgments; it is merely predictive. It makes decisions based on a set of data. The end. It is neither good nor evil, but its impacts do have moral substance.
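To see how unglamorous this kind of prediction really is, here is a minimal sketch in Python. The listening history and the skip-rate rule are invented for illustration; commercial services use far more elaborate models, but the principle is the same: tally past behavior, then guess.

```python
from collections import Counter

# Hypothetical listening history a device might log: (genre, skipped?) pairs.
history = [
    ("pop", True), ("pop", True), ("jazz", False),
    ("jazz", False), ("pop", True), ("rock", False),
]

# "Training" here is nothing more than tallying past behavior.
plays = Counter(genre for genre, _ in history)
skips = Counter(genre for genre, skipped in history if skipped)

def predict_skip(genre: str) -> bool:
    """Guess whether the user will skip a song of this genre.
    No empathy, no judgment -- only a ratio computed from data."""
    if plays[genre] == 0:
        return False  # no data yet, so no basis for a guess
    return skips[genre] / plays[genre] > 0.5

print(predict_skip("pop"))   # True: this listener usually skips pop
print(predict_skip("jazz"))  # False: this listener stays for jazz
```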
The place we most often see machine learning is in our Internet searches. We've all gone shopping for a comfortable pair of school shoes only to have ads from every orthopedic shoe maker on the globe appear in the sidebar of every other site we visit for weeks. Sometimes helpful; sometimes annoying. Search engines learn what we'd like to see based on our online purchases, our searches, and our location. They also use the searches and social media interactions of others. In this way, racist, sexist, and anti-science sites and searches often pop up, even when unwarranted and unwanted by the user, accelerating the spread of misinformation.
Christopher Hunt, art director at Ogilvy & Mather Dubai, created an advertising campaign for the United Nations in which real Google search suggestions were pasted over the mouths of women of color. The suggestions included "Women should: stay at home, be slaves, stay in the kitchen" and "Women should not: have rights, vote, work, box" (Griner 2013; Noble 2018). It is disturbing. But that campaign ran in 2013. Six years later, the problem has proliferated online in a terrifying way.
To combat this problem, search engines are moving away from simple algorithms and toward neural networks. According to Jason Griffey, "Rather than reporting decisions in simple binary on-or-off states, neural nets collectively pass along 'weights' of decisions from one to the other, making best guesses as they process data, in a way that is modeled after biological processes" (Griffey 2019). In recent years, this more complex system of guesses has produced more accurate guesses the more we interact with our machines. It has also created a system in which interaction with websites, search terms, and social media posts pushes them higher in the results, which spreads misinformation by feeding confirmation bias. For example, shortly after the October 2017 concert shooting in Las Vegas, a 4chan post identifying the shooter as "Reportedly a Democrat Who Liked Rachel Maddow, MoveOn.org and Associated with Anti-Trump Army" was amplified by the far-right opinion outlet The Gateway Pundit. The post spread quickly and widely because Twitter, Google News, and Facebook use algorithms weighted toward content with greater prior engagement, even though the information in both the post and the Gateway Pundit story was false (Meserole 2018). The shooter was, in fact, 64-year-old Stephen Paddock, who acted alone, had no such associations, and whose motives remain a matter of speculation (Romo 2018). In this case, the more people interacted with the false claims, the higher those claims climbed in the rankings, regardless of veracity.
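The dynamic is easy to reproduce in miniature. The toy ranker below, with invented posts and share counts (it is not any platform's actual algorithm), scores content purely by prior engagement; notice that accuracy never enters the calculation.

```python
# Invented posts with share counts; the "accurate" flag is shown only for the
# reader -- the ranker never looks at it, which is precisely the problem.
posts = [
    {"headline": "Shooter was anti-Trump activist (unverified rumor)",
     "shares": 90_000, "accurate": False},
    {"headline": "Police have not yet identified a suspect",
     "shares": 4_000, "accurate": True},
]

# A toy engagement-weighted ranker: the score is prior interaction, nothing else.
ranked = sorted(posts, key=lambda p: p["shares"], reverse=True)

for post in ranked:
    print(f'{post["shares"]:>6}  {post["headline"]}')

# The false rumor ranks first because nothing in the score measures veracity,
# and a top slot attracts still more shares -- a feedback loop.
```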
While search engines are attempting to improve their results and tech companies are making some effort to curb the spread of misinformation with more inclusive training sets, one of the greatest tools we have against the misinformation the filter bubble creates is us.
Talking with students about the filter bubble, and about how machine learning shapes both their everyday routines and their opinions, is critically important. Understanding how the machines in our lives help or hinder us is an essential skill as machine learning evolves.
Cambridge University's Bad News Game (http://getbadnews.com/), by Sander van der Linden, is a role-playing game that helps students understand how fake news proliferates online.
If your school uses G Suite for Education, Google states that student data is not sold to advertisers, in accordance with FERPA and COPPA.
The Media Bias Chart (https://www.adfontesmedia.com/) is beautifully designed to spark conversation about media bias. It helped me understand how confirmation bias was happening to me. Now, if I check the news on MSN, I make sure to also visit Fox News in the same browser to balance out my search results.
We all have the ability to turn off prediction services on our devices; the option is usually found under privacy settings.
As this technology evolves, it will be important both to educate ourselves about it and to educate our students. They are entering a world in which technology will increasingly consist of "black box" neural nets and obfuscated processes. Knowing how data sets are created and deployed, and how that shapes what we learn, is critical to having an educated citizenry. As G.I. Joe often says, "knowing is half the battle."
Griffey, Jason. Artificial Intelligence and Machine Learning in Libraries. Library Technology Reports. ALA TechSource, 2019.
Griner, David. "Powerful Ads Use Real Google Searches to Show the Scope of Sexism Worldwide." AdWeek (October 18, 2013). https://www.adweek.com/creativity/powerful-ads-use-real-google-searches-show-scope-sexism-worldwide-153235/.
Meserole, Chris. "How Misinformation Spreads on Social Media—And What to Do about It." Order from Chaos blog (May 9, 2018). https://www.brookings.edu/blog/order-from-chaos/2018/05/09/how-misinformation-spreads-on-social-media-and-what-to-do-about-it/.
Noble, Safiya Umoja. Algorithms of Oppression: How Search Engines Reinforce Racism. NYU Press, 2018.
Romo, Vanessa. "Police End Las Vegas Shooting Investigation; No Motive Found." NPR (August 3, 2018). https://www.npr.org/2018/08/03/635507299/las-vegas-shooting-investigation-closed-no-motive-found.