High on my list of things I dislike doing is grocery shopping—especially with two young children. So when my local Target announced grocery pickup, I immediately created an account and began using the service. I was saving time by not being in the grocery store and money by not grabbing things I didn't need as I walked down the aisles. I could have a recipe open in one tab and the Target website open in another, allowing me to buy the exact ingredients I needed. After just a few orders, the website began to remember my preferences. Instead of having to search by keyword or browse through the categories, a custom landing page would show my frequent purchases so I could quickly add them to my cart. The website even saved me from having to make another trip by suggesting I purchase some pancake syrup in addition to the pancake mix that was in my cart. Those ninety minutes each week that had been spent in the crowded grocery store were now free for other things, like going to the gym. (Although let's be real, I was definitely not using those ninety minutes to go to the gym.) This service was saving me an incredible amount of time. It was so personalized that I barely had to think about what I needed.
While this example shows how artificial intelligence and big data can make our lives more efficient and convenient, these technologies are being applied far beyond improved shopping experiences, and the implications are vast.
But how do companies like Target, Facebook, and Google know what you want to see? While you may never explicitly hand these companies your information, your habits and other activities speak volumes. Google trackers can be found on 75% of the top million websites (Pariser 2011). These trackers collect vast amounts of information about you, all in the name of selling an item or selling an opinion: everything from location, gender, family income, ethnicity, dietary habits, medical history, the type of computer you're using, the time it takes you to get to work, how late you stay awake, and so much more. Google Takeout allows you to download all of the information Google has stored on you, including data from Google-owned products such as YouTube, Google Hangouts, Gmail, and Google Maps. My request took several hours to produce and resulted in a 68GB file. That's the equivalent of 38 million Word documents' worth of data.
Did You Know?
- Private browsing modes stop your browser history from being recorded on your computer but do not prevent the websites you visit from collecting information, such as search terms, about you ("Is Private Browsing Really Private?" 2017).
- According to an engineer at Google, even when you're logged out, Google is still able to use fifty-seven points of data to tailor your experience (Pariser 2011).
- Facebook's privacy settings allow you to opt-out of seeing targeted ads, but they don't stop the collection of data in the first place ("About Facebook Ads" 2019).
And Yet We Continue
So why do we, as consumers, willingly give up this information? As in my example above, this technology does have positive implications. If I search Google for "invasive species," Google may use my location to tailor the results to invasive species in Pennsylvania (where I live). I may get an alert on my phone about a car accident on my normal route to work. And yes, Target can remind me to get pancake syrup.
The other reason we give this information to companies is because we aren't consciously aware we're doing it. Yet just because we aren't physically filling out forms or handing over files with our personal information doesn't mean companies aren't collecting it.
But Why Does It Matter?
By now I'm sure you've made the connection to the filter bubble and its implications for information and media literacy. While we typically hear the terms "customized" and "personalized" in a positive light, when we are seeking information those customizations can actually translate into bias and ignorance. In what Eli Pariser termed "the filter bubble," the algorithms used to provide targeted search results and tailored news feeds create a distorted and divided reality by revealing only one side of the story—the side you want to see (2011).
Big data and the filter bubble have a reach far beyond our personal and political lives. In a time when many schools are 1:1 or BYOD, the implications of data mining and tailored web experiences are far too big an issue to ignore. Yet, with little control over these practices, we can only combat the issue by becoming more aware of its occurrence. Unfortunately, there are no pre-packaged programs or tried-and-true strategies for teaching in the age of the filter bubble. However, that awareness can prompt librarians to shift some of their practices to address its implications.
In a single day, consider the following:
- How many times a student logs into Google
- How many times a student uses a search engine
- How many different devices a student uses (personal and school owned)
- How many different apps a student accesses and logs into with single sign-on options (like Google)
- How many times a student accesses social media
- How many photos a student takes
Think of all the information being gathered through these activities and how that data may affect the way our students find (or are served) information. We have trained students to focus so intently on what they see (by teaching source evaluation and showing them credible websites) that they don't spend enough time considering what they aren't seeing (the other side of the story). We become so focused on determining the best option from the results and news feeds that have been tailored to us that we aren't considering all the information that never appeared at all. It's time to change—or perhaps start—the conversation.
The filter bubble not only poses a problem for finding unbiased information; it is also a privacy concern. Privacy is a longstanding value held by librarians and remains a key issue advocated by the American Library Association.
Knowledge Is Power. How to Teach It.
In the same way food documentaries can kickstart healthy eating habits, an awareness of how information is collected and used is a first step in addressing the implications of the filter bubble. Here are some resources you might not know about that can help start those conversations.
Eli Pariser — https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles
The TED Talk "Beware Online 'Filter Bubbles'" and book The Filter Bubble: How the New Personalized Web Is Changing What We Read and How We Think. Pariser gave his TED Talk in 2011, which should be some indication that he was far ahead of the curve in terms of discussing the implications of the filter bubble.
Harvard's Project Implicit — https://implicit.harvard.edu/implicit/
This resource includes a series of assessments aimed at identifying thoughts and feelings outside of one's conscious awareness and control. It is a great tool for helping your older students or your staff become more aware of the biases they may hold.
Checkology from the News Literacy Project — https://checkology.org/
Checkology is an online suite of tools aimed at making students better users of information. With lessons on algorithms, bias, identifying hoaxes, and more, it's a comprehensive platform for addressing the many facets of media literacy.
Knowing the Data that Is Shared
One of the issues mentioned earlier is that we are not aware of all the data we unintentionally and unknowingly share. Sometimes seeing the data helps us recognize its significance.
- Download the data stored about you. Both Google and Facebook allow users to download the data collected on them. No, this doesn't erase it, but the 68GB of data that Google had on me certainly put things in perspective.
- Track the trackers. Browser extensions like Lightbeam and Privacy Badger allow you to identify and disable third-party trackers.
Mozilla's Teach the Web and Google's Be Internet Awesome — https://learning.mozilla.org/en-US/ and https://beinternetawesome.withgoogle.com/
Both organizations provide information for teaching how the Web works. Yes, I recognize the irony, but they both have some excellent resources and activities.
Choose Privacy Every Day — https://chooseprivacyeveryday.org/
Each year ALA and other organizations celebrate #chooseprivacy week in May. This website hosts resources to promote privacy and confidentiality year-round. The site includes programming and lessons for elementary school through adults.
Keep Learning
Since the writing of this article, technology has changed, privacy terms have been updated, and I've shared at least another gigabyte's worth of data with Google. Stay up to date with these changes by following the blogs of organizations that promote privacy, such as the Electronic Frontier Foundation, DuckDuckGo, and the Mozilla Foundation.
Works Cited
"About Facebook Ads." Facebook.com. https://www.facebook.com/ads/about/. Accessed July 2019.
"Are Ads Costing You Money?" DuckDuckGo Blog. March 7, 2017. https://spreadprivacy.com/ads-cost-you-money/.
Cyphers, Bennett. "A Guided Tour of the Data Facebook Uses to Target Ads." Electronic Frontier Foundation, January 24, 2019. https://www.eff.org/deeplinks/2019/01/guided-tour-data-facebook-uses-target-ads.
"Data Policy." Facebook.com. https://www.facebook.com/privacy/explanation. Accessed July 2019.
"Is Private Browsing Really Private?" DuckDuckGo Blog. January 27, 2017. https://spreadprivacy.com/is-private-browsing-really-private/.
Pariser, Eli. "Beware Online 'Filter Bubbles.'" TED Talks. TED.com, 2011. https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.
Valentino-DeVries, Jennifer, Jeremy Singer-Vine, and Ashkan Soltani. "Websites Vary Prices, Deals Based on Users' Information." Wall Street Journal, December 24, 2012. https://www.wsj.com/articles/SB10001424127887323777204578189391813881534.