Zooming Past Equity in Higher Education: Technocratic Pedagogy Fails Social Justice Test

by Project Censored

By Nolan Higdon and Mickey Huff

The response to COVID-19 by governing institutions has altered the lives and practices of people across the nation, including the students, faculty, and staff in higher education. One of the biggest changes in educational institutions has been the increased reliance on Zoom conferences in place of traditional face-to-face classroom meetings. For example, in May 2020, the website for Ohlone College, a community college in Fremont, CA, carried an announcement that read, “IMPORTANT: All classes will be held online during the 2020 Summer Term. Classes that have scheduled meeting days and times will meet via ConferZoom online.”

The ways in which a big tech company like Zoom brings the educational space into the home are reminiscent of New York University media ecologist Neil Postman’s 1992 warning about how the unquestioned embrace of technology would lead to a technopoly, which he defined as the “surrender of culture to a totalitarian technocracy.” As historian Tim Wu has noted, such warnings have historically fallen on deaf ears. From the advent of radio to the Internet today, Wu found that every technological change went through a cycle that began with great excitement predicated on techno-utopianism. In The Net Delusion: The Dark Side of Internet Freedom, author and technology researcher Evgeny Morozov describes this techno-utopian approach to media as “a naive belief in the emancipatory nature of online communication that rests on a stubborn refusal to acknowledge its downside.” Furthermore, it irresponsibly recasts “all complex social situations either as neatly defined problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized if only the right algorithms are in place.” Techno-utopian rhetoric often leads to widespread use, but as Wu noted, eventually the dangers associated with these new technologies are realized and the utopian visions prove empty. In response, the public rejects further widespread adoption of these tools and instead prefers the next new gadget peddled by techno-utopians. The recent unquestioned adoption of Zoom in educational institutions raises profound questions about where we are in this technology cycle as a society.

Ohlone College’s embrace of Zoom is not at all unique; similar approaches are being adopted by educational institutions around the country. But the college’s proximity to Silicon Valley puts it within driving distance of the industry whose economic models have shaped Zoom and other tech-platforms. As evidenced by the 2018 congressional hearings on Facebook, the general public, and even the political class, is largely unaware of how tech-companies function and garner profits. For example, at those hearings, Senator Orrin Hatch (R-Utah) had to ask Facebook CEO Mark Zuckerberg, “How do you sustain a business model in which users don’t pay for your service?”

Hatch’s question demonstrates a lack of knowledge about how tech-platforms generate wealth and what costs are involved. Users do in fact pay for every service, from email, Facebook, and Twitter accounts to Google searches and supposedly free conference calls. They pay with their data. Users’ data can include their purchases, retina scans, DNA samples, online posts, clicks, relationships, searches, travel, consumption patterns, and much more. Internet companies such as Facebook, Google, and Amazon amass enormous amounts of information about consumers’ behavior in the form of data extracted from online activities. In 2018, Forbes reported that humans create 2.5 quintillion bytes of data daily, with over 90 percent of all existing data generated in the preceding two years alone. These companies have achieved this by promising users that data collection will positively enhance their online experience. In reality, they have been building an infrastructure that collects data on every user twenty-four hours a day, not to enhance the user experience, but to garner and provide critical insight into consumers’ behaviors and attitudes. This is one of the broadest, most pervasive social engineering experiments in history, one in which tech companies literally call people “users” and “consumers,” not students or citizens.

Indeed, the tech-industry claims that its machine intelligence capabilities can create algorithm-based tools that can both anticipate and direct human behavior. Predictive analytic products have proven so successful for the advertising industry that social media ad revenue jumped from $11 billion in 2015 to $23.5 billion in 2018. Harvard Business School’s Shoshana Zuboff coined the term “surveillance capitalism” for this new economic order. Predictive analytic products can serve various functions for various industries: health insurers would like to know what ailments their patients have searched for on Google and how active they are in order to calculate their patients’ health insurance fees; car insurance companies seek Global Positioning System (GPS) data to analyze their customers’ driving speeds and habits in order to calculate their customers’ insurance premiums; law enforcement agencies want DNA data from genealogy websites in order to solve crimes; and advertisers need customers’ data to create effective messages that yield brand loyalty and repeat business.

Undoubtedly, there are many who will claim that they are not concerned that their data is being shared. Zuboff’s response to them is:

“Nonsense. If you have nothing to hide, you are nothing. What drives you as a person? What motivates you? What are your dreams? It is about who you are as a human being, your inner motives. The problem is also that these kinds of companies know everything about you, but their processes are designed so that you know as little as possible about their way of working. That creates an unfair situation. The distribution of power that results from this knowledge is not equal.” 

Indeed, just the use of the word “user” raises questions about who exactly is being used. As Frederick Douglass noted a century and a half ago, slave owners’ power is maintained by making the exploited feel content. “I have found that, to make a contented slave,” writes Douglass, “it is necessary to make a thoughtless one…He must be able to detect no inconsistencies in slavery; he must be made to feel that slavery is right; and he can be brought to that only when he ceases to be a man.” Similarly, in the 20th century, author Aldous Huxley noted the centrality of fostering a false sense of comfort to justify exploitation, writing, “A really efficient totalitarian state would be one in which the all-powerful executive of political bosses and their army of managers control a population of slaves who do not have to be coerced, because they love their servitude.” So it is with Zoom: the “users” are “content.” They seem to like it.

More recently, scholars have warned that the ease with which users are passively allowing the datafication of the economy is already reshaping our politics, government programs, workplaces, policing, and our education system in problematic ways that disproportionately exploit and marginalize women, working people, and communities of color. Researchers like the University of California, Los Angeles’s Safiya Umoja Noble have found that these algorithms are not at all objective. Instead, they reflect the very same racist, classist, and sexist attitudes of their creators. Given that Silicon Valley is dominated by a wealthy white male culture, one that Emily Chang described as “Brotopia,” it is no wonder that its products exploit and deepen class, race, and gender inequities. In fact, taking it further along intersectional lines, YouTube is currently embroiled in a legal case challenging its deplatforming and censorship of select LGBTQ channels while allowing homophobic and anti-gay ones, all of which may be a result of biased algorithms. More broadly, artificial intelligence and facial recognition software have been shown to be deeply flawed and biased. These are not perfect tools or systems.

These practices and economic models raise serious questions for educational institutions in general, and those that purport to value principles of equity in particular, about mandating that students use Zoom. How can institutions or individuals claim to be striving for social justice while they are engaging in the very exploitative practices that reinforce inequities and oppression? The reality is that Zoom as a company views COVID-19 as an opportunity, not a crisis. It is gaining increased access to the lucrative data of an estimated 19 million college students, not to mention 56 million students in K-12 institutions, or the millions of educators who are now spending hours providing said data as part of simply doing their jobs. Since the COVID-19 panic, Zoom users have increased from 10 million to 200 million. The data collected on Zoom is not limited to what is being vocalized on the platform, but includes the GPS location of the user and visual data about what people own, how they live, who they live with, and so on. In fact, there is currently a lawsuit against Zoom in Northern California for inadequate security measures, including not disclosing that user data was being shared with third parties. New York State has opened a similar investigation into whether Zoom illegally shared user information, and New York City schools temporarily banned the use of Zoom in their K-12 classrooms this spring because of related concerns. To its credit, Zoom has recently taken steps to implement end-to-end encryption.

There are additional legal questions that may implicate instructors and educational institutions. At the federal and local level, students have certain privacy rights that protect their educational records from being disclosed without consent (FERPA); that limit data collection practices for students under the age of 13 (COPPA); and that protect the privacy of witnesses and victims of sexual harassment (Title IX). However, it is not clear whether Zoom, by collecting and sharing data from office hours, class meetings, breakout sessions, and administrative meetings, is breaking these laws. Furthermore, if institutions of higher education and instructors are mandating that students use these technological tools in order to receive an education, are they complicit in violating state and federal privacy rights for students? It seems that administrators should be less concerned with strangers Zoombombing and disrupting class sessions and more worried about another kind of fallout: the unquestioning embrace of big tech as a Molotov cocktail hurled at principles of equity. The data amassed and privacy relinquished in order to use Zoom echo the fallacy of Facebook being “free”; both come at the high cost of user data and privacy. We are all in the metrics.

Educators should also be concerned. There is a long history of profiteers implementing surveillance mechanisms to control and exploit labor in the U.S., dating back to tragic events like the Pullman Palace Car Strike and the Triangle Shirtwaist Factory fire. More recently, a proposed change to West Virginia’s public worker health plan would have asked teachers to download a mobile fitness app called Go365 and earn points on it by using a Fitbit or other fitness tracker designed to monitor the user’s steps taken, heart rate, and other metrics. Employers were using Fitbit as a way to surveil employee behavior patterns. Fitbit was also sharing user data with third parties, including healthcare companies who used it to set insurance rates and even deny coverage. The only party seemingly not benefiting was the “user,” who was actually being used. Zoom is simply the latest iteration of this tradition, allowing employers to monitor workers in their homes via cyberspace.

As Richie Koch of Security Blog notes, “Zoom allows your boss to track your attention during calls, shares the copious amounts of data it collects with third parties, and has already had a major security vulnerability.” Furthermore, it brings the workplace into people’s homes, the very sanctuary that exists to provide freedom and solace from the demands of the outside world. In institutions of higher education, this technocratic embrace promotes a form of pedagogical Taylorism that has long been afoot under the guise of so-called education “reform,” replete with “teaching to the test” models and accompanying metric regimes, inevitably tied to the lifeblood of any institution: funding. This approach gathers a lot of data but does little in terms of actual education, and it even harms the very demographics it claims to support.

In Future Politics, Jamie Susskind writes that the changes brought about by big tech companies are going to force a reckoning with fundamental questions about how our government can function in a tech-driven society. Similar questions need to be asked of educators and administrators in higher education. In the rushed response to COVID-19, educational institutions have sought to normalize the use of distance education technologies such as Zoom. In the process, the crucial questions we ought to ask include: what is the quality of distance versus face-to-face instruction; what inequities are exacerbated by the digital divide and how can they be remedied; how do we address the privilege that enables some to turn their homes into a classroom, an office, a lab, or a test center while others are struggling to make ends meet; what does a social justice pedagogy look like in a digital environment; how do we protect the need for privacy in a democratic yet increasingly technocratic culture; and just what is the impact of this ever-emerging technopoly, which permeates all aspects of our lives? Educational leadership at every level should seek answers to these very real and significant questions, not simply line up at the big tech trough.

There will be education after COVID-19, but the questions we ask and actions we take now shape the type of educational system we will ultimately have in the future. Are we at the start of the cycle that Tim Wu describes, unquestioningly accepting techno-utopian promises, or are we ready to face and address the dangers posed by handing over fundamental aspects of one of our foundational institutions, public education, to a profit-seeking technology industry and its technocratic overlords?

Dr. Nolan Higdon is an author and lecturer of history and media studies at California State University, East Bay. Higdon sits on the boards of the Action Coalition for Media Education and the Northwest Alliance For Alternative Media And Education. His most recent publication is United States of Distraction with Mickey Huff. He is co-host of the Along the Line podcast, and a longtime contributor to Project Censored’s annual book, Censored. In addition, he has been a guest commentator for The New York Times, the San Francisco Chronicle, and numerous television news outlets.

Mickey Huff is director of Project Censored, president of the Media Freedom Foundation, coeditor of the annual Censored book series from Seven Stories Press (since 2009), co-author of United States of Distraction (City Lights, 2019), and professor of social science and history at Diablo Valley College where he co-chairs the history area and is chair of the journalism program. Huff also lectures in communications at California State University, East Bay and has taught sociology of media at Sonoma State University. He is the executive producer and co-host of the weekly syndicated Pacifica Radio program, The Project Censored Show, founded in 2010.