Spring 2024
More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech
Prof. Meredith Broussard (NYU)
The word “glitch” implies an incidental error, as easy to patch up as it is to identify. But what if racism, sexism, and ableism aren’t just bugs in mostly functional machinery—what if they’re coded into the system itself? In the vein of heavy hitters such as Safiya Umoja Noble, Cathy O’Neil, and Ruha Benjamin, Meredith Broussard demonstrates in More Than a Glitch how neutrality in tech is a myth and why algorithms need to be held accountable.
Broussard, a data scientist and one of the few Black female researchers in artificial intelligence, masterfully synthesizes concepts from computer science and sociology. She explores a range of examples: from facial recognition technology trained only to recognize lighter skin tones, to mortgage-approval algorithms that encourage discriminatory lending, to the dangerous feedback loops that arise when medical diagnostic algorithms are trained on insufficiently diverse data. Even when such technologies are designed with good intentions, Broussard shows, fallible humans develop programs that can result in devastating consequences.
Broussard argues that the solution isn’t to make omnipresent tech more inclusive, but to root out the algorithms that target certain demographics as “other” to begin with. With sweeping implications for fields ranging from jurisprudence to medicine, the ground-breaking insights of More Than a Glitch are essential reading for anyone invested in building a more equitable future.
How We Became "Data-Driven"
Prof. Xiaochang Li (Stanford University)
This talk draws from Li's current book project, Beyond Recognition: Language, Datafication, and the Making of Algorithmic Culture (under contract with University of Chicago Press), on the history of speech and natural language processing (NLP) as a formative task domain in artificial intelligence. Li discusses the "statistical turn" in automatic speech recognition, which refashioned speech recognition from a project of replicating the embodied faculties of communication to one of sorting data. Opening the door for the widespread adoption of data-intensive machine learning techniques, this transformation in speech and language modeling offers a glimpse into how we became "data-driven": how words became data, how data became imperative, and how the challenges of speech and language shaped the very foundations of algorithmic culture.
Governing the Model Marketplace
Robert Gorwa, PhD (Berlin Social Science Center)
The AI development community is increasingly making use of hosting intermediaries intended to provide easy public access to a wide range of 'open' models and training data. These model marketplaces lower technical deployment barriers for hundreds of thousands of lay users, allowing them, for instance, to easily interact with and customize image-generation models, and to socially share the images created in the process. In this talk, we will explore the growing ecosystem of these various AI hosting intermediaries, discussing their political-economic context, their social and technical affordances, and their underlying business models. We will then turn to the emerging governance issues that these platforms are currently grappling with, focusing on the slow development of community guidelines and moderation practices in the image-generation model platform space. In the face of increasing public and private scrutiny, how are these new companies responding? And how might these platforms better mobilize resources to act as a careful, fair, and proportionate regulatory access point?
Fall 2023
The Making Culture Lab: Research Creation as Sensory Anthropology
Professor Kate Hennessy (Simon Fraser University)
From galaxies to pixels, from flowing rivers to humming server farms, from family image archives to AI-generated compositions: the contemporary media environment is deeply relational, material, and political. In this talk, Professor Hennessy shares some work-in-progress that uses research-creation as a method for interpreting the ways in which emergent anthropological medialities are sensorial and entangled with human agencies, fugitive archives, and climate emergency. This work is situated within anthropology’s turn towards multimodality and her interest in using art-led, collaborative, and sensory practices to counter the invisibility of algorithmic ecologies that anthropological tools, archives, and practices are now deeply entangled with.
The Future of Algorithm Auditing is Sociotechnical
Professor Danaë Metaxa (University of Pennsylvania)
Algorithm audits are powerful tools for studying black-box systems without direct knowledge of those systems’ inner workings. While they have been effectively deployed to identify harms and biases in algorithmic content, algorithm audits’ narrow focus on technical components stops short of considering users themselves as integral and dynamic parts of the system, to be audited alongside its algorithmic components. After an overview of the state of the art in algorithm auditing, this talk introduces sociotechnical auditing: evaluating algorithmic systems at the sociotechnical level, focusing on the interplay between algorithms and users as each impacts the other over time.
Spring 2023
A Right to Freedom from Automation
Dr. Ziyaad Bhorat
The idea that we can assert a right to be free from automation sounds fanciful if not outright absurd. It flies in the face of a planetary level of industrial and technological development that has significantly improved human life along important dimensions such as health and overall wealth. We are, on average, richer than the Greeks and live longer than the Ottomans. Automation is the bedrock of all the technologies designed to save, assist, and connect us. From industrial lathes to OpenAI’s ChatGPT, automated tools are among the most ground-breaking achievements in the history of humanity. So it is perhaps a strange 21st-century development to see language skeptical of automated tools and systems expressed in places like the United States’ Blueprint for an AI Bill of Rights (“use of technology, data, and automated systems in ways that threaten the rights of the American public”) or the European Union’s GDPR (“right not to be subject to a decision based solely on automated processing”). Dr. Ziyaad Bhorat argues that this uneasy language reflects the way humans have been asserting themselves against automated technologies to guard three important abilities and corresponding rights: 1) the right to work; 2) the right to know and understand what we consume; and 3) the right to make our own decisions. Far from an absolute, neo-Luddite tear-down of the factory, a comprehensive right to freedom from automation designates what is worth protecting in an increasingly technologically dependent human world.
*An event in collaboration with the USC Dornsife Center on Science, Technology, and Public Life (STPL)
The Far Voice: The Emergence of the Crisis Hotline, Community Health, and Carceral Refusal
Professor Hannah Zeavin (Indiana University)
“The Far Voice” describes the rise of mass telecommunication therapies, focusing on the suicide crisis hotline (originated by Protestant clergy) in England and the United States in the 1950s and 1960s, and investigates how this service first became thinkable, then widely adopted and used. Zeavin redescribes the hotline as psycho-religious in origin and intent, rather than as the secular service it has usually been assumed to be. She argues that these services, in their use of the peer-to-peer modality, radically upset former regimes of pastoral care and counseling, as well as those of psychodynamic therapy. Hotlines generate a new, hyper-transient frame for the helping encounter, removing nearly all the traditional aspects of the therapeutic setting except speech and listening. At the same time, these hotlines devalue the need for expertise and rescind the fee associated with it. They challenge every clinical concept associated with the structure and dynamic of the analytic encounter: the encounter is contingent, it is not in person, and it requires (or permits) a “distanced intimacy” with no guarantee of repeating; and it makes use of the phone, an appliance paradoxically thought of as capable of bringing people together and as responsible for their greater alienation. Zeavin concludes by examining the afterlives of these radical early hotlines in our contemporary moment, when algorithmic surveillance, datafication, and tracking have relinked the hotline with forced hospitalization and carceral intervention.
Writing the Revolution: Wikipedia and the Survival of Facts in the Digital Age
Professor Heather Ford (University of Technology Sydney)
When asked what sort of thing was most likely to blow governments off course, British Prime Minister Harold Macmillan famously replied, “Events, dear boy, events.” Were the events themselves in charge? Were they “in the saddle, riding mankind”? (George F. Will, 2002) In this talk, Ford discusses the key findings from her book, Writing the Revolution: Wikipedia and the Survival of Facts in the Digital Age, in which she follows how facts about the 2011 Egyptian Revolution were constructed by Wikipedians and attendant algorithms over the course of a decade. Ford shares the five key agents found to have influenced how history is written on Wikipedia – and how these representations reverberate through the internet, forming the raw material for the facts that the majority of us consume as consensus truth about what happened and why. Wikipedia, it turns out, is not just a quirky project or an anomaly of the internet age: it is the source of knowledge/power in an age dominated by the logics and affordances of data.
Fall 2022
Professor Renyi Hong (National University of Singapore): a discussion of Professor Hong's new book Passionate Work (2022), where Hong theorizes the notion of being “passionate about your work” as an affective project that encourages people to endure economically trying situations like unemployment, job change, repetitive and menial labor, and freelancing.
A Response to Errors in Machine Learning
Professor Cindy Lin (Penn State University)
Professor Lin examines the value of error as a site for negotiation within machine learning (ML) work. The talk presents an ethnographic analysis of how ML datasets are developed and evaluated to argue two points: first, errors disclose existing structures of collaboration that are often undervalued or overlooked in supposedly working systems; and second, errors rework old actors and sites in new ways, recentering and/or devaluing the position of different actors.
Professor Karen Levy (Cornell University): a discussion of Professor Levy's new book Data Driven: Truckers, Technology, and the New Workplace Surveillance (2022), considering the dual, conflicting narratives of job replacement by robots and of bodily integration with robots, to assess the true range of AI's potential effects on low-wage work.
Spring 2022
Charley Johnson (Data & Society): an exploration of research opportunities for improving the capacity of government decision-makers grappling with the social implications of technology.
Dr. Rachel Moran & the University of Washington's Center for an Informed Public: a joint MASTS-CIP gathering to share research projects, make individual connections, and brainstorm new work across the two groups.
Dr. Katalin Fehér (Hungarian Academy of Sciences): a review of Dr. Fehér's Fulbright Scholarship work on patterns in how interdisciplinary scholars understand "Artificial Intelligence", and the new "AI Media Research Network" she has founded to convene researchers.
Shannon Dosemagen (Open Environmental Data Project): a discussion of where data about the environment come from, how data are managed and governed, and what collaborations such data might make possible.
Dr. Ali Pearl ("Making a Neighborhood"): a discussion of how the "Making a Neighborhood" newsletter (profiled in the Los Angeles Times) came to be, its successes and challenges, and how local journalism can arise from residents telling stories about the histories and dynamics of where they live.
Professor Luke Stark (Western University): a discussion of Stark's book project "Ordering Emotion", a "history of the psychological and behavioral sciences’ influence on computing from World War II to the present."
Fall 2021
Professor Catherine Knight Steele (University of Maryland, College Park): a discussion and celebration of Professor Steele’s new book, Digital Black Feminism: Our Histories and Futures (2021). Black women are at the forefront of some of this century’s most important discussions about technology: trolling, online harassment, algorithmic bias, and influencer culture. But Black women’s relationship to technology began long before the advent of Twitter or Instagram. To truly “listen to Black women,” Steele's book looks to the history of Black feminist technoculture in the United States and its ability to decenter white supremacy and patriarchy in a conversation about the future of technology.
Professor Caitlin Petre (Rutgers University): a discussion of Petre’s new book, All the News That’s Fit to Click: How Metrics Are Transforming the Work of Journalists (2021). Petre describes how digital metrics are a powerful but insidious new form of managerial surveillance and discipline in her eye-opening account of data-driven journalism.
Professor Dylan Mulvin (London School of Economics): a discussion based on Mulvin’s book, Proxies: The Cultural Work of Standing In (MIT 2021), which uses the “proxy” to investigate the history and politics of knowledge through the models, prototypes, and templates that surround us.
Professor Stuart Candy (Carnegie Mellon University): in collaboration with the USC Dornsife Center on Science, Technology, and Public Life (STPL) and the Berggruen Institute, a visit from 2021-22 Berggruen Fellow Stuart Candy (@futuryst) to discuss his work on thinking about the future using experimental methods for scholarship, activism, and everyday life.
Spring 2021
Professor Lana Swartz (University of Virginia): A discussion of Professor Swartz’s work on Cryptocurrency Imaginaries. What are cryptocurrencies, how do they work, what’s at stake and why do they matter? Professor Swartz will also discuss her wider work on “new money” and social payment as well as her dissertation and book writing experience.
Professor Trebor Scholz (The New School): In collaboration with the Berggruen Institute, USC Dornsife Levan Institute for the Humanities, Annenberg Innovation Lab, and the USC Dornsife Center on Science, Technology, and Public Life, this event hosted Trebor Scholz, who presented his activist research on the worker cooperative as a promising economic alternative for the digital economy. Showcasing work with platform co-ops in India, Germany, Australia, Brazil, and the United States, Scholz demonstrates that a democratically owned People’s Internet is not only possible but viable.
Fall 2020
Professor Daniel Kreiss (University of North Carolina): “Two Days after Election Day, What’s Going On?” An informal conversation about the state of the 2020 US election as it stands on November 5th, making sense of how voters, journalists, and platforms have acted, and what the future might hold.
Dr. Joan Donovan (Shorenstein Center on Media, Politics and Public Policy at Harvard’s Kennedy School of Government): Former Facebook executives admit they used the tobacco industry’s playbook for addictive products. Perhaps it can also be used to undo the damage. Joan Donovan will discuss policy and regulatory responses for dealing with social media, disinformation and media manipulation.
Professor Sarah Brayne (University of Texas, Austin): In the digital age, we scatter millions of digital traces in our wake as we go about our everyday lives. A discussion of Brayne’s book Predict and Surveil: Data, Discretion, and the Future of Policing (2020), in which she draws on her research with the Los Angeles Police Department to understand how the police use the digital trails we leave to deploy resources, identify criminal suspects, and conduct investigations. Although big data policing holds the potential to reduce bias and increase efficiency, her research analyzes how it also reproduces and deepens existing patterns of social inequality, threatens privacy, and challenges civil liberties.