Laura Nelson – CUDAN Lecture
When: 2025-11-17, 16:00–18:00 (Tallinn time)
Where: A108 & online
The event is open to the public via Zoom:
https://zoom.us/j/94629959885?pwd=2NktNsXm0SzbzwwmfGbqlk5UZQoARw.1
Meeting ID: 946 2995 9885
Passcode: 007238
Speaker
Laura Nelson
Centre for Computational Social Science, University of British Columbia, Vancouver, Canada
Lecture title
We’re Talking About the Wrong Error: Why Variance Matters More than Bias in AI
Abstract
Bias gets all the attention when it comes to AI. And for good reason: in social systems, bias determines whose résumés are seen, who gets access to care, whose voices are amplified or erased, and more. For social scientists, the fact that computational models encode bias, that they have algorithmic fidelity to the social and cultural associations embedded in text and images, is precisely what makes them analytically valuable. The focus on bias from both the practical and the analytical side, however, is rooted in an older technology: static word embeddings. With large language models, I argue, the bias challenge, and opportunity, has been turned on its head. LLMs are no longer faithful encodings of specific biases; they are amalgams, and it is this amalgamation, or, more precisely, the resulting lack of variance, that is the core challenge with LLMs. In this talk, I argue that computational social scientists and data scientists should shift their focus from bias to variance. I show how low-variance LLMs can be incredibly powerful for certain uses, such as some forms of text classification and annotation, and why they pose challenges for others, especially social simulation and comparative analysis. I close by reflecting on what exactly we are measuring when we use large language models, and why working with them requires a fundamental rethinking of what computational methods are for.
Links
https://sociology.ubc.ca/profile/laura-nelson/
https://www.lauraknelson.com/