Events Calendar

Social Selection of Algorithms

Wednesday, March 24, 2021
4:30 pm
Zoom (by registration)

Presented by Alex Mayhew as part of the FIMS mediations lecture series.


Increasingly, algorithms are being used to govern complex decisions, such as criminal sentencing and insurance premiums. Their growing influence has brought the question of algorithmic bias to prominent attention. If the data we generate to power these algorithms captures our prejudices, then it is little surprise that the algorithms themselves reproduce those same prejudices. Worse still, most algorithms today are black boxes, leaving this bias hidden.

One potential response to this challenge is Explainable AI (XAI): often these are algorithms that analyze other algorithms and explain their 'reasoning', exposing the hidden bias and enabling us to respond. While this is a promising approach, it poses its own challenges. Any XAI system would itself be an algorithm, subject to the same prejudiced data and biased outcomes.

But the case of XAI reveals another challenge. Like any software, XAI systems will increasingly exist as a population, with future versions preferentially based on the particular versions in use in the previous generation. This creates an evolutionary environment in which each generation is selected according to nebulous social measures, such as user satisfaction or mollification. Cognitive science has shown that humans typically prefer coherence over truth. This could result in XAI systems optimizing for what is convincing rather than what is true, without anyone intending such an outcome.


Alex Mayhew is a LIS PhD candidate in FIMS at UWO. He earned an MLIS from FIMS in 2016, and before that an undergraduate degree in Philosophy at the University of Ottawa. He is interested in thinking tools and philosophical engineering, particularly knowledge organization.

Faculty of Information and Media Studies
FIMS Communications
