MScAC
by Ishtiaque Ahmed (University of Toronto)
Artificial intelligence systems rely on massive datasets to learn what is “true,” but the process of creating that truth is far from neutral. This talk reveals the hidden infrastructure of data annotation that underpins every supervised model: millions of human annotators, often working under precarious contracts across the Global South, who turn ambiguous data into the labels that machines learn from. Each annotation encodes social, cultural, and linguistic assumptions that ultimately shape a model’s behaviour, bias, and performance. By examining how annotators’ decisions and working conditions determine what counts as accurate or ethical data, I argue that AI ethics must move beyond abstract principles toward an understanding of the material and geopolitical systems that produce ground truth. From labelling hate speech to detecting misinformation, these processes expose how technical pipelines reproduce global hierarchies of labour, language, and legitimacy. Reframing annotation as a core computational process allows us to build AI that is not only robust and reliable but also socially accountable and globally aware.
This is a hybrid talk. To attend in person at the OPG building or to attend online, please register. You can find the registration links in the talk announcement here: https://mscac.utoronto.ca/mscac-talks/