Building Cultural AI
Biases in data can be both explicit and implicit. A simple two-word phrase can carry strongly contested meanings, and entire research fields, such as post-colonial studies, are devoted to studying them. Yet these sometimes subtle (and sometimes not so subtle) differences in voice are rarely reflected in the results of automatic analyses or in datasets created with automated methods. Current AI technologies and data representations tend to reflect the popular or majority view. This is an inherent artefact of the frequentist bias of many statistical analysis methods, which yields simplified representations of the world in which diverse perspectives are underrepresented. The sketch below illustrates this mechanism.
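To make the frequentist point concrete, here is a minimal Python sketch of how standard majority-vote label aggregation, a common step when building gold-standard datasets, discards minority perspectives. The annotation task and labels are invented for illustration only:

```python
from collections import Counter

# Hypothetical task: five annotators label the two-word phrase
# "Golden Age" as either a neutral period marker or a contested
# colonial-era term. These labels are invented for illustration.
annotations = ["neutral", "neutral", "neutral", "contested", "contested"]

# Majority-vote aggregation keeps only the most frequent label.
majority_label, count = Counter(annotations).most_common(1)[0]
print(majority_label)  # -> "neutral"

# The dissenting reading is discarded entirely: a model trained on
# the aggregated label never sees that 40% of annotators read the
# phrase differently.
minority_share = 1 - count / len(annotations)
print(f"{minority_share:.0%} of perspectives lost")  # -> "40% of perspectives lost"
```

Aggregating to a single label simplifies training, but it is exactly this step that flattens contested readings into a single majority voice.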
A 'Humane Conversation' featuring Marieke van Erp, Head of the Digital Humanities Research Lab at the KNAW Humanities Cluster. The conversation is hosted by the Research Priority Area (RPA) on Human(e) AI at the University of Amsterdam.