Datasets.

Social B(eye)as Dataset V1.0 (SBD) [2018]

Authors: Barlas, Pinar; Kyriakou, Kyriakos; Kleanthous, Styliani; Otterbacher, Jahna

Image analysis algorithms have become an indispensable tool in our information ecosystem, facilitating new forms of visual communication and information sharing. At the same time, they enable large-scale socio-technical research which would otherwise be difficult to carry out. However, their outputs may exhibit social bias, especially when analyzing people images. Since most algorithms are proprietary and opaque, we propose a method of auditing their outputs for social biases. To be able to compare how algorithms interpret a controlled set of people images, we collected descriptions across six image tagging APIs. In order to compare these results to human behavior, we also collected descriptions on the same images from crowdworkers in two anglophone regions. While the APIs, unlike humans, do not output explicitly offensive descriptions, future work should consider if and how they reinforce social inequalities in implicit ways. Beyond computer vision auditing, the dataset of human- and machine-produced tags, and the typology of tags, can be used to explore a range of research questions related to both algorithmic and human behaviors. (2019-01-15)
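
As an illustration of the audit procedure described in this abstract, the sketch below shows how a controlled set of people images might be submitted to several tagging services and the returned tags collected into a single table for comparison with human-produced descriptions. The service names and the tagger wrappers are hypothetical placeholders, not the actual clients used for the dataset; each real API has its own client library and authentication.

```python
# Minimal audit loop (sketch): send each image to several tagging services
# and write one (image, service, tag) row per returned tag.
import csv
from pathlib import Path
from typing import Callable, Dict, List

def placeholder_tagger(image_bytes: bytes) -> List[str]:
    """Stand-in for a proprietary tagging API call (assumption)."""
    return ["person", "portrait"]

# Hypothetical service names; replace the callables with real API wrappers.
TAGGERS: Dict[str, Callable[[bytes], List[str]]] = {
    "service_a": placeholder_tagger,
    "service_b": placeholder_tagger,
}

def audit(image_dir: str, out_csv: str) -> None:
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["image", "service", "tag"])
        for path in sorted(Path(image_dir).glob("*.jpg")):
            data = path.read_bytes()
            for name, tag_fn in TAGGERS.items():
                for tag in tag_fn(data):
                    writer.writerow([path.name, name, tag])

if __name__ == "__main__":
    audit("images/", "machine_tags.csv")
```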

Social B(eye)as Dataset V2.0 (SBDv2) [2020]

Authors: Barlas, Pinar; Kyriakou, Kyriakos; Guest, Olivia; Kleanthous, Styliani; Otterbacher, Jahna

Researchers of the Web and social media rely extensively on image analysis tools to understand users’ sharing behaviors and engagement with content at large scale. However, it has become clear over the past years that there are disparities in the way these tools treat images depicting people from different social groups. Previously, we released the Social B(eye)as Dataset, consisting of machine- and human-generated descriptions of a controlled set of people images without context. This resource allows researchers to compare the behaviors of taggers and humans systematically. We now update this resource with a process that superimposes the people images onto backgrounds. The current release uses four stereotypically “feminine” and four stereotypically “masculine” contexts, enabling us to consider the possible influences upon the gender inferences made by tagging algorithms. We also provide an updated typology of tags used by the six proprietary taggers, as well as initial analyses. Our methodology for superimposing semi-transparent images onto background images is publicly available, allowing others to repeat the process with other combinations of images for various research topics. (2020-01-15)
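
A minimal sketch of the compositing step mentioned above, assuming the person image carries an alpha channel (RGBA) and is simply centered on the background; the file names and placement rule are illustrative, and the publicly released methodology may handle sizing and positioning differently.

```python
# Superimpose a semi-transparent person image onto a background scene (sketch).
from PIL import Image

def compose(person_path: str, background_path: str, out_path: str) -> None:
    background = Image.open(background_path).convert("RGBA")
    person = Image.open(person_path).convert("RGBA")
    # Assumed placement rule: center the person image on the background.
    x = (background.width - person.width) // 2
    y = (background.height - person.height) // 2
    overlay = Image.new("RGBA", background.size, (0, 0, 0, 0))
    overlay.paste(person, (x, y), mask=person)  # alpha channel acts as the mask
    Image.alpha_composite(background, overlay).convert("RGB").save(out_path)

compose("person.png", "kitchen.jpg", "composite.jpg")  # illustrative file names
```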

Emotion Bias Dataset (EBD) [2020]

Authors: Kyriakou, Kyriakos; Kleanthous, Styliani; Otterbacher, Jahna; Papadopoulos, George

Vision-based cognitive services (CogS) have become crucial in a wide range of applications, from real-time security and social networks to smartphone applications. Many services focus on analyzing people images. When it comes to facial analysis, these services can be misleading or even inaccurate, raising ethical concerns such as the amplification of social stereotypes. We analyzed popular Image Tagging CogS that infer emotion from a person’s face, considering whether they perpetuate racial and gender stereotypes concerning emotion. By comparing both CogS- and human-generated descriptions on a set of controlled images, we highlight the need for transparency and fairness in CogS. In particular, we document evidence that CogS may actually be more likely than crowdworkers to perpetuate the stereotype of the “angry black man”, often attributing “emotions of hostility” to Black individuals. This dataset consists of the raw data collected for this work, both from Emotion Analysis Services (EAS) and from crowdsourcing (crowdworkers on the Appen, formerly Figure Eight, platform, targeting participants in the US and India). We used the Chicago Face Database (CFD) as our primary dataset for testing the behavior of the target EAS. (2020-07-13)
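
The comparison described above can be illustrated with a short sketch that tallies how often each emotion label is assigned to each demographic group, separately for the EAS outputs and the crowdworker annotations. The column names (source, group, emotion) are assumptions made for illustration and are not the actual schema of the released files.

```python
# Tally emotion labels per (source, demographic group) from a flat CSV (sketch).
import csv
from collections import Counter
from typing import Dict, Tuple

def label_counts(path: str) -> Dict[Tuple[str, str], Counter]:
    """Map (source, group) -> Counter of emotion labels."""
    counts: Dict[Tuple[str, str], Counter] = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["source"], row["group"])  # e.g. ("EAS", "Black male")
            counts.setdefault(key, Counter())[row["emotion"]] += 1
    return counts

for (source, group), ctr in sorted(label_counts("emotion_labels.csv").items()):
    total = sum(ctr.values())
    top, n = ctr.most_common(1)[0]
    print(f"{source:12s} {group:15s} top label: {top} ({n / total:.1%})")
```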

Social B(eye)as over Time Dataset (SBT) [2022]

Authors: Barlas, Pınar; Krahn, Maximilian; Kleanthous, Styliani; Kyriakou, Kyriakos; Otterbacher, Jahna

Many eyes have scrutinized the social behaviors of computer vision services, given their popularity with researchers and developers. When analyzing images depicting people, their descriptions often reflect social inequalities and stereotypes, yet the proprietary nature of these services means that it is difficult to anticipate or explain their behaviors. Mechanisms providing oversight of these processes can enable more responsible use, allowing stakeholders to audit their behaviors and track potential changes over time. Previously, in 2019, we audited image tagging algorithms for social bias when processing images of people. In this work, we i) present data from an audit of the same services three years later, ii) provide additional outputs for input images depicting other racial/ethnic groups, and iii) release a toolkit enabling several fully-automated analyses of the algorithms’ behaviors across time. (2022-01-14)
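
One kind of cross-year analysis such a toolkit could support is sketched below: for each image and service, compare the tag sets returned in the two audit rounds with Jaccard similarity. The CSV layout (image, service, tag columns) and file names are assumptions for illustration, not the released toolkit’s interface.

```python
# Compare per-image tag sets between two audit rounds (sketch).
import csv
from collections import defaultdict
from typing import Dict, Set, Tuple

def load_tags(path: str) -> Dict[Tuple[str, str], Set[str]]:
    """Map (image, service) -> set of lowercased tags."""
    tags: Dict[Tuple[str, str], Set[str]] = defaultdict(set)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            tags[(row["image"], row["service"])].add(row["tag"].lower())
    return tags

def jaccard(a: Set[str], b: Set[str]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

old, new = load_tags("tags_2019.csv"), load_tags("tags_2022.csv")
for image, service in sorted(set(old) & set(new)):
    score = jaccard(old[(image, service)], new[(image, service)])
    print(f"{service:12s} {image:20s} Jaccard(2019, 2022) = {score:.2f}")
```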