The Fairness and Ethics in AI – Human Interaction Group (fAIre) of the CYENS Centre of Excellence (Research Centre of Excellence on Interactive Media, Smart Systems and Emerging Technologies), formerly known as the Transparency in Algorithms Group (TAG), is headed by Dr Jahna Otterbacher, Associate Professor at the Open University of Cyprus (OUC). The group has secured funding for the implementation of a new research project entitled “KeepA(n)I – A Methodological Approach for Identifying Social Stereotypes in Artificial Intelligence Applications”.
The project will be funded (€200,000) under the Cyprus Research and Innovation Foundation Excellence Hubs programme. It will be coordinated by CYENS (specifically by Dr Evgenia Christoforou, CYENS research associate and OUC adjunct lecturer) and implemented in cooperation with Algolysis Ltd. The Open University of Cyprus is one of the founding members of CYENS.
Given the potential of algorithmic systems to influence the social world by amplifying or abating bias and potential discrimination, KeepA(n)I develops a structured, methodological approach that helps developers and machine learning practitioners detect social bias in an application’s input datasets and output data. In contrast to existing methods proposed by the Fair ML community, which evaluate group and individual fairness in datasets and algorithmic results in an attempt to mitigate the effect of bias, KeepA(n)I takes a different approach: it focuses on the expression of social stereotypes (e.g., based on gender, race or socio-economic status) and how these are reflected in biases shared by groups of people interacting with the system in different ways.

KeepA(n)I is envisioned as a human-in-the-loop approach that methodically exposes social stereotypes, reducing the negative impact on, or even enhancing, people’s access to opportunities and resources when they interact with both high-risk and low-risk AI applications. By engaging humans in the evaluation process (i.e., through crowdsourcing), KeepA(n)I will achieve a diverse (e.g., across cultures) and dynamic (e.g., across contexts and time) evaluation of social norms, according to the objective of the evaluated application. The project will focus on computer vision applications that analyse people-related media (e.g., image content analysis or “tagging,” gender or age recognition from a profile photo), with significant implications for high-risk applications (e.g., screening job applicant profiles or dating applications).