Multidisciplinary Research Group (fAIre MRG)
We focus on algorithmic accountability and transparency, human biases in algorithms, and user-algorithm feedback loops.
Our research has been published at well-known scientific venues such as ACM FAccT, ACM UMAP, ACM CHI, AAAI HCOMP, AAAI ICWSM, and more.
AI plays a key role in the development of interactive media and smart technologies, which in turn impact all sectors of society, from transportation to education to healthcare. Algorithmic systems and processes allow the exploitation of rich and varied data sources; however, there are increasing concerns surrounding their ethical and social dimensions. Even when their designers have the best intentions, algorithmic processes can have unintended consequences in the social world, such as biased outputs that result in discrimination against individuals or groups of people. Furthermore, the democratization of AI and its rapid technical evolution make it difficult for legal regulation to keep up.
The Fairness and Ethics in AI – Human Interaction MRG (fAIre-MRG), formerly known as the Transparency in Algorithms Group (TAG MRG), focuses on understanding the nature and impact of human biases in interactive media and smart systems, and develops tools and techniques to promote algorithmic fairness, transparency, and positive symbiosis in Human–AI interaction. fAIre researchers use data science and/or social science approaches to examine the impact of human biases, and to create and evaluate interventions.
Application areas of fAIre MRG research include: automated image descriptions and emotion analysis; photo and video retrieval; crowdsourcing for generating training data, studying perceptions, and more; machine learning applications used by laypersons and developers alike; and communication and collaboration processes between humans and AI-enabled technologies.
Research activity, invited talks, announcements, and more.