Trump administration threatens progress on AI inclusivity initiatives
- Ellis Monk was approached by Google to help improve the inclusivity of its AI products several years ago.
- Monk developed a color scale to better represent human skin tones in AI applications, resulting in significant positive consumer feedback.
- Concerns are growing that the current political efforts may undermine future initiatives aimed at addressing algorithmic bias in technology.
In Cambridge, Massachusetts, experts in artificial intelligence (AI) are weighing the repercussions of shifting government priorities on diversity, equity, and inclusion (DEI) initiatives in technology. Harvard sociologist Ellis Monk was engaged by Google several years ago to make its AI products more inclusive. The collaboration grew out of the tech industry's recognition that its AI systems, particularly computer vision technologies, carried inherent biases: they struggled to accurately represent people of color, echoing biases rooted in historical camera technologies.

Monk's contribution was a color scale designed to better portray the range of human skin tones in AI image recognition. It replaced outdated standards that had been developed predominantly around white dermatology patients. The Monk Skin Tone Scale drew significant positive consumer reception and has been integrated into a range of technologies, including camera phones and video games.

There are growing concerns, however, that the current political climate may jeopardize future funding and initiatives aimed at making technology more equitable. The Trump administration's recent emphasis on eliminating what it describes as 'ideological bias' in AI suggests a pivot away from understanding and correcting the algorithmic biases that researchers have flagged for years. Experts note that algorithmic bias is a well-documented problem affecting critical areas such as housing and healthcare, and advocates fear that the administration's renewed focus could have a chilling effect on future initiatives, undermining the commitment to genuinely inclusive AI technologies.

Bias in AI is not a new finding: government scientists documented it as far back as 2019, during the first Trump administration. Those findings included evidence that facial recognition software performed unevenly across race and gender, underscoring that the biases present in technology are both pervasive and dangerous. Despite advances such as the Monk Skin Tone Scale's widespread adoption, questions linger about whether such efforts can be sustained amid a shifting political landscape that is reshaping corporate strategies and funding across the tech industry.