Consumer groups demand protection for AI testing and transparency
- The Consumer Federation of America and Mozilla sent a letter to the White House.
- They urged the administration to maintain essential AI testing and transparency rules.
- The absence of such rules could negatively affect consumers, particularly vulnerable groups.
A coalition of organizations including the Consumer Federation of America and Mozilla submitted a joint letter on Thursday, January 30, 2025, urging the White House to uphold crucial safety rules for artificial intelligence. The appeal follows President Donald Trump's decision to rescind a 2023 executive order signed by former President Joe Biden that required rigorous safety assessments and equitable guidelines for AI systems, especially those with potential national security implications.

The Biden executive order mandated that developers of large-scale AI submit safety test results to the government before public release, a measure civil society leaders had supported as a way of addressing harms associated with AI use.

The letter was addressed to officials including David Sacks, the White House's AI czar, and Mike Waltz, the national security adviser. In it, signatories expressed deep concern that Trump's new guidelines might loosen the testing and disclosure requirements that safeguard consumers—particularly vulnerable populations such as seniors and veterans—who rely on services influenced by AI systems. They warned that without proper regulation, AI deployments risk harming essential benefits and health services.

Opponents of the Biden order had previously argued that its stringent rules could stifle innovation. Civil society advocates countered that existing safety and transparency standards were already minimal and insufficient to protect everyday citizens and marginalized communities from technological missteps and abuse. They stressed the pressing need to maintain a minimum threshold for testing AI systems to ensure safety and accountability, especially in applications concerning veterans' health care and retirement benefits, which are increasingly handled by untested AI technologies.
The letter called for retaining the principles established by Biden's order to prevent the unsupervised use of potentially harmful AI systems, underscoring the critical importance of safety engineering across technological advancements.