FBI warns of generative AI used in deceptive attacks on officials
- U.S. Secretary of State Marco Rubio was impersonated using AI to contact foreign officials.
- The FBI has reported an increase in generative AI being used for social engineering attacks.
- Individuals are urged to verify the identities of callers and senders to avoid falling victim to scams.
In recent months, the FBI has warned that generative AI is increasingly being used in social engineering attacks. The threat was vividly illustrated when news broke that artificial intelligence had been used to impersonate U.S. Secretary of State Marco Rubio. According to the State Department, at least three foreign ministers, a U.S. senator, and a governor were contacted by the impersonator, who created a fake account on the encrypted messaging platform Signal to send text and voice messages.

Margaret Cunningham, AI and strategy director at Darktrace, described the attacks as alarmingly simple to execute. Although the attempts to impersonate high-profile individuals ultimately failed, she said, they showed how easily generative AI can be leveraged to craft credible social engineering attacks. Cunningham added that the tactics exploited human vulnerability: people who are multitasking or under stress are more likely to fall for such scams because their defenses are lowered.

While the impersonation of Secretary Rubio has captured headlines, AI-driven attacks have been a growing concern for months. Experts caution that many professionals could be targeted by similar schemes, which draw on publicly available social media information to build credibility. The FBI's guidance is clear: verify the identity of anyone reaching out by call or message, using publicly available contact details to confirm the communication is authentic.

The bureau has also noted that AI-generated content has become sophisticated enough that fraudulent communications are often difficult to identify. Thomas Richards of Black Duck said this represents a fundamental shift in how individuals and organizations must think about security threats. For its part, the State Department says it is actively taking steps to strengthen its defenses against such advanced forms of manipulation.