Google ignores confirmed vulnerabilities in Gmail AI security
- Security researchers have confirmed vulnerabilities within Google's Gemini AI, specifically affecting Gmail and other Google products.
- The vulnerabilities allow for indirect prompt injection attacks, posing significant risks for user data and security.
- Google has classified these issues as intentional and has opted not to issue fixes, raising questions about user safety.
In the wake of growing concerns over AI-related security threats, Google has faced criticism for its handling of security vulnerabilities within its Gmail platform. This situation has escalated rapidly since early January 2025, when security researchers published analyses detailing several exploitable weaknesses in Google's Gemini AI. These vulnerabilities were demonstrated to enable indirect prompt injection attacks, allowing potential phishing attempts and manipulation of chatbot responses. Despite numerous reports indicating the risks associated with these vulnerabilities, Google classified them as 'intended behavior' and marked the associated security tickets as 'Won't Fix.' This decision has raised alarm among cybersecurity experts and users alike, emphasizing the gap between rapid AI advancements and adequate protective measures in place to safeguard user data. The situation reflects a broader trend in AI security, compelling users to reconsider their trust in AI-powered systems and the importance of robust cybersecurity. As this issue unfolds, ongoing discussions regarding the effectiveness and safety of AI tools continue to be paramount for individuals and organizations relying on these technologies for communication and workflow optimization. In light of the increasing sophistication of attacks, many experts suggest users should disable certain smart features within Google products as a precaution.