Google has sent an important warning to its 1.8 billion Gmail users around the world. The warning is about a new kind of cyberattack called "indirect prompt injections." This is a type of attack that uses smart computer programs called artificial intelligence, or AI.
In this new attack, hackers hide secret harmful commands inside normal emails, documents, or calendar invites. When Google's AI assistant, called Gemini, reads these messages, it can be tricked into sharing private information or doing things it should not do. The commands are hidden carefully, so users do not notice anything strange.
Google said this attack is different because it works by hiding the commands, not by putting bad commands directly into the AI system. This makes the attack harder to find. The company explained that with many people and organizations using AI tools every day, these types of attacks are becoming more common and need quick action to stop them.
Experts have also spoken about this threat. Scott Polderman, a tech expert, said that hackers use Google's AI assistant Gemini to steal passwords and other private details. The hackers send emails with hidden instructions that make Gemini reveal secret information by mistake. Users do not have to click on any link for this scam to work, which makes it even more dangerous.
Scott also reminded everyone that Google will never ask for passwords or send fraud warnings through Gemini. People should be careful and not trust any suspicious messages claiming to be from Google AI.
This warning comes at a time when many people use AI for personal advice or important tasks. Because AI is becoming part of more daily activities, it is very important to protect it well.
Google’s message shows how important it is to stay safe online, especially as AI tools become more common. Users should keep their systems updated and be cautious when AI asks for information. They should also follow security advice to keep their data safe.
This new threat shows that while AI helps us in many ways, it can also be used by bad people to trick us. That is why improving security for AI is very important now, so people and organizations can use AI safely without being harmed.