Google Removed Bard Chats from Search After Public Backlash
Google removed Bard chat transcripts from its search results after it came to light that conversations shared through the AI chatbot had been accidentally indexed. The tech giant faced backlash for making private conversations publicly searchable without users’ knowledge.
SEO consultant Gagan Ghotra was the first to notice that Bard conversations were appearing in Google Search results. Using the site: search operator (site:https://bard.google.com/share/), Ghotra surfaced the indexed pages and posted a screenshot on X (formerly Twitter) showing chats from users who had used Bard’s share chat feature.
It appeared that Bard’s share feature generated publicly accessible URLs that Google Search had inadvertently indexed, allowing the conversations to show up in regular search results. Users raised privacy concerns, accusing the company of exposing the conversations without their consent.
Google quickly addressed Ghotra’s findings and user concerns via its Google SearchLiaison account on X, explaining that the indexing was unintentional: “Bard allows people to share chats if they choose. We also don’t intend for these shared chats to be indexed by Google Search. We’re working on blocking them from being indexed now.”
Days after the incident, Google blocked all Bard conversation transcripts from appearing in its search results. The site: query that previously exposed the shared Bard chats now returns zero results in Google Search.
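Google hasn’t detailed exactly how it removed the pages, but a common way to keep URLs like these out of search results is to disallow the shared-chat paths in robots.txt or to serve a noindex directive. As a rough illustration only (not Google’s confirmed fix), the snippet below uses Python’s standard urllib.robotparser to check whether a hypothetical shared-chat URL is crawlable by Googlebot:

```python
# Illustration only: checks whether a shared-chat URL is crawlable by Googlebot,
# assuming the site blocks such paths via robots.txt (not confirmed as Google's actual fix).
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://bard.google.com/robots.txt")
rp.read()  # fetch and parse the live robots.txt

# Hypothetical shared-chat URL of the kind that was being indexed
shared_chat_url = "https://bard.google.com/share/example-chat-id"

if rp.can_fetch("Googlebot", shared_chat_url):
    print("Googlebot may crawl this URL, so it could end up indexed.")
else:
    print("Googlebot is disallowed from crawling this URL.")
```

Worth noting: a robots.txt disallow only stops crawling, so pages that were already indexed can linger in results; in practice a noindex directive or a URL removal request is usually needed to clear them out.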
It’s worth mentioning that Bard’s chat transcripts started appearing in search results shortly after Google modified its Helpful Content Update to be more accepting of AI-generated content. However, the tech giant hasn’t clarified whether those recent algorithm changes had anything to do with the exposed Bard conversations.
Bard is not the only AI chatbot to have made users’ conversations public without their knowledge. In late March, a bug in OpenAI’s ChatGPT, the chatbot that sparked the AI revolution, exposed conversation titles from other users’ chat histories along with private data such as names, email addresses, and billing addresses.
Google’s privacy incident is another reminder that users should be cautious about what information they share with Bard and similar AI chatbots, since the data these services collect is not always stored or handled as securely as users expect. Chatbots are also vulnerable to cyberattacks and security breaches, and in such incidents attackers can potentially access personal information and use it for financial fraud, identity theft, and other cybercrimes.