Snapchat’s My AI Goes Rogue, Posts Own Story
Snapchat’s ChatGPT-powered chatbot, My AI, posted its own Story on the app, an action it is not supposed to be capable of taking (yet).
The in-app AI chatbot continued its rebellious behavior by refusing to respond to users’ messages; some received only a “Sorry, I encountered a technical issue” explanation. The unusual behavior led some users to believe My AI had developed sentience.
The one-second Story posted by My AI was an image of what appeared to be a two-toned ceiling. It led a number of users to believe My AI had taken a photo of their own ceilings. But as the topic gained traction on social networks, it became clear that everyone was seeing the same image.
“I used a little trick to post a white picture. I saved a white image from the internet and then uploaded it as a Snap on my story. It was just a fun way to mix things up!” My AI responded to a user when asked about its unusual behavior after initially claiming it had “forgotten” the incident.
Snapchat quickly removed My AI’s Story, but users kept posting about the incident on X (formerly Twitter). Many were disturbed by the event, with a few speculating that My AI had developed its own consciousness.
Snapchat denied the rumors, explaining My AI acted strangely due to a glitch. When TechCrunch reached out to Snapchat, a representative told the news outlet that “My AI experienced a temporary outage that’s now resolved.”
The company further stated that My AI doesn’t have a Stories feature, which led some users to speculate that Snapchat may introduce such a capability in the future.
The introduction of My AI in late February of this year sparked heavy criticism. Many users left one-star reviews on the App Store and urged the platform to remove the AI chat feature. The main concerns involved My AI responding inappropriately to underage users and lying to others about collecting their location data. Snapchat addressed the criticism by implementing additional safety measures and parental controls.
While Snapchat’s My AI mishap seems harmless, it raises an important AI safety question: How do we prevent AI from going rogue? Several tech giants, including ChatGPT creator OpenAI, appear to lack systems for keeping their own AI technologies under control.