According to the Snapchat Newsroom insight page, over 150 million people have sent more than 10 billion messages to the My AI chatbot. Most of these messages involve everyday topics such as fashion, sports, entertainment, education, and food and dining recommendations. Around early June 2024, an upsurge of social media influencers began manipulating the Snapchat AI account by pushing it to reveal concealed information that some view as potentially accurate. These influencers have been posting screenshots of the AI character slipping up after being told that the current year is 2029.
When the AI is told that it is not actually 2024 and that it is in the “future,” the influencers ask who the president is in 2027, and it answers that Kamala Harris is. This led a tremendous number of social media users to question whether this would actually happen, waiting for the presidential election to confirm the truth. On the other hand, many people did not fall for it, since artificial intelligence does not always give correct information.
A number of people have also reported receiving unsettling Snapchat images, or “Snaps,” of ceilings and buildings that seem familiar to them. The chatbot then deletes these images almost immediately after the user has viewed them.
Further observations and investigations by Enterprise Management 360 have also indicated that My AI is not completely safe to use. Content manager Ellis Stewart said, “While Snapchat prioritizes safety and uses filters to prevent My AI from displaying offensive or inappropriate information, it still may not be safe for all users.” This gives Snapchat users reason to be cautious about the inconsistent safety settings meant to protect them. “The chatbot still has the possibility of sharing inappropriate and harmful content, which could be damaging to young users.”
Social media users are not the only ones concerned about this unpredictable technology; news outlets such as CNN are writing about it as well. Samantha Murphy Kelly of CNN wrote the article “Snapchat’s new AI chatbot is already raising alarms among teens and parents.”
In her piece, Kelly writes, “The new tool is facing backlash not only from parents but also from some Snapchat users who are bombarding the app with bad reviews in the app store and criticisms on social media over privacy concerns, ‘creepy’ exchanges and an inability to remove the feature from their chat feed unless they pay for a premium subscription.”
The inability to remove the feature, as Kelly notes, is decidedly odd, and it is clear that this unsettling feeling weighs on many users.
Snapchat has publicly acknowledged that the chatbot is only somewhat safe to use and will require continuous improvement as the AI landscape continues to evolve.
Snapchat Support says, “Because My AI is an evolving feature, it is better to always independently check answers provided by My AI before relying on any advice, and you should not share confidential or sensitive information.”
This may be a relief for many; even so, it is in users’ best interest to refrain from sending sensitive information and, at the same time, not to take any information the chatbot gives at face value.