CALIFORNIA (Kashmir English): Meta will remove end-to-end encrypted direct messaging from Instagram by May 8, 2026, saying it is discontinuing the feature because only a small number of users chose to activate it. In a statement, the company said users who want to send encrypted messages can use WhatsApp, where end-to-end encryption is the default.
Meta first began testing end-to-end encryption for Instagram direct messages in 2021, as part of CEO Mark Zuckerberg's push to build more private communication across the company's apps.
The feature never saw a full rollout: it remained opt-in everywhere and had to be switched on manually. Two weeks after the Russia-Ukraine war began in February 2022, Meta extended encrypted Instagram messaging to adult users in Russia and Ukraine to strengthen their security during the conflict. The discontinuation comes as major social media companies continue to debate the role of encryption in messaging platforms.
TikTok announced last week that it would not implement end-to-end encryption for its direct messages, arguing that encryption would hinder law enforcement and safety personnel who need to inspect messages under specific conditions. Meta has not announced a replacement for encrypted Instagram messages.
Instagram Policy for Teens
Earlier, Instagram said it would alert parents if their teenagers repeatedly searched for terms related to suicide or self-harm within a short period.
The decision comes amid growing pressure on governments to follow Australia’s ban on social media use for under-16s.
In January, Britain said it was considering restrictions to protect children online, following Australia’s move in December. Greece, Spain, and Slovenia have said in recent weeks that they are also looking at limiting social media access.
Instagram, owned by Meta Platforms Inc, said it would start alerting parents who are signed up to its optional supervision setting if their children try to access suicide or self-harm content.
“These alerts build on our existing work to help protect teens from potentially harmful content on Instagram,” the platform said in a statement. “We have strict policies against content that promotes or glorifies suicide or self-harm.”
Instagram’s existing policy
Instagram said its existing policy is to block such searches and redirect people to support resources, adding that it would begin the alerts from next week for those signed up in the US, Britain, Australia, and Canada.
Governments across the globe are increasingly seeking to protect children from harm online, particularly after worries over the AI chatbot Grok, which has generated non-consensual sexualised images.
Instagram’s “teen accounts” for under-16s require a parent’s permission to change settings, and parents can add an extra layer of monitoring with their teenager’s consent.