A new study highlights serious failings in Meta’s moderation of self-harm content on Instagram. The research comes from Digitalt Ansvar, a Danish organization that promotes responsible digital development. The findings point to a troubling link between the platform’s current practices and the mental health challenges many teenagers face today.
Findings of the Study
Digitalt Ansvar’s study shows that Instagram’s moderation is not effective. The organization found that Instagram’s algorithm unintentionally promotes harmful self-harm material, including graphic images of blood and razor blades as well as messages encouraging self-injury. Researchers discovered that explicit images were not being removed and that the platform also facilitated connections among vulnerable users.
To investigate this, researchers created fake profiles of 13-year-olds and shared 85 pieces of self-harm-related content within a private Instagram network. Despite Meta’s claims that its AI systems remove 99% of harmful content, none of the images were flagged or removed during the month-long experiment.
The Role of the Algorithm
The research raises alarms about Instagram’s lack of effective action and suggests that the platform’s AI capabilities are underutilized. Digitalt Ansvar developed its own tool to analyze the shared self-harm content; it identified 38% of the self-harm images and 88% of the most explicit posts. That this tool caught content Instagram’s own moderation missed entirely indicates that Meta has the technology to address the issue but is not using it effectively.
Furthermore, the study found that Instagram’s algorithm actively helps self-harm networks grow. When a member of a self-harm group connected with one of the newly created 13-year-old profiles, Instagram suggested that the teen befriend all other members of the group. The recommendation system may therefore not only fail to reduce the spread of harmful content but actively amplify it.
Meta’s Response
In response to these findings, Meta defended its content moderation policies. The company reiterated that content promoting self-injury is against Instagram’s rules and is generally removed. It also pointed to the introduction of Instagram Teen Accounts, which are designed to protect teenagers by placing stricter restrictions on the sensitive content they can see.
Meta claimed it removed over 12 million pieces of content related to suicide and self-harm in the first half of 2024, most of it flagged proactively, according to its statement. Experts, however, are not convinced.
Criticism from Experts
Psychologist Lotte Rubæk, who resigned from Meta’s global expert group on suicide prevention earlier this year, criticized the company’s failure to remove explicit self-harm content. She expressed disappointment in the findings and noted that such content is increasingly linked to suicides among young people.
Rubæk stated, “The failure to moderate self-harm images can have severe consequences. These groups go unnoticed by parents, authorities, and those who could offer support.”
Similarly, Hesby Holm, CEO of Digitalt Ansvar, raised concerns about Instagram’s role in facilitating these harmful networks. He emphasized the risk of Instagram prioritizing user engagement over safety. “We don’t know if they moderate larger public groups,” he said, “but the problem is that self-harm groups tend to be smaller and more private. These are often the groups that go unchecked.”
Implications for Social Media Regulation
The implications of this study are significant, particularly in light of the European Union’s Digital Services Act, which requires large digital platforms to identify and mitigate systemic risks to users’ well-being. Meta’s ineffective handling of self-harm content raises serious questions about the company’s compliance with these regulations and underscores its responsibility for the mental health of young users.
Moving Forward
Growing evidence points to a troubling trend: Instagram is not doing enough to protect its youngest users from harmful content and may even be fueling networks that put vulnerable individuals at risk. As public scrutiny increases, pressure is mounting on Meta to demonstrate accountability and take meaningful action to protect the mental health of the children and teenagers who use its services.