- A man in Levittown, Pennsylvania was accused of beheading his father and publicizing the killing in a 14-minute YouTube video that quickly spread online.
- YouTube, the platform where the video was posted, says it has strict policies against graphic violence and extremism, but questions remain about how the video slipped past its moderation systems.
- The incident adds to growing concerns about whether social media companies’ moderation practices can prevent the spread of violent and extremist content, and underscores the need for stronger efforts to combat online extremism.
Additional Coverage:
Neighbors in Levittown, Pennsylvania were shocked after a local man was accused of beheading his father inside their home. The horrifying incident was publicized through a 14-minute YouTube video that quickly spread across the web. The graphic video has once again raised concerns about social media companies’ ability to prevent the spread of horrific content. YouTube, the platform on which the video was posted, says it has strict policies against graphic violence and violent extremism. The company removed the video and terminated the associated account, but questions remain about how the video was caught and whether it could have been removed sooner.
YouTube relies on a combination of artificial intelligence and human moderators to monitor its platform. In the third quarter of 2023 alone, YouTube removed 8.1 million videos for policy violations, more than 95% of which were first flagged by automated systems. However, the company did not respond to inquiries about how the beheading video slipped through its moderation systems.
This incident comes as social media companies face scrutiny from federal lawmakers over child safety online. The CEOs of Meta, TikTok, and other platforms recently testified before lawmakers about the lack of progress in this area. Despite YouTube’s popularity among teens, the company did not attend the hearing.
The beheading video from Pennsylvania adds to the growing list of horrifying content that has been shared on social media platforms in recent years. Livestreams of mass shootings and acts of violence both in the US and abroad have raised questions about the effectiveness of moderation practices on these platforms.
Experts highlight the role of human moderators in catching content that may be new or unusual, and therefore difficult for automated systems to detect. While artificial intelligence is improving, it is not yet foolproof. The Global Internet Forum to Counter Terrorism (GIFCT), a group formed by tech companies to combat the spread of violent content, alerted its members about the video within hours. However, the video had already spread to other platforms, such as X (formerly Twitter), where it remained for several hours.
Social media and the internet have made it easier for individuals to explore extremist groups and ideologies. The ease of access allows those predisposed to violence to find like-minded individuals and communities that reinforce their beliefs. While social media platforms have policies to remove violent and extremist content, the emergence of less closely moderated sites has provided a breeding ground for hateful ideas.
Experts argue that social media companies need to be more vigilant in regulating violent content and combating extremism, and they view current efforts as insufficient given the growing online presence of extremism and terrorism. Calls are rising for greater transparency, investment in trust and safety staff, and a serious commitment to pushing back against violent content.
In conclusion, the circulation of the graphic video has once again raised concerns about social media platforms’ ability to prevent horrific content from spreading. The incident underscores the need for improved moderation practices and a coordinated effort from social media companies to combat violent extremism online.