High Severity

Criminal Liability for Platform Moderation

This provision criminalizes hate speech that incites genocide or aggravated violence, but when combined with the broad "control over communication" standard in section 39, it creates criminal liability exposure for platform moderators, editors, and content curators. Anyone who "substantially dictates how content should be framed" or can "communicate or remove content" may face criminal penalties for hate speech they did not originate. The provision is also incompletely drafted: it references "section []" without specifying penalties, and it cuts off the definition of "aggravated violence" mid-sentence. This creates legal uncertainty about what conduct is prohibited and what penalties apply. The combined effect chills legitimate platform moderation and editorial judgment, forcing businesses to choose between over-moderation and the risk of criminal liability.