Newly released data reveal how a no-compromise approach to AI-generated and other fictional sexual content depicting children has diverted resources away from prosecuting real child sexual abuse material (CSAM).
This article analyses how the informational architectures and infrastructures of federated social media platforms shape content moderation processes.