content moderation
-
X Faces Criticism Over Grok’s IBSA Handling
Read Full Article: X Faces Criticism Over Grok’s IBSA Handling
X, formerly Twitter, has faced criticism for not adequately updating its chatbot, Grok, to prevent the distribution of image-based sexual abuse (IBSA), including AI-generated content. Despite adopting the IBSA Principles in 2024, which are aimed at preventing the nonconsensual distribution of intimate images, X has been accused of not fulfilling its commitments. This has led to international probes and the potential for legal action under laws like the Take It Down Act, which mandates swift removal of harmful content. The situation underscores the critical responsibility of tech companies to prioritize child safety as AI technology evolves.
-
X Faces Scrutiny Over AI-Generated CSAM Concerns
Read Full Article: X Faces Scrutiny Over AI-Generated CSAM Concerns
X is facing scrutiny over its handling of AI-generated content, particularly concerning Grok's potential to produce child sexual abuse material (CSAM). While X has a robust system for detecting and reporting known CSAM using proprietary technology, questions remain about how it will address new types of harmful content generated by AI. Users are calling for clearer definitions and stronger reporting mechanisms to manage Grok's outputs, as the current system may not automatically detect these new threats. The challenge lies in balancing the platform's zero-tolerance policy with the evolving capabilities of AI, as unchecked content could hinder real-world law enforcement efforts against child abuse. Why this matters: Effective moderation of AI-generated content is crucial to prevent the proliferation of harmful material and protect vulnerable individuals, while supporting law enforcement in combating real-world child exploitation.
-
160x Speedup in Nudity Detection with ONNX & PyTorch
Read Full Article: 160x Speedup in Nudity Detection with ONNX & PyTorch
An innovative approach to enhancing the efficiency of a nudity detection pipeline achieved a remarkable 160x speedup by utilizing a "headless" strategy with ONNX and PyTorch. The optimization involved converting the model to an ONNX format, which is more efficient for inference, and removing unnecessary components that do not contribute to the final prediction. This streamlined process not only improves performance but also reduces computational costs, making it more feasible for real-time applications. Such advancements are crucial for deploying AI models in environments where speed and resource efficiency are paramount.
-
OpenAI’s Rise in Child Exploitation Reports
Read Full Article: OpenAI’s Rise in Child Exploitation Reports
OpenAI has reported a significant increase in CyberTipline reports related to child sexual abuse material (CSAM) during the first half of 2025, with 75,027 reports compared to 947 in the same period in 2024. This rise aligns with a broader trend observed by the National Center for Missing & Exploited Children (NCMEC), which noted a 1,325 percent increase in generative AI-related reports between 2023 and 2024. OpenAI's reporting covers instances of CSAM encountered through its ChatGPT app and API access, though it does not yet include data from its video-generation app, Sora. The surge in reports comes amid heightened scrutiny of AI companies over child safety, with legal actions and regulatory inquiries intensifying. This matters because it highlights the growing challenge of managing the potential misuse of AI technologies and the need for robust safeguards to protect vulnerable populations, especially children.
