More than 40 US states are taking legal action against Meta, accusing the company of profiting from children's suffering. The move comes amid growing concern about the impact of social media platforms on young people's mental health.
Meta has announced the development of new tools aimed at safeguarding teenage users from “sextortion” scams on Instagram. These scams involve coercing individuals into sharing explicit images and then threatening to expose them unless a ransom is paid.
One of the tools being tested is an AI-driven “nudity protection” feature that can detect and blur images containing nudity sent to minors through the app’s messaging system. This initiative is part of Meta’s efforts to protect young users from unwanted and potentially harmful content.
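Meta has not published how the feature works internally, but a rough sketch of the detect-and-blur control flow it implies might look like the following. The function names, threshold, and crude skin-tone heuristic are illustrative placeholders, not Meta's model:

```python
from PIL import Image, ImageFilter

# Hypothetical confidence cutoff; Meta's real threshold is not public.
NUDITY_THRESHOLD = 0.8

def nudity_score(image: Image.Image) -> float:
    """Toy stand-in for a real classifier: the fraction of roughly
    skin-toned pixels serves as a fake 'nudity confidence' score."""
    rgb = image.convert("RGB")
    pixels = list(rgb.getdata())
    skin = sum(
        1 for r, g, b in pixels
        if r > 95 and g > 40 and b > 20 and r > g and r > b
    )
    return skin / len(pixels)

def protect_incoming_image(path: str) -> Image.Image:
    """Blur an incoming DM image locally if it appears to contain
    nudity, before it is ever rendered to the recipient."""
    image = Image.open(path)
    if nudity_score(image) >= NUDITY_THRESHOLD:
        return image.filter(ImageFilter.GaussianBlur(radius=30))
    return image
```

The design point this illustrates is that both the decision and the blur happen on the recipient's device, before the image is shown.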
Beyond these technical measures, Meta plans to offer safety tips and guidance to anyone sending or receiving such messages. The multi-state lawsuit, meanwhile, alleges that the company has prioritized profit over children's well-being.
Despite these controversies, Meta has pledged measures to improve safety for users under 18, including stricter content restrictions and better parental-supervision tools, and says it is committed to protecting young people from online threats.
The “nudity protection” tool relies on on-device machine learning, meaning Meta does not access the images themselves unless a user reports them. The company will also use AI tools to identify and restrict accounts that behave inappropriately towards young users on the platform.
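Meta has not disclosed which signals these detection models use. The rule-based stub below, with invented fields and thresholds, only gestures at the kind of behavioural scoring involved:

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Invented example signals; the real model's features are not public.
    account_age_days: int
    messages_to_teen_accounts: int
    blocks_received: int

def should_restrict(signals: AccountSignals) -> bool:
    # Toy rule-based stand-in for ML scoring: new accounts that
    # mass-message teens and accumulate blocks get restricted from
    # contacting young users.
    return (
        signals.account_age_days < 30
        and signals.messages_to_teen_accounts > 20
        and signals.blocks_received >= 3
    )

# A week-old account that has messaged dozens of teens and been
# blocked several times would be flagged.
print(should_restrict(AccountSignals(7, 40, 4)))  # True
```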
Internal research disclosed by whistle-blower Frances Haugen had already highlighted the risks Meta's platforms pose to young people's mental health. With the company still under scrutiny over data privacy, strengthening child protection remains a key priority.