Meta to blur Instagram messages containing nudity in latest move...
April 11 (Reuters) - Instagram will test features that blur messages containing nudity to safeguard teens and prevent potential scammers from reaching them, its parent Meta said on Thursday as it tries to allay concerns over harmful content on its apps.
The tech giant is under mounting pressure in the United States and Europe over allegations that its apps are addictive and have fueled mental health issues among young people.
Meta said the protection feature for Instagram's direct messages would use on-device machine learning to analyze whether an image sent through the service contains nudity.
The feature will be turned on by default for users under 18 and Meta will notify adults to encourage them to turn it on.
"Because the images are analyzed on the device itself, nudity protection will also work in end-to-end encrypted chats, where Meta won't have access to these images - unless someone chooses to report them to us," the company said.
Unlike Meta's Messenger and WhatsApp apps, direct messages on Instagram are not encrypted but the company has said it plans to roll out encryption for the service.
Meta also said that it was developing technology to help identify accounts that might be potentially engaging in sextortion scams and that it was testing new pop-up messages for users who might have interacted with such accounts.
In January, the social media giant said it would hide more content from teens on Facebook and Instagram, adding this would make it more difficult for them to come across sensitive content such as suicide, self-harm and eating disorders.
Attorneys general of 33 U.S. states, including California and New York, sued the company in October, saying it repeatedly misled the public about the dangers of its platforms.
In Europe, the European Commission has sought information on how Meta protects children from illegal and harmful content. (Reporting by Granth Vanaik in Bengaluru; Editing by Aditya Soni and Alan Barona)