TikTok to crack down on extremist and AI content

At its European Trust and Safety Forum, which took place in Dublin, the platform said it had removed more than 6.5 million videos in the first half of the year for breaching its rules on violent and hateful content.

The company said accounts linked to extremist groups attempted to avoid detection by using coded emojis, rebranding accounts and directing users to external websites.

Seventeen organisations, operating more than 920 accounts, were reported to be linked to extremist material, including content praising terrorist attacks, encouraging mass violence or targeting protected groups.

However, TikTok said 98.9pc of the content was taken down before users flagged it, and 94pc was removed within 24 hours.

TikTok said that, after removing these accounts, it continues to monitor attempts to rebuild their activity on the platform.

A new trial with the Violence Prevention Network, an organisation that works to counter violent extremism, will introduce in-app resources for users in Germany searching terms linked to violent extremism.

TikTok said it will review the trial before deciding whether to expand it to other countries.

In a separate update, TikTok announced new controls and labelling methods for AI-generated content (AIGC). It will begin testing an AIGC control within its “Manage Topics” menu, allowing users to increase or reduce the amount of AI-generated material shown in their “For You” feeds.

This will operate alongside existing personalisation tools, including keyword filters and the “not interested” option, it said.

The platform is also set to introduce invisible watermarking for AI-generated videos. The watermark will be applied to content created with TikTok’s AI tools, such as AI Editor Pro, and to videos uploaded with C2PA Content Credentials. It is designed to remain intact even if content is downloaded, edited or reposted elsewhere, the company said.

TikTok noted that it has already labelled more than 1.3 billion videos using a combination of creator labels, detection models and C2PA data, but acknowledged that these labels can be stripped when content is reuploaded or edited on other platforms.

The platform has also launched a $2m (€1.73m) AI literacy fund, aimed at supporting organisations producing educational material about AI transparency and safety.
