TikTok on Thursday said it will begin automatically labelling AI-generated content from several platforms, including OpenAI's Dall-E and TikTok's own tools and generators.
Authentication has become a major concern amid the rapid development of AI, with authorities worried about the proliferation of deepfakes that could disrupt society.
"AI-generated content is an incredible creative outlet, but transparency for viewers is critical," said Adam Presser, Head of Trust & Safety at TikTok.
The automatic labelling of AI-generated or edited content would be a first among social media platforms.
TikTok, owned by Chinese company ByteDance, said it would begin testing a label "that we eventually plan to apply automatically to content that we detect was edited or created with AI."
The company told the Financial Times that this will include content made using Adobe's Firefly tool, TikTok's own AI image generators and OpenAI's Dall-E.
It also said it was joining a coalition of technology and media groups, led by Adobe, that is developing industry-wide labelling of AI-generated content, sometimes called watermarking.
The Coalition for Content Provenance and Authenticity (C2PA) sets a technical standard for flagging and identifying the provenance of AI-generated content.
"By partnering with peers to label content across platforms, we're making it easy for creators to responsibly explore AI-generated content, while continuing to deter the harmful or misleading AI generated content that is prohibited on TikTok," Presser said.
TikTok said that the auto-labelling will be gradual at first, but as more platforms mark their AI-generated content according to the C2PA standard "we'll be able to label more content."
OpenAI, the Microsoft-backed artificial intelligence company behind the popular image generator Dall-E and ChatGPT, on Tuesday also joined the initiative and announced the launch of a tool aimed at detecting whether digital images have been created by AI. (AFP)