Did you see the trailer for the new remake of Back to the Future starring Tom Holland and Robert Downey Jr.? Or how about the video where Taylor Swift gives away 3,000 Le Creuset cookware sets? AI-generated content is getting so good that it can be nearly impossible for an unaided human to tell the difference between what’s real and what’s artificial.

Aside from the blurring line between fact and fiction being generally terrifying, several elections are coming in the near future that add just a bit more urgency to the problem. The social platforms are at least taking a shot at a solution.

Meta is expanding its AI labelling efforts, using its models to seek out and identify content that’s been either generated or manipulated using AI tools. Every photo and video uploaded to a Meta property is going to get scanned for markers that it may be AI-generated. If it is, it gets a label.
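Meta hasn’t published its detector, but one signal the industry has standardized on is the IPTC DigitalSourceType metadata value `trainedAlgorithmicMedia`, which several AI image tools embed in the files they produce. Here’s a deliberately naive sketch that just scans a file’s raw bytes for that marker — real detection would parse the XMP/C2PA metadata properly, and the assumption that the marker survives as plain text in the file is illustrative:

```python
from pathlib import Path

# IPTC DigitalSourceType value that signals AI-generated media.
# (Byte-scanning is a naive stand-in for proper XMP/C2PA parsing.)
AI_MARKER = b"trainedAlgorithmicMedia"

def looks_ai_generated(path: str) -> bool:
    """Return True if the file's bytes contain the AI-generation marker."""
    return AI_MARKER in Path(path).read_bytes()
```

Note that metadata like this is trivial to strip, which is exactly why platforms pair it with their own model-based detection rather than trusting the file alone.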

Where Meta’s main concern is content that is generated elsewhere and uploaded to Facebook/Instagram to trick users, Snapchat may actually have the opposite issue. Its filters and other tools have gotten so good that some users have built whole accounts using filters that modify their appearance.

Snapchat is starting at the source, adding a little Snapchat Ghost as a watermark to label content that has been modified using its AI tools. It also discloses to users which of its tools use AI, and which will therefore carry that watermark.

So What?

First, as users, we can’t yet rely on the platforms to act as our AI filters. At least for the foreseeable future, we’ll need to approach every piece of content we encounter with a healthy amount of skepticism (especially as it relates to politics).

When it comes to our own content, we need to assume that anything we create using an AI tool is going to get identified and labelled. So, use AI to help generate ideas, plan content, and even edit your work — but if you’re going to publish something that’s AI-generated, just know that it’s probably going to get called out.