With AI-generated videos becoming almost indistinguishable from real ones, I feel like the Dead Internet Theory is starting to make more sense than ever.
One idea I’ve been thinking about is some kind of platform-wide verification for creators who commit to never posting AI-generated videos. For example, a visible green tick or ‘not-ai’ label that confirms the account only shares real, human-created content.
This wouldn’t be automatic. Accounts would need to meet certain minimum criteria to qualify, and if there’s credible evidence or repeated reports that an account posted AI content, the tag would be permanently removed. The idea is to create a set of trusted sources people can rely on without constantly questioning whether what they’re watching is real.
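To make the mechanism concrete, here's a toy sketch of how the tag lifecycle could work. Everything in it (the thresholds, the field names) is a made-up assumption for illustration, not a proposal for real numbers; the one design point it encodes is that revocation keys off confirmed evidence rather than raw report counts:

```python
from dataclasses import dataclass

# All names and thresholds below are hypothetical, purely for illustration.
MIN_ACCOUNT_AGE_DAYS = 90    # hypothetical: account must have some history
MIN_ORIGINAL_UPLOADS = 10    # hypothetical: a body of verifiable original work

@dataclass
class Account:
    age_days: int
    original_uploads: int
    confirmed_ai_violations: int = 0  # set by human review, not raw user reports
    tag_revoked: bool = False

def qualifies_for_not_ai_tag(acct: Account) -> bool:
    """Minimum criteria for the 'not-ai' tag; revocation is permanent."""
    if acct.tag_revoked:
        return False
    return (acct.age_days >= MIN_ACCOUNT_AGE_DAYS
            and acct.original_uploads >= MIN_ORIGINAL_UPLOADS
            and acct.confirmed_ai_violations == 0)

def record_confirmed_violation(acct: Account) -> None:
    """Call only once credible evidence of posted AI content is confirmed."""
    acct.confirmed_ai_violations += 1
    acct.tag_revoked = True  # one confirmed violation permanently removes the tag

# Example lifecycle:
acct = Account(age_days=200, original_uploads=40)
print(qualifies_for_not_ai_tag(acct))   # True
record_confirmed_violation(acct)
print(qualifies_for_not_ai_tag(acct))   # False, and stays False
```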
I’m not saying this solves everything, but it could be one way to preserve authenticity as AI content keeps flooding feeds.
I have a few other ideas too, but I’ll save those for a separate thread. Curious what others think. Is something like this realistic, or are we already past the point of no return?
EDIT: I was also thinking that higher monetary incentives for these ‘not-ai’ tagged creators might help.
But there are bots that don’t use AI. They just repost other people’s real videos, so they would slip through the cracks and get this verified checkmark.
Fair point. That’s why the verification wouldn’t just be “no AI,” but could be “original human authorship.” Repost accounts, even of real videos, wouldn’t qualify. The badge could be tied to accountability and proof of creation, not just the content itself.
Visibility for creators who commit to non-AI, non-disguised-bot content is more vital now that roughly half of all content is AI-generated or bot-posted. (Not entirely non-bot, since some bots in threads, e.g. auto-moderators, do their jobs without pretending to be people.)
It's like that analogy of picking the Skittles out of the M&Ms (or vice versa): once the ratio flips and one side becomes dominant, it's easier to pick out the minority instead.
At a certain point we need to label/tag the things we do want rather than the things we don't want.
Repeated reports would never work. Have you seen how toxic the art community is? They tear down legitimate artists, calling their work AI slop, and demand they prove themselves over and over. Imagine if account flags were automated based on that nonsense.
What I was thinking is not to go the route of reporting AI content but to actually verify human-made content: give specific tags to creators who commit to posting original content, and give them more visibility on social media. And even if these creators want to post AI content for educational purposes or just because it’s fun, they’d responsibly use the AI tag.
This would be as effective as “100% organic” tags on food. As in, not at all.
It’s not in social media’s best interest to acknowledge it. AI videos are content, bots are users. It all boosts their numbers.
I don’t know the stats on this yet, but I believe the majority of people want to see real, human-made content, unless it’s educational/fun content created by AI.
Watching more useless AI-generated content will only encourage fake news and false information. Eventually people stop watching videos on the internet because they can’t tell what’s real, bot interaction grows, and then we welcome ‘The Dead Internet’.
Yeah, no one I know wants to see it. The bottom line for platforms is advertising revenue, and until that looks like it’s going to take a hit, they won’t enforce anything.
Yeah, I agree. The driving factor is revenue, and that only changes when people stop engaging, but it seems like that’s not happening anytime soon.