Game publisher Finji has a problem: TikTok is generating and running advertisements for their games without asking permission. But it gets worse—the AI-generated ads contain racist content that misrepresents the games and damages the developers' reputations.
This is what happens when you automate ad creation at scale without human oversight. TikTok built a system that generates racist content, apparently charges advertisers for it, and does all of this without even asking whether the advertisers want the ads in the first place. It's algorithmic brand destruction as a service.
Finji, the publisher behind indie hits like Tunic and Night in the Woods, discovered the unauthorized ads when users started reporting them. The ads weren't just unauthorized—they contained offensive stereotypes and imagery that had nothing to do with the actual games. Imagine putting years of work into a thoughtful indie game, only to have TikTok's AI slap together a racist ad and plaster it across the platform with your name on it.
The mechanics of what TikTok appears to be doing are troubling. The platform seems to be scraping information about games—titles, screenshots, maybe app store descriptions—and feeding that into a generative AI system that creates video ads. No human review. No permission from the publisher. No checks to ensure the content isn't offensive or wildly inaccurate.
And then, in what might be the most audacious part of this whole mess, TikTok is apparently charging advertisers for these AI-generated disasters. It's unclear whether Finji is being billed for ads they never wanted, but the mere possibility raises serious questions about how TikTok's advertising platform operates.
This isn't an isolated incident of a system making a mistake. This is a fundamental design failure. When you build an automated ad generation system, you have a choice: you can include human oversight to catch problems, or you can optimize purely for speed and scale. TikTok clearly chose the latter.
Generative AI is a powerful tool for creating content quickly. But speed without judgment is how you end up with racist ads for indie games. The technology can generate images and video faster than any human could, but it has no understanding of context, appropriateness, or brand safety. That's why every responsible deployment of AI creative tools includes human review before content goes live.
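To make that design choice concrete, here's a minimal sketch of the kind of review gate the argument calls for. All the names here are hypothetical, not TikTok's actual system: the point is simply that nothing auto-generated reaches publication without an explicit human approval.

```python
from dataclasses import dataclass
from enum import Enum


class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    REJECTED = "rejected"


@dataclass
class GeneratedAd:
    advertiser: str
    creative: str
    status: Status = Status.PENDING


class ReviewQueue:
    """Hypothetical human-in-the-loop gate for AI-generated creatives."""

    def __init__(self) -> None:
        self._queue: list[GeneratedAd] = []

    def submit(self, ad: GeneratedAd) -> None:
        # Every generated creative enters the queue as PENDING;
        # generation alone never makes an ad publishable.
        self._queue.append(ad)

    def review(self, ad: GeneratedAd, safe: bool) -> None:
        # A human reviewer makes the final brand-safety call.
        ad.status = Status.APPROVED if safe else Status.REJECTED

    def publishable(self) -> list[GeneratedAd]:
        # Only explicitly approved creatives are ever served.
        return [ad for ad in self._queue if ad.status is Status.APPROVED]
```

The structure is trivial, which is rather the point: the cost of the safeguard is a queue and a reviewer, not a research problem.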
TikTok skipped that step. And now small indie developers—who have limited resources to fight back against a platform as large as TikTok—are stuck dealing with the fallout.
The broader implication: this is what unregulated AI deployment looks like. A platform with billions of users builds an automated system to generate more ads, doesn't include safety checks, and when it inevitably creates offensive content, there's no accountability. The developers whose brands are being damaged have to publicly complain on social media just to get someone's attention.
Finji's case is public because they spoke out. How many other developers are dealing with the same problem but don't have the platform or resources to make noise about it? How many users are seeing AI-generated content that misrepresents products, spreads misinformation, or worse?
The technology to generate ads automatically exists. The systems to ensure those ads aren't racist garbage also exist. TikTok chose to deploy one without the other.