
How TikTok Counters Deceptive Behavior

TikTok is built on the joy of authentic experiences, and we strictly prohibit attempts to undermine our platform's integrity, mislead people, or manipulate our systems. There's a wide range of inauthentic activities that deceptive actors try to carry out across online platforms, which we prohibit and proactively work to prevent. In this post, we shine a light on how we define, remove, and stay ahead of these deceptive behaviors and the actors behind them, including:

  • The wide range of deceptive behaviors that our policies prohibit
  • Our strategy for tackling covert influence operations, which can be particularly harmful and sophisticated
  • How we prevent fake engagement at scale
  • The additional ways we proactively promote and safeguard authentic experiences

Countering deception with comprehensive policies


Deceptive behaviors can cover a wide range of topics, tactics and goals—from spam to impersonation to covert influence operations. We have comprehensive policies to tackle them at scale, and regularly update these rules as inauthentic behaviors evolve. These include:

  • We do not allow the use of accounts to engage in platform manipulation, such as the use of automation to register or operate accounts in bulk.
  • We do not allow spam, including manipulating engagement signals to amplify the reach of certain content (such as using bots, scripts, or other means to distribute content or interactions in bulk and artificially boost views, likes, comments, shares, or other engagement metrics).
  • We do not allow impersonation, including accounts that pose as another real person or entity (other than parody accounts that are clearly disclosed as such).
  • We do not allow presenting as a fake person or entity that does not exist (a fake persona) with a demonstrated intent to mislead others on the platform.
  • We do not allow fake engagement, such as facilitating the trade or marketing of services that artificially increase engagement, selling followers or likes, or providing instructions on how to artificially increase engagement on TikTok.
  • We do not allow covert influence operations where networks of accounts work together to mislead people or our systems and try to strategically influence public discussion.
  • We do not allow the indiscriminate dissemination of hacked materials that would pose significant harm.


Disrupting covert influence operations

While covert influence operations are relatively rare compared to other deceptive behaviors, they use particularly sophisticated tactics and can cause disproportionate harm, making them one of the most challenging types of deceptive behaviors our industry tackles. That's why we've invested heavily in building dedicated teams, policies and transparency reports to focus on covert influence operations full-time.


Defining covert influence operations

Our policies define covert influence operations as coordinated, inauthentic behavior where networks of accounts work together to mislead people or our systems and influence public discussion on important social issues, including elections. This includes networks where the accounts themselves are inauthentic, as well as networks of potentially authentic accounts that post content about political or social issues that target a specific country on behalf of an undisclosed foreign entity like a government, political party, intelligence agency, military, company, or organization.


To enforce our policies against influence operations, we have dedicated, international trust and safety teams with specialized expertise across threat intelligence, security, law enforcement, and data science working on them full-time. These teams continuously pursue and analyze on-platform signals of deceptive behavior, as well as off-platform activity and leads from external sources. They also collaborate with external intelligence vendors to support specific investigations on a case-by-case basis.


We report the influence operations we disrupt every month in a dedicated report in our Transparency Center. In 2024, we disrupted over 50 influence operations around the world.


Targeting inauthentic expression

Accounts that engage in influence operations often avoid posting content that would, by itself, violate platform guidelines. That's why we focus on accounts' behavior and technical linkages when analyzing deceptive behaviors (one such signal is sketched after the list below), specifically looking for evidence that:

  1. Accounts are coordinating with each other. For example, they are operated by the same entity, share technical similarities like using the same devices, or are working together to spread the same narrative.
  2. Accounts are misleading our systems or users. For example, they are trying to conceal their actual location, or using fake personas to pose as someone they're not.
  3. Accounts are attempting to manipulate or corrupt public debate to impact the decision making, beliefs and opinions of a community. For example, they are attempting to shape discourse around an election or conflict.
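
To make the first criterion concrete, here is a minimal, hypothetical sketch of how a shared technical fingerprint, such as many accounts logging in from the same device, could surface candidate clusters for analysts to investigate. The data model and threshold are invented for illustration; this is not TikTok's actual detection pipeline, and a real system would weigh many behavioral and technical signals before judging a network inauthentic.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Account:
    account_id: str
    device_ids: set[str] = field(default_factory=set)  # devices seen at login

def candidate_clusters(accounts: list[Account], min_size: int = 3) -> list[set[str]]:
    """Group accounts that share a device fingerprint: a weak but useful
    coordination signal that generates leads for human investigation."""
    by_device: dict[str, set[str]] = defaultdict(set)
    for acct in accounts:
        for device in acct.device_ids:
            by_device[device].add(acct.account_id)
    # Keep only device-sharing groups large enough to look coordinated.
    return [ids for ids in by_device.values() if len(ids) >= min_size]
```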


These criteria are aligned with industry standards and guidance from the experts we regularly consult. They're particularly important to help us distinguish malicious, inauthentic coordination from authentic interactions that are part of healthy and open communities. For example, it would not violate our policies if a group of people authentically worked together to raise awareness or campaign for a social cause, or express a shared opinion (including political views)—nor would it violate our policies if those efforts successfully helped relevant content reach other TikTok community members with shared interests. However, multiple accounts deceptively working together to spread similar messages in an attempt to influence public discussions is strictly prohibited and would be disrupted.


Disrupting influence networks

Countering covert influence operations is an evolving challenge for any platform because the adversarial actors behind them continuously change their tactics and how they attempt to conceal their efforts. That's why we continuously evolve our detection systems for on-platform activity, work with threat intelligence vendors for additional signals, and encourage authorities to share any potential leads with us proactively. We also look at off-platform activity, and make use of open-source intelligence to identify any related deceptive behavior on TikTok.


After we remove networks, we also monitor vigilantly to prevent them from returning to our platform. Every month, we remove thousands of accounts associated with previously disrupted networks that we catch attempting to re-establish their presence. We report these account removals in our Covert Influence Operations Reports.


Preventing fake engagement

Fake engagement and spam are much more common deceptive behaviors than covert influence operations, and are typically motivated by personal profit. These schemes also often operate at vast scale and use more obvious tactics that are easier to spot, such as registering or operating fake accounts in bulk using automation.
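
As a rough illustration of why bulk automation is comparatively easy to spot, here is a simple velocity heuristic, with the event format and thresholds invented for illustration rather than drawn from TikTok's systems: it flags any registration source that creates accounts faster than a person plausibly could.

```python
from collections import defaultdict

def flag_bulk_sources(events: list[tuple[float, str]],
                      window_s: float = 3600.0,
                      max_per_window: int = 5) -> set[str]:
    """Flag sources (e.g., an IP address or device fingerprint) that
    register more than `max_per_window` accounts within any sliding
    `window_s` interval. `events` are (timestamp_seconds, source) pairs."""
    by_source: dict[str, list[float]] = defaultdict(list)
    for ts, source in events:
        by_source[source].append(ts)
    flagged = set()
    for source, times in by_source.items():
        times.sort()
        lo = 0
        for hi, ts in enumerate(times):
            while ts - times[lo] > window_s:
                lo += 1  # shrink the window from the left
            if hi - lo + 1 > max_per_window:
                flagged.add(source)
                break
    return flagged
```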


We invest in advanced technologies to intercept billions of fake engagement attempts every year, and report these efforts every quarter in our Community Guidelines Enforcement Reports. In 2024, using automated technology globally, we:

  • Prevented over 2 billion spam accounts from being created, and removed over 1 billion videos posted by fake accounts.
  • Prevented over 54 billion fake likes, and removed a further 8 billion fake likes.
  • Prevented over 33 billion fake follow requests, and removed over 5 billion fake followers.


Even at a smaller scale, and even without technology like automation, we don't allow the trade or marketing of services that attempt to artificially increase engagement or deceive TikTok's recommendation system. We remove content or accounts that violate these policies, and in Q3 2024, over 94% of the videos that violated those fake engagement policies were removed proactively.


Differentiating deceptive behaviors

Sometimes, behavior or content that violates our spam or fake engagement policies touches on the same topics that a covert influence operation might. For example, it's common in our industry for financially motivated actors to try to leverage sensitive issues like elections to drive engagement for personal profit. These cases are not classified as covert influence operations unless they meet our influence operations criteria, since they don't share the same strategic goals, technical signals, or deceptive tactics. However, they are still strictly prohibited and would be removed and reported as part of wider fake engagement efforts in our Community Guidelines Enforcement Report.
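
One highly simplified way to picture that triage, with the signal names invented here for illustration: a network is handled as a covert influence operation only when coordination, deception, and a strategic goal all co-occur; otherwise it is enforced under the spam and fake engagement policies.

```python
from dataclasses import dataclass

@dataclass
class NetworkSignals:
    coordinated: bool     # accounts operated or technically linked together
    deceptive: bool       # fake personas, concealed location, etc.
    strategic_goal: bool  # aims at public discourse rather than profit

def triage(signals: NetworkSignals) -> str:
    """Illustrative routing only: all three criteria must hold for a
    network to be treated as a covert influence operation."""
    if signals.coordinated and signals.deceptive and signals.strategic_goal:
        return "covert_influence_operation"
    return "spam_or_fake_engagement"
```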


More often than not, inauthentic behavior that is externally visible on the platform is not part of a covert influence network, which goes to much greater lengths to hide any obvious linkages and usually requires in-depth technical investigations to uncover.


Safeguarding authentic experiences

This work to counter deceptive behaviors is just one aspect of our expansive approach to safeguarding authentic experiences on TikTok. Just a few of these measures include:

  • We prohibit harmful misinformation, restrict unverified content from For You feeds, and partner with over 20 fact-checking organizations globally to enforce those policies accurately. 98% of the misinformation that violates our rules is removed proactively.
  • We label unverified content, connect people to authoritative sources of information in-app, and provide "verified" account badges to signal an account belongs to who it claims to be. TikTok is one of the only remaining platforms where verified badges are earned based purely on authenticity criteria, rather than bought.
  • We label state-affiliated media accounts, restrict them from advertising to audiences outside their registered country, and make them ineligible for For You feed recommendation if they try to influence foreign audiences on current affairs.
  • We require people to label realistic AI-generated content so viewers aren't misled, and invest in an easy-to-use labeling tool for creators as well as technologies like Content Credentials that help us automatically label AI-generated content ourselves (a sketch of how such metadata-based labeling could work follows this list).
  • We partner with experts around the globe on media literacy features and educational videos that raise awareness for critical thinking skills in-app and connect hundreds of millions of people to authoritative information about elections, evolving natural disasters, health, and more.
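
As a sketch of how provenance-based auto-labeling can work in principle: if an upload carries a Content Credentials manifest whose history declares a generative-AI tool, a label can be applied without the creator doing anything. The manifest layout below is a hypothetical stand-in, not the actual Content Credentials schema or TikTok's implementation.

```python
def should_auto_label(manifest: dict) -> bool:
    """Hypothetical check: treat the upload as AI-generated when its
    provenance manifest records a generative-AI action in its history."""
    actions = manifest.get("actions", [])
    # "trainedAlgorithmicMedia" is the IPTC digital source type used to
    # mark wholly AI-generated media; the dict shape here is invented.
    return any(a.get("digital_source_type") == "trainedAlgorithmicMedia"
               for a in actions)

# Example: a manifest from a generative-AI creation tool (hypothetical).
manifest = {"actions": [{"action": "created",
                         "digital_source_type": "trainedAlgorithmicMedia"}]}
assert should_auto_label(manifest)
```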


Continuing to invest and evolve

In addition to our proactive detection measures, we enable people to easily report content or accounts they're concerned about. In our app, people can report deceptive behavior, spam, harmful misinformation, and more. We also review reports we receive through our Community Partner Channel and remove violations of our policies.


The work to protect our platform's integrity never ends. We'll continue to invest, evolve, and report on these efforts to help people access reliable information, discover original content, and share authentic interactions.
