The Dual Nature of TikTok: A Platform Under Scrutiny
TikTok, a video-sharing social platform, has long drawn both admiration and skepticism. Since its international launch in 2017, it has grown into a super app with approximately 1.2 billion users. Yet concerns about how it is run persist, especially because its parent company, ByteDance, is a Chinese enterprise. Even after TikTok's U.S. business rights were transferred to a U.S. investment firm, doubts about its data practices remain.
This reporter visited the TikTok Transparency and Accountability Center (TAC), located in Singapore's One North district, on the 25th of last month. The facility was established to address such concerns. The TAC is run by TikTok's Trust and Safety (T&S) team, which reviews content for harmfulness before it reaches users; Korean content is primarily reviewed by native Korean-speaking staff. The TAC also serves as an exhibition hall, and TikTok emphasizes transparency by disclosing its review process there annually.
AI Filters and Harmful Content Detection
At the Singapore TAC, visitors can experience firsthand how TikTok's AI filters harmful content in real time. During a demonstration, when a prop knife was swung threateningly in front of a monitor running TikTok's AI, the on-screen "dangerous tool index" instantly surged to over 90%. Conversely, when the same knife was held like a microphone to mimic singing, the index stayed in the 0% range. A TikTok representative explained, "The AI analyzes the 'context' of dangerous actions," adding, "89.7% of harmful content worldwide is deleted before being viewed."
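The demo's context-dependent scoring can be illustrated with a minimal sketch. Everything here is hypothetical — the function name, object and action labels, and threshold values are illustrative assumptions, not TikTok's actual model:

```python
# Illustrative sketch of context-aware harm scoring. All labels and
# numbers are hypothetical; TikTok's real model is not public.

def danger_index(detected_object: str, action: str) -> float:
    """Return a 0-100 'dangerous tool index' that depends on context."""
    risky_objects = {"knife", "beer_bottle"}
    threatening_actions = {"swing", "stab", "threaten"}
    if detected_object in risky_objects and action in threatening_actions:
        return 92.0  # same object, threatening use -> high score
    return 0.4       # same object, harmless use (e.g. mimicking a microphone)

# The same prop knife scores differently depending on how it is used.
print(danger_index("knife", "swing"))  # threatening context -> over 90
print(danger_index("knife", "sing"))   # harmless context -> near 0
```

The point the demo makes is exactly this pairing: identical object, different score, because the action context — not the object alone — drives the judgment.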
The criteria for judging harmfulness, called “Community Guidelines,” are updated annually by TikTok’s Global Trust and Safety Team (T&S), comprising thousands of experts worldwide, and fed into the AI. Thousands of categories, including “dangerous tools” like beer bottles and knives, “extremist symbols” like IS flags, and “drinking, smoking, or criminal acts,” are added to the AI’s review criteria.
Content flagged by the AI as ambiguous for judgment undergoes secondary review by human moderators in the T&S team. This includes “gray-area” content such as dance videos without sexual acts despite revealing clothing, or hate speech without violent scenes. The six judgment categories are:
- Safety and Civic Awareness
- Mental and Behavioral Health
- Sensitive Adult Themes
- Authenticity and Integrity
- Regulated Items and Commercial Activities
- Privacy and Security
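The two-stage flow described above — AI deletes clear-cut violations before viewing, while "gray-area" content is routed to human moderators — can be sketched as follows. The score thresholds and routing labels are illustrative assumptions; only the six category names come from the article:

```python
# Sketch of a two-stage moderation pipeline: the AI filter auto-removes
# clearly harmful uploads and routes ambiguous ("gray-area") ones to a
# human T&S review queue. Thresholds are illustrative only.

CATEGORIES = [
    "Safety and Civic Awareness",
    "Mental and Behavioral Health",
    "Sensitive Adult Themes",
    "Authenticity and Integrity",
    "Regulated Items and Commercial Activities",
    "Privacy and Security",
]

def route(ai_score: float) -> str:
    """Decide a video's path given the AI's harm score (0.0-1.0)."""
    if ai_score >= 0.9:
        return "auto_remove"   # deleted before anyone views it
    if ai_score >= 0.4:
        return "human_review"  # ambiguous: sent to T&S moderators
    return "publish"

print(route(0.95), route(0.6), route(0.1))
```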
TikTok invests over US$2 billion (approximately 2.8 trillion Korean won) annually in the T&S team's safety management budget.
The Role of Human Moderators
While AI handles a growing share of reviews, a local representative emphasized, "AI can never fully replace human moderators," noting that "detailed reviews reflecting cultural differences in each country remain the domain of humans." In the Korean market, for example, harmful content promoting eating disorders among teenagers and defamatory videos within the rapidly growing body of K-pop content have been added as new review areas.
"Politically inflammatory content" draws the most suspicion that TikTok moderates "according to the tastes of each government." The core criterion for judging it is currently "harmfulness": whether the content could lead to misinformation or violence determines whether a video is deleted.
According to TikTok's transparency reports, content deleted at the request of the South Korean government rose from 19 cases in 2022 and 19 in 2023 to 72 in 2024 and 113 in the first half of 2025 alone. That is small compared with Türkiye, the country with the most deletion requests in the first half of 2025 (3,340 cases), but the growth rate is steep. TikTok has not disclosed specific reasons for the deletions, but 2024 and 2025, when requests from the South Korean government surged, were periods that included the 22nd National Assembly election, the 9th nationwide simultaneous local elections, and an early presidential election.
A TAC representative also said that fact-checking of Korean-language fake news videos is handled by "Lead Stories," a third-party agency based overseas rather than in Korea, because of the judgment that "there is a lack of credible third-party Korean-language fact-checking agencies in South Korea."
Complaints that TikTok’s arbitrary regulations excessively suppress users’ “freedom of expression” remain a major challenge. A TikTok headquarters representative responded, “While the freedom of assembly and protest must be guaranteed, whether the message is appropriate for children depends on the context,” and stated, “TikTok’s position is that application criteria can vary depending on ‘who views the content and in what context.’”
Addressing AI-Generated Content
"AI slop" (low-quality AI-generated content), a growing headache for social media platforms, is also a gray area for TikTok. Even when AI is used to mass-produce low-quality content, active sanctions are difficult if the content causes no harm.
Instead, TikTok operates an “AI labeling” system that uses AI technology to identify and label AI-generated content. If AI content infringes on copyrights or spreads fake news, TikTok’s T&S can classify it as harmful. A typical example is videos where AI subtitles are added to copied footage.
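The triage described here — label AI-generated content, but escalate it to T&S only when it is also harmful (copyright infringement or fake news) — can be sketched like this. The field names and statuses are hypothetical, used only to show the decision flow:

```python
# Sketch of an "AI labeling" triage step: detected AI content gets a
# visible label; it is escalated only if it is also harmful. Low quality
# alone is not grounds for removal. Field names are hypothetical.

def label_and_triage(video: dict) -> dict:
    if video.get("ai_generated"):
        video["label"] = "AI-generated"
        if video.get("infringes_copyright") or video.get("fake_news"):
            video["status"] = "flagged_harmful"        # escalated to T&S review
        else:
            video["status"] = "published_with_label"   # slop, but not harmful
    else:
        video["status"] = "published"
    return video

print(label_and_triage({"ai_generated": True, "fake_news": True})["status"])
print(label_and_triage({"ai_generated": True})["status"])
```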
A TikTok representative also stated, “In some overseas countries, a blocking filter function (topic management settings) is being implemented to prevent AI content from appearing on the platform if users request it,” and added, “After preliminary testing, it will be gradually introduced in South Korea.”