The latest wave of fearmongering about TikTok involves a study purportedly showing that the app suppresses content unflattering to China. The study attracted a lot of coverage in the American media, with some declaring it all the more reason to ban the video-sharing app.
“Hopefully members of Congress will take a look at this report and maybe bring the authors to Washington to give testimony about their findings,” wrote John Sexton at Hot Air. The study “suggests that the next generation will have had a significant portion of their news content spoon fed to them by a communist dictatorship,” fretted Leon Wolf at Blaze Media. “TikTok suppression study is another reason to ban the app,” declared a Washington Examiner editorial.
But there are serious flaws in the study design that undermine its conclusions and any panicky takeaways from them.
In the study, the Network Contagion Research Institute (NCRI) compared the use of specific hashtags on Instagram (owned by the U.S. company Meta) and on TikTok (owned by the Chinese company ByteDance). The analysis included hashtags related both to general subjects and to “China sensitive topics” such as Uyghurs, Tibet, and Tiananmen Square. “While ratios for non-sensitive topics (e.g., general political and pop-culture) generally followed user ratios (~2:1), ratios for topics sensitive to the Chinese Government were much higher (>10:1),” states the report, titled “A Tik-Tok-ing Timebomb: How TikTok’s Global Platform Anomalies Align with the Chinese Communist Party’s Geostrategic Objectives.”
The study concludes that there is “a strong possibility that TikTok systematically promotes or demotes content on the basis of whether it is aligned with or opposed to the interests of the Chinese Government.”
There are ample reasons to be skeptical of this conclusion. Paul Matzko pointed out some of these in a recent Cato Institute blog post, identifying “two remarkably basic errors that call into question the fundamental utility of the report.”
The errors are so glaring that it’s hard not to suspect an underlying agenda at work here.
Most notably, the researchers fail to account for differences in how long the two social networks in question have been around. Instagram launched nearly 7 years before TikTok’s international debut (and nearly 6 years before TikTok existed at all), and it introduced hashtags just a few months after its own launch, in January 2011. Yet the researchers’ data collection process does not seem to account for the different launch dates, nor does their report even mention this disparity. (Reason reached out to the study authors last week to ask about this but has not received a response.)
The researchers also fail to account for the fact that Instagram and TikTok users are not identical. This leads them “to miss the potential for generational cohort effects,” suggested Matzko. “In short, the median user of Instagram is older than the median user of TikTok. Compare the largest segment of users by age on each platform: 25% of TikTok users in the US are ages 10–19, while 27.4% of Instagram users are 25–34.”
It’s easy to imagine how differing launch dates and typical-user ages could lead to differences in content prevalence, with no nefarious meddling by the Chinese government or algorithmic fiddling by ByteDance needed.
Take, for instance, the finding that there were vastly more Instagram hashtags related to Tibet or the Dalai Lama than there were on TikTok (37.7 on Instagram for every one on TikTok). The NCRI reads this as evidence that TikTok hid posts related to these subjects. But Instagram had seven additional years to rack up posts related to Tibet. And those were years in which Western interest in Tibet was generally higher than it has been recently. (“A quick peek at Google trends data show that public discourse about Tibet in the US has been in a general decline throughout the 2000s and 2010s, albeit punctuated by exponential spikes…in April 2008 and December 2016,” noted Matzko.) It’s only natural that there would be many more Tibet-related posts on Instagram than on the more recently launched TikTok.
Or take the finding that Instagram had many more Ukraine-supportive posts than TikTok did. For instance, there were 12 Instagram posts with the #StandWithUkraine hashtag for every one on TikTok, and 4.2 Instagram posts with the #SaveUkraine hashtag for every one #SaveUkraine TikTok post. Some of the difference might stem from the fact that Instagram was around in 2014—when Russia annexed Crimea from Ukraine—while TikTok was not. And even if we assume that most of the hashtags relate to the more recent conflict, we’re still left with the fact that Instagram’s users are older than TikTok’s users. It wouldn’t be surprising if 20- and 30-somethings are more likely to post about Ukraine than teens and tweens are.
It’s not simply median user age that separates Instagram and TikTok. While all sorts of content can be found on either platform, each has developed its own distinct culture and norms, and that makes cross-platform comparisons hazy.
It’s also worth noting that while the Instagram to TikTok ratio for general pop culture and political hashtags was fairly low (a 2.2 to 1 ratio for 14 pop culture hashtags and a 2.6 to 1 ratio for 18 political hashtags), there was variation within these groups, particularly in politics. For instance, there were 19.4 #Potus posts, 3.8 #HarryStyles posts, 6.8 #ProLife posts, and 0.6 #Trump2024 posts on Instagram for every one on TikTok. So the idea that China-sensitive content is the only area with discrepancies is not correct.
A comparison of hashtags related to Kashmiri independence paints a particularly odd picture. The hashtags #StandWithKashmir, #WeStandWithKashmir, and #IStandWithKashmir are relatively scarce on Instagram but quite abundant on TikTok—to the tune of 370,407 on Instagram and 229,231,866 on TikTok in total. But a quick search shows that there are 8,816,839 Instagram posts with the hashtag #Kashmir alone. It’s possible some of these posts are pro-Kashmiri independence and the two platforms just developed different popular tags.
It’s also possible that something fishy is going on with the Kashmir posts. But even then it wouldn’t necessarily follow that this involves nefarious moves by TikTok. Perhaps a pro-Kashmir entity—Chinese or otherwise—created a bot operation to spam TikTok with this hashtag. The hashtag’s prevalence alone doesn’t tell us that anyone at TikTok tried to amplify it.
And even if we accept that China was behind this (despite having no hard evidence for that), we’re still left with zero information about what kind of accounts used the hashtag, what kind of reach they had, and whether their posts were seen by many users.
A hashtag being used millions of times could mean nothing if it’s used by low-follower accounts on videos that get few views.
TikTok noted as much in a recent press release about Israel/Palestine content on the platform. “The number of videos associated with a hashtag, alone, do not provide sufficient context,” it states. “For example, the hashtag #standwithIsrael may be associated with fewer videos than #freePalestine, but it has 68% more views per video in the US, which means more people are seeing the content.”
Assuming that all those #IStandWithKashmir posts translate to significant views and impact is the same mistake people made with Russian bots after the 2016 election. People took the number of bots or posts as evidence of widespread impact, but relatively few people ever saw or interacted with their content.
These flaws in the NCRI study don’t disprove the idea that TikTok suppresses China-sensitive content, of course. The relative scarcity of certain hashtags certainly could still be due to deliberate work. But this study is far from sufficient evidence for that claim. And it seems irresponsible for researchers—and reporters—to draw conclusions from this data without noting that Instagram has well over half a decade on TikTok, that some of the studied topics were more widely discussed before TikTok existed, and that there’s a significant difference in the median user age of each platform.