In recent years, much attention has been drawn to the potential for social media manipulation to disrupt democratic societies. The U.S. Intelligence Community’s 2023 Annual Threat Assessment predicts that “foreign states’ malign use of digital information … will become more pervasive, automated, targeted … and probably will outpace efforts to protect digital freedoms.”
Chinese Communist Party (CCP) disinformation networks are known to have been active since 2019, exploiting political polarization, the COVID-19 pandemic, and other issues and events to support the party's soft power agenda.
Despite the growing body of publicly available technical evidence demonstrating the threat posed by the CCP’s social media manipulation efforts, there is currently a lack of policy enforcement to target commercial actors that benefit from their involvement in Chinese influence operations (IO). However, there are existing policy options that could address this issue.
To better address the business of Chinese IO, the U.S. government can impose sanctions to topple the disinformation-for-hire industry, enact platform transparency legislation to better document influence operations across social media platforms, and push the Federal Trade Commission (FTC) to act against deceptive business practices.
Commercial entities, from Chinese state-owned enterprises to Western AI companies, have had varying degrees of involvement in the business of Chinese influence campaigns. Chinese IO does not occur in a vacuum; it employs various tools and tactics to spread content that is strategically favorable to the CCP.
For example, as reported in Meta’s Q1 2023 Adversarial Threat Report, Xi’an Tianwendian Network Technology built its own infrastructure for content dissemination by establishing a shell company, running a blog and website populated with plagiarized news articles, and creating fake pages and accounts.
Chinese IO efforts have also utilized Western companies. Synthesia, a UK-based technology company, was used to create AI avatars and spread pro-CCP content via a fake news outlet called “Wolf News.”
Another example is Shanghai Haixun, a Chinese public relations firm that pushed IO both online and offline when it financed two protests in Washington, DC, in 2022 and then amplified content about those protests on Haixun-controlled social media accounts and fake-media websites.
The role of private companies in Chinese IO can be expected to expand as they provide sophisticated, tailor-made generative AI services to amplify reach and improve tradecraft. Though the Chinese IO machine is widely known to lack sophistication, it has continued to mature and adapt to technological developments, as evidenced by its use of deepfakes and AI-generated content.
Most recently, Microsoft’s Threat Analysis Center discovered a Chinese IO campaign using AI-generated images of popular U.S. symbols (such as the Statue of Liberty) to besmirch American democratic ideals. The use of generative AI will introduce new challenges in countering the business of Chinese IO, and the U.S. government needs to act fast to curtail it.
Our first recommendation is for the U.S. government to gradually dismantle the disinformation-for-hire industry by calling out the Chinese companies involved and imposing sanctions or other financial costs on them. The Chinese government runs a gray propaganda machine: it conducts overt influence operations through real media channels such as CGTN, Xinhua News, and The Global Times, and covert influence operations through fake accounts that spread content from those same channels.
With the attribution of IO to specific private entities such as Shanghai Haixun and others, the U.S. government could build a public case against covert Chinese IO and impose financial costs on Chinese companies, especially if they also provide legitimate products and/or services.
The U.S. government has jurisdiction to sanction private entities that directly pose a threat to U.S. national security through the Treasury Department’s Office of Foreign Assets Control (OFAC). There are currently OFAC sanctions in place for Chinese military companies, but not for Chinese companies involved in influence operations targeting individuals in the United States.
There is also historical precedent for sanctioning Chinese IO, given that it is a type of malicious cyber activity: in 2021, the Biden administration sanctioned Russian and Russian-affiliated entities involved in “malicious cyber-enabled activities” through an executive order. If the executive branch were to direct a policy focus toward known Chinese entities involved in malign covert influence operations, it could signal a first step toward naming and sanctioning Chinese companies.
Furthermore, sanctioning these entities would make social media companies more inclined to remove the sanctioned companies’ content from their platforms to avoid liability risks. When the European Union imposed sanctions on the media outlets Russia Today and Sputnik after Russia’s 2022 invasion of Ukraine, Facebook and TikTok complied and removed content from these outlets to avoid liability issues, though they had not previously taken sweeping action against overt state media.
The U.S. government could use this approach to identify Chinese private companies bolstering IO directed at the American public, name them, and impose transaction costs on them through sanctions.
Our second recommendation is to mandate that large social media companies, or Very Large Online Platforms (VLOPs), adhere to universal transparency reporting on influence operations and to independent external research requirements.
Large social media platforms currently face the challenge of deplatforming influence operations at scale, and in the absence of government regulation they are free to choose what to report. Regulation mandating universal transparency reporting on IO would be a meaningful first step toward prodding platforms to devote greater attention to that challenge.
The implementation of this recommendation could prove to be more challenging given that transparency reporting currently operates on a voluntary basis, and the efforts of policymakers could be stymied by First Amendment and Section 230 protections.
Recently, a bipartisan group of U.S. senators proposed the Platform Accountability and Transparency Act, under which social media platforms would have to comply with data access requests from external researchers. Any failure to comply would result in the removal of Section 230 immunity.
Initiatives such as these are essential to promoting platform transparency. If policymakers can mandate transparency reporting on influence operations for VLOPs, covering specific parameters of interest (companies involved, numbers of inauthentic and authentic accounts in the network, generative AI content identified, malicious domains used, and political content and narratives), the U.S. government could gain further insight into the nature of IO at scale.
A universal transparency effort could also empower the open source intelligence capabilities of intelligence agencies, result in principled moderation decisions, increase knowledge about the use of generative AI by malign actors, and empower external researchers to investigate all forms of IO.
Our third and last recommendation is for the FTC to continue to pursue and expand its focus on both domestic and foreign companies that engage in deceptive business practices bolstering Chinese influence operations. In 2019, the FTC imposed a fine of $2.5 million on Devumi, a company that engaged in social media fraud by selling fake indicators of influence (retweets, Twitter followers, etc.).
Though this action was a helpful first step, it is unlikely to deter all companies engaged in these harmful practices over the long term. The FTC should continue to pursue such cases and work with its international partners via its Office of International Affairs. The challenges of increased FTC involvement are vast; the agency is under-resourced and must choose its cases carefully to achieve maximum impact.
However, a sharper FTC focus on the business of Chinese IO could reduce deceptive practices online, protect consumers against the harmful use of generative AI and other technologies, and increase visibility for this issue for social media companies.
Holding private sector actors accountable for Chinese influence operations will not be a straightforward process for the U.S. government, given the need for transparency regulation for social media platforms, the political capital required for the executive branch to sanction Chinese private entities involved in IO, and the FTC’s resource constraints. However, these policy options are necessary to impose costs and help dismantle the disinformation business behind Chinese influence operations.
Bilva Chandra is an adjunct technology and security policy fellow and Lev Navarre Chao was previously a policy analyst at the nonprofit, nonpartisan RAND Corporation.