Comments for the Record
September 17, 2024
Comments on Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements
COMMENTS OF JEFFREY WESTLING[1]
The Commission has initiated this Notice of Proposed Rulemaking (NPRM) to address legitimate concerns about the use of artificial intelligence (AI) models to generate deceptive political advertisements.[2] But deceptive media is nothing new, and society has always dealt with technological advancements that seemingly threatened our ability to separate truth from fiction.[3] As the Commission begins its timely consideration of rules to help mitigate some of the unique risks that stem from so-called deepfake media, it should remain aware that popular concern can exaggerate the actual risks of AI-generated media. Imposing specific rules on some technologies and not others can contribute to the very confusion the Commission seeks to address.
The NPRM Assumes Risks Specific to AI-Generated Media That Are Unjustified by Reality
The NPRM assumes that AI-generated content will cause significant harm to the electoral process. While some harms can and likely will occur, the Commission should weigh these risks accurately when developing new rules.
Bad actors do not need AI to cause the types of harms at issue in this proceeding
As the Commission considers disclaimers for AI-generated political advertising, it should recognize that AI is unnecessary to cause the harms the Commission cites; often, less sophisticated tools have similar, if not greater, impact.[4] The NPRM specifically seeks comment on harms associated with political deepfakes, such as AI-generated media that depicts a candidate doing something they never did or misleads voters about a candidate’s political positions.[5] While these concerns have some merit, AI-generated media isn’t necessary for bad actors to achieve these results, nor are deepfakes necessarily an optimal means to achieve these goals.[6]
For example, viewers do not necessarily trust their eyes alone. Instead, they form beliefs by taking into account the context surrounding the information presented to them.[7] In one study, researchers found that participants were more likely to trust an article when it had been shared by people they already trusted.[8] If a viewer of Fox News or MSNBC sees misleading content on those networks, they are more likely to believe it if they already trust the network (and depending on whether they trust or distrust the depicted candidate), regardless of whether the information was a realistic AI-generated depiction of a candidate or simply a real video of a pundit making an unsubstantiated claim. If less advanced techniques can achieve the desired result, there is less incentive to use AI tools whose output could later be exposed as fake.
More importantly, whether the viewer decides to believe information presented in an ad largely depends on whether that information conforms to the viewer’s existing beliefs.[9] If an individual is presented with a video of a presidential candidate appearing to state a controversial opinion that the candidate never actually endorsed, whether the viewer believes the candidate actually stated that opinion will depend more on whether the viewer already believes the candidate holds that view and less on whether the video appeared realistic. A Trump supporter, for example, will likely not believe a video depicting the former president stating that he supports more open immigration.
As it turns out, campaigns that wish to deceive voters do not require sophisticated AI tools to do so, and such content can even cause more trouble for its creator.[10] Just recently, Donald Trump retweeted an AI-generated image of Taylor Swift seemingly telling her fans to vote for Trump.[11] It was likely a joke, and because of the phenomenon described above, most people, especially the “Swifties,” knew the content wasn’t real. But when candidate Trump retweeted the photo, the larger story became about the former president sharing inauthentic photos.[12] If the photos had simply shown actors pretending to be “Swifties for Trump,” the story likely wouldn’t have received nearly as much coverage.
Put simply, concerns that deepfakes will cause unique harms to the political process may be unfounded. Meta’s second-quarter Adversarial Threat Report highlighted that deepfakes and other generative AI tools provide only incremental productivity and content-generation gains.[13] As the company’s President of Global Affairs Nick Clegg stated earlier this year, “it is striking how little these tools have been used on a systematic basis to really try to subvert and disrupt elections.”[14] That doesn’t mean the relevant regulators shouldn’t explore measures to mitigate the specific harms that could arise, but when evaluating the relative costs and benefits, the Commission should not simply assume that AI will cause new and unique problems.
Rules regarding AI tools in election ads should take the larger information ecosystem into account
While a general transparency rule raises few concerns in isolation, the Commission must remain cognizant of how the rule could affect the use of AI-generated media more broadly. The Commission’s proposed rule would apply only to those services over which the Commission has jurisdiction, namely direct-to-consumer video services. But these services make up only a small portion of the information ecosystem.
Political campaigns have shifted significant spending toward digital advertising; as of May, both presidential campaigns had spent more on digital ads than on television ads.[15] The Commission, however, lacks the authority to regulate campaign advertising in the digital space. As campaign ads increasingly go digital, a rule imposing disclaimer requirements on traditional media but not digital media may cause the very confusion the Commission seeks to avoid.
For example, as explained above, consumers build trust in information partly based on how it is presented. If consumers begin to see television ads carrying a notification that AI was used in generating the advertisement, they will associate that warning with AI-generated content. At the same time, ads without the disclaimer would imply to viewers that the advertisement contains no AI-generated content. If campaigns placed political ads only in media over which the FCC has jurisdiction, this would not be a major problem. But consumers often see advertisements elsewhere, such as before YouTube videos, where the lack of a disclaimer may lead viewers to trust that what they see is authentic solely because they have been conditioned to believe that AI-generated content would come with a disclaimer.
Further, much of the AI-generated content online isn’t presented in the form of an advertisement at all. The Taylor Swift image mentioned above, for example, was just a post from a random Twitter user that the former president quote-tweeted. If the FCC imposes transparency rules on television advertisements, viewers may become accustomed to seeing the disclaimer and assume that content without one is authentic.
A disclaimer on AI-generated election advertisements could provide benefits to Americans, but the FCC should impose such requirements, if it has the authority to do so, only in coordination with the Federal Election Commission to ensure that a disjointed approach to AI-generated content does not cause the same kind of confusion it seeks to resolve.
Imposing Additional Burdens on Television Providers Can Harm Competition
The Commission should also consider the competitive effects of any new rule. The Internet has become the dominant communications venue in the United States, and increasingly Americans go online for their news, to stay in touch with friends and family, and to watch their favorite television shows.[16] Indeed, Congress and the Commission are considering a wide range of policy proposals to help television stations, especially local broadcasters, survive in the Internet age.[17]
This rule could impact competition in two ways.
First, it would require television stations to ask potential advertisers whether an advertisement contains AI-generated material and to certify the responses. Collecting and storing this information will necessarily increase the regulatory burden on stations, especially if they could be held financially liable for failing to comply. Internet platforms and social media companies, with which television stations actively compete for viewers’ attention and advertising dollars, will not bear these same costs. If the Commission does adopt these rules, it should design them to minimize the administrative burden on television providers.
Second, the rule could drive political advertisers to online platforms in greater numbers. Political advertisers currently prefer television advertising over online options, but running disclaimers on their ads could lower those ads’ value or incentivize campaigns to use traditional video-editing tools at greater expense. Political campaigns may instead find that running similar video ads online, or shifting spending toward banner ads, search result ads, or other formats such as paid sponsorships of social media posts, offers better value.[18] Further, if the regulations impose costs on television stations, stations may be forced to pass those costs on to advertisers, meaning television advertisements will rise in price even as their value falls. As this occurs, political advertisers will increasingly turn to Internet-based options, further harming the ability of television stations to compete with digital media.
If the Commission moves forward with this rule, it should minimize the costs imposed on television stations and ensure that failing to identify AI-generated content does not lead to significant penalties where the station was unaware of the nature of the advertisement. It should also make as clear as possible the types of content covered by the rule, lowering the risk that television stations and political advertisers inadvertently violate it. This will prevent, to the greatest extent possible, negative competitive effects from the Commission’s rules.
Conclusion
Transparency requirements such as those proposed in the NPRM could have positive effects for Americans, but the rules should be imposed with an understanding of how disclaimer requirements fit into the AI conversation writ large. The Commission risks adding confusion for viewers and costs for businesses if the rules are unclear or treat one venue for communication differently from others.
[1] Jeffrey Westling is the Director for Technology & Innovation Policy at the American Action Forum. These comments represent the views of Jeffrey Westling and not the views of the American Action Forum, which takes no formal positions as an organization.
[2] Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements, Notice of Proposed Rulemaking, MB Docket No. 24-211 (July 10, 2024) (“NPRM”), https://docs.fcc.gov/public/attachments/FCC-24-74A1.pdf.
[3] For example, photo-editing techniques existed long before Adobe first released its photo-editing software Photoshop, and while the introduction of such computer programs caused some tension, society ultimately adapted. Jeffrey Westling, “Deep Fakes: Let’s Not Go Off the Deep End,” Techdirt (Jan. 30, 2019), https://www.techdirt.com/2019/01/30/deep-fakes-lets-not-go-off-deep-end/.
[4] See generally Jeffrey Westling, “Are Deep Fakes a Shallow Concern? A Critical Analysis of the Likely Societal Reaction to Deep Fakes,” TPRC47 (July 24, 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3426174.
[5] NPRM at ¶ 10.
[6] Jeffrey Westling, “Deception & Trust: A Deep Look at Deep Fakes,” Techdirt (Feb. 28, 2019), https://www.techdirt.com/2019/02/28/deception-trust-deep-look-deep-fakes/.
[7] Id.
[8] “’Who shared it?’: How Americans decide what news to trust on social media,” American Press Institute (Mar. 20, 2017), https://americanpressinstitute.org/trust-social-media/.
[9] Jeffrey Westling, “Deception & Trust: A Deep Look at Deep Fakes,” Techdirt (Feb. 28, 2019), https://www.techdirt.com/2019/02/28/deception-trust-deep-look-deep-fakes/.
[10] Id.
[11] Betsy Reed, “How did Donald Trump end up posting Taylor Swift deepfakes?” The Guardian (Aug. 26, 2024).
[12] Rachel Looker, “Trump falsely implies Taylor Swift endorses him,” BBC (Aug. 19, 2024), https://www.bbc.com/news/articles/c5y87l6rx5wo.
[13] Second Quarter Adversarial Threat Report, Meta (August 2024), https://transparency.fb.com/sr/Q2-2024-Adversarial-threat-report.
[14] Felix M. Simon et al., “AI’s impact on elections is being overblown,” MIT Technology Review (Sept. 3, 2024), https://www.technologyreview.com/2024/09/03/1103464/ai-impact-elections-overblown/.
[15] “Digital Ad Spending Nearly Even with TV in Presidential General Election,” Wesleyan Media Project (May 31, 2024), https://mediaproject.wesleyan.edu/releases-053124/.
[16] “News Platform Fact Sheet,” Pew Research Center (Nov. 15, 2023), https://www.pewresearch.org/journalism/fact-sheet/news-platform-fact-sheet/.
[17] See Priority Application Review for Broadcast Stations that Provide Local Journalism or Other Locally Originated Programming, Notice of Proposed Rulemaking, MB Docket No. 24-14 (Jan. 17, 2024), https://docs.fcc.gov/public/attachments/FCC-24-1A1.pdf; see also Joshua Levine, “Journalism Competition and Preservation Act: The Price of Digital Ink,” American Action Forum (Sept. 7, 2022), https://www.americanactionforum.org/insight/journalism-competition-and-preservation-act-the-price-of-digital-ink/.
[18] David Bauder, “They look like—and link to—real news articles. But they’re actually ads from the Harris campaign,” ABC News (Aug. 16, 2024), https://abcnews.go.com/Business/wireStory/link-real-news-articles-ads-harris-campaign-112890727.