While I am not a “Swifty,” I’m a very big fan of the National Football League (NFL). Football fans like myself have recently been seeing a lot of the highly popular singer-songwriter Taylor Swift on national TV since she has been dating Travis Kelce of the Kansas City Chiefs (and the Chiefs are an excellent football team). She has been attending Kelce’s games, and the TV networks televising Chiefs games routinely show her rooting for Kelce and the Chiefs from her private stadium seats.

Of course, as a very high-profile celebrity whose ongoing “Eras Tour” concert series has been wildly successful, Ms. Swift has historically been subject to lots of media scrutiny. Nowadays, with the rise of sophisticated AI tools, celebrities like Ms. Swift are unfortunately also very susceptible to something known as “deep fakes,” where AI technology is used to create false videos, voices or images of a person, event, situation, etc. Oftentimes, these deep fake videos, voices or images can seem very plausible and convincing and can lead to disinformation.

For example, earlier this month there was a deep fake advertisement featuring Ms. Swift endorsing cookware products. Much more troubling was a very recent report that AI-generated sexually explicit images of Ms. Swift were circulating on the internet.

As we begin an important year of elections across the world (and of course our Presidential election in the United States), there are also big concerns that the rise of deep fakes can result in an increase in disinformation for voters. For instance, earlier this week, prior to the Republican primary in New Hampshire, there were reports of fake robocalls imitating President Biden’s voice to discourage voters from participating.

Deep fakes should be a concern for everyone, as we all face the risk that our respective names, images, likenesses and reputations could be inappropriately manipulated by AI technology. It also seems that the more we put ourselves out on social media and post content, the greater the likelihood that our names, images, likenesses and reputations can be negatively impacted by AI tools.

So how do we try to contain the rising spread of disinformation through deep fakes, which has been fueled by the growing prevalence of AI technology? Here are some possibilities:

  • Watermarking Mechanisms: There’s a view that using digital watermarks to help identify genuine online content is a good way to push back against deep fakes (see the brief illustrative sketch after this list). President Biden’s October 30, 2023 Executive Order on the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence and the Voluntary AI Commitments that the Biden Administration secured earlier in 2023 from various technology companies recognize watermarking as a best practice. However, watermarks are not a “silver bullet” for stopping deep fakes. This recent article by the Electronic Frontier Foundation explains in detail why we cannot so conveniently rely on watermarks and states the following near the end of the article: “Watermarking of AI generated content is an easy-sounding fix for the thorny problem of disinformation.” While watermarking technology may not stop deep fakes, as technology continues to advance, hopefully we will see other types of new digital tools that can deter them.
  • US Federal Laws: While there has been lots of activity in educating Congress about AI and discussion of possible bipartisan AI legislation, I’m not sure that we will see any comprehensive US Federal AI legislation in the very near future. However, there may be opportunities for Congress to pass legislation focused specifically on deep fakes, as there seems to be growing momentum in this area; for example, a bill known as the AI Labeling Act was introduced in 2023.
  • US State Laws: During 2023, Minnesota, Michigan and Washington enacted laws to combat deep fakes. Many other states have gotten off to a “fast start” in 2024 by introducing new legislation in the deep fakes area.
  • Laws Outside the US: The eventual comprehensive EU AI Act will provide some regulation of deep fakes. When the AI Act does come into effect, I expect some countries to learn from it and potentially enact local laws that may be similar in nature. Disinformation has also become a significant issue in Brazil, where voter misinformation was rampant during its 2022 Presidential election. As a result, Brazil has considered adopting Bill No. 2630, also known as the “fake news” law.
  • Active Enforcement of Laws: Of course, for any laws in this space to have impact, they need to be enforced by the applicable regulatory authorities. As an example, it was a step in the right direction to see the Federal Communications Commission issue a recent unanimous ruling banning the use of AI-generated voices in robocalls.
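To make the watermarking idea above a bit more concrete, here is a minimal, hypothetical sketch of how a publisher might attach a signed provenance tag to a piece of content and how anyone receiving that content might check it. This is an illustration only, not any specific standard (such as the C2PA “Content Credentials” approach) or any particular vendor’s implementation, and the sign_content and verify_content functions are made-up names for purposes of the example.

    import base64
    import hashlib
    import hmac
    import json

    # Hypothetical shared secret held by the publisher; a real provenance scheme
    # would use public-key signatures backed by a trusted certificate chain.
    PUBLISHER_KEY = b"example-publisher-signing-key"

    def sign_content(content: bytes, creator: str) -> str:
        """Produce a provenance tag: a hash of the content plus a signature over it."""
        digest = hashlib.sha256(content).hexdigest()
        payload = json.dumps({"creator": creator, "sha256": digest}, sort_keys=True)
        signature = hmac.new(PUBLISHER_KEY, payload.encode(), hashlib.sha256).hexdigest()
        tag = json.dumps({"payload": payload, "signature": signature})
        return base64.b64encode(tag.encode()).decode()

    def verify_content(content: bytes, tag_b64: str) -> bool:
        """Check that the tag came from the publisher and matches these exact content bytes."""
        tag = json.loads(base64.b64decode(tag_b64))
        expected = hmac.new(PUBLISHER_KEY, tag["payload"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, tag["signature"]):
            return False  # the tag itself was forged or tampered with
        payload = json.loads(tag["payload"])
        return payload["sha256"] == hashlib.sha256(content).hexdigest()

    if __name__ == "__main__":
        original = b"authentic image bytes"
        tag = sign_content(original, creator="Example News Org")
        print(verify_content(original, tag))                   # True: content matches the tag
        print(verify_content(b"re-encoded image bytes", tag))  # False: any change breaks the check

The sketch also hints at the limits the EFF article describes: the check fails the moment the content is re-encoded, cropped or otherwise altered, and a bad actor can simply strip the tag from a deep fake, which is part of why watermarking and provenance labels alone are not a complete answer.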

AI-powered deep fakes are a growing and highly serious concern; they fuel misinformation and can damage individuals. Thoughtful action by our respective governments, the technology industry and our society will be required to help contain the rise of deep fakes.

Dennis Garcia is an Assistant General Counsel for Microsoft Corporation based in Chicago. He practices at the intersection of law, technology and business. Prior to joining Microsoft, Dennis worked as an in-house counsel for Accenture and IBM.

Dennis received his B.A. in Political Science from Binghamton University and his J.D. from Columbia Law School. He is admitted to practice in New York, Connecticut and Illinois (House Counsel). Dennis is a Fellow of Information Privacy, a Certified Information Privacy Professional/United States and a Certified Information Privacy Technologist with the International Association of Privacy Professionals. Please follow Dennis on Twitter @DennisCGarcia and on his It’s AI All the Time Blog.