Back in September, I investigated what generative AI would do to our elections. One of the bigger surprises was how much of an effect generative AI is likely to have on local elections.
Also: How Microsoft plans to protect elections from deepfakes
After reading the article, Prof. Robert Crossler reached out to discuss the issues surrounding elections in our digital-centric society. Crossler, an associate professor of information systems at Washington State University, is eminently qualified to discuss this area. His recent article in Government Information Quarterly looks at measuring the effect of political alignment, platforms, and fake news consumption on voter concern for election processes.
Crossler's work has been funded by the National Science Foundation and the Department of Defense. He served as president of the AIS Special Interest Group on Information Security and Privacy from 2019 to 2020. He was also awarded the 2013 Information Systems Society's Design Science Award for his work on information privacy.
Crossler and I had a fascinating discussion over email. What follows is our discussion, edited lightly for punctuation and clarity. My questions are in bold font.
ZDNET: In what ways can generative AI be leveraged to influence voters or election outcomes, and what are the potential risks associated with its use in this context?
Robert Crossler: Generative AI can be leveraged to further polarize voters. If there is an issue that one party wants to convince another person of, then generative AI can be used to create a narrative that supports this argument. The potential risk is that voters will believe something that is not true, but feel strongly that it is true because they have seen very realistic material that supports the viewpoint that they were led to believe.
Generative AI has the potential to target communications specifically at people based on easily gathered knowledge that people put in the public sphere. This is a technique that we are already seeing utilized for social-engineering attacks from a cybersecurity perspective.
Generative AI tools could be used to customize political messaging based on what the AI can easily determine about the interests and motivations of individual voters. Doing this at scale could shift targeting from broad demographic categories to a much more granular understanding of each user.
Also: How to use Bing Chat (and how it's different from ChatGPT)
ZDNET: What are the ethical considerations regarding use of generative AI in the electoral process, and how should policymakers and the public approach these concerns?
RC: I wish I knew the answer to this question. But this is the exact question that should be asked and answered. Ethically, I believe that technology should not be used to distort the truth, or better yet make up alternative truths. However, utilizing policy to establish and then enforce ethical considerations in a changing technology is a difficult issue to get correct.
While it may be possible to get politicians to agree on an ethical standard for the use of generative AI, I'm not sure how this applies the further you get from politicians. For example, how would it apply to Political Action Groups?
How would limiting their use of this technology impinge on their First Amendment rights? I'm not a legal scholar, but anytime the government tries to tell people what they can and can't say, I start to get concerned.
Also: Google to require political ads to reveal if they're AI-generated
ZDNET: Can you share any findings or insights from your research that have implications for the broader understanding of technology's role in the democratic process?
RC: One of the things I found in my research is the role that social media, especially Facebook, has in spreading fake news. As a result, I don't use Facebook and I give little credence to what I read in other social media contexts. I would encourage others not to get drawn into a particular way of thinking based on what they are seeing on social media.
This is especially important given the algorithms that social media uses to engage users. These algorithms are written to show you more of what you engage with and less of what you don't. The result is an echo chamber of sorts, where the same information is confirmed but contrary information isn't necessarily shared.
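Crossler doesn't describe any platform's actual ranking code, but the engagement-driven mechanism he outlines — show users more of what they already engage with — can be sketched in a toy example. Everything here (the data, the affinity scoring) is an illustrative assumption, not a real feed algorithm:

```python
from collections import Counter

def rank_feed(posts, engagement_history):
    """Rank posts so topics the user has engaged with most appear first.

    posts: list of (post_id, topic) tuples.
    engagement_history: list of topics the user previously engaged with.
    """
    # Count past engagements per topic; a missing topic counts as zero.
    affinity = Counter(engagement_history)
    # Higher past engagement with a topic -> higher rank for that topic's posts.
    return sorted(posts, key=lambda post: affinity[post[1]], reverse=True)

history = ["politics", "politics", "sports", "politics"]
posts = [("a", "science"), ("b", "politics"), ("c", "sports")]
print(rank_feed(posts, history))  # politics first, science last
```

Even this trivial version exhibits the echo-chamber effect: the "science" post the user has never engaged with sinks to the bottom of the feed, so the user's existing interests keep reinforcing themselves.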
Also: Facebook bans political campaigns from using its new AI-powered ad tools
ZDNET: What is the role of foreign actors in fake news and generative AI?
RC: Microsoft released information supporting the argument that China was using AI on social media to influence US voters. China and Russia are using generative AI deepfakes to control the narrative regarding the war in Ukraine. We're also seeing AI used to try to control narratives around the conflict between Hamas and Israel.
Given what Microsoft reported and how generative AI is being used by foreign actors around war narratives, it seems reasonable to conclude that these efforts will intensify as we approach Election Day 2024.
ZDNET: How big a threat are foreign actors to our election process? Are they specifically using generative AI to increase their influence or effectiveness?
RC: I think foreign actors pose a great threat to our election process. Foreign actors have always wanted certain people in power that would be more aligned with their agenda. Propaganda has always been a part of the process. Generative AI provides a tool that allows for more targeted and specific messaging that could likely prove to have an even greater influence than traditional methods of propaganda.
A related issue that I think is important to consider as we look at the election process and the potential power of foreign actors (but even domestic ones) is an unwillingness among political contenders to engage in debate. If politicians won't put themselves into the public square to engage in conversation, then people will look elsewhere for information. The further people move from the primary source of information, the more likely it is that the information they are being provided has been manipulated by generative AI tools.
Also: AI boom will amplify social problems if we don't act now, says AI ethicist
ZDNET: How do you personally differentiate between legitimate political communication and potentially manipulative content generated by AI systems during election campaigns?
RC: I have started to do two things due to the last election and the disinformation during that election cycle. The first thing I do is triangulate the information that I am receiving. By that, I mean I want to consume information from multiple sources that perhaps have different biases in their reporting. The better I can triangulate information, the more confident I feel in the truth of that information.
The second thing I like to do, which enables me to accomplish the first, is to purposefully not allow myself to form an opinion when I initially learn about a new issue. This is especially important when the information seems different from what I already knew, or potentially politically important. Waiting to form that opinion gives me enough time to triangulate with additional information that can confirm (or not) the original report.
ZDNET: Are there particular types of generative AI tech that are more prevalent or influential in the electoral context?
RC: I see this through the lens of media richness, with richer content having greater influence. For example, a video is richer than text alone. What makes generative AI much more effective is the ability to use various types of generated content together.
While video is a rich medium that can be very convincing on its own, pairing it with generated text that tells the same story helps triangulate, for the viewer, the information being conveyed.
Also: Real-time deepfake detection: How Intel Labs uses AI to fight misinformation
ZDNET: How can the public be better informed about the use of generative AI in elections, and what role do media and educational institutions play in promoting awareness?
RC: The use of generative AI was thrust upon us approximately a year ago, in November 2022, and is still radically changing. I would encourage the public to be paranoid and seek out interactions with people in person over believing what they are seeing online.
I believe this is a time when the media can step up, be skeptical about everything, and use their investigative tools to reveal the truth. Getting the story right should be more important than getting the story first. As media outlets build a reputation for shedding light on the truth, regardless of any bias they might bring, they will provide an outlet that voters can rely upon. I would love to see a media outlet embrace this as its approach to reporting news. Being first could very well lead to being wrong and losing credibility.
Educational institutions should use this as an opportunity to encourage students to think critically. Critically evaluating news as we enter a new presidential election cycle will help create a more informed electorate. I'd also like to see more civil conversations emerge that allow people with differing views to discuss what they are seeing, and to do so in a public square. The key to this is civil conversation — and if we can find a way to provide that environment, it can only help to clarify the truths in this election process.
Also: Most Americans think AI threatens humanity, according to a poll
ZDNET: How do you see the intersection of technology, democracy, and elections evolving in the coming years, and what potential challenges or opportunities do you anticipate?
RC: I only see our ability to discern the truth becoming more difficult as technology advances, and that includes democracy and elections. Part of me wonders whether we will begin to step away from technology in order to discern the truth. That would mean less input feeding into our assessment of what is going on.
However, technology has also allowed those without a voice to reveal truths about their world. We have seen throughout the world, with social media especially, that people are able to share what others may want to suppress. As a result, changes have been made.
The biggest challenge that needs to be addressed, and maybe it is addressed by those who own the generative AI technology, is to somehow inform the world when something is created with that technology. Without knowing what is created with this technology, it is going to be increasingly difficult for humans to be able to discern fact from fiction.
ZDNET: How will generative AI continue to influence the electoral process beyond the 2024 elections?
RC: I believe we will see some of the most creative uses of generative AI during the 2024 election cycle. After it is complete, these techniques will likely be refined and made more effective. I wish I knew how to prevent this, or what it will look like.
I hope this becomes a conversation that our politicians engage in, as I believe this is an area where the parties can come together. The manipulative power of generative AI has the potential to equally affect everyone. As such, by bringing together the best and brightest minds, I am hopeful that there is the ability to harness that which is good with these technologies and limit the influence of that which is harmful.
Also: Don't get scammed by fake ChatGPT apps: Here's what to look out for
ZDNET: What is your biggest fear about generative AI and elections? Explain why that is your biggest concern.
RC: My biggest fear about generative AI is that the choices of who represents America will be regretted after the fact. There isn't a mechanism I am aware of that allows Americans to easily undo the choices made on Election Day. If one voice is able to find a way to manipulate and control the outcome of an election, and then through governing it becomes apparent what happened, how do the people respond?
Why is this my fear? The observed ability to influence and target people in the voting process through social media has been highly effective. Generative AI has the potential to amplify this influence in incredible ways. Is there a point where that level of influence causes people to respond in unanticipated ways, resulting in protests?
We've been seeing this occur over the last four years, and it is happening in Europe. Are tensions getting high enough that things would explode further? Sometimes it feels that way.
ZDNET: Are there any positive uses for generative AI in American elections?
RC: Generative AI has great potential to increase efficiencies in the election process. From an administrative perspective, it can make the processing of communications much more effective. I would imagine that generative AI could be used to help a candidate prepare for interviews by having a smart agent ask them questions in preparation for a debate. There are a lot of ways that this could be used to make things better.
One technique I recently learned about uses generative AI to take a sensational news story and reduce it to a boring one. I look forward to using this during the election season to remove the sensationalism that often accompanies political news.
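The interview doesn't name a specific tool for this de-sensationalizing trick, but at its core the technique is just a carefully worded instruction handed to whatever chat-capable LLM you use. The prompt wording below is my own illustrative assumption, not Crossler's:

```python
def desensationalize_prompt(article_text):
    """Build a prompt asking an LLM to strip sensational language from a story.

    The wording here is one illustrative phrasing; any chat-completion API
    can consume the returned string as a user message.
    """
    return (
        "Rewrite the following news story in a neutral, matter-of-fact tone. "
        "Remove emotionally charged words, speculation, and hype; keep only "
        "the verifiable facts.\n\n" + article_text
    )

prompt = desensationalize_prompt("SHOCKING twist rocks the campaign trail!")
print(prompt)
```

The returned string can then be sent to any LLM of your choice; the model does the actual rewriting, so results will vary with the model and the story.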
What do you think?
"Removing the sensationalism." That seems like a good place to end. I would like to thank Robert for taking his time and joining me in a fascinating conversation. If you have any questions, comments, or opinions about what we've discussed, please leave them in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.