Future-Proofing Democracy in the Age of AI

Imagine this scenario:

In the wake of a coordinated barrage of AI-powered cyberattacks by an authoritarian regime, a democratic country, ‘inoculated’ against such threats through public education and a set of previously implemented system upgrades, calmly ignores the attempts to destabilize it. Undeterred, the authoritarian regime redoubles its efforts during a subsequent election cycle whose outcome will signify the country’s commitment to democracy. The candidate most aligned with democratic values is elected, and democracy prevails.

This is not a future fantasy. This already happened in Taiwan.

In our book, we argue that the market forces driving the deployment of AI in the United States made no allowance for a proper, measured effort to educate the public, which should have preceded the release of AI into the public domain. This lack of wisdom and foresight, on the part of both tech companies and the U.S. government, could have catastrophic implications in a future where AI dramatically amplifies disinformation. What happens if U.S. citizens, unable to distinguish true from false or up from down amid a coordinated AI cyberattack, panic?

Audrey Tang, Taiwan’s first Minister of Digital Affairs, has been implementing the following upgrades to future-proof democracy since her tenure began in 2022:

Verified phone numbers in the form of short codes: All information issued via SMS from the Taiwanese government comes from 111. Utility companies, banks, and other public agencies are adopting their own incorruptible short codes so that citizens know what information is coming from a verified source and what is not.

Prebunking: Prebunking operates on the theory that if you understand how misinformation can manipulate you before you see it, you are less likely to believe it. In 2022, prior to the broad release of generative AI, Tang filmed a deepfake video of herself on a MacBook to demonstrate how easy it was to create a deepfake with AI software. Because ‘inoculating’ citizens takes time, this video was routinely broadcast to the public. Through prebunking, citizens learn not to trust a video just because it features a high-ranking government official or a celebrity.

Multiple backup systems: This defensive measure presupposes that hacking will be attempted and provides protection in the event of an actual attack.

Using paper ballots for elections: By issuing paper ballots, and allowing citizens to record the counting in polling places on their own video, the outcome of an election is made indisputable (Harris & Raskin, 2024).

Tang is also implementing a range of other upgrades to facilitate participatory democracy. One example is the Polis platform, which is used to crowdsource consensus on initially contentious public issues. According to Tang, Polis is ‘pro-social media’ that uses AI to consistently promote common points of view and bridge divides. Participants, their opinions, and the degree of division among groups are all visualized on the same page. Polis does not just provide a visual mapping of democratic input; it also produces an interactive report, updated in real time (Tang et al., 2023).
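For readers curious about the mechanics, the core idea of surfacing consensus from votes on statements can be sketched in a few lines of Python. This is a loose illustration only, not Polis’s actual algorithm (which also clusters participants into opinion groups); the statements and votes below are hypothetical.

```python
# Toy sketch of Polis-style consensus scoring (illustration only; not
# the real Polis algorithm). Each participant votes on short statements:
# +1 = agree, -1 = disagree, 0 = pass.

def consensus_scores(votes):
    """votes: dict mapping statement -> list of +1 / -1 / 0 ballots.
    Returns statements sorted by agreement rate among non-passing voters."""
    scores = {}
    for statement, ballots in votes.items():
        cast = [v for v in ballots if v != 0]  # ignore passes
        if not cast:
            continue
        agree = sum(1 for v in cast if v == 1)
        scores[statement] = agree / len(cast)
    # Highest-agreement statements first: candidates for common ground.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical data for two statements and six participants.
votes = {
    "Short codes make official SMS trustworthy": [1, 1, 1, -1, 1, 0],
    "Paper ballots slow down election results":  [1, -1, -1, 1, -1, -1],
}

for statement, score in consensus_scores(votes):
    print(f"{score:.0%} agreement: {statement}")
```

A real deployment would additionally group participants by voting pattern and highlight statements that score well across all groups, which is what lets the platform promote genuinely bridging points of view rather than majority opinion alone.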

Should we be following Taiwan’s lead to future-proof democracy in the United States? Is it too late to implement these upgrades? Participate in the discussion below.

Sources:

Harris, T. & Raskin, A. (2024, February 29). Future-proofing democracy in the age of AI. [Audio podcast episode]. In Your Undivided Attention. Center for Humane Technology.

Tang, A., Liu, R., & Hsueh, W. (2023, September 15). Digital democracy in the age of AI. PDIS.

Are we Living in the Age of Future Shock?

One of the biggest challenges of writing a book on the topic of artificial intelligence is the rapidly evolving nature of the technology. As authors, we fought a constant battle to keep pace with the headlines and emergent issues surrounding AI. At the current rate of change, our work can only be considered a snapshot: an ephemeral reckoning of the implications of artificial intelligence as it unfolded between May 2023 and March 2024.

In 1970, Alvin Toffler wrote a book called Future Shock whose main thesis was that the development and release of new technologies had outpaced the human biological capacity to cope. Toffler pointed out then, over 50 years ago, that the time between the original concept and practical use of a technology had dramatically shrunk in his lifetime. Consider that Toffler was writing about the phenomenon of ‘future shock’ before the Digital Revolution! He offered many examples of how our ancestors, both distant and recent, may not have seen the impacts of a new technology in their lifetimes. One memorable illustration was the typewriter: the first English patent for a typewriter was issued in 1714, but a century and a half elapsed before typewriters became commercially available.

As Toffler pointed out, new ideas are put to work much more quickly than ever before in human history. In the past 20-30 years, we have seen technological changes that would have made Toffler’s head spin. He accurately predicted that the time between the idea and application of a technology would be even more radically reduced simply because technology feeds on itself. Technology makes more technology possible.

Toffler worried that this rapid acceleration of change in society, the “fantastic intrusion of novelty, newness into our existence,” was surpassing our capacity to cope. He argued that co-arising shifts in both social norms and technological advances were profoundly affecting the way we, as humans, experienced reality, our sense of commitment, and our ability, or inability, to cope. Toffler maintained that the ever-shrinking cycle of DISCOVERY → APPLICATION → IMPACT → DISCOVERY, combined with increasing newness and complexity in the environment, was precisely what strained our capacity to adapt and created the danger of future shock.

Toffler defines ‘future shock’ this way:

Future shock is the dizzying disorientation brought on by the premature arrival of the future. It may well be the most important disease of tomorrow.

Do you think too much change in too short a period of time makes it increasingly difficult to thoughtfully process these changes? Is ‘future shock’ contributing to the seemingly widespread apathy about ethical AI on the part of everyday citizens?

Chime in with your thoughts below.

Book Blog

This blog welcomes posts and comments about the Ethos of AI. It is not a discussion of the technical workings of large language models; rather, we invite your posts and comments about the ethos, or ethical character, of AI.