Visitor logs show that Microsoft president Brad Smith visited the White House dozens of times while the company was reportedly trying to influence the Biden administration’s policy on artificial intelligence. Some observers are now wondering whether influencing the outcome of the upcoming presidential election was also a topic of discussion.
Microsoft’s concern about artificial intelligence policy stems from its partnership with ChatGPT creator OpenAI, which has made sizeable investments in federal lobbying to support its business. Microsoft CEO Satya Nadella and OpenAI CEO Sam Altman have each visited the White House on six occasions. In addition, senior executives at both firms have made substantial donations to Biden’s reelection campaign and a committee that boosts it, Federal Election Commission data shows.
The records indicate Smith visited the White House while Biden was in office 30 times, speaking with the president as well as other high-ranking officials in the White House. He also met with National Security Advisor Jake Sullivan and Deputy National Security Advisor Anne Neuberger on several occasions.
This is not standard practice for tech executives. Google’s former CEO and chairman Eric Schmidt visited the White House only 18 times while the company was trying to influence tech policy under President Barack Obama; current CEO Sundar Pichai has been there just ten times during the Biden presidency, and Amazon CEO Andrew Jassy has visited only three times.
Microsoft and OpenAI already agreed with the White House last July to voluntarily “manage the risks” of AI, including avoiding “harmful bias and discrimination, and protecting privacy.”
This is the exact type of language that has been used to justify politically motivated censorship on social media and in search engines, and it's natural to worry how this will be leveraged in AI output.
AI can be used to influence elections in numerous ways
There are numerous ways that AI could be used to rig elections, and one method that has gained a lot of attention recently is the use of AI deepfakes. 2024 will be the first American presidential election to take place at a time when readily available AI tools can create very convincing fake images, audio clips and videos, and some of them have already been used in experimental campaign ads.
These days, even people without tech credentials can create these videos and spread them on social media. And if the Google Gemini debacle has taught us anything, it’s that it is all too easy for engineers to program their biases into these tools. Just as Gemini refused to create images of white people while happily supplying images of black or Hispanic individuals, such tools could refuse to help create favorable images of Trump while complying readily with prompts for favorable material about Biden or negative material about Trump.
Of course, these are just the surface-level and obvious ways AI tools can influence elections. AI is immensely powerful, and there are many more sophisticated ways it could tip the election in Biden’s favor.
Much like the social media censorship against conservative viewpoints that is already in full swing, AI could be trained to suppress disfavored narratives and elevate positive narratives about Biden, for example.
AI is already being used around the world to sway the outcome of elections. In the run-up to recent elections in Slovakia, AI-generated audio recordings were released impersonating a candidate talking about rigging elections and raising the prices of beer, and they were widely circulated on social media.
In the U.S., there are reports that AI robocalls impersonating Biden have been used to contact voters. There have also been fake photos circulating that claim to show Trump being arrested.
Google’s Gemini chatbot also came under fire after suggesting that some experts believe Indian Prime Minister Narendra Modi espouses fascist policies; Modi is currently seeking reelection in India’s upcoming general election.
UK Home Secretary James Cleverly has warned that bad actors could
use AI deepfakes to manipulate the democratic process in nations like the UK.
With this year’s election possibly coming down to just tens of thousands of votes in a handful of states, even the smallest hoaxes that
sway people toward one candidate could ultimately decide the outcome.
Sources for this article include:
YourNews.com
TheGuardian.com