'Zero-trust world': Experts explore future of AI in politics

KANSAS CITY, Mo. — AI-generated photos and videos are becoming more common as the technology progresses.

Earlier this year, an AI-generated imitation of President Joe Biden's voice was used in a robocall encouraging New Hampshire Democrats not to vote in the state's presidential primary.

J. Scott Christianson, a professor at the University of Missouri, said the robocall was well-timed to spread misinformation.

“That was probably the perfect example of where you would deploy this because it was deployed in a robocall fashion, so it wasn’t broadcast in a place where people could inspect it easily,” Christianson said. “It was done the night before, right before people are going to the polls, so there was little time to get the word out that this was a fake.”

The Associated Press reports the political consultant behind the call said he produced the fake message to show the harm AI can cause without guidelines in place.

Chris Kovac, co-founder of the Kansas City AI Club and Kovac.ai Marketing, said the actions of one person have the community talking about the need for regulation.

“Somebody took the plunge, and whether or not that was ethical remains to be seen, but now we are starting to talk about it,” Kovac said.

Both Kovac and Christianson agree that rules are needed from top to bottom.

A survey by the Society for Human Resource Management found that 75% of companies have no guidelines or policies related to AI, and that 35% of HR professionals use AI in their work.

“We need to have a system that is clearly inspectable and understandable, and these little pledges and stuff by companies, they go out the window when it’s down to the bottom line,” Christianson said. “We need regulation, and that is the role of our government to regulate.”

Kovac said consumers can start now by asking for transparency.

Nationwide regulation of AI likely will not come before the November election.

Third-party actors, such as foreign countries, can take advantage of AI to spread misinformation and sow division with a single video, Kovac warned.

“I think it’s scary,” Kovac said. “But the more we know what we are looking at, the more we can be prepared to balance out the information we receive.”

Christianson said protecting oneself from AI-driven misinformation starts with turning to trusted sources for information.

He also recommended seeking out differing viewpoints, since consuming only media that confirms existing beliefs can reinforce bias.