(The Center Square) – Critics say a requirement in Illinois Senate Bill 150 to label all artificial intelligence-generated content could hurt political debate.
The American Civil Liberties Union’s Stephen Ragan argued in committee that the proposed regulations are too broad and could mislead voters by casting doubt on harmless AI applications, such as editing software or campaign messaging tools.
“For example, a candidate might use AI to come up with new ways to phrase an old message or to help identify patterns in an opponent’s voting record,” said Ragan. “AI can be a creative tool in the editing process to brighten colors or lighten blemishes without affecting the underlying content. It appears that these uses of AI would have to be labeled as generated by AI, which sends a message that this content is not trustworthy.”
Ragan said it’s also unclear whether there will be a process to identify which political ads were generated by artificial intelligence.
“As written, the bill provides that each distribution or airing to the public is an infraction,” said Ragan. “The specter of large fines will chill and discourage core First Amendment speech.”
The ACLU also took issue with the bill’s definition of artificial intelligence, which does not align with the definitions laid out in the Human Rights Act and the Right of Publicity Act.
State Sen. Steve Stadelman, D-Rockford, the bill’s sponsor, said false information is being weaponized to push political agendas, with AI-generated fake media in ads making the problem worse.
“These so-called deepfakes use technology to misrepresent someone as doing or saying something they did not. At the national level, one example is when people in New Hampshire received a robocall [imitating] President [Joe] Biden discouraging them from heading to the polls,” said Stadelman. “It’s critical that Illinois do what we can to ensure AI is not used to undermine the public’s trust through disinformation, especially in today’s political climate.”
Stadelman said the legislation does not ban false statements or regulate what can be said in political ads; it simply requires disclosure for transparency.
Stadelman is trying to advance the measure before legal challenges in other states are settled.
Minnesota’s 2023 anti-deepfake law, which criminalizes AI-generated political disinformation, faces a legal challenge on First Amendment grounds.
Tyler Diers, TechNet’s executive director for the Midwest, said the group had no position on SB 150.
“Creators of political content that includes deceptive media should have an obligation to provide clear disclosures. We support statutory language that would ensure that liability for dissemination of such media is limited to the person who creates and disseminates it,” said Diers. “We’ve seen legal First Amendment challenges on these bills in states like Minnesota and California. It may be worthwhile to see how these settle or to examine some of the legal arguments around the First Amendment.”
Stadelman and state Sen. Sue Rezin, R-Morris, discussed the intent of the legislation in committee Wednesday.
“We’re all starting to use AI to write a little better, make our pictures a little better, younger or skinnier, whatever, but it is happening,” said Rezin. “So where is that fine line?”
“Well certainly the intent isn’t to affect those effects you mentioned, like brightness. It’s really an effort to deceive the public regarding content and information,” said Stadelman. “I’m open to improving language to draw that line. I think it is pretty clear as far as the intent and what it is we are trying to address here, and it’s not simply superficial alterations to video or audio.”
The bill remains in the AI and Social Media subcommittee.