AI political ads crackdown: Massachusetts passes disclosure bill as Maryland deepfake proposal sparks free speech fight

The Massachusetts State House. Image Credit: Lisa Kashinsky/POLITICO

With artificial intelligence rapidly transforming campaign strategies, state lawmakers are moving to regulate its use in elections. This week, the Massachusetts House of Representatives unanimously passed a bill requiring clear disclosure labels on AI-generated political ads, while in Maryland, a proposed deepfake criminalization measure is drawing constitutional scrutiny.

The legislative push underscores a growing national concern: how to balance election integrity with First Amendment protections in the AI era.



Massachusetts Passes AI Disclosure Bill Unanimously

On Feb. 11, the Massachusetts House voted 157–0 to approve a bill that would require any political advertisement using AI-generated “synthetic media” to include a clear disclosure stating: “contains content generated by AI.”

The disclosure must appear at the beginning and end of any audio or video political ad and remain visible or audible throughout any portion of the content that incorporates AI.

Rep. Daniel Hunt, D-Dorchester, House Chair of the Committee on Election Laws, said AI is no longer theoretical. “If you watched the Super Bowl, you saw how widespread artificial intelligence has become. AI is in our everyday lives. Voters deserve to know that what they’re seeing is real,” Hunt said.

The bill now heads to the Massachusetts Senate for further consideration.

What the Massachusetts AI Political Ads Bill Would Do

Under the proposed legislation:



  • Any paid political communication using AI must disclose its use clearly.
  • The rule applies to ads influencing votes for or against a candidate or ballot measure.
  • Violations would carry a $1,000 fine.

The measure follows similar efforts nationwide. In 2024, the New Hampshire General Court passed AI regulations after a fake robocall impersonating then-President Joe Biden urged voters not to participate in the state’s presidential primary.

Massachusetts lawmakers say the goal is transparency, not censorship.

Another Election Bill Targets Deceptive Communications

In addition to the AI disclosure measure, the Massachusetts House also passed a separate bill aimed at preventing deceptive campaign tactics.

The proposal would prohibit candidates or political groups from distributing misleading communications within 90 days of an election. This includes content intended to:

  • Damage a candidate’s reputation through false depictions
  • Mislead voters about election dates or voting procedures

The bill passed 154–3 and would allow victims to sue. It does not apply to news reporting, satire, or parody.



Meanwhile, the Massachusetts Senate has advanced separate legislation requiring greater transparency in ballot campaign financing.

Maryland’s Deepfake Bill Raises First Amendment Concerns

While Massachusetts focuses on disclosure, lawmakers in Maryland are considering a more aggressive approach.

House Bill 145, introduced in the Maryland House of Delegates, seeks to criminalize certain election-related AI-generated deepfakes. Violations could result in a $5,000 fine and up to five years in prison.



However, the proposal has drawn criticism from the Reason Foundation, which argues the bill could infringe upon free speech rights.

In testimony submitted to the Maryland House Government, Labor, and Elections Committee, Technology Policy Fellow Richard Sill warned that the bill relies on “subjectively defined terms,” such as what qualifies as an election-related deepfake. He argued that empowering the state to determine intent could chill constitutionally protected political speech.

Disclosure vs. Criminalization: Two Regulatory Models

Critics of Maryland’s bill point instead to a disclosure-based framework similar to Utah’s House Bill 329, which requires formal campaign actors (candidates, PACs, and political committees) to disclose AI use in paid advertisements.

This approach narrows regulation to formal campaign structures rather than policing everyday online expression.

Legal analysts note that courts traditionally provide strong protection for political speech, making criminal penalties for vaguely defined deepfake content legally vulnerable.

Why AI Political Ads Are a Growing Concern

Artificial intelligence tools now allow campaigns to:

  • Create hyper-realistic voice clones
  • Manipulate video footage convincingly
  • Generate persuasive political messaging at scale

With the 2026 midterm elections approaching, lawmakers are racing to prevent AI from undermining public trust.

The debate reflects a broader question facing states nationwide: Should governments prioritize transparency or criminal penalties when regulating AI in politics?

Massachusetts appears to favor disclosure and voter awareness. Maryland’s proposal tests the limits of criminal enforcement.

The outcome could shape how AI-driven campaigning evolves across the country.

FAQ

What did Massachusetts pass regarding AI political ads?

The Massachusetts House passed a bill requiring political ads that use AI-generated content to include a disclosure stating, “contains content generated by AI.”

Does the Massachusetts bill ban AI in political ads?

No. The bill does not ban AI; it requires disclosure so voters know when AI-generated content is being used.

What is considered “synthetic media” under the bill?

Synthetic media includes AI-generated or manipulated audio and video used in paid political advertising.

What are the penalties for violating the Massachusetts AI ad rule?

Violators could face a $1,000 fine.

What is Maryland House Bill 145?

HB 145 is a proposed law that would criminalize certain election-related deepfakes, with penalties including fines up to $5,000 and possible prison time.

Why is Maryland’s deepfake bill controversial?

Critics argue it may violate First Amendment protections by allowing the state to determine intent behind AI-generated political content.

How is Utah’s approach different?

Utah’s model focuses on disclosure requirements for official campaign actors rather than criminalizing AI content broadly.

Why are states regulating AI in elections now?

Advances in AI technology have made it easier to create realistic fake audio and video that could mislead voters.

Could these laws face legal challenges?

Yes. Maryland’s proposal in particular, critics argue, may not survive constitutional scrutiny in court.