The Federal Communications Commission is making the use of AI-generated voices in robocalls illegal. The ruling, issued on Thursday, gives state attorneys general the ability to take action against callers using AI voice cloning tech.
As outlined in the ruling, AI-generated voices are now considered “an artificial or prerecorded voice” under the Telephone Consumer Protection Act (TCPA). This restricts callers from using AI-generated voices for non-emergency purposes or without prior consent. The TCPA bans a variety of automated call practices, including using an “artificial or prerecorded voice” to deliver messages, but the law didn’t explicitly state whether this covered AI-powered voice cloning. The new ruling clarifies that such recordings do indeed fall under the law’s scope.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” FCC Chairwoman Jessica Rosenworcel said in a statement. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”
Although state attorneys general could already go after the bad actors behind robocalls based on the scam or fraud they’re perpetrating, this new ruling gives them the power to hold scam artists accountable solely because they’re using an AI-generated voice. The FCC first proposed banning the use of AI voices in robocalls last month.
Scrutiny of AI voices in robocalls has ramped up in recent weeks. In January, some New Hampshire residents received a call that appeared to use AI to impersonate President Joe Biden’s voice, telling them not to show up at the polls for the state’s presidential primary. An investigation has since linked the robocall to two Texas-based companies: Life Corporation and Lingo Telecom. The FCC issued a cease-and-desist order to Lingo Telecom, which transmitted the call.