when did your likeness become someone else's asset? the identity gap congress is trying to close
in july 2025, sharon brightwell of florida received a phone call from her daughter. she recognized the voice immediately — the cadence, the cry, the particular way her daughter said "mom." her daughter told her she had caused a car accident involving a pregnant woman and needed bail money immediately. brightwell sent $15,000. her daughter had never made the call; she was fine and at work. the family believes the voice was assembled from audio pulled from social media — clips from facebook and snapchat. hillsborough county detectives opened an investigation. under current federal law, the act of cloning brightwell's daughter's voice was not, by itself, illegal.
the gap in the law
the united states has no federal law protecting your voice or your likeness from ai replication. that sentence sounds like it should be wrong. it isn't.
what the law does have is a patchwork of state-level "right of publicity" statutes — a majority of states have some version of one, via statute or common law — which typically protect individuals from the unauthorized commercial use of their name, image, or likeness. these laws were written in an era of photographs and television advertisements, designed to prevent someone from putting your face on a cereal box without your permission. they were not designed for a world in which an ai model can generate a convincing replica of your voice from three seconds of publicly posted audio — audio that can be used to deceive your family, impersonate you in a financial transaction, or place your voice in a recording you never made.
the fbi reported that americans lost $893 million to ai-related scams in 2025. the federal trade commission logged more than 845,000 imposter scam reports in 2024 — the broader category that encompasses voice cloning fraud. voice cloning technology, available for as little as $5 a month through commercial platforms, requires no technical expertise and no more than a few seconds of source audio. a birthday video. a voicemail greeting. a clip from an instagram story.
that gap — between what the law was written for and what the technology now makes possible — is the structural problem at the center of the no fakes act, bipartisan federal legislation reintroduced in both the house and the senate in april 2025. before getting to what the bill is trying to build, it helps to understand what currently exists.
what the current framework actually looks like
the existing patchwork has real holes, and where those holes fall matters. under most state right of publicity laws, protection is commercial in nature — the violation occurs when someone uses your likeness to sell something without permission. what happens when an ai model generates audio of your voice saying something you didn't say, not to sell a product, but to deceive your family, defraud your employer, or produce content designed to harm your reputation? the legal answer in most jurisdictions is: it depends on the state, the use, and whether a jury finds it sufficiently commercial.
tennessee moved first. in march 2024, governor bill lee signed the ensuring likeness, voice, and image security act — the elvis act — into law, making tennessee the first state in the country to explicitly add voice to its right of publicity statute and to specifically address ai-generated replication. the bill passed the state house and senate unanimously. the law prohibits using ai to mimic a person's voice without consent and creates both civil and criminal liability, enforceable as a class a misdemeanor. california, new york, illinois, and texas have since introduced or strengthened similar statutes, with varying scope and standards.
but state laws have limits that go beyond their borders. if your voice is cloned by someone in new york, the audio is distributed through a platform incorporated in delaware, and you live in georgia — which state's law applies? the answer is genuinely unclear. and critically, most state statutes were not written to reach the specific mechanics of ai training — the question of whether feeding a recording into a model without consent is itself a violation is one that courts have not yet settled, and that most state legislatures have not addressed. the no fakes act — whose full name, nurture originals, foster art, and keep entertainment safe, makes it sound narrow — is attempting to replace this patchwork with a single federal standard. the mechanism it is trying to build is considerably broader than its name suggests.
what the no fakes act is trying to build
the no fakes act would establish the first federal intellectual property right in voice and likeness — a right that, once created, would exist uniformly across all fifty states. under the bill's framework, it would be unlawful to produce, host, share, or otherwise make available an ai-generated digital replica of a person's voice or likeness without their explicit authorization. a "digital replica" is defined narrowly: a newly created, computer-generated, highly realistic representation that is readily identifiable as the voice or visual likeness of a specific individual.
the bill's enforcement mechanism is modeled on the digital millennium copyright act's notice-and-takedown system — the same architecture that currently governs copyright infringement claims on platforms like youtube. a rights holder could file a complaint, triggering an obligation on the platform to remove the content or face potential liability. platforms that act in good faith are protected from liability; those that fail to respond are not.
the bill also addresses what happens to these rights after death. by allowing estates and heirs to retain enforcement power for a defined period, the legislation ensures that a person's voice doesn't become part of the public domain the moment they are gone.
the legislation is bipartisan. senate sponsors include marsha blackburn (r-tn) and chris coons (d-de), along with amy klobuchar (d-mn) and thom tillis (r-nc). it has received backing from sag-aftra, the recording industry association of america, the walt disney company, warner music group, openai, google, and youtube — a coalition that reflects both the entertainment industry's exposure and the technology sector's recognition that a clear legal framework serves everyone.
the consent question
the structural argument at the center of the bill is about consent — specifically, the distinction between authorized and unauthorized use of a person's identity. that distinction is not always obvious.
in november 2023, the beatles released "now and then," a song built from a 1970s lennon demo, using ai to separate his vocals from background noise that had made the tape unusable for decades. lennon's widow, yoko ono, had given the surviving beatles the cassette tape in 1994. the release was fully authorized. that is the model the no fakes act is trying to protect: technology used with consent, for a purpose the person or their estate would have sanctioned.
the sharon brightwell case represents the other end of the spectrum — technology used without consent, against the interests of the person whose voice was taken, with no legal mechanism specifically designed to address it. between those two poles sits a vast gray area that the current patchwork of state laws was not built to navigate. a person who posts a video online has not consented to having their voice extracted and used to call their mother. a worker who records a training session has not consented to having their voice used to impersonate them in a financial transaction. the no fakes act would make the legal default explicit: use requires authorization.
where the complexity lives
the criticism of the bill is not primarily partisan. the electronic frontier foundation and the center for democracy and technology have raised concerns about the notice-and-takedown mechanism, arguing that it incentivizes platforms to remove content first and ask questions later — effectively giving any individual a tool to suppress speech they dislike by filing a complaint, even if the content is lawful satire, journalism, or commentary. the bill does include carve-outs for news, documentary, historical, and biographical works, but critics argue those exceptions are narrow and contingent enough that platforms, wary of liability, will err toward removal regardless.
there is also the question of what the bill does with licensing. the framework permits individuals to license their voice and likeness — which is already an established commercial practice — but with new guardrails: licenses must be in writing, must specify the intended use, and are limited in duration. what the bill also permits, and what some legal scholars have flagged, is for an "authorized representative" to license a person's voice or likeness on their behalf, without necessarily requiring the person's direct involvement. for established performers with managers and legal teams, this is familiar territory. for anyone else who has signed a broad contract — an athlete, a student, a worker — it means someone else may have the authority to approve uses of their voice they never specifically reviewed.
the asymmetry the bill is trying to address
the underlying structural problem is one of asymmetry. the tools required to clone a voice or replicate a likeness have become cheap and widely accessible — available to any individual with a consumer-grade laptop and access to publicly posted audio. the federal communications commission established in 2024 that ai-generated voices in robocalls are illegal under the telephone consumer protection act. but that ruling covers a narrow category of use. it does not address the full range of harms that voice cloning enables, and it does not create a private right of action for the person whose voice was taken.
the tools required to identify a violation, locate the responsible party across jurisdictions, and pursue legal remedy remain expensive, slow, and jurisdiction-dependent. the result is a market in which replication is cheap and accountability is not. that gap falls hardest on people without legal teams — which is most people. the no fakes act would give everyone the same federal right — a baseline that does not require a publicist or a law firm to invoke.
what comes next
as of spring 2026, the no fakes act remains in committee — introduced in the senate on april 9, 2025, heard by the subcommittee on privacy, technology, and the law on may 21, 2025, and not yet reported out for a floor vote. the grammys on the hill advocacy event in april 2026 brought artists and industry figures to washington to push for its passage alongside two companion bills — the train act, which would give copyright holders the ability to subpoena ai developers to determine whether their specific works were used to train a model, and the clear act, which would require companies to disclose to the copyright office which copyrighted works were included in their ai training datasets before commercial release.
none of the three has passed. the tools have continued to improve. the fbi warned in 2025 that criminals were using ai-cloned voices to impersonate senior government officials. the ftc has documented the grandparent scam — in which a cloned voice of a grandchild calls an elderly relative claiming to be in trouble — as one of the fastest-growing categories of elder fraud. the gap between what the law was written for and what the technology makes possible is not a celebrity problem. it is a structural one, and it is not shrinking on its own.