As artificial intelligence capabilities advance at breakneck speed, the proliferation of deepfake technology has emerged as one of the digital age’s most pressing challenges. These AI-generated forgeries can convincingly replicate a person’s face, voice, and body, creating content that ranges from harmless parody to devastating misinformation and harassment. While many governments struggle to respond effectively, Denmark has introduced legislation that could serve as a blueprint for protecting citizens in the age of AI.
A Novel Legal Framework
In June 2025, the Danish government announced a proposed amendment to the country’s Copyright Act that takes an innovative approach to combating deepfakes: granting citizens copyright protection over their own likeness. Under this framework, Danish residents would gain legal authority to request the removal of nonconsensual deepfake content from digital platforms. The legislation extends protection to cover unauthorized artificial recreations of artistic performances, with artists able to demand compensation for misuse of their image—a right that would extend for 50 years beyond the artist’s death.
Danish Culture Minister Jakob Engel-Schmidt has secured cross-party support for the bill, which was submitted for public consultation in July 2025. If enacted as expected by the end of 2025, Denmark will become one of the first nations to explicitly grant copyright protection for personal likeness as a defense against AI-generated content.
The Crisis Communications Imperative
From a crisis management perspective, Denmark’s legislation addresses a critical gap in reputation repair strategies. Crisis communications professionals have witnessed firsthand how deepfakes can devastate personal and corporate reputations within hours. When malicious actors create convincing fake videos or audio recordings, victims face an uphill battle: the fabricated content spreads rapidly across platforms while removal requests languish in bureaucratic limbo without clear legal standing.
Traditional public relations approaches to reputation management—issuing denials, providing counter-evidence, engaging media outlets—often prove insufficient against deepfakes. The human brain processes visual information far more readily than text, so a convincing fake video leaves a deeper impression than any written rebuttal. By the time crisis communications teams mobilize a response, the damage is already done. Worse, the very act of responding can amplify awareness of the fake content, a phenomenon known as the “Streisand effect.”
Denmark’s copyright framework fundamentally changes this dynamic by shifting from reactive crisis management to proactive reputation repair. Rather than relying solely on public relations strategies to counter false narratives, victims gain legal mechanisms to remove the source of reputational harm. This approach recognizes that effective reputation management in the digital age requires legal tools, not just communications tactics.
The legislation also establishes clearer accountability for platforms, creating enforceable obligations that crisis communications professionals can leverage. When representing clients facing deepfake attacks, PR teams could cite specific legal violations and demand timely removal, rather than pleading with platform moderators who operate under vague community guidelines. This transforms reputation repair from persuasion to enforcement.
Building a Template for Human-Centered Regulation
What makes Denmark’s initiative particularly valuable is its recognition that technology-enabled reputational threats require legal solutions, not just better messaging. The law doesn’t restrict AI technology itself but creates clear consequences for its misuse—a distinction that preserves innovation while protecting individuals.
For public relations professionals, this legislation represents a paradigm shift. Crisis management has traditionally focused on maintaining narratives and managing stakeholder perceptions. Denmark’s approach acknowledges that when digital content can be fabricated with frightening authenticity, narrative control is insufficient. Legal remedies must complement communications strategies.
The provision for financial compensation also introduces a deterrent effect that public relations efforts alone cannot achieve. Organizations and individuals considering creating or sharing deepfakes must now weigh potential legal liability alongside reputational risks, shifting crisis communications from damage control toward prevention.
The Way of the Future
As other nations consider similar legislation, the implications for reputation management are profound. Denmark demonstrates that protecting personal and professional reputations in the AI era requires comprehensive frameworks combining legal protections, platform accountability, and enforcement mechanisms. This is what effective, humane AI legislation looks like: empowering victims, establishing clear consequences, and providing crisis communications professionals with the tools necessary to truly repair reputations.