Introduction: When Seeing Is No Longer Believing
For decades, courts have treated video and audio recordings as among the most persuasive forms of evidence. That assumption is now under serious threat. The rise of deepfakes and AI-generated media has introduced a new evidentiary crisis in which fabricated content can appear indistinguishable from reality.
In South Africa, this shift is not theoretical. Reports of AI-driven fraud, voice cloning, identity manipulation, and impersonation scams are increasing. At the same time, a global legal phenomenon known as the “liar’s dividend” is emerging, in which individuals dismiss authentic evidence as fake to avoid accountability.
This evolving landscape requires a fundamental rethink of how courts, investigators, and legal practitioners approach digital evidence, authentication, and admissibility.
1. The Collapse of Traditional Assumptions About Digital Evidence
South Africa’s legal framework, including the Electronic Communications and Transactions Act (ECTA), the Cybercrimes Act, POPIA, and the common law, was developed in an era when digital records were generally presumed to reflect real-world events accurately.
Deepfakes disrupt this foundation.
The core issue is no longer simply whether evidence has been altered, but whether it is genuine at all. As a result, courts must now interrogate not just the content itself, but the origin, processing history, and integrity of the data.
2. Authenticity in the Age of Artificial Intelligence
2.1 Authenticity Can No Longer Be Presumed
Previously, a witness confirming a recording or the apparent realism of a video could be sufficient. That approach is no longer reliable.
Courts now require stronger proof that the content originated from a legitimate source, that it has not been manipulated, and that the systems used to produce or store it are trustworthy.
Under ECTA, admissibility depends on whether a data message has remained complete and unaltered. However, AI tools complicate this standard. Where software enhances footage, reconstructs images, or modifies audio, the result may no longer qualify as an unaltered record.
Such material should be treated as secondary or derivative evidence, with the original file remaining central to the court’s analysis.
2.2 A Double Burden for Legal Practitioners
Modern litigation involving digital material now imposes two parallel responsibilities: verifying the authenticity of the original evidence, and scrutinising any AI tools used to analyse, enhance, or interpret that evidence.
This requires detailed documentation, including the software used, processing steps applied, parameters or prompts entered, and preservation of both original and processed versions.
Without this level of transparency, the evidential value of digital material may be significantly weakened.
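As an illustrative sketch only (the field names, tool name, and record structure below are hypothetical, not a prescribed evidentiary standard), the documentation described above can be captured in a machine-readable processing record that links the original file to any processed derivative:

```python
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_processing_record(original: str, processed: str,
                            software: str, steps: list,
                            parameters: dict) -> dict:
    """Assemble a record of the software used, the processing steps
    applied, the parameters entered, and hashes of both the original
    and the processed versions."""
    return {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "software": software,
        "steps": steps,
        "parameters": parameters,
        "original_file": {"path": original, "sha256": sha256_of(original)},
        "processed_file": {"path": processed, "sha256": sha256_of(processed)},
    }
```

A record like this, preserved alongside both file versions, gives the court a transparent account of what was done to the evidence and by which tool.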
2.3 Integrity of Data Under South African Law
South African courts already consider the reliability of systems and processes when assessing electronic evidence. Deepfakes introduce a new level of risk by enabling subtle manipulation that may not be immediately detectable.
This means that traditional safeguards are no longer sufficient. Courts will increasingly expect technical validation of digital files, forensic examination of metadata, and independent verification of authenticity.
3. Legal Remedies: Available but Difficult to Enforce
South African law does provide mechanisms to address harmful synthetic media, including offences under the Cybercrimes Act, common law claims such as fraud or infringement of personality rights, and POPIA violations relating to misuse of personal information.
However, enforcement remains a challenge. Deepfake creators often operate anonymously, across borders, and using widely accessible tools capable of producing large volumes of content.
4. POPIA and the Limits of Automated Decision Making
Section 71 of POPIA places an important restriction on the use of artificial intelligence. Decisions that have legal or similarly significant consequences cannot be made solely through automated processing.
This is particularly relevant in forensic investigations, disciplinary proceedings, employment decisions, and regulatory enforcement.
AI tools may assist in identifying patterns or anomalies, but human oversight must be meaningful and independent. Decision-makers must critically assess how the AI reached its conclusions, the limitations of the technology, and whether alternative explanations exist.
5. International Developments: A Glimpse of What Lies Ahead
Courts and regulators globally are already grappling with deepfake related disputes, offering important lessons for South Africa.
In the United Kingdom, manipulated audio evidence was exposed through forensic analysis, highlighting the need for judicial awareness of synthetic media.
In the United States, attempts to dismiss authentic recordings as potential deepfakes have raised concerns about misuse of the technology as a defence strategy.
Courts have also begun rejecting evidence enhanced by AI tools where the technology introduces speculative or reconstructed elements, particularly where methodologies are not transparent.
Judicial bodies internationally have expressed concern about reliance on AI-generated material, particularly as technological indicators of manipulation become less obvious.
At the same time, verifying suspected deepfakes often requires advanced forensic expertise, making litigation more expensive and potentially limiting access to justice.
6. Practical Guidance for Legal Practitioners
6.1 Strengthening Evidentiary Integrity
To ensure admissibility and reliability, practitioners should work from verified forensic copies, generate and preserve cryptographic hashes, maintain a clear chain of custody, and document every step of analysis.
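The hashing and chain-of-custody steps above can be sketched in a few lines. This is a minimal illustration, not a forensic tool: the function names are hypothetical, and in practice hashes would be recorded at acquisition by accredited imaging software.

```python
import hashlib

def file_sha256(path: str) -> str:
    """SHA-256 hex digest of a file, computed in chunks so large
    media files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_working_copy(copy_path: str, recorded_hash: str) -> bool:
    """Confirm a working copy still matches the hash recorded when
    the forensic copy was first acquired. Any alteration, however
    subtle, changes the digest and the check fails."""
    return file_sha256(copy_path) == recorded_hash
```

Because even a one-bit change produces a completely different digest, a preserved hash lets any party independently confirm that the file analysed in court is the file that was originally seized.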
AI generated outputs should be treated as leads rather than conclusions and must always be independently verified.
6.2 Responsible Use of Generative AI
AI tools can assist with analysing large datasets, identifying inconsistencies, and summarising evidence.
However, practitioners must disclose when AI has been used, ensure outputs are explainable, and clearly separate machine-generated insights from professional conclusions.
6.3 Preparing for Future Legal Standards
Globally, new rules are emerging that may soon influence South African practice, including mandatory disclosure of AI-generated evidence, shifting evidentiary burdens where deepfakes are suspected, and stricter duties on legal practitioners to verify authenticity.
South African courts are likely to follow similar trends.
7. The Emerging Standard for Digital Evidence
The threshold for proving authenticity is rising. Legal practitioners must now integrate traditional evidentiary principles with technical methods such as metadata analysis, digital forensics, cryptographic verification, AI detection tools, and detailed audit trails.
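One of the techniques listed above, the detailed audit trail, can be made tamper-evident by chaining entries together with hashes, so that altering any past entry breaks every later link. The sketch below is a simplified illustration under that assumption; the entry fields and function names are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(trail: list, actor: str, action: str) -> dict:
    """Append an audit entry whose hash covers the previous entry's
    hash, linking the entries into a chain."""
    prev_hash = trail[-1]["entry_hash"] if trail else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(prev_hash.encode() + payload).hexdigest()
    trail.append(entry)
    return entry

def verify_trail(trail: list) -> bool:
    """Recompute every entry's hash and check each chain link;
    any edited or reordered entry makes verification fail."""
    prev_hash = "0" * 64
    for entry in trail:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        payload = json.dumps(body, sort_keys=True).encode()
        if entry["entry_hash"] != hashlib.sha256(prev_hash.encode() + payload).hexdigest():
            return False
        prev_hash = entry["entry_hash"]
    return True
```

The value of the chain is that it converts "trust the record keeper" into a checkable property: an opposing party can re-run the verification independently.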
This shift introduces greater complexity but is essential in an environment where fabrication is increasingly sophisticated and accessible.
Conclusion: Adapting to a New Evidentiary Reality
Deepfakes have fundamentally altered the landscape of digital evidence. The challenge is no longer limited to detecting manipulation. It extends to proving authenticity in a world where doubt itself can be manufactured.
For South African practitioners, the way forward is clear. Adopt stricter verification protocols, ensure transparency in the use of AI tools, and maintain robust human oversight in all decision making processes.
Those who adapt early will be better equipped to navigate this new terrain, safeguard evidentiary integrity, and uphold the credibility of the legal system.




