
South Africa withdraws national AI policy after at least 6 of its 67 citations are found to be AI-generated fabrications.

The TL;DR

South Africa’s Minister of Communications and Digital Technologies, Solly Malatsi, withdrew the draft national AI policy after News24 discovered that at least 6 of its 67 citations were AI-generated fabrications quoting nonexistent articles from real academic journals. The policy had been approved by Cabinet and published for public comment. Malatsi called the lapse “unacceptable” and promised consequences for those responsible. The scandal leaves South Africa without an AI governance framework and raises questions about the department’s capacity to regulate the technology.

South Africa’s Department of Communications and Digital Technologies spent months drafting a national artificial intelligence policy. It proposed a National AI Commission, an AI Ethics Board, an AI Regulatory Authority, an AI Ombudsman, a National AI Safety Centre, and an AI Insurance Superfund. It outlined five pillars of AI governance: capacity, responsible governance, responsible and inclusive AI, cultural preservation, and human-centred deployment. It adopted a risk-based approach modelled on the EU AI Act. Cabinet approved the draft, and the Government Gazette published it for public comment.

Then News24, a South African news outlet, checked the document’s references and found that at least six of its 67 citations did not exist. The journals were real; the cited articles were not. Authors credited with foundational research on AI governance had never written the papers attributed to them. The editors of the South African Journal of Philosophy, AI & Society, and the Journal of Ethics and Social Philosophy each independently confirmed to News24 that the cited articles had never appeared in their pages.

The most plausible explanation, according to Communications Minister Solly Malatsi, is that the drafters used a generative AI tool and published its output without verifying a single reference. A government policy designed to govern artificial intelligence was undermined by ungoverned use of artificial intelligence.

The withdrawal

Malatsi announced the withdrawal on April 27, calling the fabricated citations an “unacceptable lapse” that “jeopardised the integrity and credibility of the policy.” He said consequence management would follow for those responsible for the drafting and its quality assurance. “This failure is not just a technical problem,” the minister said. The chair of parliament’s portfolio committee offered a blunter assessment, suggesting the department “skip using ChatGPT this time” when redrafting. The document will be revised before being reissued for public comment, but no timeline has been given. South Africa currently lacks a formal AI governance framework at a moment when governments worldwide are grappling with how to regulate the technology, and the country’s credibility as a serious participant in that conversation has taken a hit that will outlast the policy revision.

The scandal is not just that fake citations appeared in a government document. It is that they appeared in a government document about artificial intelligence, written by the department responsible for the country’s digital technology strategy, at the precise moment when debates over AI governance are being fought out in Brussels, Washington, and Beijing. The EU AI Act, the most ambitious regulatory framework for the technology, faces delayed standards and an implementation timeline pushed back to 2027 for high-risk systems. The United States has no federal AI legislation and is watching states legislate independently while the White House tries to override their efforts. China has enacted AI laws but applies them selectively. Into this landscape, South Africa released a policy that could not survive basic scrutiny.

The pattern

South Africa’s hallucinated citations are the most visible instance of a problem spreading quietly through institutions that use AI to produce research and writing. A study published in Nature found that 2.6 percent of academic papers published in 2025 contained at least one citation suspected to be hallucinated, up from 0.3 percent in 2024. Applied to the roughly seven million scholarly articles published in 2025, that rate implies more than 110,000 papers with invalid references. GPTZero, a Canadian startup, analysed more than 4,000 research papers accepted at NeurIPS 2025, one of the world’s leading AI conferences, and found more than 100 hallucinated citations across at least 53 papers. In a separate multi-model study, only 26.5 percent of AI-generated bibliographic references were completely accurate. The problem is structural: large language models generate citations by predicting plausible tokens, not by retrieving records. They do not look up the papers. They predict what a citation should look like based on patterns in their training data, and when the prediction is confident enough, they produce a reference that reads like a real source but points to nothing.
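The only reliable defence against this failure mode is mechanical verification of every reference before publication. As a minimal sketch, assuming each citation carries a DOI and that some resolver (a Crossref or publisher lookup, stubbed out here) can confirm whether that DOI exists, a drafting pipeline could partition references into verified and suspect piles; the function and field names are hypothetical, not from any real workflow:

```python
import re
from typing import Callable, Dict, List, Tuple

# A DOI is "10.", a 4-9 digit registrant code, "/", then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def verify_citations(
    citations: List[Dict[str, str]],
    resolve: Callable[[str], bool],
) -> Tuple[List[Dict[str, str]], List[Dict[str, str]]]:
    """Split citations into (verified, suspect).

    A citation counts as verified only if it carries a well-formed DOI
    AND the injected `resolve` callback confirms the DOI exists, e.g.
    via a Crossref query. Everything else is flagged for human review
    rather than silently accepted.
    """
    verified, suspect = [], []
    for cite in citations:
        doi = cite.get("doi", "")
        if DOI_PATTERN.match(doi) and resolve(doi):
            verified.append(cite)
        else:
            suspect.append(cite)
    return verified, suspect

# Stub resolver standing in for a real index lookup.
KNOWN_DOIS = {"10.1000/real-paper"}
citations = [
    {"title": "A real paper", "doi": "10.1000/real-paper"},
    {"title": "A plausible fake", "doi": "10.9999/does-not-exist"},
    {"title": "No DOI at all", "doi": ""},
]
ok, flagged = verify_citations(citations, lambda d: d in KNOWN_DOIS)
print(len(ok), len(flagged))  # → 1 2
```

The design point is that the resolver is injected: the same check runs against a live index in production and a fixed set in tests, and a citation is never trusted by default.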


South Africa’s case stands out not because the technology hallucinated, which is a documented and inherent limitation of generative AI, but because the fabricated citations reached an official, Cabinet-approved policy document without anyone checking the references. The drafting process involved civil servants, stakeholder consultations, and ministerial review. Dumisani Sondlo, who leads AI policy at the department, had previously described the policy’s development as “an act of admitting that we don’t know enough.” That admission evidently did not extend to admitting that the tool used to help draft the policy was itself unreliable. Only the six fake citations identified by News24 have been confirmed; whether the remaining references among the document’s 67 are authentic has not been publicly established. The entire reference list is now suspect, and by extension so is the analytical foundation on which the policy’s proposals were built.

The fallout

The immediate effect is that South Africa’s AI governance timeline has been reset. The policy framework, intended to position the country as a leader in responsible AI adoption on the African continent, will have to be reviewed, redrafted, and resubmitted. The damage to institutional credibility goes beyond the policy itself. If the department responsible for regulating AI cannot verify the authenticity of the sources in its own policy document, the question becomes whether it has the capacity to vet the AI systems it proposes to regulate. The policy envisions a multi-regulator model in which AI oversight is distributed across existing sector regulators rather than centralised under a single authority. That model requires each participating regulator to have enough technical understanding to evaluate AI systems in its domain. The hallucination scandal does not inspire confidence that the coordinating department meets that threshold.

The broader lesson is not that governments should avoid using AI in policy development. It is that this AI failure mode is not loud. It doesn’t crash. It doesn’t show an error message. It produces fluent, well-formatted, confident text that closely resembles the output of a skilled researcher. The fake citations in South Africa’s AI policy were not obviously wrong. They were plausible. They cited real journals. They attributed work to real scholars. They followed the formatting conventions of academic references. The only way to catch them was to check whether each one actually existed, a form of human verification the AI was supposed to make unnecessary. The growing public distrust of AI is not irrational. It is a reasonable response to a technology that is at once powerful enough to help design national policy and unreliable enough to fabricate the evidence that policy is built on. South Africa’s embarrassment is singular; the underlying failure, using AI without the capacity to verify its output, is not. It is happening in universities, law firms, newsrooms, and government departments around the world. South Africa is simply the first national government to have the receipts published. The challenges of regulating AI are real, but they begin with a requirement the South African department has not met: understanding what the technology does before trying to write its rules.
