May 6, 2026

South Africa has withdrawn its Draft National Artificial Intelligence Policy after officials discovered that several academic references cited in the document did not exist. Communications Minister Solly Malatsi said at least six of the 67 cited research papers were likely generated by AI and included in the draft without human verification.
The draft policy had been released for public comment and was intended to establish a national framework for AI governance in South Africa. It included proposals for a national AI commission, an AI ethics board and a dedicated AI regulatory authority, alongside incentives such as grants, subsidies and tax breaks aimed at accelerating AI adoption across the country.
Malatsi said the inclusion of fabricated references undermined the credibility of the entire document and forced the government to restart the process. “I want to reassure the country that we are treating this matter with the gravity it deserves. There will be consequence management for those responsible for drafting and quality assurance,” he said.
The minister described the issue as more than a simple technical error. According to Malatsi, the hallucinated citations exposed a broader governance problem around the use of generative AI in official policy work. He said AI-generated material should never be published without proper human oversight and verification.
The incident is particularly notable because the draft itself focused heavily on regulating AI systems and promoting responsible deployment of large language models and generative AI technologies. The government had positioned the proposal as part of a broader effort to build AI infrastructure and oversight mechanisms while encouraging innovation through public-private partnerships.
The withdrawn policy would have created one of the more ambitious AI governance frameworks on the African continent. Beyond oversight bodies, it outlined plans to support AI adoption across industries through infrastructure investment and regulatory coordination.
Officials now plan to revise the document, remove the fabricated references and publish a corrected version for renewed public consultation. Most of the substantive policy proposals are expected to remain intact once the draft is reissued.
The situation also reflects a growing global problem tied to generative AI tools. Hallucinated citations and fabricated legal or academic references have increasingly appeared in government reports, consulting work and court filings. The issue has become common enough that large consulting firms, including Deloitte, have reportedly encountered AI-generated inaccuracies in client and public-sector documents.
The controversy highlights one of the central tensions surrounding generative AI adoption: the same tools governments and businesses hope to regulate and deploy are also introducing new risks into the drafting of official documents. In this case, the failure happened inside a policy framework specifically intended to promote responsible AI use.
