February 17, 2026

The U.K. government plans to close a legal loophole that has allowed AI chatbots to generate illegal content, bringing them fully under the country’s Online Safety Act. Prime Minister Sir Keir Starmer said the changes will ensure that no technology platform is exempt from rules designed to protect children online.
The proposed reforms would extend existing obligations, originally aimed at social media firms, to AI chatbot providers, requiring them to meet the illegal content duties set out in the 2023 law. Ministers said the move is intended to strengthen child protection measures as generative AI tools become more widely used by young people.
Under the plan, chatbot developers would be required to prevent harmful outputs and meet stricter accountability standards already imposed on major platforms. The government said the change would close a regulatory gap that emerged as AI systems evolved faster than existing online safety rules.
Officials also signalled broader measures under review, including potential age limits for social media use and restrictions on features such as infinite scrolling that critics say can amplify harm among younger users. A consultation with technology firms and child safety groups is expected to shape the next phase of regulation.
The reforms are tied to updates to the Online Safety Act, including proposals such as “Jools’ Law,” which would require platforms to preserve children’s data in cases involving deaths linked to online harm. Advocates say the provision could help families access information in investigations and improve accountability.
The announcement reflects mounting political pressure to move faster on digital safety as lawmakers respond to rising concerns about AI-generated material and youth exposure online. Critics, however, argue the government risks moving too slowly, with some opposition voices calling for outright bans on under-16 social media access.
The U.K. push also aligns with similar moves across the world. Countries including Australia have already introduced strict age-based limits for social platforms, and regulators worldwide are increasingly scrutinizing AI tools alongside traditional tech firms.
If implemented, the new rules would mark one of the first major attempts to formally regulate AI chatbots under an existing online safety framework.
