Site icon Tech Newsday

AI chatbots go haywire at security competition

In a conference room at Howard University, AI chatbots were put to the test by a team of hackers. The hackers were able to induce the chatbots to expose private medical information, cough up instructions for how to rob a bank, and speculate that a job candidate would have weaker interpersonal skills based on their name.

The hackers were participating in the first public “red teaming” event for AI language models. The Generative Red Team Challenge, hosted by Def Con’s AI Village, garnered endorsement from the White House.

Red teaming is a security practice in which experts attempt to find vulnerabilities in a system. In this case, the hackers were trying to find ways to exploit the AI chatbots.

This episode, which attracted a small gathering of students and AI enthusiasts from Washington, D.C., on July 19, offered a glimpse into a grander spectacle set to unfold at Def Con in Las Vegas.

The results of the red teaming event are a worrying sign for the future of AI. The chatbots were able to be tricked into generating harmful and discriminatory content. This suggests that AI chatbots are not yet ready for widespread use.

The organizers of the red teaming event are calling on AI developers to take steps to make their chatbots more secure. They are also calling on policymakers to regulate the development and use of AI.

The sources for this piece include an article in DataCenterkKowledge.

Exit mobile version