
OpenAI head of AI safety defects to rival Anthropic

Jan Leike, who co-led OpenAI’s superalignment team, has joined rival AI company Anthropic. The move comes after Leike left OpenAI earlier in May, publicly criticizing the company’s safety practices on his way out. At Anthropic, he will lead a team focused on scalable oversight, weak-to-strong generalization, and automated alignment research.

This development is noteworthy as it underscores ongoing tensions within OpenAI regarding the safe deployment of AI technologies. The company has recently attempted to address these issues by announcing a new safety committee that reports directly to its board.

Meanwhile, Anthropic, founded by former OpenAI employees who were concerned about the company’s trajectory after its partnership with Microsoft, is gaining attention for its commitment to responsible AI development. The company, backed by tech giants including Amazon and Google, positions its Claude chatbot as providing more accountable responses to user prompts.

Leike’s move highlights a broader debate in the AI community about the direction and ethical implications of advanced AI development. As AI companies continue to navigate these complex challenges, the industry’s approach to safety and ethical considerations remains a critical issue.
