An OpenAI safety research lead departed for Anthropic

Andrea Vallone joins Anthropic’s alignment team

Andrea Vallone, who previously led a safety research team at OpenAI, has departed the company to join Anthropic. She will work under Jan Leike, who himself left OpenAI in May 2024, citing concerns that the company was prioritizing product development over safety processes.

Background on OpenAI’s safety research

Vallone headed OpenAI's model policy team within safety research, focusing on issues such as how chatbots respond when users show signs of mental health struggles. That work helped shape the safety protocols governing OpenAI's large language models.

Anthropic’s alignment team

At Anthropic, Vallone will join the alignment team, which works to ensure that AI systems behave safely and in accordance with human values. Her move reflects a broader pattern of talent flowing between leading AI research labs.

Industry context

The AI research sector has become a revolving door for senior leadership, with key figures frequently shifting between organizations. The pattern underscores ongoing debates about how to balance innovation against safety in AI development.

Source: https://www.theverge.com/ai-artificial-intelligence/862402/openai-safety-lead-model-policy-departs-for-anthropic-alignment-andrea-vallone