An OpenAI safety research lead departed for Anthropic

Andrea Vallone joins Anthropic’s alignment team

Andrea Vallone, who previously led OpenAI’s safety research team, has left the company to join Anthropic. She will report to Jan Leike, who himself departed OpenAI in May 2024, citing concerns that the company prioritized product development over safety processes.

Background on OpenAI’s safety research

At OpenAI, Vallone led research on model policy and safety, with a particular focus on user mental health during AI interactions. Her work centered on ensuring that AI systems respond appropriately when users show signs of distress.

Anthropic’s alignment team

At Anthropic, Vallone will join the alignment team, which aims to develop AI systems that are safe, reliable, and aligned with human values. This move reflects the ongoing talent movement within the AI research community.

Source: https://www.theverge.com/ai-artificial-intelligence/862402/openai-safety-lead-model-policy-departs-for-anthropic-alignment-andrea-vallone