
Safety

Safety at every step

We start by teaching our AI right from wrong, filtering harmful content and responding with empathy.

We conduct internal evaluations and work with experts to test real-world scenarios, enhancing our safeguards.

We use real-world feedback to help make our AI safer and more helpful.

Safety doesn’t stop

Building safe AI isn’t one and done. Every day is a chance to make things better, and every step helps us anticipate, evaluate, and prevent risk.

How we think about safety and alignment

Leading the way in safety

We collaborate with industry leaders and policymakers on the issues that matter most.

Conversations with OpenAI researchers

Get inside OpenAI with our series that breaks down a range of topics in safety and beyond.