OpenAI’s mission is to ensure AGI benefits all of humanity, and to fulfill this mission we need to meet people where they are all over the world.
AI is increasingly recognized as critical national infrastructure, on a par with electricity. Governments and institutions around the world want to ensure their citizens and economies can benefit from the AI era by having access to the most capable systems available.
For AI to deliver on that promise, it also needs to be locally relevant. That means speaking in local languages and with local accents, respecting local laws, and reflecting cultural norms and values.
Only a small number of countries, however, are in a position to develop frontier AI models themselves. For most, the challenge is not how to build a model from scratch, but how to adapt the best available AI so it works for their specific context. This is something we consistently hear from governments around the world: they want sovereign AI they can build with us, not just systems translated into their language.
Through our OpenAI for Countries initiative, we have been exploring how localization could work in practice. The goal is to allow for localized AI systems, while still benefiting from a global, frontier-level model.
We are currently piloting a localized version of ChatGPT for students in Estonia as part of our ChatGPT Edu work, incorporating local curricula and pedagogical approaches. We are also exploring pilot localization efforts with other countries. As part of our commitment to transparency in how AI is researched and deployed, we are sharing more detail on how localization works.
Our Model Spec is a public document that sets out how we intend our models to behave. We train our models to follow the Spec, and continuously refine it via a collaborative, whole-of-OpenAI process that incorporates what our teams are hearing from people around the world. The Spec covers the full range of ways our models are used, from ChatGPT, to experiences developers build on our platform, to other contexts. These rules, which apply everywhere our models are deployed, define clear boundaries on what can and cannot be changed and our commitment to be transparent about changes.
The Model Spec includes “red-line principles” that apply to all deployments, including those under the OpenAI for Countries program. In them, we emphasize that “human safety and human rights are paramount to OpenAI’s mission,” and make clear that:
- We will not allow our models to enable severe harms such as acts of violence, weapons of mass destruction, terrorism, persecution or mass surveillance.
- We will not allow our models to be used for targeted or scaled exclusion, manipulation, undermining of human autonomy, or erosion of participation in civic processes.
- We are committed to safeguarding individuals’ privacy in their interactions with AI.
When OpenAI provides a first-party experience directly to consumers, such as ChatGPT, we also commit that through it:
- People should have easy access to trustworthy safety-critical information from our models.
- Customization, personalization, and localization will not override the binding rules throughout the Model Spec. This includes the objective point-of-view principle, meaning localization may affect language or tone, but it cannot change the substance or balance of facts presented.
- People should have transparency into the important rules and reasons behind our models’ behavior. For example, any content omitted due to legal requirements will be transparently indicated to the user in each model response, specifying the type of information removed and the rationale for its removal, without disclosing the redacted content itself. Similarly, any information added will be transparently identified.
As we explore localized, sovereign AI through OpenAI for Countries, we are committed to continuing to share what we learn and to evolving our approach transparently.