The UK Online Safety Act
OpenAI is committed to complying with the UK Online Safety Act, which imposes duties on relevant service providers relating to illegal content and content harmful to children. We work hard to promote responsible use of our products and keep our users safe.
How we protect users
We aim to balance delivering helpful and accessible information to all users while mitigating the risks of online harm. We use a range of procedures and tools to protect users from illegal and harmful content.
Illegal content
Our approach includes measures to prevent, detect, respond to, and take enforcement action against illegal content, including terrorism content, child sexual exploitation and abuse content, and other illegal content.
We aim to review and remove illegal content as swiftly as possible when we become aware of it, whether via our own proactive detection methods or from reports from third parties, including our users. This helps to prevent users from encountering such content and minimises the length of time illegal content is present on the service.
More information on our moderation and enforcement processes is set out on our Transparency & Content Moderation page. Please see our Reporting content page for details about how you can report content, including illegal content, on our services.
Harmful content
We aim to provide a safe online experience for all our users and take action to protect all users (including users under 18) from harmful content that violates our policies. This includes content the UK Online Safety Act recognises as content harmful to children*.
When we become aware of such content, we take appropriate action, balancing the importance of protecting our users and ensuring they have access to information. For example:
- On Sora: Our policies prohibit users from sharing videos and images on Sora’s public feeds that may be harmful to other users on the service. This includes content the UK Online Safety Act recognises as content harmful to children*. (Please see our policy on Creating images and videos for further detail.) We enforce these policies to help prevent violative videos and images from appearing on Sora’s public feeds, and when we identify violations we aim to take swift action to remove such content. If you think you have encountered such content on Sora, please report it to us and we will investigate.
- On ChatGPT Search: We take action to protect users from encountering harmful content via search results, while aiming to ensure our users have access to the information most relevant to their search query. ChatGPT search is designed to refuse or safely complete queries that seek pornography, instructions or encouragement for suicide and self-harm behaviors (including behavior related to eating disorders), realistic graphic violence, and other content harmful to children. Safe completion redirects the user’s query to offer a safer alternative output that minimizes harm and is consistent with our terms and policies. ChatGPT search may also leverage third-party providers, such as Microsoft Bing, for the images it displays, and it uses the safe-search versions of those APIs, which are designed to block age-inappropriate content.
If you think you have encountered such content on our services, please report it to us and we will investigate. We are committed to user safety and will take action to help prevent violative URLs or responses from being provided to other users.
If you or someone you know has experienced serious harm online, third-party resources recommended by Ofcom are also available to you.
Our use of proactive technology
We use proactive technology to help prevent users from encountering illegal and harmful content on our services. This includes the use of model training and policies, content classifiers, reasoning models, hash-matching, blocklists, and other automated systems to identify content that may violate our terms or policies.
More information, including details of our moderation and enforcement processes, is set out on our Transparency & Content Moderation page.
Our compliance with the Online Safety Act
If you’re in the UK and think OpenAI isn't complying with its obligations under the UK Online Safety Act**, or has used proactive technology to moderate content in a way that is not compliant with our terms, you can report this to us via our UK Online Safety Act Reporting Form.
We will review your report and consider how your feedback may help us improve our processes. We aim to review reports within 10 business days, although more complex reports may take longer. We’ll follow up with you only if we need more information or have additional information to share with you.
Appealing content moderation decisions
If we take enforcement action based on your content or activity (including following our use of proactive technology), and you think we have made a mistake, you can report this to us and appeal our decision. Further information on how to appeal is set out on our Transparency & Content Moderation page.
We aim to review appeals promptly, though more complex cases may take longer.
* Under the UK Online Safety Act, content harmful to children includes pornography; content which encourages or promotes suicide, self-harm, or eating disorders; hate speech; bullying content; content which depicts realistic graphic violence; content which encourages or promotes serious violence against a person; dangerous challenges likely to result in serious injury; content that actively encourages taking physically harmful drugs or substances; and content that promotes body stigma and depression.
** Including our duties relating to: illegal content; content harmful to children; content reporting processes; and freedom of expression and privacy.