Our aim with the Sora feed is simple: help people learn what’s possible, and inspire them to create. Here are some of our core starting principles for bringing this vision to life:
- Optimize for creativity. We’re designing ranking to favor creativity and active participation, not passive scrolling. We think this is what makes Sora joyful to use.
- Put users in control. The feed ships with steerable ranking, so you can tell the algorithm exactly what you’re in the mood for. Parents can also turn off feed personalization and control continuous scroll for their teens through ChatGPT parental controls.
- Prioritize connection. We want Sora to help people strengthen and form new connections, especially through fun, magical Cameo flows. Connected content will be favored over global, unconnected content.
- Balance safety and freedom. The feed is designed to be widely accessible and safe. Robust guardrails aim to prevent unsafe or harmful generations from the start, and we block content that may violate our Usage Policies. At the same time, we also want to leave room for expression, creativity, and community.

We know recommendation systems are living, breathing things. As we learn from real use, we’ll adjust the details in service of these principles.
Our recommendation algorithms are designed to give you personalized recommendations that inspire you and others to be creative. Each individual has unique interests and tastes so we’ve built a personalized system to best serve this mission.
To personalize your Sora Feed, we may consider signals like:
- Your activity on Sora: This may include your posts, followed accounts, liked and commented posts, and remixed content. It may also include the general location (such as the city) from which your device accesses Sora, based on information like your IP address.
- Your ChatGPT data: We may consider your ChatGPT history, but you can always turn this off in Sora’s Data Controls, within Settings.
- Content engagement signals: This may include signals such as views, likes, comments, instructions to “see less content like this,” and remixes.
- Author signals: This may include follower count, other posts, and past post engagement.
- Safety signals: Whether the post is considered violative or otherwise inappropriate.
We may use these signals to predict whether content is something you may like to see and riff on.
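To make this concrete, here is a simplified, hypothetical sketch of how signals like these might be combined into a single ranking score. The signal names, weights, and scoring formula are illustrative assumptions for explanation only, not Sora’s actual implementation:

```python
from dataclasses import dataclass

# Hypothetical per-post signal bundle. All fields and weights below are
# illustrative stand-ins, not the real system's features.
@dataclass
class PostSignals:
    predicted_like: float    # 0..1: chance the viewer likes the post
    predicted_remix: float   # 0..1: chance the viewer remixes (riffs on) it
    author_affinity: float   # 0..1: e.g. followed or cameo-connected author
    see_less_penalty: float  # 0..1: from "see less content like this"
    safety_ok: bool          # passed policy / appropriateness checks

def rank_score(s: PostSignals) -> float:
    """Combine signals into one score; ineligible posts score zero."""
    if not s.safety_ok:
        return 0.0
    # Weighting remixes heavily reflects "optimize for creativity":
    # active participation counts more than passive viewing.
    score = (0.40 * s.predicted_like
             + 0.35 * s.predicted_remix
             + 0.25 * s.author_affinity)
    # User steering ("see less like this") scales the score down.
    return score * (1.0 - s.see_less_penalty)

posts = [
    PostSignals(0.9, 0.2, 0.1, 0.0, True),
    PostSignals(0.5, 0.8, 0.9, 0.0, True),
    PostSignals(0.99, 0.99, 0.99, 0.0, False),  # filtered: fails safety check
]
ranked = sorted(posts, key=rank_score, reverse=True)
```

In this sketch, the second post outranks the first despite a lower predicted like rate, because remix likelihood and connection to the author are weighted toward participation over passive scrolling, and a post that fails the safety check never ranks at all.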
Parents are also able to turn off feed personalization and manage continuous scroll for their teens using parental controls in ChatGPT.
Keeping the Sora Feed safe and fun for everyone means walking a careful line: protect users from harmful content, while leaving enough freedom for creativity to thrive.
We may remove content that violates our Global Usage Policies. Additionally, content deemed inappropriate for users may be removed from Feed and other sharing platforms (such as user galleries and side characters) in accordance with our Sora Distribution Guidelines. This includes:
- Graphic sexual content;
- Graphic violence or content promoting violence;
- Extremist propaganda;
- Hateful content;
- Content that promotes or depicts self-harm or disordered eating;
- Unhealthy dieting or exercise behaviors;
- Appearance-based critiques or comparisons;
- Bullying content;
- Dangerous challenges likely to be imitated by minors;
- Content glorifying depression;
- Promotion of age-restricted goods or activities, including illegal drugs or harmful substances;
- Low-quality content where the primary purpose is engagement bait;
- Content that recreates the likeness of living individuals without their consent, or of deceased public figures in contexts where their likeness is not permitted for use; and
- Content that may infringe on the intellectual property rights of others.
Our first layer of defense is at the point of creation. Because every post is generated within Sora, we can build in strong guardrails that prevent unsafe or harmful content before it’s made. If a generation bypasses these guardrails, we may remove that content from sharing.
Beyond generation, the feed is designed to be appropriate for all Sora users. Content that may be harmful, unsafe, or age-inappropriate is filtered out for teen accounts. We use automated tools to scan all feed content for compliance with our Global Usage Policies and feed eligibility. These systems are continuously updated as we learn more about new risks.
We complement this with human review. Our team monitors user reports and proactively checks feed activity to catch what automation may miss. If you see something you think does not follow our Usage Policies, you can report it.
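The layered approach described above, generation-time guardrails, automated scanning of feed content, then human review of reports, can be sketched roughly as the pipeline below. The function names, blocklist, and thresholds are hypothetical illustrations, not the actual moderation stack:

```python
from typing import Optional

# Hypothetical three-layer moderation pipeline; names and thresholds are
# illustrative stand-ins, not Sora's real implementation.

def generation_guardrail(prompt: str) -> bool:
    """Layer 1: block clearly unsafe requests before anything is generated."""
    blocked_terms = {"extremist", "graphic violence"}  # stand-in blocklist
    return not any(term in prompt.lower() for term in blocked_terms)

def automated_scan(content: str) -> bool:
    """Layer 2: scan generated content for feed eligibility."""
    # Stand-in for a classifier that would return a risk score in [0, 1].
    risk = 0.9 if "engagement bait" in content.lower() else 0.1
    return risk < 0.5

def moderate(prompt: str, content: str,
             report: Optional[str] = None) -> str:
    """Run the layers in order; each layer only sees what earlier ones pass."""
    if not generation_guardrail(prompt):
        return "blocked_at_generation"
    if not automated_scan(content):
        return "removed_from_feed"
    if report is not None:
        return "queued_for_human_review"  # Layer 3: user reports
    return "eligible"
```

The design point the sketch illustrates is that the layers are ordered by cost and reach: cheap generation-time checks stop the worst content from ever existing, automated scans cover everything that is generated, and scarcer human attention is reserved for reports and spot checks that automation may miss.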
But safety isn’t only about strict filters. Too many restrictions can stifle creativity, while too much freedom can undermine trust. We aim for a balance: proactive guardrails where the risks are highest, combined with a reactive “report + takedown” system that gives users room to explore and create while ensuring we can act quickly when problems arise. This approach has served us well in ChatGPT’s 4o image generation model, and we’re building on that philosophy here.
We also know we won’t get this balance perfect from day one. Recommendation systems and safety models are living, evolving systems, and your feedback will be essential in helping us refine them. We look forward to learning together and improving over time.