Skip to main content
OpenAI

December 1, 2025

CompanySafety

Funding grants for new research into AI and mental health

Introducing a new program to award up to $2 million to support independent safety and well-being research.

Loading…
Update January 28, 2026:

Grant applications are now closed. We were excited and encouraged to receive more than 1,000 high-quality entries from both established and emerging researchers from around the world, making this one of our largest calls for research grants to date. Each submission was carefully reviewed by our team of experts, and we have notified all applicants whose proposals are being funded. The depth and creativity of the proposals reflect the growing momentum in this field, and given the high volume of interest in the program we are actively exploring ways to expand and build on this work in the future.

We’re announcing a call for applications to fund research proposals that explore the intersection of AI and mental health. As AI becomes more capable and ubiquitous, we know that people will increasingly use it in more personal areas of their lives.

We continue to strengthen how our models recognize and respond to signs of mental and emotional distress. Working closely with leading experts, we’ve trained our models to respond more appropriately during sensitive conversations and have shared detailed updates for how those improvements are performing. While we’ve made meaningful progress on our own models and interventions, this remains an emerging area of research across the industry. 

As part of our broader safety investments, we are opening a call for research submissions to support independent researchers outside of OpenAI, helping to spark new ideas, deepen understanding, and accelerate innovation across the ecosystem. These grants are designed to support foundational work that strengthens both our own safety efforts, and the wider field. 

We’ve done research in some of these areas including Investigating Affective Use and Emotional Well-being on ChatGPT(opens in a new window) and Healthbench, and are focused on deepening our understanding to inform our safety and well-being work.

We believe that continuing to support independent research on AI and mental health will help improve our collective understanding of this emerging field and help fulfill our mission to ensure that AGI benefits all of humanity.

What we’re funding 

We're seeking research project proposals that deepen our understanding of the overlap of AI and mental health—both the potential risks and benefits—and help build a safer, more helpful AI ecosystem for everyone. We are particularly interested in interdisciplinary research that combines technical researchers with either mental health experts and those with lived experience. 

Successful projects will produce clear deliverables (datasets, evals, rubrics) or generate actionable insights (like synthesized views from people with lived experience, descriptions of how mental health symptoms manifest in a specific culture, research on language and slang used to discuss mental health topics that classifiers may miss) that can inform OpenAI’s safety work and the AI and mental health community overall.

How to apply

Submissions are open today through December 19, 2025. A panel of internal researchers and experts will review applications on a rolling basis and notify selected proposals on or before January 15th, 2026. Follow this link to apply(opens in a new window).

FAQ

We present these potential topics of exploration as examples, but this is not meant to be a comprehensive list of all potential research directions. Successful proposals can pertain to topics that are not included on this list. 

Potential areas of interest include:

  • How expressions of distress, delusion, or other mental health-related language vary across cultures and languages, and how these differences affect detection or interpretation by AI systems

  • Perspectives from individuals with lived experience on what feels safe, supportive, or harmful when interacting with AI-powered chatbots

  • How mental healthcare providers currently use AI tools, including what is effective, what falls short, and where safety risks emerge

  • The potential of AI systems to promote healthy, pro-social behaviors and reduce harm

  • The robustness of existing AI model safeguards to vernacular, slang, and under-represented linguistic patterns—particularly in low-resource languages

  • How AI systems should adjust tone, style, and framing when responding to youth and adolescents to ensure that guidance feels age-appropriate, respectful, and accessible, with deliverables such as evaluation rubrics, style guidelines, or annotated examples of effective vs. ineffective phrasing across age groups

  • How stigma associated with mental illness may surface in language model recommendations or interaction styles

  • How AI systems interpret or respond to visual indicators related to body dysmorphia or eating disorders, including the creation of ethically collected, annotated multimodal datasets and evaluation tasks that capture common real-world patterns of distress

  • How AI systems can provide compassionate, sensitive support to individuals experiencing grief -- helping them process loss, maintain connections, and access coping resources -- along with deliverables such as exemplar response patterns, tone/style guidelines, or evaluation rubrics for assessing supportive grief-related interactions