OpenAI B2B Signals
The AI advantage is beginning to compound.
Today we’re introducing B2B Signals, a business extension of OpenAI Signals that measures how AI is diffusing across organizations. It analyzes privacy-preserving usage patterns from Enterprise accounts to show how AI adoption is taking shape inside businesses.
The first phase of enterprise AI adoption was defined by access: which employees had access to AI tools, how many seats had been deployed, and whether employees were experimenting. Those questions still matter, but they no longer fully capture how AI is being used inside organizations. As adoption matures, the more important question is whether firms are using AI deeply enough to keep pace with the frontier.
Earlier this year, OpenAI highlighted a capability overhang across countries: some economies are putting today’s AI tools to work more deeply, while others are using only a fraction of what current systems can do. B2B Signals finds a similar pattern inside the enterprise.
Frontier firms—those operating at the 95th percentile of AI use—use more intelligence per worker, adopt advanced tools more intensively, and embed AI more deeply into workflows. The AI advantage is beginning to compound for some firms, and the difference increasingly comes from depth of use. B2B Signals tracks the behaviors and patterns that drive deeper use, giving organizations a clearer view of how to translate intelligence into business value.
Key Takeaways
- The AI advantage is starting to compound: Frontier firms now use 3.5x the intelligence per worker compared to typical firms, up from 2x a year ago.
- Frontier firms use AI more deeply, not just more often: Message volume explains only 36% of the gap between frontier and typical firms. The majority of the frontier advantage comes from deeper usage.
- Agentic workflows are becoming a marker of frontier adoption: The gap is largest in advanced agentic tools, with frontier firms sending 16x as many Codex messages as typical firms.
- Firms can close the frontier gap through organizational change: To catch up, firms need to measure depth of use, prioritize governance, invest in enablement, scale what works, and move from chat-based assistance to delegated work with agents.
Depth
The AI advantage is starting to compound, and firms using AI most deeply are increasing their lead
Seat deployment is only the starting point for enterprises. The clearer signal is whether employees are using AI for deeper, more complex work. This chart compares tokens generated per worker at the frontier, defined as the 95th percentile, with the typical firm, defined as the 50th percentile.
Tokens are an imperfect measure of business value. A short response can be highly valuable, and a long response can be low value. But token volume helps measure how much work employees are asking AI to do, making it a useful proxy for the depth of AI use and the amount of intelligence employees demand from AI.
The frontier firm demands 3.5x as much intelligence per worker as the typical firm. This gap has increased from 2x in April 2025, suggesting that firms using AI most deeply are widening their lead and are better positioned to translate new AI capabilities into deeper, more complex work.
The majority of the frontier advantage comes from deeper usage, rather than higher message volume
The frontier firm demands substantially more intelligence per worker than the typical firm, but most of the gap is not explained by message volume alone. This chart decomposes the 3.5x frontier advantage and finds that if the typical firm sent messages at the same rate as the frontier firm, it would close only 36% of the gap.
The remaining gap is associated with deeper usage. Workers at the frontier ask AI to take on more complex work, provide models with richer context, and generate more substantive outputs.
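The decomposition above can be sketched with illustrative numbers. Assuming tokens per worker factor as messages per worker times tokens per message, and plugging in a hypothetical message-rate ratio of about 1.57x (an assumption for illustration; the report does not publish the underlying ratios), the share of the 3.5x gap attributable to message volume alone comes out near 36% when measured in log terms:

```python
import math

# Illustrative sketch of a multiplicative gap decomposition.
# tokens_per_worker = messages_per_worker * tokens_per_message,
# so the frontier/typical ratios multiply the same way.
total_ratio = 3.5      # tokens per worker, frontier vs. typical (from the report)
message_ratio = 1.57   # messages per worker, frontier vs. typical (ASSUMED)
depth_ratio = total_ratio / message_ratio  # implied tokens per message (~2.23x)

# In log terms, the share of the gap explained by message volume alone:
volume_share = math.log(message_ratio) / math.log(total_ratio)
print(f"message volume explains {volume_share:.0%} of the gap")   # → 36%
print(f"depth accounts for a {depth_ratio:.2f}x tokens-per-message factor")
```

Logs are the natural scale here because the two factors combine multiplicatively; on a linear scale the same split would not sum to the total gap.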
Breadth
The frontier advantage is largest in advanced agentic tools, demonstrated by a 16x Codex usage gap
The frontier advantage is largest for tools that support more advanced workflows. Codex shows the largest gap, with the frontier firm sending 16x as many messages per worker as the typical firm. ChatGPT Agent, Apps in ChatGPT, Deep Research, and GPTs also show relatively large gaps, suggesting that the frontier is better at leveraging tools that help workers code, delegate multi-step tasks, apply company context, and conduct more complex research.
By contrast, more general-purpose and accessible tools such as User Upload, Search, and Data Analysis show a smaller frontier advantage. These tools are easier for most firms to use because they extend familiar workflows. The frontier advantage is most pronounced in advanced and agentic tools, where adoption requires more expertise, connections to workplace knowledge and tools, and greater comfort delegating work to AI.
The largest frontier advantage is in education and learning
The frontier advantage is largest for education and learning tasks, where the frontier firm sends 7x as many messages per worker as the typical firm. At the frontier, firms use AI to help employees build skills and learn new topics. They also use AI to improve their understanding of AI itself, including what it can do, how to use it well, and where it can fit into existing workflows. The size of the gap suggests that the typical firm may underutilize AI as a tool for workforce learning and development.
Coding also shows a large frontier advantage, with the frontier firm sending 4x as many messages per worker as the typical firm. This is consistent with the broader gap in advanced and agentic tool use. How-to guidance and writing and communication have the smallest frontier gaps, likely because these tasks are more accessible and familiar uses of AI.
Closing the capability overhang requires enablement, not just access. OpenAI’s enterprise resources and OpenAI Academy include practical guides, training materials, and deployment resources to help teams adopt AI with confidence.
AI use is broadest in writing, but function-specific trends are emerging
Writing and communication remain the most common uses of ChatGPT. However, usage patterns vary by function and are often tied to each function’s core responsibilities. 60% of IT & Security messages are concentrated in how-to and procedural guidance, almost half of Software Development and Data Science & Engineering messages are related to coding, and a tenth of Finance messages are related to analysis and calculation.
These patterns are consistent with broader evidence that frontier models are improving on economically valuable workplace tasks. GDPval, an evaluation of real-world knowledge work across 44 occupations, measures performance on tasks that produce practical work outputs such as documents, spreadsheets, slides, diagrams, and multimedia. As AI becomes more capable, enterprise usage appears to be extending toward tasks that are more closely tied to each function’s core work.
Task type by business context
[Interactive chart: distribution of ChatGPT task types — writing & communication, how-to & procedural guidance, information, analysis & calculations, advice, creative media, commerce, coding, and education & learning — by business context.]
Reach
Industry leadership is not one-dimensional: different sectors lead across ChatGPT, Codex, and the API
There is no single AI adoption leaderboard. Industry rankings vary depending on the measure used. Professional, Scientific, and Technical Services ranks first in both Codex adoption and API intensity, indicating relatively advanced use in developer and product-integrated workflows. Finance and Insurance leads in ChatGPT adoption due to large-scale deployments, while Educational Services has the highest message intensity, suggesting deeper per-person usage. Retail Trade and Health Care rank highly in API intensity, despite lower rankings on other measures.
These differences suggest that industry leadership is not one-dimensional. Some sectors appear to be adopting AI through technical and developer workflows, while others are scaling through broad ChatGPT adoption or more intensive end-user usage.
Industry ranking by AI adoption metric
Each cell shows the industry's rank, with the change from the prior period in parentheses.

| Industry | ChatGPT adoption | Codex adoption | Message intensity | API intensity |
|---|---|---|---|---|
| Finance and Insurance | 1 (+1) | 10 (-4) | 3 (0) | 6 (0) |
| Information | 2 (-1) | 2 (0) | 2 (0) | 4 (-1) |
| Professional, Scientific, and Technical Services | 3 (0) | 1 (0) | 1 (0) | 1 (0) |
| Arts, Entertainment, and Recreation | 4 (0) | 4 (-1) | 5 (0) | 3 (+1) |
| Utilities | 5 (0) | 8 (0) | 9 (0) | 9 (0) |
| Construction | 6 (-1) | 5 (0) | 10 (-1) | 10 (-1) |
| Real Estate and Rental and Leasing | 7 (-1) | 7 (+1) | 11 (-1) | 8 (0) |
| Manufacturing | 8 (-1) | 3 (+1) | 4 (0) | 7 (0) |
| Health Care and Social Assistance | 9 (0) | 9 (0) | 6 (+1) | 5 (0) |
| Retail Trade | 10 (-2) | 11 (-1) | 7 (-1) | 2 (0) |
| Public Administration | 11 (-1) | 6 (+1) | 8 (0) | 11 (-1) |
Enterprises are moving API use into production workflows and customer-facing applications
Companies are increasingly using the API to integrate models directly into products, services, and internal systems. Common production use cases include in-app assistants, coding and developer tools, customer support, research workflows, and workflow automation.
These deployments show how enterprise AI is moving beyond experimentation into repeatable workflows with measurable operational impact. Across customer examples, firms are using OpenAI models to accelerate knowledge work, improve engineering throughput, and build AI-powered experiences for customers and employees.
Top API use cases by industry
Professional services
Knowledge assistants and search (e.g., Q&A tools, research assistants, internal knowledge assistants)
Customer and sales support (e.g., customer support, voice and chat agents, sales assistance)
Data analysis, summarization, and extraction (e.g., company data analysis, market intelligence, transaction labeling and reconciliation)
Coding and developer tools (e.g., model evaluation tools, coding assistants, workflow automation tools)
Finance and insurance
Data analysis, summarization, and extraction (e.g., data extraction, receipt and expense analysis, investment research)
Document and workflow generation (e.g., automated expense management, research-summary generation, workflow optimization)
Knowledge assistants and search (e.g., investment strategy assistants, policy search, role-specific assistants)
Customer and service support (e.g., customer support voice and chat agents, personal banking assistants, sentiment classification)
Information
Coding and developer tools (e.g., coding assistants, software testing tools, web automation tools)
Knowledge assistants and search (e.g., in-product assistants, internal search tools, documentation assistants)
Customer and service support (e.g., customer support voice and chat agents, multi-channel customer-service automation)
Content, media, and design generation (e.g., brand asset generation, marketing tools)
Cisco uses Codex to speed up complex software work across a large enterprise engineering organization. In production workflows, Codex helped reduce build times by about 20%, save 1,500+ engineering hours per month, and increase defect-resolution throughput by 10-15x. As Cisco’s team put it, the biggest gains came when they treated Codex as “part of the team.”
Rakuten deployed Codex across engineering operations and software delivery, reducing mean time to recovery by approximately 50%, and enabling teams to resolve production issues twice as fast. Rakuten also uses Codex for automated code review and vulnerability checks aligned to internal standards, helping accelerate releases without compromising security. On complex projects, Codex can turn partial requirements into working full-stack implementations, compressing timelines from quarters to weeks.
Balyasny Asset Management uses OpenAI to accelerate investment research across a large, specialized knowledge-work organization. Its proprietary AI research platform is used by roughly 95% of investment teams and helps compress research workflows from days to hours. For example, a central-bank speech analysis workflow that previously took two days now takes about 30 minutes, helping analysts reason faster across filings, transcripts, research reports, and market data.
Visit our customer stories page for more examples.
What organizations can do to reach the frontier
OpenAI works with enterprises across industries, functions, and stages of AI maturity, giving us visibility into how adoption develops from experimentation to production. Across these deployments, the firms making the most progress tend to focus less on access alone and more on the organizational systems needed to use AI deeply: measurement, governance, enablement, scaling impact, and agentic deployment.
Five practices stand out as practical steps any organization can start taking today to deepen AI adoption.
- Measure depth of use in addition to access. The relevant signal is not only how many employees have AI accounts, but whether teams are using AI more substantively over time. Organizations should track whether AI use is becoming more frequent, more complex, and more closely tied to valuable workflows.
- Build governance that makes agentic AI deployable. Leading firms are not avoiding governance. They are using it to make agentic AI more deployable. Firms need clear rules for where agents can operate, what information they can use, when they should advise rather than act, and how humans review higher-risk decisions. Frontier firms are defining these standards as part of the deployment process, so governance becomes a way to expand adoption safely rather than slow it down.
- Treat enablement as core infrastructure, not a side project. As AI capabilities improve, both workers and organizations need systems that help them keep pace. Frontier firms do not treat enablement as a one-time training push. They build continuous learning into deployment through role-specific training, use-case workshops, hackathons, internal champion networks, dedicated experimentation time, and shared repositories of workflows, best practices, and skills.
- Identify your frontier teams and scale their impact. In many organizations, the most advanced usage is concentrated in a small number of teams. Those teams can reveal which workflows, habits, and operating models are working. Leaders should identify these teams, understand and scale the conditions behind their success, and help them share insights and examples of deeper AI use with the rest of the firm.
- Move beyond chat to delegating work. Enterprise AI is shifting from chat assistants to work that can be delegated to agents. Software engineering illustrates this trend, but delegated work is spreading across functions. With Codex, engineers can hand off a defined task, give the agent the context it needs, let it work across files, codebases, and tools, then review the result and refine the workflow with feedback. Frontier firms are encouraging workers to delegate tasks to AI rather than simply using AI as a static assistant.
All analyses in this report are based on de-identified, aggregated enterprise usage data. Message content was classified using automated systems, and no OpenAI employee reviewed individual enterprise, business, or API customer data as part of this analysis.
If you’d like to explore the full findings or learn how to bring AI into your organization responsibly, we’d love to connect.
Discover more

OpenAI Signals
A hub for data, research, analysis, and stories from the OpenAI Economic Research and Global Affairs Teams.

Signals consumer data
Browse the data to see global consumer ChatGPT adoption patterns, geographic distribution, and work and non-work use.

Research and analysis
Research and analysis on how AI is being adopted and its impact on the economy and society.