Introducing GPT-5.1 for developers


Today we’re releasing GPT‑5.1 in the API platform, the next model in the GPT‑5 series that balances intelligence and speed for a wide range of agentic and coding tasks. GPT‑5.1 dynamically adapts how much time it spends thinking based on the complexity of the task, making the model significantly faster and more token-efficient on simpler everyday tasks. The model also features a “no reasoning” mode that responds faster on tasks that don’t require deep thinking, while maintaining GPT‑5.1’s frontier intelligence.

To make GPT‑5.1 even more efficient, we’re releasing extended prompt caching with up to 24-hour cache retention, driving faster responses for follow-up questions at a lower cost. Our Priority Processing customers will also see noticeably faster performance with GPT‑5.1 than with GPT‑5.

On coding, we’ve worked closely with startups like Cursor, Cognition, Augment Code, Factory, and Warp to improve GPT‑5.1’s coding personality, steerability, and code quality. In general, GPT‑5.1 feels more intuitive to use for coding and more communicative with user-facing updates as it completes tasks.

Finally, we’re introducing two new tools with GPT‑5.1: an `apply_patch` tool designed to edit code more reliably and a shell tool to let the model run shell commands.

GPT‑5.1 is the next advancement in the GPT‑5 series, and we plan to continue to invest in more intelligent and capable models to help developers build reliable agentic workflows.

## Efficient reasoning across tasks

#### Adaptive reasoning

To make GPT‑5.1 faster, we overhauled the way we trained it to think. On straightforward tasks, GPT‑5.1 spends fewer tokens thinking, enabling snappier product experiences and lower token bills. On difficult tasks that require extra thinking, GPT‑5.1 remains persistent, exploring options and checking its work in order to maximize reliability.

Balyasny Asset Management said GPT‑5.1 “outperformed both GPT‑4.1 and GPT‑5 in our full dynamic evaluation suite, while running 2-3x faster than GPT‑5,” and that across their tool-heavy reasoning tasks it “consistently used about half as many tokens as leading competitors at similar or better quality.” AI insurance BPO Pace also tested the model and said their agents run “50% faster on GPT‑5.1 while exceeding accuracy of GPT‑5 and other leading models across our evals.”

_GPT‑5.1 varies its thinking time more dynamically than GPT‑5. On a representative distribution of ChatGPT tasks, GPT‑5.1 is much faster at the easier tasks, even at high reasoning effort._

As an example, when asked "show an npm command to list globally installed packages", GPT‑5.1 answers in 2 seconds instead of 10 seconds.

**GPT‑5 (medium): ~250 tokens (~10 seconds)**

> show an npm command to list globally installed packages

`npm list -g --depth=0`

**GPT‑5.1 (medium): ~50 tokens (~2 seconds)**

> show an npm command to list globally installed packages

You can list globally installed npm packages with `npm list -g --depth=0`.

#### New “no reasoning” mode

Developers can now use GPT‑5.1 without reasoning by setting `reasoning_effort` to 'none'. This makes the model behave like a non-reasoning model for latency-sensitive use cases, while keeping the high intelligence of GPT‑5.1 and adding performant tool calling. Relative to GPT‑5 with 'minimal' reasoning, GPT‑5.1 with no reasoning is better at parallel tool calling (which itself increases end-to-end task completion speed), coding tasks, following instructions, and using search tools, and it supports web search in our API platform. Sierra shared that GPT‑5.1 in “no reasoning” mode showed a “20% improvement on low-latency tool calling performance compared to GPT‑5 minimal reasoning” in their real-world evals.

With the introduction of 'none' as a value for `reasoning_effort`, developers now have even more flexibility and control over the balance between speed, cost, and intelligence for their use case. GPT‑5.1 defaults to 'none', which is ideal for latency-sensitive workloads. We recommend choosing 'low' or 'medium' for tasks of higher complexity, and 'high' when intelligence and reliability matter more than speed.
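As a sketch of how a request might select an effort level, here is a small helper that builds request parameters. This is illustrative only: the 'none' value comes from this post, but the exact parameter shape (`reasoning_effort` in Chat Completions vs. a `reasoning` object in the Responses API) may differ in the SDK you use.

```python
# Sketch only: parameter names follow this post; check the developer
# documentation for the exact Responses vs. Chat Completions shapes.

def build_request(prompt: str, effort: str = "none") -> dict:
    """Build Responses API parameters; 'none' skips reasoning for low latency."""
    allowed = {"none", "low", "medium", "high"}
    if effort not in allowed:
        raise ValueError(f"reasoning effort must be one of {sorted(allowed)}")
    return {
        "model": "gpt-5.1",
        "input": prompt,
        "reasoning": {"effort": effort},
    }

params = build_request("show an npm command to list globally installed packages")
# from openai import OpenAI
# response = OpenAI().responses.create(**params)  # live call; needs an API key
```

The live call is commented out so the snippet stays runnable offline; swap in 'high' for tasks where reliability matters more than latency.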

#### Extended prompt caching

Extended caching improves reasoning efficiency by allowing prompts to remain active in the cache for up to 24 hours, rather than the few minutes supported today. With a longer retention window, more follow-up requests can leverage cached context—resulting in lower latency, reduced cost, and smoother performance for long-running interactions such as multi-turn chat, coding sessions, or knowledge retrieval workflows.

Prompt cache pricing remains unchanged, with cached input tokens 90% cheaper than uncached tokens, and no additional charge for cache writes or storage. To use extended caching with GPT‑5.1, add the parameter `prompt_cache_retention="24h"` in the Responses or Chat Completions API. See the prompt caching docs for more detail.
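A minimal sketch of opting in, assuming the parameter name from this post (the message shapes are illustrative). Prompt caching matches on prefixes, so keeping the long, stable system prompt first and varying only the trailing user turn maximizes cache hits across follow-up requests:

```python
# Sketch: 24-hour prompt cache retention. Parameter name from this post;
# message shapes are illustrative, not an exact API schema.

SYSTEM_PROMPT = "You are a code-review assistant for our monorepo."  # long, stable prefix

def cached_request(user_message: str) -> dict:
    return {
        "model": "gpt-5.1",
        "prompt_cache_retention": "24h",  # retain the cached prefix up to 24 hours
        "input": [
            {"role": "system", "content": SYSTEM_PROMPT},  # shared, cacheable
            {"role": "user", "content": user_message},     # varies per request
        ],
    }
```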

## Coding improvements

GPT‑5.1 builds on GPT‑5’s coding capabilities with a more steerable coding personality, less overthinking, improved code quality, better user-facing update messages (preambles) during sequences of tool calls, and more functional frontend designs, especially at low reasoning effort.

On simpler coding tasks like quick code edits, GPT‑5.1’s faster responses make it easier to iterate back and forth, and that speed on simple tasks doesn’t degrade performance on difficult ones. On SWE-bench Verified, GPT‑5.1 works even longer than GPT‑5 and reaches 76.3%.

_In SWE-bench Verified, a model is given a code repository and issue description, and must generate a patch to solve the issue. Labels indicate reasoning effort. Accuracy is averaged across all 500 problems. All models used a harness with a JSON-based apply\_patch tool._

We got early feedback on GPT‑5.1 from a handful of coding companies. Here are their impressions:

> "GPT 5.1 isn’t just another LLM—it’s genuinely agentic, the most naturally autonomous model I’ve ever tested. It writes like you, codes like you, effortlessly follows complex instructions, and excels in front-end tasks, fitting neatly into your existing codebase. You can really unlock its full potential in the Responses API and we're excited to offer it in our IDE."

—Denis Shiryaev, Head of AI DevTools Ecosystem, JetBrains

## New tools in GPT‑5.1

We’re introducing two new tools with GPT‑5.1 to help developers get the most out of the model in the Responses API: a freeform `apply_patch` tool that makes code edits even more reliable without the need for JSON escaping, and a `shell` tool that lets the model write commands to run on your local machine.

#### `apply_patch` tool

The freeform `apply_patch` tool lets GPT‑5.1 create, update, and delete files in a codebase using structured diffs. Instead of just suggesting edits, the model emits patch operations that an application applies and reports back on, enabling iterative, multi-step code editing workflows.

To use the `apply_patch` tool in the Responses API, include it in the tools array with `"tools": [{"type": "apply_patch"}]`, and either include file content in your input or give the model tools for interacting with your file system. The model will generate `apply_patch_call` items containing diffs that create, update, or delete files, which you apply on your file system. For more information on integrating the `apply_patch` tool, check out our developer documentation.
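The apply-and-report-back loop above can be sketched as follows. This is a deliberate simplification: the real tool emits structured diffs, whereas this handler takes full file contents, and the field names (`operation`, `path`, `content`) are illustrative rather than the API’s actual schema.

```python
# Simplified sketch of handling `apply_patch_call` items. Field names are
# illustrative; the real tool carries structured diffs, not full contents.

import pathlib

def handle_apply_patch_call(call: dict, root: pathlib.Path) -> dict:
    """Apply one create/update/delete operation and report the result back."""
    op = call["operation"]
    path = root / call["path"]
    if op in ("create", "update"):
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(call["content"])
        status = "applied"
    elif op == "delete":
        path.unlink(missing_ok=True)
        status = "applied"
    else:
        status = f"rejected: unknown operation {op!r}"
    # Reporting the outcome lets the model continue a multi-step editing loop.
    return {"type": "apply_patch_call_output", "call_id": call.get("id"), "status": status}
```

Restricting `path` to stay inside `root` (e.g. rejecting `..` segments) would be essential in a real integration.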

#### Shell tool

The shell tool allows the model to interact with a local computer through a controlled command-line interface. The model proposes shell commands; a developer’s integration executes them and returns the outputs. This creates a simple plan-execute loop that lets models inspect the system, run utilities, and gather data until they can finish the task.

To use the shell tool in the Responses API, include it in the tools array with `"tools": [{"type": "shell"}]`. The API will generate `shell_call` items that include the shell commands to execute. Developers execute the commands in the local environment and pass back the results in a `shell_call_output` item in the next API request. Learn more in our developer documentation.
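One iteration of that plan-execute loop might look like the sketch below. The item field names here are illustrative, not the exact API schema; and since the model chooses the commands, a real integration should sandbox or allow-list them before executing anything.

```python
# Sketch of executing one model-proposed shell command and packaging the
# result for the next API request. Field names are illustrative.

import subprocess

def execute_shell_call(call: dict, timeout: float = 30.0) -> dict:
    """Run one proposed command and return its output for the follow-up turn."""
    result = subprocess.run(
        call["command"],            # e.g. ["ls", "-la"]
        capture_output=True,
        text=True,
        timeout=timeout,            # bound runaway commands
    )
    return {
        "type": "shell_call_output",
        "call_id": call.get("id"),
        "output": result.stdout,
        "stderr": result.stderr,
        "exit_code": result.returncode,
    }
```

Feeding the `exit_code` and `stderr` back alongside `stdout` lets the model recover from failed commands instead of stalling.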

## Pricing and availability

GPT‑5.1 and `gpt-5.1-chat-latest` are available to developers on all paid tiers in the API. Pricing and rate limits are the same as for GPT‑5. We’re also releasing `gpt-5.1-codex` and `gpt-5.1-codex-mini` in the API. While GPT‑5.1 excels at most coding tasks, the `gpt-5.1-codex` models are optimized for long-running, agentic coding tasks in Codex or Codex-like harnesses.

Developers can start building using our GPT‑5.1 developer documentation and model prompting guide. We don’t currently plan to deprecate GPT‑5 in the API and will give developers advance notice if and when we decide to do so.

We’re committed to iteratively deploying the most capable, reliable models for real agentic and coding work—models that think efficiently, iterate quickly, and handle complex tasks while keeping developers in flow. With adaptive reasoning, stronger coding performance, clearer user-facing updates, and new tools like `apply_patch` and `shell`, GPT‑5.1 is designed to help you build with less friction. And we’re continuing to invest heavily here: you can expect more capable agentic and coding models in the weeks and months ahead.

## Appendix: Model evaluations

| Evaluation | GPT‑5.1 (high) | GPT‑5 (high) |
| --- | --- | --- |
| SWE-bench Verified (all 500 problems) | 76.3% | 72.8% |
| GPQA Diamond (no tools) | 88.1% | 85.7% |
| AIME 2025 (no tools) | 94.0% | 94.6% |
| FrontierMath (with Python tool) | 26.7% | 26.3% |
| MMMU | 85.4% | 84.2% |
| Tau 2-bench Airline | 67.0% | 62.6% |
| Tau 2-bench Telecom* | 95.6% | 96.7% |
| Tau 2-bench Retail | 77.9% | 81.1% |
| BrowseComp Long Context 128k | 90.0% | 90.0% |

_\* For Tau 2-bench Telecom, we gave GPT‑5.1 a short, generically helpful prompt to improve its performance._


Originally published on OpenAI News.