OpenAI, the maker of ChatGPT, unveiled its latest AI model, GPT‑5.5, on Thursday, claiming the model outperforms previous frontier models with gains in agentic coding, document-related work and early scientific research, while consuming fewer tokens.
According to OpenAI, GPT‑5.5 is its “strongest coding model” and “smartest AI model” to date. The company has rolled it out to Plus, Pro, Business and Enterprise users in ChatGPT and Codex, while GPT‑5.5 Pro is rolling out to Pro, Business and Enterprise users in ChatGPT. OpenAI said that GPT‑5.5 “excels” at writing and debugging code, creating documents and spreadsheets, operating software and moving across tools until a task is finished.
“In ChatGPT, GPT‑5.5 Thinking unlocks faster help for harder problems, with smarter and more concise answers to help you move through complex work more efficiently. It excels at professional work like coding, research, information synthesis and analysis, and document-heavy tasks, especially when using plugins. Because the model is better at understanding intent, it can move more naturally through the full loop of knowledge work: finding information, understanding what matters, using tools, checking the output, and turning raw material into something useful,” read a statement from OpenAI.
The company claims that GPT‑5.5 matches GPT‑5.4's per-token latency in real-world serving while operating at a much higher level of intelligence.
“On Terminal-Bench 2.0, which tests complex command-line workflows requiring planning, iteration, and tool coordination, it achieves a state-of-the-art accuracy of 82.7%. On SWE-Bench Pro, which evaluates real-world GitHub issue resolution, it reaches 58.6%, solving more tasks end-to-end in a single pass than previous models. On Expert-SWE, our internal frontier eval for long-horizon coding tasks with a median estimated human completion time of 20 hours, GPT‑5.5 also outperforms GPT‑5.4,” read the statement. The company also claimed that GPT‑5.5 uses significantly fewer tokens than its previous models to complete the same Codex tasks, making it more efficient as well as more capable.