OpenAI’s blog post claims that GPT-5 beats its previous models on several coding benchmarks, including SWE-Bench Verified (74.9 percent), SWE-Lancer (55 percent for GPT-5-thinking), and Aider Polyglot (88 percent). These benchmarks test the model’s ability to fix bugs, complete freelance-style coding tasks, and work across multiple programming languages.
During the press briefing on Wednesday, OpenAI post-training lead Yann Dubois prompted GPT-5 to “create a beautiful, highly interactive web app for my partner, an English speaker, to learn French.” He tasked the AI with including features like daily progress tracking and a variety of activities, such as flashcards and quizzes, and noted that he wanted the app wrapped up in a “highly engaging theme.” After a minute or so, the AI-generated app popped up. While it was just one on-rails demo, the result was a sleek site that delivered exactly what Dubois asked for.
“It’s a great coding collaborator, and also excels at agentic tasks,” Michelle Pokrass, a post-training lead, says. “It executes long chains and tool calls effectively [which means it better understands when and how to use functions like web browsers or external APIs], follows detailed instructions, and provides upfront explanations of its actions.”
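In practice, a tool call means the model replies with a structured request for the developer’s code to run, rather than plain text. The sketch below shows the shape of that exchange using OpenAI’s Python SDK; the get_weather function and its schema are invented for illustration, and the “gpt-5” model string is assumed here rather than confirmed API naming.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A hypothetical weather-lookup tool, described with a JSON schema so the
# model knows when and how to call it.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-5",  # assumed model name, for illustration only
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# When the model decides the tool is needed, it returns a structured tool
# call; the calling code executes the function and sends the result back
# in a follow-up message so the model can finish its answer.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```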
OpenAI also says in its blog post that GPT-5 is “our best model yet for health-related questions.” The system card (a document that describes the product’s technical capabilities and other research findings) states that GPT-5-thinking outperforms previous models “by a substantial margin” on three OpenAI health-related LLM benchmarks: HealthBench, HealthBench Hard, and HealthBench Consensus. The thinking version of GPT-5 scored 46.2 percent on HealthBench Hard, up from o3’s 31.6 percent. These scores are validated by two or more physicians, according to the system card.
The model also hallucinates less, according to Pokrass. Hallucination, a common issue for AI, is when a model confidently presents false information as fact. OpenAI’s safety research lead Alex Beutel adds that they’ve “significantly decreased the rates of deception in GPT-5.”
“We’ve taken steps to reduce GPT-5-thinking’s propensity to deceive, cheat, or hack problems, though our mitigations are not perfect and more research is needed,” the system card says. “In particular, we’ve trained the model to fail gracefully when posed with tasks that it cannot solve.”
The company’s system card says that after testing GPT-5 models without access to web browsing, researchers found its hallucination rate (which they defined as the “percentage of factual claims that contain minor or major errors”) to be 26 percent lower than GPT-4o’s. GPT-5-thinking’s hallucination rate is 65 percent lower than o3’s.
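Both figures are relative reductions rather than absolute rates. As a rough illustration of the arithmetic (the baseline rates below are hypothetical, not numbers from the system card):

```python
def reduced_rate(baseline: float, reduction: float) -> float:
    """Apply a relative reduction to a baseline hallucination rate,
    i.e., the share of factual claims containing minor or major errors."""
    return baseline * (1.0 - reduction)

# Hypothetical: if GPT-4o erred on 10% of factual claims, a 26% relative
# reduction would put GPT-5 around 7.4%.
print(reduced_rate(0.10, 0.26))  # ~0.074

# Hypothetical: a 12% rate for o3 with a 65% reduction would put
# GPT-5-thinking near 4.2%.
print(reduced_rate(0.12, 0.65))  # ~0.042
```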
For prompts that could be dual-use (potentially harmful or benign), Beutel says GPT-5 uses “safe completions,” an approach that prompts the model to “give as helpful an answer as possible, but within the constraints of remaining safe.” According to Beutel, OpenAI did over 5,000 hours of red teaming and worked with external organizations on testing to make sure the system was robust.
OpenAI says ChatGPT now has nearly 700 million weekly active users, along with 5 million paying business users and 4 million developers using the API.
“The vibes of this model are really good, and I think that people are really going to feel that,” head of ChatGPT Nick Turley says. “Especially average people who haven’t been spending their time thinking about models.”