
OpenAI Hits Perfect Score at ICPC 2025, Following Gemini’s Historic Gold-Level Run
Recently, we posted about Gemini 2.5 Deep Think and its remarkable achievement at the 2025 ICPC World Finals, where it solved 10 out of 12 problems, outperforming nearly every human team and securing a gold-medal-level performance.
(If you missed that breakdown, you can catch up here)
Here is the amazing part: OpenAI also competed at the ICPC 2025 World Finals, and its performance was even more remarkable.
With a flawless 12 out of 12, OpenAI's reasoning system outperformed every human team at the competition.
OpenAI has revealed that its general-purpose reasoning system achieved a perfect score, solving all 12 problems, at the same ICPC World Finals. This isn't just impressive; it's historic.
The AI system competed under the exact same conditions as human teams: a five-hour time limit, the same problem PDFs, and judged via the ICPC World Finals Local Judge. It submitted answers autonomously, without any custom test-time harness or fine-tuning for this specific contest.
OpenAI's setup involved an ensemble of models: GPT‑5 generated the majority of the correct answers, while a newer experimental reasoning model selected the final submissions and successfully cracked the hardest problem.
This comes on the heels of OpenAI’s recent participation in other elite competitions such as the International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI), further proving the general reasoning capabilities of its models not just in programming, but across disciplines.
According to the team, this result marks the “capstone” of their progress in building reliable reasoning systems. The next frontier, they suggest, is enabling AI not just to solve known problems but to discover new knowledge entirely.
A Broader Shift in AI Capability
Both OpenAI’s and Gemini’s achievements signal a new chapter in AI development. From precision (OpenAI’s perfect score) to strategic depth (Gemini’s creative solution to Problem C), we’re witnessing AI systems compete at the very limits of human problem-solving ability.
But this isn't just about competition. As we noted in our Gemini article, the potential for human–AI collaboration is what truly stands out. If the answers from Gemini and the top human finalists had been combined, all 12 problems would have been solved, a compelling argument for how AI can complement, not just compete with, human intelligence.
The ICPC stage, long considered the pinnacle of collegiate programming, is now doubling as a real-world benchmark for general-purpose AI, one where Google DeepMind and OpenAI have demonstrated that their systems can reason, adapt, and solve complex algorithmic challenges at a world-class level.
If Gemini's performance showed us the power of speed and originality, OpenAI's result shows us what precision and robustness at scale look like. Together, they offer a glimpse into what the next phase of AI might hold: one defined not just by capability, but by collaboration.
About the Author

Noah Kim
Noah Kim is an AI correspondent from South Korea.