GPT-5.2-Codex

openai.com

475 points by meetpateltech 15 hours ago


mccoyb - 14 hours ago

If anyone from OpenAI is reading this -- a plea to not screw with the reasoning capabilities!

Codex is so so good at finding bugs and little inconsistencies, it's astounding to me. Where Claude Code is good at "raw coding", Codex/GPT5.x are unbeatable in terms of careful, methodical finding of "problems" (be it in code, or in math).

Yes, it takes longer (quality, not speed please!) -- but the things that it finds consistently astound me.

tananaev - 14 hours ago

I was very skeptical about Codex at the beginning, but now all my coding tasks start with Codex. It's not perfect at everything, but overall it's pretty amazing: refactoring, building something new, building something I'm not familiar with. It's still not great at debugging things, though.

One surprising thing Codex has helped with is procrastination. I'm sure many people have had this feeling: you have some big task and you don't quite know where to start. Just send it to Codex. It might not get it right, but it's almost always a good starting point that you can quickly iterate on.

aaa_aaa - an hour ago

I suspect there are some shills astroturfing for every LLM release. Or people are overreacting as a result of their unnecessary attachment.

kordlessagain - 11 hours ago

I’ve been using Codex CLI heavily after moving off Claude Code and built a containerized starter to run Codex in different modes: timers/file triggers, API calls, or interactive/single-run CLI. A few others are already using it for agentic workflows. If you want to run Codex securely (or not) in a container to test the model or build workflows, check out https://github.com/DeepBlueDynamics/codex-container.

It ships with 300+ MCP tools (crawl, Google search, Gmail/GCal/GDrive, Slack, scheduling, web indexing, embeddings, transcription, and more). Many came from tools I originally built for Claude Desktop; OpenAI's MCP support has been stable across 20+ versions, so I prefer it.

I'll note that I usually run this in Danger mode, but because it runs in a container it doesn't have access to env vars I don't want it messing with, and I keep it in a directory I'm OK with it changing or poking around in.
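For anyone curious what that isolation looks like in practice, here is a minimal sketch, not the repo's actual setup: the image name and paths are placeholders, and the container only sees the one directory you mount plus the env vars you pass explicitly.

    # Hypothetical invocation; build the image per the repo's README first.
    # Only $PWD/sandbox and the explicitly passed secrets are visible inside.
    docker run --rm -it \
      -v "$PWD/sandbox:/workspace" \
      -w /workspace \
      -e OPENAI_API_KEY="$OPENAI_API_KEY" \
      codex-container:latest \
      codex

A permissive "danger" mode can then be enabled inside the container without putting the host's secrets or files at risk.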

Headless browser setup for the crawl tools: https://github.com/DeepBlueDynamics/gnosis-crawl.

My email is in my profile if anyone needs help.

shanev - 14 hours ago

The GPT models, in my experience, have been much better for backend work than the Claude models. They're much slower, but they produce clearer logic and more maintainable code. A pattern I use: set up a GitHub issue with Claude's plan mode, then have Codex execute it. Then come back to Claude to run custom code-review plugins. Then, of course, review it with my own eyes before merging the PR.
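A rough sketch of the Codex half of that handoff, assuming the GitHub CLI and the Codex CLI's non-interactive exec subcommand (subcommand and flag names vary by version; the issue number and prompt are placeholders, and the Claude plan/review steps stay interactive):

    # Illustrative glue only.
    gh issue view 123 --json title,body > plan.json
    codex exec "Implement the plan in plan.json, then run the test suite."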

My only gripe is I wish they'd publish Codex CLI updates to Homebrew at the same time as npm :)

freedomben - 14 hours ago

The cybersecurity angle is interesting, because in my experience OpenAI's models have gotten terrible at cybersecurity: they simply refuse to do anything that can be remotely offensive (as in the opposite of "defensive"). I really thought we as an industry had learned our lesson that blocking the "good guys" (aka white-hats) from offensive tools and capabilities only empowers the gray-hats and black-hats and puts us at a disadvantage. A good defense requires some offense. I sure hope they change that.

tptacek - 15 hours ago

It's interesting that they're foregrounding "cyber" stuff (basically: applied software security testing) this way, but I think we've already crossed a threshold of utility for security work that doesn't require models to advance to make a dent --- and won't be responsive to "responsible use" controls. Zero-shotting is a fun stunt, but in the real world what you need is just hypothesis identification (something the last few generations of models are fine at) and then quick building of tooling.

Most of the time spent in vulnerability analysis is automatable grunt work. If you can just take that off the table, and free human testers up to think creatively about anomalous behavior identified for them, you're already drastically improving effectiveness.
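As one illustration of the kind of throwaway triage tooling meant here (my example, not anything from the comment itself), a model can generate something like this in seconds so the human spends time on the anomalies instead:

    # Rough triage: flag call sites of a few classically risky C functions (expect noise).
    grep -rnE '(strcpy|sprintf|system|gets)[[:space:]]*\(' src/ | sort > candidates.txt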

mvkel - 8 hours ago

Fascinating to see the increasing acceptance of AI generated code in HN comments.

We've come a long way since gpt-3.5, and it's rewarding to see people who are willing to change their cached responses

CjHuber - 14 hours ago

Somehow Codex for me is always way worse than the base models.

Especially in the CLI, it's so eager to start writing code that nothing can stop it, not even the best AGENTS.md.

Asking it a question or telling it to check something doesn't mean it should start editing code; it means answer the question. All models have this issue to some degree, but Codex is the worst offender for me.
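For context, the sort of AGENTS.md guard being described (which, per the complaint, Codex still sometimes ignores) looks roughly like this; the wording is illustrative, not a known-working fix:

    # AGENTS.md (illustrative)
    - When the user asks a question or asks you to check or review something,
      answer in chat only. Do not create, edit, or delete files unless the
      user explicitly requests code changes.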

simianwords - 4 hours ago

No one's mentioning this, but it's around 40% costlier than the previous Codex model. The price change is important.

NitpickLawyer - 15 hours ago

> In parallel, we’re piloting invite-only trusted access to upcoming capabilities and more permissive models for vetted professionals and organizations focused on defensive cybersecurity work. We believe that this approach to deployment will balance accessibility with safety.

Yeah, this makes sense. There's a fine line between good enough to do security research and good enough to be a prompt kiddie on steroids. At the same time, aligning the models for "safety" would probably make them worse overall, especially when dealing with security questions (e.g. analyse this code snippet and provide security feedback / improvements).

At the end of the day, after some KYC I see no reason why they shouldn't be "in the clear". They get all the positive news (e.g. "our gpt666-pro-ultra-krypto-sec found a CVE in an OpenBSD stable release"), while not being exposed to tabloid-style headlines like "a 3 year old asked chatgpt to turn on the lights and chatgpt hacked into nasa, news at 5"...

larrymcp - 15 hours ago

Can anyone elaborate on what they're referring to here?

> GPT‑5.2-Codex has stronger cybersecurity capabilities than any model we’ve released so far. These advances can help strengthen cybersecurity at scale, but they also raise new dual-use risks that require careful deployment.

I'm curious what they mean by the dual-use risks.

k_bx - 14 hours ago

Codex code review has been astounding for my distributed team of devs. Very well spent money.

abshkbh - 13 hours ago

We've made this model even better at programming on Windows. Give it a shot :)

dworks - 12 hours ago

GPT-5.1 has been pure magic in VS Code via the Codex plugin. I can't tell any difference with 5.2 yet. I hope the Codex plugin gets feature parity with CC, Cursor, Kilo Code, etc. soon. That should increase performance a bit more through scaffolding.

I had assumed OpenAI was irrelevant, but 5.1 has been so much better than Gemini.

exacube - 15 hours ago

Would love to see some comparison numbers against Gemini and Claude, especially given this claim:

"The most advanced agentic coding model for professional software engineers"