OpenAI Codex hands-on review

zackproser.com

169 points by fragmede 4 days ago


rmonvfer - 4 days ago

I was a Plus subscriber and upgraded to Pro just to test Codex, and at least in my experience, it’s been pretty underwhelming.

First, I don’t think they’ve gotten the UX quite right yet. Having to wait an indeterminate amount of time for a result is far from ideal, although the async nature of Codex (being able to run multiple tasks at once) alleviates this somewhat.

Another thing that bugs me is having to define an environment before the tool is useful. This is very problematic because, AFAIK, you can’t spin up containers that tests might need, which severely limits what it can do. I guess this will eventually change, but the complete isolation from the internet also seems limiting: one reason o3 is so powerful in ChatGPT is that it can autonomously research on the web to find up-to-date information on whatever you need.

For comparison, I also use Claude a lot, and I’ve found it works really well for finding obscure bugs in a somewhat complex React application by creating a project and adding the GitHub repo as a source. This keeps the wait time very short, and the difference with Codex is night and day. Gemini also lets you do this now, and it works very well because of its massive context window.

All that being said, I do understand where OpenAI is going with this. I guess they want to achieve something like a real coworker (they even say so in their promotional videos for Codex): you’re supposed to hand tasks to Codex and wait until it’s done, as you would with a real human. But again, IMHO, it’s too “pull-request-focused”.

I guess I’ll be downgrading to Plus again and waiting a little to see where this ends up.

avital - 4 days ago

I work at OpenAI (not on Codex) and have used it successfully for multiple projects so far. Here's my flow:

- Always run more than one rollout of the same prompt -- they will turn out different

- Look through the parallel implementations and see which is best (even if it's not good enough), then figure out what changes to your prompt would have helped nudge the model towards the better solution.

- In addition, modify the prompt to address the parts that the model didn't get right.

- Repeat the loop until the code is good enough.

If you do this and also split your work into smaller parallelizable chunks, you can spend a few hours doing nothing but prompt tuning and code review, and end up with massive projects implemented in a short period of time (the loop is sketched below).
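For the curious, here's a minimal sketch of that loop, assuming the standard openai Python client. Codex itself is driven through the ChatGPT UI, so the model name, prompt text, and interactive review step here are placeholders for illustration, not how Codex works internally:

```python
# Rough sketch of the "parallel rollouts, pick the best, refine the
# prompt" loop, using the plain OpenAI chat completions API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rollouts(prompt: str, n: int = 3) -> list[str]:
    """Request n independent completions of the same prompt."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; not the Codex model
        messages=[{"role": "user", "content": prompt}],
        n=n,
    )
    return [choice.message.content for choice in resp.choices]

prompt = "Implement <task>. Constraints: <...>"  # your task description
best = None
while True:
    candidates = rollouts(prompt)
    # Human step: review the parallel candidates and pick the best one.
    for i, code in enumerate(candidates):
        print(f"--- candidate {i} ---\n{code}\n")
    best = candidates[int(input("Index of the best candidate: "))]
    if input("Good enough? (y/n): ").strip().lower() == "y":
        break
    # Fold what the better rollout taught you back into the prompt.
    prompt += "\n" + input("Extra guidance for the next round: ")
print("Final implementation:\n", best)
```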

I've used this for "API munging" but also for pretty deep Triton kernel code, and it's been massive.

teekert - 4 days ago

“As I wrote about in Walking and talking with AI in the woods, ideally I'd like to start my morning in an office, launch a bunch of tasks, get some planning out of the way, and then step out for a long walk in nature.”

Wouldn’t we all want that? But it sounds like you could leave the task launching and planning to an AI too, and go find another career.

ryanackley - 3 days ago

If you're building a React app using a popular UI framework, AI will seem like magic given how well it one-shots things.

To the author's point about one-shotting: I think this problem will make it a real challenge to push an AI coding workflow any further. In my experience, AI seems to fall off a cliff when you ask it to write code using more obscure libraries and frameworks. It will always hallucinate something rather than admit it has no knowledge of how something works.

micromacrofoot - 4 days ago

> Codex will support me and others in performing our work effectively away from our desks.

This feels so hopelessly optimistic to me, because "effectively away from our desks" for most people will mean "in the unemployment line".