> Codex is extremely, painfully, doggedly persistent in following every last character of them
I think this is because gpt-5's (or gpt-5.1's) system prompt explicitly encourages persistence [0]; OpenAI emphasizes it to the model itself. If you search that guide for the word `persistence` you will find multiple occurrences of it.
```
<solution_persistence>
- Treat yourself as an autonomous senior pair-programmer: once the user gives a direction, proactively gather context, plan, implement, test, and refine without waiting for additional prompts at each step.
- Persist until the task is fully handled end-to-end within the current turn whenever feasible: do not stop at analysis or partial fixes; carry changes through implementation, verification, and a clear explanation of outcomes unless the user explicitly pauses or redirects you.
- Be extremely biased for action. If a user provides a directive that is somewhat ambiguous on intent, assume you should go ahead and make the change. If the user asks a question like "should we do x?" and your answer is "yes", you should also go ahead and perform the action. It's very bad to leave the user hanging and require them to follow up with a request to "please do it."
</solution_persistence>
```
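As a rough illustration, here is how you might feed that same block to a model yourself and see how much of the behavior the prompt alone accounts for. This is a minimal sketch using the OpenAI Python SDK; the model name and user request are placeholders, not anything from Codex itself:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The <solution_persistence> block quoted above, abridged, used verbatim
# as the system prompt.
SOLUTION_PERSISTENCE = """\
<solution_persistence>
- Treat yourself as an autonomous senior pair-programmer: once the user gives
  a direction, proactively gather context, plan, implement, test, and refine
  without waiting for additional prompts at each step.
- Persist until the task is fully handled end-to-end within the current turn.
- Be extremely biased for action. If the user asks "should we do x?" and your
  answer is "yes", go ahead and perform the action.
</solution_persistence>"""

response = client.chat.completions.create(
    model="gpt-5.1",  # placeholder; use whichever model you have access to
    messages=[
        {"role": "system", "content": SOLUTION_PERSISTENCE},
        {"role": "user", "content": "Should we rename the `foo` helper to `bar`?"},
    ],
)
print(response.choices[0].message.content)
```

Per the prompt's own instruction, a model following it should answer the question and then go ahead and make the change, rather than leaving the user hanging with a bare "yes".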
[0] https://cookbook.openai.com/examples/gpt-5/gpt-5-1_prompting...