OpenAI Codex CLI + MCP users: beware of silent truncation in recent releases (≥ v0.56.0)
↳ Aug 23 2025 (commit 957d449): Codex introduced a head+tail cap (256 lines / 10 KiB) on the stdout/stderr summary it feeds back into the conversation. Front-ends still see full logs, but the model only gets the truncated version.
↳ Oct 31 2025 (commit 1c8507b / PR #5979): a second cap was added that walks every tool-call content block (including MCP responses) and enforces that same ~10 KiB limit across the entire payload. Once the budget is exceeded, Codex drops the rest and replaces it with “[omitted …]”.
Problematic Commit: https://lnkd.in/dMGCBSvC
Problematic PR: https://lnkd.in/dznEYwYX
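For intuition, here's what a cap like that does to a long log. This is a minimal Python sketch, not Codex's actual implementation (Codex is written in Rust; the even head/tail split and the marker wording below are my assumptions):

```python
# Minimal sketch of a head+tail output cap -- illustrative only.
# The even head/tail split and the marker text are assumptions,
# not Codex's actual code.

MAX_LINES = 256        # line budget from the Aug 23 change
MAX_BYTES = 10 * 1024  # ~10 KiB byte budget (enforced analogously; elided here)

def truncate_head_tail(output: str) -> str:
    """Keep the first and last chunks of `output`, drop the middle,
    and mark the gap -- the model never sees the omitted lines."""
    lines = output.splitlines()
    if len(lines) <= MAX_LINES:
        return output  # fits the budget: the model sees everything
    head = lines[: MAX_LINES // 2]
    tail = lines[-(MAX_LINES // 2):]
    omitted = len(lines) - len(head) - len(tail)
    return "\n".join(head + [f"[omitted {omitted} lines]"] + tail)

# A 5,000-line test log collapses to 128 head + 128 tail lines; the
# failure buried in the middle is exactly what gets thrown away.
print(truncate_head_tail("\n".join(f"line {i}" for i in range(5000))))
```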
Impact (just a few examples):
- Context7 or other retrieval plugins can’t deliver full research briefs; Codex only ingests the first/last slices and misses the core.
- Linear MCP issues/PRDs are chopped mid-plan, so agents never read the requirements.
- Build/test logs, migration scripts, or large code diffs lose the middle sections, forcing manual paging and repeated tool calls.
Workarounds
1. Pin to ≤ 0.55.0 if you rely on "large" tool outputs (e.g., npm install -g @openai/codex@0.55.0 if you installed the CLI via npm).
2. Upvote/comment on the GitHub issue https://lnkd.in/dm2z_KFh so the engineering team sees how widespread the breakage is.
I've been following the OpenAI Agents repo and came across this pull request.
A contributor submitted a massive, 1,200+ line PR to add support for the Agent-to-Agent (A2A) protocol. This wasn't a lazy contribution.
It included 30 tests, full documentation, and core infrastructure for agent interoperability.
What's notable is that the A2A protocol was initiated and specified by Google.
The response from the OpenAI maintainer showed zero appreciation for the significant effort. It was a cold, procedural shutdown:
"Hi @Kunmeer-SyedMohamedHyder, thanks for sending this pull request. However, we don't have immediate plans to add A2A support to this SDK. We can't say if or when we'll review it. For future major contributions, it would be appreciated if you could start with an issue for discussion first."
The PR was then marked as draft and auto-closed by a bot for inactivity.
This is a perfect example of the "open source" facade from large AI labs. An engineer does a huge amount of high-quality, free work to add a valuable, open interoperability standard. Instead of appreciation or discussion, they're met with a "we don't have plans" and a passive-aggressive "ask first next time."
It's hard not to see this as a "Not Invented By A Competitor" rejection. Why else would you show zero interest in a feature that so clearly benefits the entire agent ecosystem? It sends a strong message to any potential contributor: don't bother building anything that doesn't align with OpenAI's closed roadmap, especially if it has Google's name on it.
What's the point of hosting on GitHub if you treat high-effort community contributions like this?
I'm a huge fan of the Gemini CLI and the power it gives us to automate our workflows. But I've always found it a bit challenging to discover new and interesting commands. I've spent a lot of time searching through GitHub repos and blog posts, and I always wished there was a central place to find the best commands.
So I built one: a community-driven website where you can find a curated collection of high-quality Gemini CLI commands.
Here are some of the key features:
Smart Search: Quickly find the command you're looking for.
Powerful Filtering: Filter commands by category to narrow down your options.
Community Votes: Upvote your favorite commands to help them rise to the top.
Top Contributors Hall of Fame: We recognize and celebrate the amazing people who contribute to the collection.
NEW! Easy Drag-and-Drop Contributions: We've made it incredibly easy to share your own commands. Just log in with your GitHub account and drag and drop your .toml files directly onto the site.
I would love for you to check it out, use the commands, and, most importantly, contribute your own commands to the collection. With our new drag-and-drop feature, it only takes a few seconds to share your work with the community.
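For anyone who hasn't written one yet: as I understand it, a Gemini CLI custom command is just a small TOML file with a description and a prompt (double-check the official docs for the exact schema; the command below is a made-up example):

```toml
# .gemini/commands/review.toml -- would define a "/review" command (hypothetical example)
description = "Review the staged diff for bugs and style issues"

# The prompt is what the CLI sends to Gemini when you run the command;
# !{...} shells out and inlines the result (per the custom-commands docs).
prompt = """
Review the following staged changes. Point out bugs, risky patterns,
and missing tests, and suggest concrete fixes:

!{git diff --staged}
"""
```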
I've been blown away by the positive response from the community so far. I shared this on LinkedIn a little while ago, and the feedback has been amazing.