
Investigating how Codex context compaction works

By @Kangwook_Lee (Kangwook Lee) · Tue Mar 03 22:06:52 +0000 2026 · 543 words · 864 likes · 82 reposts · 1,960 bookmarks · 244,078 views · 16 replies

For non-codex models, the open-source Codex CLI compacts context locally: an LLM summarizes the conversation using a compaction prompt....



Investigating how Codex context compaction works:

Hard to say. Maybe the encrypted blob carries more than this simple experiment can reveal - for example, something specific about how tool results are compacted and restored. But I didn't bother to test further.

The question asks why Codex CLI uses two entirely different compaction paths - a local LLM for non-codex models, an encrypted API round-trip for codex models - when the underlying prompts are nearly identical, and why the summary is encrypted at all.
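The local path described above can be sketched in a few lines: when the conversation exceeds a token budget, the transcript plus a compaction prompt is handed to a model, and the history is replaced by the returned summary. This is a minimal illustration, not Codex CLI's actual code; every name here (`COMPACTION_PROMPT`, `estimate_tokens`, `compact_locally`, the 4-chars-per-token heuristic, the budget) is a hypothetical stand-in.

```python
# Sketch of local context compaction: over budget -> LLM summarizes the
# transcript with a compaction prompt, summary replaces the history.
# All names and thresholds are illustrative assumptions.

COMPACTION_PROMPT = (
    "Summarize the conversation so far, preserving user goals, "
    "decisions made, and any tool results needed to continue."
)

def estimate_tokens(messages):
    # Crude stand-in for a real tokenizer: roughly 4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def compact_locally(messages, summarize, budget=100):
    """Replace the history with an LLM-produced summary when over budget.

    `summarize(prompt, messages)` stands in for a call to the configured
    (non-codex) model. For codex models, the post notes the summary
    instead round-trips through the API as an encrypted blob.
    """
    if estimate_tokens(messages) <= budget:
        return messages  # still within budget, nothing to do
    summary = summarize(COMPACTION_PROMPT, messages)
    # The compacted history collapses to a single summary message.
    return [{"role": "assistant", "content": f"[Summary] {summary}"}]
```

In a real agent loop this check would run before each model call, so the transcript sent upstream never exceeds the context window while the summary preserves enough state to continue the task.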


This analysis highlights Codex's differentiated strategy for handling different models, and its considerations around data integrity and security.