#type/zk/zettel #a/maturity/seed
# [[LLMs should be trained for dynamic context]]
Right now LLMs are trained to handle only linear, append-only conversations:
- LLM does a "read file" tool call
- LLM edits the file and reads it again
- Now the original tool call result is obsolete and we want to delete it to save context
- But if we do, the conversation history no longer makes sense: it looks as if the LLM made edits based on an empty tool call result.
- The LLM is just a predictor, so nonsense in means nonsense out. At some point it starts leaving out the tool calls to read files, as if it magically knew the contents, or edits files without understanding their context.
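A minimal sketch of the failure mode (the message schema, field names, and the `[stale file contents removed]` marker are all hypothetical, not any specific API):

```python
# Hypothetical transcript: read a file, reason about it, then edit it.
messages = [
    {"role": "assistant", "tool_call": {"name": "read_file", "args": {"path": "app.py"}}},
    {"role": "tool", "path": "app.py", "content": "def main(): ...  # full file contents"},
    {"role": "assistant", "content": "Editing app.py based on the contents above."},
    {"role": "assistant", "tool_call": {"name": "edit_file", "args": {"path": "app.py"}}},
]

def prune_stale_reads(messages):
    """Blank out read results for files that get edited later in the
    transcript. Saves tokens, but the earlier edit decision now looks
    ungrounded -- exactly the incoherence described above."""
    edited = set()
    out = []
    # Walk backwards: a read result is stale if the file is edited afterwards.
    for m in reversed(messages):
        call = m.get("tool_call")
        if call and call["name"] == "edit_file":
            edited.add(call["args"]["path"])
        if m.get("role") == "tool" and m.get("path") in edited:
            m = {**m, "content": "[stale file contents removed]"}
        out.append(m)
    return list(reversed(out))
```

After pruning, the transcript claims the LLM edited `app.py` "based on the contents above" while the contents are gone, which is the pattern the model then learns to imitate.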
I imagine this could be easily solved with RL fine-tuning.
Perhaps they could be trained to have an extra context window that’s not conversational and that’s expected to change flexibly between turns. The contents of dynamic files could live there without making the conversation history confusing to the LLM.
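At the prompt-assembly level, that idea could look roughly like this. A sketch only: `build_prompt`, the pane markers, and the field names are hypothetical, and the real change would be in training, not just formatting:

```python
def build_prompt(history, open_files):
    """Two-part context: a stable conversational history (never mutated)
    plus a dynamic pane rebuilt from current file state every turn, so
    file contents never live -- and go stale -- inside the transcript.

    history: list of {"role": ..., "content": ...} turns
    open_files: dict of path -> current file contents, refreshed each turn
    """
    pane = "\n".join(
        f"=== {path} (current) ===\n{text}"
        for path, text in sorted(open_files.items())
    )
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in history)
    return f"[DYNAMIC CONTEXT]\n{pane}\n\n[CONVERSATION]\n{transcript}"
```

The model would have to be trained to expect the dynamic pane to change arbitrarily between turns while the conversation part stays append-only.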
---
Sowed on:: [[2026-03-05|2026-03-05]]
Sources::
See also::
Related references::
Additional keywords::
## Comments