A new study using graph-based tasks reveals that large language models employ two parallel mechanisms for in-context learning rather than relying on a single approach. Using principal component analysis (PCA) and causal interventions, the researchers show that LLMs simultaneously encode global topology information and local transition patterns, with late-layer circuits responsible for transferring structural preferences between different graph configurations.
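The PCA step described above can be sketched in miniature: project hidden-state activations onto their top principal components and check whether prompts from different graph topologies separate linearly. Everything below is illustrative, not the paper's actual pipeline; the shapes, the synthetic "ring" and "grid" activations, and the `pca_directions` helper are all hypothetical stand-ins.

```python
import numpy as np

def pca_directions(hidden_states, k=2):
    """Project hidden states onto their top-k principal components.

    hidden_states: (n_samples, d_model) array of residual-stream
    activations collected at a fixed token position. Shapes are
    illustrative, not taken from the study.
    """
    X = hidden_states - hidden_states.mean(axis=0, keepdims=True)
    # SVD of the centered data: rows of Vt are principal directions
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T  # (n_samples, k) low-dimensional coordinates

# Synthetic stand-in: two activation clusters mimicking prompts built
# from two graph topologies (e.g., ring vs. grid walks).
rng = np.random.default_rng(0)
ring = rng.normal(0.0, 0.1, size=(50, 64)) + np.eye(64)[0]
grid = rng.normal(0.0, 0.1, size=(50, 64)) + np.eye(64)[1]
coords = pca_directions(np.vstack([ring, grid]), k=2)
# If topology is linearly encoded, the two groups separate along PC1.
print(coords.shape)  # (100, 2)
```

In this toy setup the two clusters differ along a single direction, so the first principal component captures the topology distinction; on real model activations one would inspect several components and follow up with the causal interventions the study uses.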
Why it matters: Understanding how LLMs actually learn from context is fundamental to improving their reliability, interpretability, and ability to generalize to novel tasks, all of which are critical for AI safety and deployment.