pumpikano 21 hours ago

Cool! On a quick glance, it doesn't seem like the group/layer index is provided to the compression model. That might help a bit with fidelity at pretty low additional cost.

  • eigenvalue 21 hours ago

    Interesting. It certainly wouldn’t take up much additional space, but I wonder whether it would have any real impact, since it seems somewhat orthogonal to finding a faithful low-dimensional encoding of the activations.

lurker919 20 hours ago

Strategy distillation seems like a gradient update, in a way? Or would that be at a higher level of abstraction?

  • eigenvalue 19 hours ago

    You’re right that it’s analogous in concept, but strategy distillation happens at a higher level: it encodes and transfers successful latent reasoning patterns as reusable “strategies,” without necessarily requiring direct gradient updates to the original model weights.

neuroelectron 20 hours ago

Gemini Pro thinks this is an April 1 prank and says the computational cost makes it infeasible.

  • eigenvalue 19 hours ago

    I can assure you it’s not a joke. Compute power is increasing at a ridiculous pace, and highly capable models are getting smaller and smaller, now at the 30b parameter size and under. So even if it wouldn’t be pragmatic now, it could become highly relevant in 4 or 5 years if trend lines continue at anything like the recent pace.

    • neuroelectron 17 hours ago

      Actually, compute power is not increasing as fast as it once did; it's hitting a plateau thanks to thermodynamic limits.

trextrex 19 hours ago

Isn't that basically a recurrent neural network?

  • eigenvalue 19 hours ago

    I can see where you’re coming from, but not really. Unlike an RNN, the main transformer still processes sequences non-recurrently. The “sidecar” model just encodes internal activations into compressed latent states, allowing introspection and rollback without changing the underlying transformer architecture.

eigenvalue a day ago

I recently started thinking about what a shame it is that LLMs have no way of directly accessing their own internal states, and how potentially useful that would be if they could. One thing led to the next, and I ended up developing those ideas a lot further.

Transformers today discard internal states after each token, losing valuable information. There's no rollback, introspection, or replaying of their reasoning. Saving every activation isn't practical; it would require way too much space (hundreds of megabytes at least).
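
To put rough numbers on that, here's a back-of-envelope sketch; the model dimensions below are illustrative assumptions, not measurements of any particular model:

    # Rough cost of naively saving every activation for one forward pass.
    n_layers, d_model, n_tokens = 32, 4096, 2048   # assumed dimensions
    bytes_fp16 = 2                                 # fp16 storage

    # Hidden states: one d_model vector per layer per token.
    hidden_bytes = n_layers * n_tokens * d_model * bytes_fp16
    # KV cache: keys + values per layer, same shape again.
    kv_bytes = 2 * n_layers * n_tokens * d_model * bytes_fp16

    print(f"~{(hidden_bytes + kv_bytes) / 1e6:.0f} MB")   # roughly 1.6 GB for one 2k-token pass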

The insight here is that transformer activations aren't randomly scattered in high-dimensional space. Instead, they form structured, lower-dimensional manifolds shaped by architecture, language structure, and learned tasks. It's all sitting on a paper-thin membrane in N-space!
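
One cheap way to sanity-check that intuition is to count how many principal components it takes to explain most of the variance in a batch of captured activations. A sketch (how you collect `acts` is model-specific and omitted; the random array at the bottom just exercises the function and will not itself look low-dimensional):

    import numpy as np

    def effective_rank(acts: np.ndarray, var_threshold: float = 0.95) -> int:
        """Number of principal components needed to explain `var_threshold`
        of the variance in an (n_samples, d_model) activation matrix."""
        centered = acts - acts.mean(axis=0, keepdims=True)
        s = np.linalg.svd(centered, compute_uv=False)   # PCA spectrum
        var = s**2 / np.sum(s**2)
        return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

    # Placeholder data; a real check would use activations captured via hooks.
    acts = np.random.randn(1000, 4096).astype(np.float32)
    print(effective_rank(acts))   # real activations should need far fewer dims than d_model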

This suggested a neat analogy: just like video games save compact states (player location, inventory, progress flags) instead of full frames, transformers could efficiently save "thought states," reconstructable at any time. Reload your saved game, but for LLMs!

Here's the approach: attach a small sidecar model alongside a transformer to compress its internal states into compact latent codes. These codes can later be decoded to reconstruct the hidden states and attention caches. The trick is to compress stuff a LOT, but not be TOO lossy.
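
A minimal sketch of what such a sidecar could look like (my own illustrative PyTorch, with made-up dimensions; a real version would also need to handle the per-layer structure and the attention caches):

    import torch
    import torch.nn as nn

    class SidecarCodec(nn.Module):
        """Toy sidecar: squeezes one layer's hidden states into a small
        latent code and reconstructs them on demand."""
        def __init__(self, d_model: int = 4096, d_latent: int = 128):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(d_model, 512), nn.GELU(), nn.Linear(512, d_latent))
            self.decoder = nn.Sequential(
                nn.Linear(d_latent, 512), nn.GELU(), nn.Linear(512, d_model))

        def compress(self, hidden: torch.Tensor) -> torch.Tensor:
            return self.encoder(hidden)    # (tokens, d_latent): the "save file"

        def reconstruct(self, code: torch.Tensor) -> torch.Tensor:
            return self.decoder(code)      # approximate hidden states

    # Trained by minimizing reconstruction error on captured activations:
    codec = SidecarCodec()
    hidden = torch.randn(16, 4096)         # stand-in for real hidden states
    loss = nn.functional.mse_loss(codec.reconstruct(codec.compress(hidden)), hidden)
    loss.backward()

At 4096 floats in and 128 out per token, that's roughly a 32x reduction before any quantization of the codes themselves.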

What new capabilities would this enable? Transformers could rewind their thoughts, debug errors at the latent level, or explore alternative decision paths. RL agents could optimize entire thought trajectories instead of just outputs. A joystick for the brain, if you will.

This leads naturally to the concept of a rewindable reasoning graph, where each compressed state is a node. Models could precisely backtrack, branch into alternate reasoning paths, and debug the causes of errors internally. Like a thoughtful person can (hopefully!).
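
In code, the graph itself can be dead simple; the interesting part is what each node stores. A hypothetical sketch, where the `code` payload would be a compressed latent state from the sidecar:

    from dataclasses import dataclass, field

    @dataclass
    class ThoughtNode:
        """One node in a rewindable reasoning graph: a compressed latent
        state plus links to the state it branched from."""
        code: bytes                          # compressed latent state
        parent: "ThoughtNode | None" = None
        children: list = field(default_factory=list)

        def branch(self, new_code: bytes) -> "ThoughtNode":
            """Fork an alternate reasoning path from this saved state."""
            child = ThoughtNode(code=new_code, parent=self)
            self.children.append(child)
            return child

        def rewind(self, steps: int = 1) -> "ThoughtNode":
            """Walk back up to an earlier saved state."""
            node = self
            for _ in range(steps):
                if node.parent is not None:
                    node = node.parent
            return node

    # Save states as generation proceeds, then backtrack and try again:
    root = ThoughtNode(code=b"state-0")
    step = root.branch(b"state-1").branch(b"state-2")
    retry = step.rewind(1).branch(b"state-2-alt")   # alternate path from the earlier state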

Longer-term, it suggests something bigger: a metacognitive operating system for transformers, enabling AI to practice difficult reasoning tasks repeatedly, refine cognitive strategies, and transfer learned skills across domains. Learning from learning, if you will.

Ultimately, the core shift is moving transformers from stateless text generators into cognitive systems capable of reflective self-improvement. It's a fundamentally new way for AI to become better at thinking.

For fun, I wrote it up and formatted it as a fancy academic-looking paper, which you can read here:

https://raw.githubusercontent.com/Dicklesworthstone/llm_intr...

  • kridsdale1 21 hours ago

    Cool stuff. I celebrate all of this kind of thinking outside the currently successful paradigms.