Goodfire (London). Formerly cofounded Apollo Research.
My main research interests are mechanistic interpretability and inner alignment.
seems great for mechanistic anomaly detection! very intuitive to map ADP to surprise accounting (I was vaguely trying to get at a method like ADP here)
Agree! I'd be excited by work that uses APD for MAD, or even just work that applies APD to Boolean circuit networks. We did consider using Boolean circuit networks as a toy model at various points, but ultimately opted for other toy models instead.
(btw typo: *APD)
IMO most exciting mech-interp research since SAEs, great work
I think so too! (assuming it can be made more robust and scaled, which I think it can)
And thanks! :)
We're aware of model diffing work like this, but I wasn't aware of this particular paper.
It's probably an edge case: They do happen both to be in weight space and to be suggestive of weight space linearity. Indeed, our work was informed by various observations from a range of areas that suggest weight space linearity (some listed here).
On the other hand, our work focused on decomposing a given network's parameters. But the line of work you linked above seems more in pursuit of model editing and understanding the difference between two similar models, rather than decomposing a particular model's weights.
All in all, whether it deserved to be in the related work section is unclear to me; seems plausible either way. The related work section was already pretty long, but it maybe deserves a section on weight space linearity, though probably not one on model diffing, imo.
It would be interesting to meditate on the question "What kind of training procedure could you use to get a meta-SAE directly?" And I think answering this relies in part on a mathematical specification of what you want.
At Apollo we're currently working on something that we think will achieve this. Hopefully we'll have an idea and a few early results (toy models only) to share soon.
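To make the question concrete, here's a minimal sketch of the obvious two-stage baseline (illustrative only, with hypothetical shapes and hyperparameters; not the approach I'm alluding to above): train a base SAE, then train a second SAE on the first one's decoder directions. A "direct" procedure would presumably collapse this pipeline into a single training objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAE(nn.Module):
    """Minimal sparse autoencoder: x -> ReLU(enc(x)) -> dec(h)."""
    def __init__(self, d_in: int, d_dict: int):
        super().__init__()
        self.enc = nn.Linear(d_in, d_dict)
        self.dec = nn.Linear(d_dict, d_in, bias=False)

    def forward(self, x):
        h = F.relu(self.enc(x))
        return self.dec(h), h

def train_sae(sae, data, l1_coeff=1e-3, steps=1000, lr=1e-3, batch=256):
    opt = torch.optim.Adam(sae.parameters(), lr=lr)
    for _ in range(steps):
        x = data[torch.randint(len(data), (batch,))]  # random minibatch
        x_hat, h = sae(x)
        # Standard SAE loss: reconstruction error plus L1 sparsity penalty.
        loss = F.mse_loss(x_hat, x) + l1_coeff * h.abs().sum(-1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return sae

# Stage 1: train a base SAE on model activations (stand-in random data here).
acts = torch.randn(10_000, 512)
base = train_sae(SAE(512, 4096), acts)

# Stage 2: train a meta-SAE on the base SAE's decoder directions,
# treating each dictionary element (a 512-dim direction) as a data point.
decoder_dirs = base.dec.weight.T.detach()  # shape (4096, 512)
meta = train_sae(SAE(512, 1024), decoder_dirs)
```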
So I believe I had in mind "active means [is achieved deliberately through the agent's actions]".
I think your distinction makes sense. And if I ever end up updating this article I would consider incorporating it. However, I think the reason I didn't make this distinction at the time is that the difference is pretty subtle.
The mechanisms I labelled as "strictly active" are the kind of strategy that would be extremely improbable to implement successfully without some sort of coherent internal representations to help orchestrate the required actions. This is true even if they've been selected for passively.
So I'd argue that they all need to be implemented actively (if they're to have a reasonable probability of success) but may be selected for passively or actively. I'm curious if you agree with this? If not, then I may have missed your argument.
(For the convenience of other readers of this thread, the kind of strategies I labelled as strictly active are listed in the article.)
Extremely glad to see this! The Guez et al. model has long struck me as one of the best instances of a mesa-optimizer, and it was a real shame that it was closed source. Looking forward to the interp findings!
I'm pretty sure that there's at least one other MATS group (unrelated to us) currently working on this, although I'm not certain about any of the details. Hopefully they release their research soon!
There's recent work published on this here by Chris Mathwin, Dennis Akar, and me. The gated attention block is a kind of transcoder adapted for attention blocks.
Nice work by the way! I think this is a promising direction.
Note also the similar, but substantially different, use of the term transcoder here, whose problems were pointed out to me by Lucius. Addressing those problems helped to motivate our interest in the kind of transcoders that you've trained in your work!
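For readers who haven't met the term: a transcoder is trained like an SAE, except that instead of reconstructing its own input, it imitates a component's input-to-output map (e.g. an MLP block) through a sparse latent. A minimal sketch of the general idea (my illustration with hypothetical shapes; not the gated attention block architecture from the linked work):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Transcoder(nn.Module):
    """Like an SAE, but trained to imitate a component's input->output map
    (e.g. an MLP block) through a sparse latent, rather than to
    reconstruct its own input."""
    def __init__(self, d_in: int, d_latent: int, d_out: int):
        super().__init__()
        self.enc = nn.Linear(d_in, d_latent)
        self.dec = nn.Linear(d_latent, d_out)

    def forward(self, x):
        h = F.relu(self.enc(x))
        return self.dec(h), h

def transcoder_loss(tc, x, y, l1_coeff=1e-3):
    # x: the original component's inputs; y: the outputs it actually produced.
    # The only difference from an SAE loss: the reconstruction target is y, not x.
    y_hat, h = tc(x)
    return F.mse_loss(y_hat, y) + l1_coeff * h.abs().sum(-1).mean()
```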
Trying to summarize my current understanding of what you're saying:
Yes, all four sound right to me.
To avoid any confusion, I'd just add an emphasis that the descriptions are mathematical, as opposed to semantic.
I'd guess you have intuitions that the "short description length" framing is philosophically the right one, and I probably don't quite share those and feel more confused how to best think about "short descriptions" if we don't just allow arbitrary Turing machines (basically because deciding what allowable "parts" or mathematical objects are seems to be doing a lot of work). Not sure how feasible converging on this is in this format (though I'm happy to keep trying a bit more in case you're excited to explain).
I too am keen to converge on a formalism in terms of Turing machines or Kolmogorov complexity or something else more formal. But I don't feel very well placed to do that, unfortunately, since thinking in those terms isn't very natural to me yet.
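For reference, the two textbook formalisations being gestured at here (standard definitions, not a claim about which is right for our setting): the Kolmogorov complexity of $x$ is the length of the shortest program that makes a universal Turing machine $U$ output $x$, and the two-part MDL code describes data $D$ via a hypothesis $H$ plus the residual:

$$K_U(x) \;=\; \min\{\, |p| \;:\; U(p) = x \,\}, \qquad L(D) \;=\; \min_{H \in \mathcal{H}} \big[\, L(H) + L(D \mid H) \,\big]$$

The choice of $\mathcal{H}$ (or of $U$) is exactly where "deciding what allowable parts are" does the work.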
Hey Adam, thanks for your thoughts on this!
I think we're on the same page that we might not have the right framework to do computational science on brains or other intelligent systems. I think we might disagree on how far away current mainstream ideas are from being the right framework - I'd predict that, if we talked it out further, I'd say we're closer than you'd say we are. I don't know how far afield from current ideas we need to look for the right framework, and I'd support work that looks even further afield than several inferential steps from current mainstream ideas. But I don't think the historically sluggish pace of computational neuroscience justifies searching at any particular inferential distance; more proximal solutions feel just as likely to be the next paradigm/wave as more distant solutions (maybe more likely, given the social nature of what constitutes a paradigm/wave).
I really want to re-emphasize that I didn't call PD a new paradigm (or even a new 'wave') in the post. N.B.: "I’ll emphasize that these are early ideas and certainly do not yet constitute ‘Third-Wave Mech Interp’."
Yeah, I don't think PD throws away the majority of the ideas in the 2nd wave. It's designed primarily to solve the anomalies of the 2nd wave. It will therefore resemble 2nd-wave ideas and we can draw analogies. But I think it's different in important ways. For one, I think it will probably help us be less confused about ideas like 'feature', 'representation', 'circuit', and so on.
Yes, this is fair. These are still fairly deep neural networks, though (if we count time as depth), and they're examples of work that interprets ANNs at the lowest level, analysing weights and activations using e.g. dimensionality reduction and other methods mech interp folks might find familiar. But I agree it doesn't usually get put in the bucket of 'mech interp', though ultimately the boundary is fairly arbitrary. As a separate point, it is surprising how little of the neuroscience community has actually jumped on mechanistically understanding more interesting models like Inception v2 or LLMs, despite the similarity of methods and object of study. That's a testament to the early mech interp pioneers, since they saw a field where few others did.
I'm not actually sure this is very action-relevant. In the past I might have said "mech interp practitioners should be more familiar with computational neuroscience/connectionism", since I think this might have saved the mech interp community some time. But I don't think it would have saved a huge amount of time, and I think mech interp has largely surpassed comp neuro as a source of interesting and relevant ideas. I think it's mostly useful as an exercise in situating mech interp ideas within the broader set of ideas of an eminently related field (comp neuro/connectionism). But I'll stress that many in the field see mech interp as better contextualized by other sets of broader ideas (e.g. as a subfield of interpretability/ML), and when viewing mech interp in light of those ideas, it might better be thought of as pre-paradigmatic. I think that's a completely compatible but different perspective from the one I tend to take, and it just emphasizes the subjectiveness of the whole question of whether the field is paradigmatic or not.