
nev (@neverrixx)
We quantized all Gemma Scope SAEs to 4 bits, cutting memory and storage requirements by roughly 4x at the cost of ~20% higher variance unexplained. You can try the quantized versions and make your own quantized SAEs; DM me if you see bugs or have requests.
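A minimal sketch of what this kind of 4-bit quantization and the variance-unexplained metric could look like. The function names, the symmetric per-row scaling scheme, and the matrix sizes are illustrative assumptions, not the author's actual code:

```python
import torch

def quantize_4bit(w: torch.Tensor) -> torch.Tensor:
    """Symmetric per-row quantization to the signed 4-bit range [-8, 7] (assumed scheme)."""
    scale = (w.abs().amax(dim=1, keepdim=True) / 7.0).clamp(min=1e-8)
    q = torch.clamp(torch.round(w / scale), -8, 7)
    return q * scale  # dequantized copy; real storage would keep the int4 codes plus per-row scales

def fvu(x: torch.Tensor, x_hat: torch.Tensor) -> torch.Tensor:
    """Fraction of variance unexplained: ||x - x_hat||^2 / ||x - mean(x)||^2."""
    return (x - x_hat).pow(2).sum() / (x - x.mean(dim=0)).pow(2).sum()

w = torch.randn(16_384, 2_304)   # hypothetical SAE decoder weights: n_features x d_model
w_q = quantize_4bit(w)
print("mean abs quantization error:", (w - w_q).abs().mean().item())
```

Storing the 4-bit codes plus one scale per row is where the ~4x memory saving comes from; the increase in variance unexplained is measured by comparing `fvu` of the original and quantized SAE reconstructions.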
Quoted post:
Sparse Autoencoders act like a microscope for AI internals. They're a powerful tool for interpretability, but training costs limit research. Announcing Gemma Scope: an open suite of SAEs on every layer & sublayer of Gemma 2 2B & 9B! We hope to enable even more ambitious work.
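For readers unfamiliar with the tool being announced, a bare-bones sparse autoencoder sketch is below. Gemma Scope's released SAEs use a JumpReLU activation, so the plain ReLU here is a simplified stand-in, and the dimensions are only illustrative (2304 is Gemma 2 2B's residual width, 16384 one of the released SAE widths):

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Maps activations to a wide, sparse feature space and back (simplified sketch)."""
    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_features)
        self.decoder = nn.Linear(n_features, d_model)

    def forward(self, x: torch.Tensor):
        f = torch.relu(self.encoder(x))   # sparse feature activations
        return self.decoder(f), f         # reconstruction and features

sae = SparseAutoencoder(d_model=2304, n_features=16384)
x = torch.randn(8, 2304)                  # a batch of residual-stream activations
x_hat, features = sae(x)
```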