
Eric Ho
@ericho_goodfire
Co-founder / CEO @GoodfireAI
San Francisco · Joined September 2011
963 Followers · 1K Following · 4 Media · 180 Statuses
Just wrote a piece on why I believe interpretability is AI's most important frontier - we're building the most powerful technology in history, but still can't reliably engineer or understand our models. With rapidly improving model capabilities, interpretability is more urgent…
RT @davidbau: Announcing a deep net interpretability talk series! Every week you will find new talks on recent research in the science of…
youtube.com
We're a research computing project cracking open the mysteries inside large-scale AI systems. The NSF National Deep Inference Fabric consists of a unique combination of hardware and software that...
RT @nickhjiang: What makes LLMs like Grok-4 unique? We use sparse autoencoders (SAEs) to tackle queries like these and apply them to four…
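
For context, a minimal sparse-autoencoder sketch of the kind this thread refers to, assuming the standard recipe: an overcomplete ReLU bottleneck trained to reconstruct model activations under an L1 sparsity penalty. The dimensions, data, and training loop below are illustrative stand-ins, not code from the thread:

import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model=64, d_hidden=512):
        super().__init__()
        # Overcomplete: many more features than activation dimensions.
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, acts):
        features = torch.relu(self.encoder(acts))  # sparse feature code
        return self.decoder(features), features

sae = SparseAutoencoder()
opt = torch.optim.Adam(sae.parameters(), lr=1e-3)
acts = torch.randn(4096, 64)  # stand-in for real model activations

for _ in range(200):
    recon, feats = sae(acts)
    # Reconstruction error plus L1 penalty on the feature activations.
    loss = (recon - acts).pow(2).mean() + 1e-3 * feats.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

The L1 term pushes most feature activations to zero, which is what makes individual features candidate human-interpretable "concepts" to inspect and compare across models.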
most of the team @GoodfireAI did something similar, myself included: quit whatever they were working on to build towards AI safety (for us, through interpretability).
impressed by igor's conviction to move on to build in AI safety. as we get closer to AGI, i expect more and more people to realize that nothing matters more than making sure we transition safely to a world with smarter-than-human intelligences.
Today was my last day at xAI, the company that I helped start with Elon Musk in 2023. I still remember the day I first met Elon; we talked for hours about AI and what the future might hold. We both felt that a new AI company with a different kind of mission was needed. Building…
RT @lightspeedvp: 🗓️ Mark your calendars for August 26 and join us for a #GenSF meetup covering mechanistic interpretability in modern AI m…
RT @EkdeepL: Super excited to be joining @GoodfireAI! I'll be scaling up the line of work our group started at Harvard: making predictive a…
enjoyed this post - both lasting improvements and safety will come from investing in interpretability.
For a @GoodfireAI/@AnthropicAI meet-up later this month, I wrote a discussion doc: Assessing skeptical views of interpretability research. Spoiler: it's an incredible moment for interpretability research. The skeptical views sound like a call to action to me. Link just below.
RT @GoodfireAI: Results are in from our internal gpt-oss hackathon! A quick roundup of what we found: (1/8).
RT @CurtTigges: Some neat results from hacking on gpt-oss at the Goodfire internal hackathon this week: 1. MoE experts are actually exp…
RT @jack_merullo_: Could we tell if gpt-oss was memorizing its training data? I.e., points where it's reasoning vs reciting? We took a quic…
for the @GoodfireAI hackathon, i built a tool to visualize which experts activate the most in gpt-oss! found that certain experts tend to fire in interpretable contexts, like business writing, poems, and code
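
A minimal sketch of what such an expert-activation counter could look like, using a toy stand-in router rather than gpt-oss itself; the model, sizes, and data flow below are illustrative assumptions, not the hackathon tool:

import torch
import torch.nn as nn

torch.manual_seed(0)

class ToyMoERouter(nn.Module):
    # Stand-in for the router inside one MoE block (not the real gpt-oss).
    def __init__(self, d_model=32, n_experts=8, top_k=2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)  # routing logits per token
        self.top_k = top_k

    def forward(self, x):
        # x: (n_tokens, d_model) -> indices of the top-k experts per token
        return self.gate(x).topk(self.top_k, dim=-1).indices

router = ToyMoERouter()
counts = torch.zeros(8, dtype=torch.long)

# Stand-in token activations; a real tool would get these from forward
# hooks on each MoE block's router while running labeled text through
# the model (e.g. business writing vs. poems vs. code).
tokens = torch.randn(256, 32)
idx = router(tokens).flatten()
counts += torch.bincount(idx, minlength=8)

print({f"expert_{i}": int(c) for i, c in enumerate(counts)})

Comparing these per-expert counts across text categories is what would surface experts that fire preferentially in one kind of context.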
come hear @banburismus_ talk about interp if you're in SF!
AI systems are becoming increasingly intelligent, but our understanding of why these systems behave the way they do remains limited. However, with mechanistic interpretability, researchers can take a look inside the black box and aim to uncover what drives the core mechanisms and…
super excited for this partnership - I think interp + @radicalai_inc materials expertise will unlock new ways of generating materials. maybe even inverse design.
AI already accelerates materials R&D, but understanding what models learn about structure-property relationships could yield greater efficiency, inverse design, & new scientific insights. That’s why we’re partnering with @radicalai_inc to bring interp to materials AI! (1/2).