Magic

@magicailabs

Followers: 16K
Following: 66
Media: 5
Statuses: 28

Long-context, test-time compute, and e2e Reinforcement Learning to build a superhuman coding agent (that then builds the rest of AGI for us). Join us https://t.co/hGZKtUzsR3

San Francisco
Joined April 2022
@magicailabs
Magic
1 year
LTM-2-Mini is our first model with a 100 million token context window. That’s 10 million lines of code, or 750 novels. Full blog: https://t.co/oFz4A9ynVZ Evals, efficiency, and more ↓
magic.dev
Research update on ultra-long context models, our partnership with Google Cloud, and new funding.
169
430
3K
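As a rough sanity check of the conversions in the announcement, the sketch below assumes ~10 tokens per line of code and ~133k tokens per novel; both averages are illustrative assumptions, not figures from Magic's post.

```python
# Back-of-envelope check of the 100M-token claims in the tweet.
# tokens_per_line and tokens_per_novel are assumed averages, not
# numbers taken from Magic's blog post.
context_tokens = 100_000_000
tokens_per_line = 10           # assumed average for source code
tokens_per_novel = 133_000     # assumed: ~100k words at ~1.33 tokens/word

print(f"{context_tokens / tokens_per_line:,.0f} lines of code")  # ~10,000,000
print(f"{context_tokens / tokens_per_novel:,.0f} novels")        # ~750
```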
@magicailabs
Magic
1 year
Excited to announce we’re building an Applied Team focused on post-training. Come explore what's possible with our new (and still unreleased) LTM2 models and their 100M token context window. Apply here:
19
8
113
@magicailabs
Magic
1 year
Very excited to welcome @nvidia as Magic's latest investor! With their support, we’re looking forward to scaling long context and inference-time compute.
9
8
153
@magicailabs
Magic
1 year
With context solved, we now focus on unbounded inference-time compute as the next (and potentially last) breakthrough we believe is needed to build reliable AGI. Imagine if you could spend $100 and 10 minutes on one task and reliably get a great pull request for an entire …
20
24
457
@magicailabs
Magic
1 year
Our LTM (Long Term Memory) mechanism needs >1,000x less compute and memory than Llama 3.1 405B’s attention. Llama 3.1 would need 638 H100s *per user* to store a 100M token KV cache. LTM needs a small fraction of one. SSMs, RNNs, and RAG all exploit weaknesses in evals like …
22
28
402
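The "638 H100s per user" figure can be roughly reproduced from publicly reported Llama 3.1 405B shapes. The sketch below assumes 126 layers, 8 KV heads of dimension 128, a bf16 cache, and 80 GB of HBM per H100; these constants are assumptions for illustration, not numbers quoted from the thread.

```python
# Rough reproduction of the KV-cache sizing behind the "638 H100s" claim.
# Model shapes and GPU memory below are assumed, not quoted from the thread.
layers, kv_heads, head_dim, bytes_per_value = 126, 8, 128, 2  # bf16 cache
context_tokens = 100_000_000
h100_hbm_bytes = 80e9

kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # K and V
cache_bytes = kv_bytes_per_token * context_tokens

print(f"{kv_bytes_per_token / 1e6:.2f} MB per token")      # ~0.52 MB
print(f"{cache_bytes / 1e12:.1f} TB for 100M tokens")      # ~52 TB
print(f"{cache_bytes / h100_hbm_bytes:.0f} H100s of HBM")  # ~645, same order as the tweet's 638
```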
@EricSteinb
Eric Steinberger
2 years
Very excited to welcome @karpathy as Magic's latest investor!
40
47
1K
@Hersh_Desai
Hersh Desai
2 years
The era of long context is upon us. The question is whether you want to be 1 of 1000 co-authors on the Gemini paper or 1 of <20 building at …
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
@JeffDean
Jeff Dean
2 years
Needle in a Haystack tests: The tech report also details a number of microbenchmark “needle in a haystack” tests (modeled after @GregKamradt’s https://t.co/Hms5EalX1L) that probe the model’s ability to retrieve specific information from its context. For text, Gemini 1.5 Pro …
3
6
89
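For readers unfamiliar with the setup, the sketch below shows the basic shape of a needle-in-a-haystack probe: one out-of-place fact is buried at a chosen depth in filler text and the model is asked to retrieve it. `query_model` is a hypothetical stand-in for whatever long-context model is being evaluated, not an API from any of the posts above.

```python
# Minimal needle-in-a-haystack probe, in the spirit of the tests the tweet
# describes. `query_model` is a hypothetical callable: prompt in, text out.
def make_haystack(needle: str, n_filler: int, depth: float) -> str:
    filler = ["The sky was a pleasant shade of blue that afternoon."] * n_filler
    filler.insert(int(depth * n_filler), needle)  # bury the needle at the given depth
    return " ".join(filler)

def run_probe(query_model, n_filler: int = 5_000, depth: float = 0.5) -> bool:
    needle = "The secret passphrase is 'indigo-42'."
    context = make_haystack(needle, n_filler, depth)
    answer = query_model(context + "\n\nWhat is the secret passphrase?")
    return "indigo-42" in answer  # did the model retrieve the buried fact?
```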
@Hersh_Desai
Hersh Desai
2 years
I have been continuously in awe of the brilliance, tenacity, and kindness of @EricSteinb and the small but mighty team at https://t.co/n6hRyDIIir. So much so that we've decided to invest $100m! If you're interested in building the future, please do reach out to me or the team!
@natfriedman
Nat Friedman
2 years
https://t.co/jAMj9pAun4 has trained a groundbreaking model with many millions of tokens of context that performed far better in our evals than anything we've tried before. They're using it to build an advanced AI programmer that can reason over your entire codebase and the …
6
7
70
@EricSteinb
Eric Steinberger
2 years
I love my team a lot and sometimes it’s stressful but life has never been so fulfilling. If you want to build AGI on a small team of people who care a lot with thousands of GPUs, please apply :)
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
@magicailabs
Magic
2 years
We've raised $117M from @natfriedman and others to build an AI software engineer. Code generation is both a product and a path to AGI, requiring new algorithms, lots of CUDA, frontier-scale training, RL, and a new UI. We are hiring!
19
12
182
@magicailabs
Magic
2 years
If you want to solve very hard problems to build safe AGI on a small team with thousands of GPUs, come join us: https://t.co/xHaNwMszLA!
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
4
5
39
@magicailabs
Magic
2 years
This round was led by @natfriedman & @danielgross with participation from @CapitalG and @eladgil, and will allow us to further scale up our models.
5
2
36
@magicailabs
Magic
2 years
We've raised $117M from @natfriedman and others to build an AI software engineer. Code generation is both a product and a path to AGI, requiring new algorithms, lots of CUDA, frontier-scale training, RL, and a new UI. We are hiring!
44
86
685
@goodside
Riley Goodside
3 years
5M tokens of context. Let that sink in. Yes, there's caveats. But consider what's to come:
- Entire codebases in prompts
- Novel-length spec docs as instructions
- k-shots where k = 10K
- Few-shots where each "shot" is 50K LoC → diff
Those who declared the imminent death of …
@magicailabs
Magic
3 years
Meet LTM-1: LLM with *5,000,000 prompt tokens* That's ~500k lines of code or ~5k files, enough to fully cover most repositories. LTM-1 is a prototype of a neural network architecture we designed for giant context windows.
19
81
622
@natfriedman
Nat Friedman
3 years
👀 https://t.co/jAMj9pAun4 showing a sneak peek of their 5M token context code model.
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
@magicailabs
Magic
3 years
Meet LTM-1: LLM with *5,000,000 prompt tokens* That's ~500k lines of code or ~5k files, enough to fully cover most repositories. LTM-1 is a prototype of a neural network architecture we designed for giant context windows.
4
23
142
@EricSteinb
Eric Steinberger
3 years
AI with long-term memory! *A lot* of work left to do but happy to share a little more about what we've been up to. It's been incredibly fulfilling to work with a wonderful team and the trust of our backers towards this milestone. Thank you for the opportunity <3
@magicailabs
Magic
3 years
Meet LTM-1: LLM with *5,000,000 prompt tokens* That's ~500k lines of code or ~5k files, enough to fully cover most repositories. LTM-1 is a prototype of a neural network architecture we designed for giant context windows.
13
7
105
@magicailabs
Magic
3 years
What’s next? More compute. LTM Nets see more context than GPTs, but LTM-1 has fewer parameters than today’s frontier models, making it less smart. Knowing how drastically model scale improves the performance of GPTs, we're excited to see how far we can take LTM Nets.
0
3
57
@magicailabs
Magic
3 years
How? We tried to scale standard GPT context windows but quickly got stuck. So, we designed a new approach: the Long-term Memory Network (LTM Net). Training and serving LTM Nets required a custom ML stack, from GPU kernels to how we distribute the model across a cluster.
2
7
113
@magicailabs
Magic
3 years
Watch LTM-1 reuse and synthesize information across files:
2
6
51
@magicailabs
Magic
3 years
Watch LTM-1 generate complex suggestions:
1
8
79
@magicailabs
Magic
3 years
Meet LTM-1: LLM with *5,000,000 prompt tokens* That's ~500k lines of code or ~5k files, enough to fully cover most repositories. LTM-1 is a prototype of a neural network architecture we designed for giant context windows.
52
179
1K