Magic

@magicailabs

Followers: 16K
Following: 66
Media: 5
Statuses: 28

Long-context, test-time compute, and e2e Reinforcement Learning to build a superhuman coding agent (that then builds the rest of AGI for us). Join us https://t.co/hGZKtUzsR3

San Francisco
Joined April 2022
@magicailabs
Magic
1 year
LTM-2-Mini is our first model with a 100 million token context window. That’s 10 million lines of code, or 750 novels. Full blog: Evals, efficiency, and more ↓.
magic.dev
Research update on ultra-long context models, our partnership with Google Cloud, and new funding.
166
436
3K
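For reference, a rough back-of-envelope check of the comparison in the post above. The conversion factors (about 10 tokens per line of code and roughly 130k tokens per novel) are illustrative assumptions, not figures from Magic.

```python
# Hedged sketch: sanity-check the "10 million lines of code, or 750 novels"
# framing of a 100M-token context window, using assumed conversion factors.

CONTEXT_TOKENS = 100_000_000

TOKENS_PER_LOC = 10         # assumption: ~10 tokens per line of code
TOKENS_PER_NOVEL = 133_000  # assumption: ~100k words per novel at ~1.33 tokens/word

lines_of_code = CONTEXT_TOKENS / TOKENS_PER_LOC   # 10,000,000 lines
novels = CONTEXT_TOKENS / TOKENS_PER_NOVEL        # ~750 novels

print(f"~{lines_of_code:,.0f} lines of code, ~{novels:.0f} novels")
```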
@magicailabs
Magic
9 months
Excited to announce we’re building an Applied Team focused on post-training. Come explore what's possible with our new (and still unreleased) LTM2 models and their 100M token context window. Apply here:
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
18
8
112
@magicailabs
Magic
11 months
Very excited to welcome @nvidia as Magic's latest investor! With their support, we’re looking forward to scaling long context and inference-time compute.
9
8
152
@magicailabs
Magic
1 year
With context solved, we now focus on unbounded inference-time compute as the next (and potentially last) breakthrough we believe is needed to build reliable AGI. Imagine if you could spend $100 and 10 minutes on one task and reliably get a great pull request for an entire.
20
24
449
@magicailabs
Magic
1 year
Our LTM (Long Term Memory) mechanism needs >1,000x less compute and memory than Llama 3.1 405B’s attention. Llama 3.1 would need 638 H100s *per user* to store a 100M token KV cache. LTM needs a small fraction of one. SSMs, RNNs, and RAG all exploit weaknesses in evals like
22
27
398
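For reference, a back-of-envelope version of the 638-H100 figure in the post above: a minimal sketch assuming Llama 3.1 405B's published architecture (126 layers, 8 KV heads via grouped-query attention, head dimension 128), a 16-bit KV cache, and 80 GB per H100. It lands at roughly 645 H100s, close to the quoted 638; the exact number depends on those assumptions.

```python
# Hedged sketch: estimate KV-cache memory for a 100M-token context on
# Llama 3.1 405B. Layer/head counts follow the published 405B config;
# the 16-bit cache and 80 GB per H100 are assumptions.

layers = 126          # transformer layers
kv_heads = 8          # key/value heads (grouped-query attention)
head_dim = 128        # dimension per attention head
bytes_per_value = 2   # assumption: bf16 (2 bytes) per cached value
context_tokens = 100_000_000

# Both keys and values are cached, hence the leading factor of 2.
bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value  # ~0.5 MB/token
total_bytes = bytes_per_token * context_tokens                        # ~52 TB

h100_bytes = 80e9  # assumption: 80 GB HBM per H100
print(f"{total_bytes / 1e12:.1f} TB of KV cache ≈ {total_bytes / h100_bytes:.0f} H100s per user")
```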
@magicailabs
Magic
1 year
RT @EricSteinb: Very excited to welcome @karpathy as Magic's latest investor!
0
47
0
@magicailabs
Magic
2 years
RT @Hersh_Desai: The era of long context is upon us. The question is whether you want to be 1 of 1000 co-authors on the Gemini paper or 1 o….
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
0
6
0
@magicailabs
Magic
2 years
RT @Hersh_Desai: I have been continuously in awe of the brilliance, tenacity, and kindness of @EricSteinb and the small but mighty team at….
0
7
0
@magicailabs
Magic
2 years
RT @EricSteinb: I love my team a lot and sometimes it’s stressful but life has never been so fulfilling. If you want to build AGI on a smal….
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
0
12
0
@magicailabs
Magic
2 years
If you want to solve very hard problems to build safe AGI on a small team with thousands of GPUs, come join us:
magic.dev
Magic is an AI company that is working toward building safe AGI to accelerate humanity’s progress on the world’s most important problems.
4
5
41
@magicailabs
Magic
2 years
This round was led by @natfriedman & @danielgross with participation from @CapitalG and @eladgil, and will allow us to further scale up our models.
5
2
36
@magicailabs
Magic
2 years
We've raised $117M from @natfriedman and others to build an AI software engineer. Code generation is both a product and a path to AGI, requiring new algorithms, lots of CUDA, frontier-scale training, RL, and a new UI. We are hiring!
44
87
690
@magicailabs
Magic
2 years
RT @goodside: 5M tokens of context. Let that sink in. Yes, there's caveats. But consider what's to come:
- Entire codebases in prompts
- N….
0
81
0
@magicailabs
Magic
2 years
RT @EricSteinb: AI with long-term memory! *A lot* of work left to do but happy to share a little more about what we've been up to. It's b….
0
7
0
@magicailabs
Magic
2 years
What’s next? More compute. LTM Nets see more context than GPTs, but LTM-1 has fewer parameters than today’s frontier models, making it less smart. Knowing how drastically model scale improves the performance of GPTs, we're excited to see how far we can take LTM Nets.
0
3
58
@magicailabs
Magic
2 years
How? We tried to scale standard GPT context windows but quickly got stuck. So, we designed a new approach: the Long-term Memory Network (LTM Net). Training and serving LTM Nets required a custom ML stack, from GPU kernels to how we distribute the model across a cluster.
2
7
114
@magicailabs
Magic
2 years
Watch LTM-1 reuse and synthesize information across files:
2
6
52
@magicailabs
Magic
2 years
Watch LTM-1 generate complex suggestions:
1
8
79
@magicailabs
Magic
2 years
Meet LTM-1: LLM with *5,000,000 prompt tokens*. That's ~500k lines of code or ~5k files, enough to fully cover most repositories. LTM-1 is a prototype of a neural network architecture we designed for giant context windows.
52
181
1K