Paco Guzmán

@guzmanhe

339 Followers · 114 Following · 8 Media · 403 Statuses

Researcher in Language Technologies

San Francisco, CA
Joined March 2009
@GarrettLord
Garrett Lord
4 months
🚀 GPT‑5 is here: deep reasoning + lightning speed. The leap toward AGI just got real. Recap + takeaways in thread.
22
21
79
@guzmanhe
Paco Guzmán
4 months
Who is coming to ACL? Let's meet!
1
1
5
@guzmanhe
Paco Guzmán
4 months
@joinHandshake
Handshake
4 months
"We're unlocking opportunities by participating at the forefront of what frontier AI labs are doing." @GarrettLord, sharing the latest on Handshake AI with @mamoonha and @Joubinmir on @KPGrit. Full episode links below.
1
0
5
@guzmanhe
Paco Guzmán
4 months
I’ve recently joined Handshake. We’re hiring!
0
0
13
@joinHandshake
Handshake
5 months
Introducing Handshake AI—the most ambitious chapter in our story. We leverage the scale of the largest early career network to source, train, and manage domain experts who test and challenge frontier models to failure for the top AI labs.
13
12
115
@arena
lmarena.ai
8 months
BREAKING: Meta's Llama 4 Maverick just hit #2 overall - becoming the 4th org to break 1400+ on Arena!🔥

Highlights:
- #1 open model, surpassing DeepSeek
- Tied #1 in Hard Prompts, Coding, Math, Creative Writing
- Huge leap over Llama 3 405B: 1268 → 1417
- #5 under style control
@AIatMeta
AI at Meta
8 months
Today is the start of a new era of natively multimodal AI innovation. Today, we’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout • 17B-active-parameter model
79
373
2K
@Ahmad_Al_Dahle
Ahmad Al-Dahle
8 months
Introducing our first set of Llama 4 models! We’ve been hard at work doing a complete re-design of the Llama series. I’m so excited to share it with the world today and mark another major milestone for the Llama herd as we release the *first* open source models in the Llama 4
320
922
6K
@shishirpatil_
Shishir Patil
1 year
🦙 Excited to release LLAMA-3.3! 405B performance at 70B. Check it out !!
@Ahmad_Al_Dahle
Ahmad Al-Dahle
1 year
Introducing Llama 3.3 – a new 70B model that delivers the performance of our 405B model but is easier & more cost-efficient to run. By leveraging the latest advancements in post-training techniques, including online preference optimization, this model improves core performance at
1
5
34
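Since the tweet above credits "online preference optimization" for Llama 3.3's gains, a bare-bones PyTorch sketch of a preference-optimization loss may help make the idea concrete. This shows the standard offline DPO objective for illustration only; the online variant referenced in the tweet additionally samples preference pairs from the current policy, and none of this code comes from the Llama team.

```python
# Hedged sketch: the DPO loss in plain PyTorch. This is the common OFFLINE
# formulation, shown only to illustrate preference optimization in general;
# "online preference optimization" as mentioned in the tweet additionally
# samples the preference pairs from the current policy.
import torch
import torch.nn.functional as F

def dpo_loss(
    policy_chosen_logps: torch.Tensor,    # log pi(y_w | x) under the policy
    policy_rejected_logps: torch.Tensor,  # log pi(y_l | x) under the policy
    ref_chosen_logps: torch.Tensor,       # log pi_ref(y_w | x) under the frozen reference
    ref_rejected_logps: torch.Tensor,     # log pi_ref(y_l | x) under the frozen reference
    beta: float = 0.1,                    # strength of the implicit KL-style regularization
) -> torch.Tensor:
    # Implicit reward margins: how much more the policy prefers each response
    # than the frozen reference model does.
    chosen_margin = policy_chosen_logps - ref_chosen_logps
    rejected_margin = policy_rejected_logps - ref_rejected_logps
    # Maximize the log-sigmoid of the scaled margin difference, i.e. push the
    # policy to prefer chosen over rejected responses more than the reference.
    return -F.logsigmoid(beta * (chosen_margin - rejected_margin)).mean()
```

The `beta` hyperparameter controls how strongly the policy is tethered to the reference model: larger values penalize deviation more.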
@hikushalhere
Kushal Lakhotia
1 year
Quantized Llama 3.2 1B/3B models are here! Blazing fast CPU inference at ~50 tokens/sec for 1B & ~20 tokens/sec for 3B while being competitive on quality with the respective bf16 versions. Very proud of the team. Can't wait to see what developers build with the foundation models.
@AIatMeta
AI at Meta
1 year
We want to make it easier for more people to build with Llama, so today we're releasing new quantized versions of Llama 3.2 1B & 3B that deliver up to 2-4x increases in inference speed, an average 56% reduction in model size, and a 41% reduction in memory footprint. Details
0
1
1
@Ahmad_Al_Dahle
Ahmad Al-Dahle
1 year
On device and small models are a really important part of the Llama herd, so we are introducing quantized versions with significantly increased speed. These models deliver a 2-3x speedup – that is fast! Add a link if you want to share what you are building with Llama!
5
18
109
@guzmanhe
Paco Guzmán
1 year
We've just released new quantized Llama 3.2 models. The 1B runs at ~50 tokens/s on a mobile CPU. The best thing? Minimal quality degradation. Read all about it in @sacmehtauw's post
@sacmehtauw
Sachin
1 year
We’ve released QUANTIZED Llama 3.2 1B/3B models.
⚡️ FAST and EFFICIENT: 1B decodes at ~50 tok/s on a MOBILE PHONE CPU.
⚡️ As ACCURATE as full-precision models.
⚡️ Ready to CONSUME on mobile devices.
Looking forward to the on-device experiences these models will enable! Read more👇
0
1
9
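To make the quantization claims above concrete, here is a minimal sketch of running a small Llama in 4-bit precision with Hugging Face transformers and bitsandbytes. This is a generic post-training quantization path shown for illustration, not the quantization-aware scheme behind the official quantized Llama 3.2 checkpoints; the model ID is an assumption and requires gated access approval.

```python
# Hedged sketch: generic 4-bit quantized inference with transformers + bitsandbytes.
# Illustrates the idea of running a small Llama with a reduced memory footprint;
# this is NOT the pipeline used for the official quantized Llama 3.2 releases.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.2-1B-Instruct"  # assumed model ID; gated repo

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # store weights in 4-bit precision
    bnb_4bit_compute_dtype=torch.bfloat16,   # compute in bf16 to preserve quality
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)

inputs = tokenizer("Quantization trades a little accuracy for", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```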
@ariG23498
Aritra
1 year
My favourite bit about the Llama 3.2 release is the small models. Both the 1B and 3B, despite being quite small, are very capable. While there are benchmarks that prove my point, I took them for a spin on something totally different: assisted decoding. [1/N] 🧵⤵️
12
56
702
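Assisted (speculative) decoding, the use case in the thread above, pairs a small draft model that proposes tokens with a larger target model that verifies them, often cutting latency without changing the sampled distribution. A minimal sketch using the `assistant_model` argument of transformers' `generate` follows; the specific model pairing is an assumption for illustration.

```python
# Hedged sketch: assisted decoding, where a small draft model proposes tokens
# and the target model verifies them in one forward pass per step.
# Model IDs are illustrative placeholders (both are gated repos).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "meta-llama/Llama-3.1-8B-Instruct"  # assumed target model
draft_id = "meta-llama/Llama-3.2-1B-Instruct"   # assumed small draft model

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype=torch.bfloat16, device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("Small models can act as drafters because", return_tensors="pt").to(target.device)

# assistant_model enables assisted generation: the draft proposes several
# candidate tokens, and the target accepts or rejects them cheaply.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that this simple form requires the draft and target to share a tokenizer, which the Llama 3.1 and 3.2 families do.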
@enmalik
Nahiyan
1 year
Excited to share Llama 3.2! It’s beyond gratifying opening up AI to the world. We’re looking forward to seeing how the community will use these accessible, lightweight, on-device 1B and 3B models. https://t.co/Fmz4nNu1Ji
0
2
10
@shishirpatil_
Shishir Patil
1 year
💥 LLAMA Models: 1B IS THE NEW 8B 💥 📢 Thrilled to open-source LLAMA-1B and LLAMA-3B models today. Trained on up to 9T tokens, we break many new benchmarks with the new family of LLAMA models. Jumping right from my PhD at Berkeley, to train these models at @AIatMeta has been an
25
88
600
@Ahmad_Al_Dahle
Ahmad Al-Dahle
1 year
With today’s launch of our Llama 3.1 collection of models we’re making history with the largest and most capable open source AI model ever released. 128K context length, multilingual support, and new safety tools. Download 405B and our improved 8B & 70B here.
67
223
1K
@ThomasScialom
Thomas Scialom
1 year
The team worked really hard to make history. Voilà, finally the Llama-3.1 herd of models... have fun with it!
* open 405B, insane 70B
* 128K context length, improved reasoning & coding capabilities
* detailed paper https://t.co/0PNiMir9co
3
20
108
@astonzhangAZ
Aston Zhang
1 year
Our Llama 3.1 405B is now openly available! After a year of dedicated effort, from project planning to launch reviews, we are thrilled to open-source the Llama 3 herd of models and share our findings through the paper: 🔹Llama 3.1 405B, continuously trained with a 128K context
130
587
3K
@edunov
Sergey Edunov
1 year
2
7
56
@guzmanhe
Paco Guzmán
1 year
Finally, this happened :)
@costajussamarta
Marta R. Costa-jussa
1 year
It has been a long team journey, and our NLLB work is now published in Nature. Proud of having been part of successfully scaling translation to 200 languages:
4
0
23