Paco Guzmán (@guzmanhe)

Followers: 327 · Following: 113 · Media: 8 · Statuses: 402

Researcher in Language Technologies

San Francisco, CA
Joined March 2009
Paco Guzmán (@guzmanhe) · 5 days
RT @GarrettLord: 🚀 GPT‑5 is here: deep reasoning + lightning speed. The leap toward AGI just got real. Recap + takeaways in thread. https:…
0 · 27 · 0
Paco Guzmán (@guzmanhe) · 19 days
Who is coming to ACL? Let's meet!
0 · 1 · 4
Paco Guzmán (@guzmanhe) · 19 days
Handshake (@joinHandshake) · 22 days
"We're unlocking opportunities by participating at the forefront of what frontier AI labs are doing." @GarrettLord, sharing the latest on Handshake AI with @mamoonha and @Joubinmir on @KPGrit. Full episode links below.
1 · 0 · 3
Paco Guzmán (@guzmanhe) · 1 month
I've recently joined Handshake. We're hiring!
0 · 0 · 12
Paco Guzmán (@guzmanhe) · 2 months
RT @joinHandshake: Introducing Handshake AI, the most ambitious chapter in our story. We leverage the scale of the largest early career netw…
0 · 12 · 0
Paco Guzmán (@guzmanhe) · 4 months
RT @lmarena_ai: BREAKING: Meta's Llama 4 Maverick just hit #2 overall - becoming the 4th org to break 1400+ on Arena! 🔥 Highlights: - #1 op…
0 · 378 · 0
Paco Guzmán (@guzmanhe) · 4 months
RT @Ahmad_Al_Dahle: Introducing our first set of Llama 4 models! We've been hard at work doing a complete re-design of the Llama series. I…
0 · 934 · 0
Paco Guzmán (@guzmanhe) · 8 months
RT @shishirpatil_: 🦙 Excited to release LLAMA-3.3! 405B performance at 70B. Check it out!
0 · 5 · 0
Paco Guzmán (@guzmanhe) · 10 months
RT @hikushalhere: Quantized Llama 3.2 1B/3B models are here! Blazing fast CPU inference at ~50 tokens/sec for 1B & ~20 tokens/sec for 3B wh…
0 · 1 · 0
Paco Guzmán (@guzmanhe) · 10 months
RT @Ahmad_Al_Dahle: On-device and small models are a really important part of the Llama herd, so we are introducing quantized versions with…
0 · 18 · 0
Paco Guzmán (@guzmanhe) · 10 months
We've just released new quantized Llama 3.2 models. The 1B runs at ~50 tokens/s on a mobile CPU. The best thing? Minimal quality degradation. Read all about it in @sacmehtauw's post.
Sachin (@sacmehtauw) · 10 months
We've released QUANTIZED Llama 3.2 1B/3B models. ⚡️FAST and EFFICIENT: 1B decodes at ~50 tok/s on a MOBILE PHONE CPU. ⚡️As ACCURATE as full-precision models. ⚡️Ready to CONSUME on mobile devices. Looking forward to the on-device experiences these models will enable! Read more👇
0 · 1 · 9
Paco Guzmán (@guzmanhe) · 11 months
RT @ariG23498: My favourite bit about the Llama 3.2 release is the small models. Both 1B and 3B, despite being quite small, are very capable…
0 · 56 · 0
Paco Guzmán (@guzmanhe) · 11 months
RT @enmalik: Excited to share Llama 3.2! It's beyond gratifying opening up AI to the world. We're looking forward to seeing how the communi…
0 · 2 · 0
Paco Guzmán (@guzmanhe) · 11 months
RT @shishirpatil_: 💥 LLAMA Models: 1B IS THE NEW 8B 💥 📢 Thrilled to open-source LLAMA-1B and LLAMA-3B models today. Trained on up to 9T to…
0 · 90 · 0
Paco Guzmán (@guzmanhe) · 1 year
RT @Ahmad_Al_Dahle: With today's launch of our Llama 3.1 collection of models we're making history with the largest and most capable open s…
0 · 222 · 0
Paco Guzmán (@guzmanhe) · 1 year
RT @ThomasScialom: The team worked really hard to make history; voilà, finally the Llama-3.1 herd of models. Have fun with it! * open 405…
0 · 16 · 0
Paco Guzmán (@guzmanhe) · 1 year
RT @astonzhangAZ: Our Llama 3.1 405B is now openly available! After a year of dedicated effort, from project planning to launch reviews, we…
0 · 588 · 0
Paco Guzmán (@guzmanhe) · 1 year
RT @edunov:
0 · 7 · 0
Paco Guzmán (@guzmanhe) · 1 year
Finally, this happened :)
Marta R. Costa-jussa (@costajussamarta) · 1 year
It has been a long team journey, and our NLLB work is now published in Nature. Proud of having been part of successfully scaling translation to 200 languages:
4 · 0 · 23