Kianté Brantley Profile
Kianté Brantley

@xkianteb

Followers: 2K · Following: 2K · Media: 12 · Statuses: 3K

Assistant Professor at Harvard | Fitness enthusiast | (He/Him/His)

Joined May 2009
@xkianteb
Kianté Brantley
6 days
Wow.
@Walid_Magdy
Walid Magdy 🇵🇸
7 days
Where do the first authors come from in #ACL2025 compared to #ACL2024? The rise of the East and the significant decline of the West! From the opening slides of @aclmeeting
[Image attached]
0 · 0 · 1
@xkianteb
Kianté Brantley
8 days
RT @andre_t_martins: The sparsemax paper has now reached 1000 citations and it keeps bearing fruit. Two recent sparse attention examples: long-…
0 · 4 · 0
@xkianteb
Kianté Brantley
9 days
RT @robertarail: I’m building a new team at @GoogleDeepMind to work on Open-Ended Discovery! We’re looking for strong Research Scientists…
0 · 260 · 0
@xkianteb
Kianté Brantley
11 days
RT @MehulDamani2: 🚨New Paper!🚨 We trained reasoning LLMs to reason about what they don't know. o1-style reasoning training improves accura…
0 · 268 · 0
@xkianteb
Kianté Brantley
11 days
RT @jacobandreas: 👉 New preprint! Today, many of the biggest challenges in LM post-training aren't just about correctness, but rather consiste…
0 · 11 · 0
@xkianteb
Kianté Brantley
15 days
RT @polynoamial: Today, we at @OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO wi…
0 · 544 · 0
@xkianteb
Kianté Brantley
17 days
RT @WenSun1: How can small LLMs match or even surpass frontier models like DeepSeek R1 and o3 Mini in math competition (AIME & HMMT) reason…
0 · 9 · 0
@xkianteb
Kianté Brantley
17 days
RT @kaiwenw_ai: I’m presenting two papers on value-based RL for post-training & reasoning on Friday at @ai4mathworkshop at #ICML2025! 1️⃣ Q…
0 · 16 · 0
@xkianteb
Kianté Brantley
18 days
RT @hankyang94: Sharing a project that’s kept me excited for months: Five years ago, I tried projecting a 10000×10000 symmetric matrix ont…
0 · 36 · 0
@xkianteb
Kianté Brantley
18 days
RT @ysu_nlp: Huan and I are looking for a postdoc to join us on agent research (broadly defined: planning, reasoning, safety, memory, conti…
0 · 15 · 0
@xkianteb
Kianté Brantley
18 days
RT @WenSun1: Does RL actually learn positively under random rewards when optimizing Qwen on MATH? Is Qwen really that magical such that eve…
0 · 14 · 0
@xkianteb
Kianté Brantley
1 month
RT @yoavartzi: Check out our LMLM, our take on what is now being called a "cognitive core" (as far as branding goes, this one is not bad) can…
[Link card: arxiv.org] Neural language models are black-boxes -- both linguistic patterns and factual knowledge are distributed across billions of opaque parameters. This entangled encoding makes it difficult to...
0 · 7 · 0
@xkianteb
Kianté Brantley
1 month
RT @JohnCLangford: A new opening for multimodal model research. Please apply if interested.
0 · 11 · 0
@xkianteb
Kianté Brantley
1 month
RT @ShamKakade6: 1/6 Infinite-dim SGD in linear regression is the strawman model for studying scaling laws, critical batch sizes, and LR sc…
0 · 30 · 0
@xkianteb
Kianté Brantley
1 month
RT @owenoertell: Tired of over-optimized generations that stray too far from the base distribution? We present SLCD: Supervised Learning ba…
0 · 10 · 0
@xkianteb
Kianté Brantley
1 month
RT @WenSun1: Instead of formalizing reward-guided fine-tuning of diffusion models as (discrete or even continuous) MDPs and then using RL or c…
0 · 4 · 0
@xkianteb
Kianté Brantley
2 months
RT @WenSun1: A simple and efficient approach to RL for generative policies! Prior work typically requires massively extending the RL horizo…
0 · 5 · 0
@xkianteb
Kianté Brantley
2 months
RT @nico_espinosa_d: By incorporating self-consistency during offline RL training, we unlock three orthogonal directions of scaling: 1. ef…
0 · 16 · 0