
Antoine Bosselut
@ABosselut
Followers: 4K · Following: 2K · Media: 46 · Statuses: 1K
Helping machines make sense of the world. Asst Prof @ICepfl; Before: @stanfordnlp @allen_ai @uwnlp @MSFTResearch #NLProc #AI
Joined March 2013
Check out Eric's new preprint on how we can do more reliable reasoning over long contexts of up to 128k tokens!
Can we meta-learn test-time learning to solve long-context reasoning? Our latest work, PERK, learns to encode long contexts through gradient updates to a memory scratchpad at test time, achieving long-context reasoning robust to complexity and length extrapolation while…
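The tweet above only gestures at the mechanism, so here is a minimal, heavily simplified sketch of what "gradient updates to a memory scratchpad at test time" could look like. The toy model, the function names (TinyLM, encode_context_at_test_time), the chunk size, and the step count are all illustrative assumptions, not PERK's actual implementation; see the preprint for the real method.

```python
# Hypothetical sketch: encode a long context into a small trainable "memory
# scratchpad" via test-time gradient steps, then answer while conditioning on it.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB, DIM, MEM_SLOTS = 1000, 64, 8

class TinyLM(nn.Module):
    """Toy LM that conditions on a prepended memory scratchpad."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.rnn = nn.GRU(DIM, DIM, batch_first=True)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, memory):
        # memory: (1, MEM_SLOTS, DIM), prepended as a "soft" context
        x = torch.cat([memory, self.embed(tokens)], dim=1)
        h, _ = self.rnn(x)
        return self.head(h[:, MEM_SLOTS:])  # logits for the real tokens only

def encode_context_at_test_time(model, context, chunk=128, steps=3, lr=1e-2):
    """Gradient-update a fresh memory scratchpad so it absorbs the long context."""
    memory = nn.Parameter(torch.zeros(1, MEM_SLOTS, DIM))
    opt = torch.optim.SGD([memory], lr=lr)
    for _ in range(steps):
        for start in range(0, context.size(1) - 1, chunk):
            piece = context[:, start:start + chunk + 1]
            if piece.size(1) < 2:
                continue
            logits = model(piece[:, :-1], memory)          # next-token prediction
            loss = F.cross_entropy(logits.reshape(-1, VOCAB),
                                   piece[:, 1:].reshape(-1))
            opt.zero_grad()
            loss.backward()
            opt.step()
    return memory.detach()

# Usage: freeze the base model, encode a stand-in "long context", answer a query.
model = TinyLM()
for p in model.parameters():
    p.requires_grad_(False)                                # only the memory is updated
long_context = torch.randint(0, VOCAB, (1, 1024))
memory = encode_context_at_test_time(model, long_context)
query = torch.randint(0, VOCAB, (1, 16))
with torch.no_grad():
    answer_logits = model(query, memory)                   # answers condition on the scratchpad
```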
RT @eric_zemingchen: Can we meta-learn test-time learning to solve long-context reasoning? Our latest work, PERK, learns to encode long…
RT @zamir_ar: We benchmarked leading multimodal foundation models (GPT-4o, Claude 3.5 Sonnet, Gemini, Llama, etc.) on standard computer vis…
RT @SkanderMoalla: Big time! We can finally do LLM RL fine-tuning with rewards and leverage offline/off-policy data! You want rewards,…
RT @zamir_ar: We open-sourced the codebase of Flextok. Flextok is an image tokenizer that produces flexible-length token sequences and repr…
RT @cndesabbata: Huge thanks to my incredible coauthors for making this possible: @tedsumers, @bkhmsi, @abosselut, @cocosci_lab. Check ou…
arxiv.org
Being prompted to engage in reasoning has emerged as a core technique for using large language models (LLMs), deploying additional inference-time compute to improve task performance. However, as...
Check out @silin_gao's collaborative paper on reinforcing abstract thinking in #Reasoning traces!
NEW PAPER ALERT: Recent studies have shown that LLMs often lack robustness to distribution shifts in their reasoning. Our paper proposes a new method, AbstRaL, to augment LLMs' reasoning robustness by promoting their abstract thinking with granular reinforcement learning.
RT @megamor2: What makes some jailbreak suffixes stronger than others? We looked into the inner workings of GCG-like attacks and found a c…
RT @Cohere_Labs: Global MMLU is revolutionizing multilingual AI. Recognized by Stanford HAI and adopted by top labs, it's the benchmark…
Check out Badr's work on specializing experts in MoE-style models to individually represent the operation of different brain networks.
New Preprint!! Thrilled to share with you our latest work: "Mixture of Cognitive Reasoners", a modular transformer architecture inspired by the brain's functional networks: language, logic, social reasoning, and world knowledge. 1/
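The tweet describes the architecture only at a high level, so here is a small, hypothetical sketch of one way such a layer could look: one feed-forward expert per "cognitive network" plus a learned per-token router. The names (CognitiveExpertLayer, the four network labels) and the dense soft routing are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch: a transformer sub-layer with one expert per brain-inspired
# network and a per-token router that mixes their outputs.
import torch
import torch.nn as nn

DIM = 64
NETWORKS = ["language", "logic", "social", "world_knowledge"]

class CognitiveExpertLayer(nn.Module):
    def __init__(self, dim=DIM):
        super().__init__()
        # One feed-forward expert per assumed "cognitive network".
        self.experts = nn.ModuleDict({
            name: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for name in NETWORKS
        })
        self.router = nn.Linear(dim, len(NETWORKS))   # per-token mixing weights

    def forward(self, x):                              # x: (batch, seq, dim)
        weights = self.router(x).softmax(dim=-1)       # (batch, seq, n_experts)
        outs = torch.stack([self.experts[n](x) for n in NETWORKS], dim=-1)
        return x + (outs * weights.unsqueeze(-2)).sum(-1)  # residual soft mixture

# Usage: each token is routed across the four "networks".
x = torch.randn(2, 10, DIM)
y = CognitiveExpertLayer()(x)
```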
RT @negarforoutan: Got ideas for making LLMs more inclusive and culturally aware? Submit to #MELT Workshop 2025! We're all about multili…
RT @Noah_Xu_: LLMs secretly anchoring on context START for memorization? YES! Our new paper reveals "Positional Fragility" through a la…
RT @EPFL_AI_Center: Many AI models speak dozens of languages, but do they grasp cultural context? The INCLUDE benchmark from EPFL's NLP…
actu.epfl.ch
A team of international researchers led by EPFL developed a multilingual benchmark to determine Large Language Models' ability to grasp cultural context.
RT @ICepfl: EPFL researchers have discovered key "units" in large AI models that seem to be important for language, mirroring the brain's l…
RT @bkhmsi: Excited to be at #NAACL2025 in Albuquerque! I'll be presenting our paper "The LLM Language Network" as an Oral tomorrow at 2:00…
RT @mismayilsoy: I am attending @naacl this week to present our paper. Come check out our poster at 14:00, Apr 30 in Hall 3. @HaleSirin_…
RT @bryanklow: The following 3 events at @NUSingapore are part of the Singapore AI Research Week ( that is held in…
RT @dyfan22: AI is in legal hot water. Lawsuits over copyrighted training data are mounting, and content owners are pulling out fast. To…
RT @bryanklow: We are pleased to invite you to the NUS (@NUSingapore)-Swiss AI Workshop on Wed April 23 (a day before @iclr_conf #ICLR2025 in S…
RT @silin_gao: NEW PAPER ALERT: Generating visual narratives to illustrate textual stories remains an open challenge, due to the lack of kn…