
Michi Yasunaga (@michiyasunaga)
Followers: 4K · Following: 751 · Media: 48 · Statuses: 310
RT @ren_hongyu: Check out the latest open models. Absolutely no competitor of the same scale. Towards intelligence too cheap to meter. http….
RT @zhaofeng_wu: Robust reward models are critical for alignment/inference-time algos, auto eval, etc. (e.g. to prevent reward hacking whic….
RT @gh_marjan: As Vision-Language Models (VLMs) grow more powerful, we need better reward models to align them with human intent. But how….
🔗 Check out the benchmark here: This is a joint work with @gh_marjan and @LukeZettlemoyer at @AIatMeta. Huge thanks to all who gave us feedback and support. [4/4].
github.com: facebookresearch/multimodal_rewardbench — Multimodal RewardBench.
RT @JunhongShen1: Introducing Content-Adaptive Tokenizer (CAT) 🐈! An image tokenizer that adapts token count based on image complexity, off….
RT @WeijiaShi2: Introducing 𝐋𝐥𝐚𝐦𝐚𝐅𝐮𝐬𝐢𝐨𝐧: empowering Llama 🦙 with diffusion 🎨 to understand and generate text and images in arbitrary sequen….
RT @liliyu_lili: We scaled up Megabyte and ended up with a BLT! A pure byte-level model has a steeper scaling law than the BPE-based mod….
RT @__JohnNguyen__: 🥪New Paper! 🥪Introducing Byte Latent Transformer (BLT) - A tokenizer-free model that scales better than BPE-based models wit….
RT @gh_marjan: Everyone’s talking about synthetic data generation — but what’s the recipe for scaling it without model collapse? 🤔. Meet AL….
This is a joint work with @gh_marjan, Leonid Shamis, @violet_zct, @andrew_e_cohen, @jaseweston, @LukeZettlemoyer at @AIatMeta. Huge thanks to the collaborators and all who gave us feedback and support. [6/6].
RT @AkariAsai: 🚨 I’m on the job market this year! 🚨 I’m completing my @uwcse Ph.D. (2025), where I identify and tackle key LLM limitations….