
Faeze Brahman (@faeze_brh)
Followers: 2K · Following: 4K · Media: 44 · Statuses: 737
Research Scientist @allen_ai | Prev. Postdoc @allen_ai @uw | Ph.D. from UCSC | Former Intern @MSFTResearch, @allen_ai | Researcher in #NLProc, #ML, #AI
Seattle, WA · Joined May 2018
RT @ABosselut: The next generation of open LLMs should be inclusive, compliant, and multilingual by design. That's why we (@EPFL @ETH_en) b…
0 replies · 20 reposts · 0 likes
RT @2plus2make5: 🚨 New postdoc position in our lab @Berkeley_EECS! 🚨 (please retweet + share with relevant candidates). We seek applicants…
0 replies · 45 reposts · 0 likes
RT @patrickqdasilva: 🚨 Participants wanted! 🚨 We're looking for feedback on our new multi-domain research proposal evaluator. Be first to…
0 replies · 12 reposts · 0 likes
RT @jackclarkSF: This is a genuinely smart funding decision by NSF! @allen_ai has produced a bunch of valuable, widely-used models, regular…
0 replies · 16 reposts · 0 likes
RT @HannaHajishirzi: Huge thanks to @NSF & @NVIDIA for a $152M grant to support us to build the next Hubble Telescope of AI. We'll push the…
0 replies · 14 reposts · 0 likes
RT @HannaHajishirzi: Check out MolmoAct, @allen_ai's newest fully open model that can see and act 🤩
0 replies · 1 repost · 0 likes
RT @ericmitchellai: > GPT-5 is the first series of models that actually doesn't hallucinate basically at all. *real-world utility-maxxing i…
0 replies · 101 reposts · 0 likes
RT @yuntiandeng: New dataset release: WildChat-4.8M. 4.8M real user-ChatGPT conversations collected from our public chatbots:
- 122K from…
huggingface.co
0 replies · 51 reposts · 0 likes
RT @Alibaba_Qwen: 🚀 Introducing Qwen3-4B-Instruct-2507 & Qwen3-4B-Thinking-2507 – smarter, sharper, and 256K-ready!
🔹 Instruct: Boosted ge…
0 replies · 402 reposts · 0 likes
RT @ABosselut: The EPFL NLP lab is looking to hire a postdoctoral researcher on the topic of designing, training, and evaluating multilingu…
0 replies · 22 reposts · 0 likes
RT @PedramHosseini: 🔥 New Gemini models just landed on the Medical Sphere! We've added three to the lineup:
Gemini 2.5 Pro: state-of-the…
0 replies · 12 reposts · 0 likes
RT @Wade_Yin9712: Scaling environments would be a key direction to training *GENERALIST* agents across 🤖 physical world and 💻 digital wor…
0 replies · 3 reposts · 0 likes
RT @liweijianglw: 🥳🥳🥳 Join us at the tutorial "Guardrails and Security for LLMs: Safe, Secure, and Controllable Steering of LLM Applicatio…"
0 replies · 7 reposts · 0 likes
RT @MehulDamani2: 🚨 New Paper! 🚨 We trained reasoning LLMs to reason about what they don't know. o1-style reasoning training improves accura…
0 replies · 266 reposts · 0 likes
RT @liujc1998: Happy to present OLMoTrace at #ACL2025NLP next week!! If you stop by the demo session on Tuesday, July 29, 10:30am-12pm,…
0 replies · 12 reposts · 0 likes
Lasha is an amazing mentor! Go work with her 🥳
Quoting: Life update: I'm excited to share that I'll be starting as faculty at the Max Planck Institute for Software Systems (@mpi_sws_) this Fall! I'll be recruiting PhD students in the upcoming cycle, as well as research interns throughout the year:
1 reply · 1 repost · 14 likes
RT @StellaLisy: WHY do you prefer something over another? Reward models treat preference as a black-box 😶‍🌫️ but human brains 🧠 decompose deci…
0 replies · 75 reposts · 0 likes
RT @g_k_swamy: Recent work has seemed somewhat magical: how can RL with *random* rewards make LLMs reason? We pull back the curtain on thes…
0 replies · 72 reposts · 0 likes
RT @allen_ai: Introducing FlexOlmo, a new paradigm for language model training that enables the co-development of AI through data collabora…
0 replies · 73 reposts · 0 likes