
Khai Loong Aw
@khai_loong_aw
Followers: 266 · Following: 847 · Media: 6 · Statuses: 142
CS PhD @Stanford @NeuroAILab. AI, cognitive science, and neuroscience.
California, USA
Joined December 2022
Accepted to @COLM_conf! 🎉 Instruction-tuning helps LLMs produce more human-like responses. Does it also make LLMs 🤖 more similar to the human language system 🧠? Thread 🧵⬇️ Paper: w/ @SyrielleMontar1, @bkhmsi, mentored by @martin_schrimpf, @ABosselut.
Instruction-tuning Aligns LLMs to the Human Brain. Paper page: Instruction-tuning is a widely adopted method of finetuning that enables large language models (LLMs) to generate output that more closely resembles human responses to natural language
RT @xuanalogue: I've struggled to announce this amidst so much dark & awful going on in the world, but with 1mo to go, I wanted to share th…
RT @bkhmsi: 🚨New Preprint!! Thrilled to share with you our latest work: "Mixture of Cognitive Reasoners", a modular transformer architectu…
RT @bkhmsi: Excited to be at #NAACL2025 in Albuquerque! I'll be presenting our paper "The LLM Language Network" as an Oral tomorrow at 2:00…
RT @KlemenKotar: Sadly couldn't make it, but check out @GretaTuckute and my prelim work on "model connectomes", sparse initializations deriv…
RT @Rahul_Venkatesh: Check out our new paper on 3D scene understanding with autoregressive sequence modeling. Our framework unifies core ta…
RT @KlemenKotar: Excited to share our new paper! We introduce the first autoregressive model that natively handles: Novel view synthes…
RT @dyamins: New paper on 3D scene understanding for static images with a novel large-scale video prediction model.
RT @Rahul_Venkatesh: Excited to share our recent work on self-supervised discovery of motion concepts with counterfactual world modeling. I…
RT @sstj389: Extracting structure that's implicitly learned by video foundation models _without_ relying on labeled data is a fundamental c…
RT @dyamins: New paper on self-supervised optical flow and occlusion estimation from video foundation models. @sstj389 @jiajunwu_cs @SeKim…
RT @dyamins: We just finished up Winter quarter CS375: Large-Scale Neural Network Models for Neuroscience. Check out the publicly availabl…
RT @byungdoh: Submissions for the 2025 Workshop on Cognitive Modeling and Computational Linguistics are due Feb. 16. I humbly request your…
RT @ARTartaglini: 🚨 New paper at @NeurIPSConf w/ @Michael_Lepori! Most work on interpreting vision models focuses on concrete visual featu…
RT @akgokce0: Excited to share our latest preprint on scaling laws in primate vision modeling! We trained and analyzed 600+ neural networ…
RT @martin_schrimpf: Applications now open for the Summer@EPFL program -- 3-month fellowship for Bachelor/Master st…
RT @yule_gan: New paper at #NeurIPS2024! In which we try to make a *small yet interpretable* model work. We use decision trees, which offe…
RT @pliang279: 📣 Announcing the name and theme of my new research group at MIT @medialab @MITEECS: ***Multisensory Intelligence*** https:/…
RT @srush_nlp: Ev Fedorenko's Keynote at COLM. This talk is quite accessible for computer scientists interested i…
RT @COLM_conf: Today we are posting videos from COLM Day 1. These include keynotes from Chris Manning and a panel on Ethics and Society, as…
RT @alex_ander: Nice quick read with an important point: even if a model predicts brain data well it doesn't mean the model uses the same m…