Kushin Mukherjee

@kushin_m

Followers: 486 · Following: 21K · Media: 151 · Statuses: 2K

Postdoctoral researcher @StanfordPsych | prev PhD @UWPsych, research intern @Apple  | He/Him

Palo Alto, CA
Joined November 2016
@kushin_m
Kushin Mukherjee
1 year
So stoked to have this journal club highlight piece out in @NatRevPsych! 🌟✏️ I provide a glimpse into one of my favorite papers by @judyefan & co., relating sketch production to object recognition! Current piece: Fan et al. (2018):
3 replies · 8 reposts · 38 likes
@kushin_m
Kushin Mukherjee
8 days
RT @Miamiamia0103: Calling all digital artists 🧑‍🎨 Have you ever forgotten to put objects on separate layers? Introducing InkLayer, a segm…
0 replies · 34 reposts · 0 likes
@kushin_m
Kushin Mukherjee
13 days
RT @dyamins: Come to our CCN workshop!
0 replies · 4 reposts · 0 likes
@kushin_m
Kushin Mukherjee
21 days
RT @fredhohman: Come try the fastest and easiest embedding visualization yet! 📊
0 replies · 1 repost · 0 likes
@kushin_m
Kushin Mukherjee
21 days
Happening now at Salon 3!!
@kushin_m
Kushin Mukherjee
22 days
RT @dyamins: It was a steep climb in the "early days" (~2012) up the gradient of better ImageNet categorization towards better visual syste…
0 replies · 6 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
We hope this is continued evidence that AI and cog sci can mutually support each other! Thanks to our awesome research team of @siddsuresh97, Tyler Giallanza, Xizheng Yu, Mia Patil, Jon Cohen, and Tim Rogers! Paper 📜 - Code 🧑‍💻 - (end)
[Link card: github.com/Knowledge-and-Concepts-Lab/llm-norms-cogsci2025]
0 replies · 0 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
Stay tuned as we scale this approach up for a wider array of features! Also! This work received both the computational modeling prize in Applied Cognition @ CogSci 🏆 AND a Best Paper Award at the Workshop on Bidirectional Human-AI Alignment @ ICLR 🥇 earlier this year! (10/11)
1 reply · 0 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
Not only did judgments align more often with the AI-enhanced NOVA choice than with the human-only norms choice, but people also preferred NOVA choices over the decisions that embeddings like FastText predicted! NOVA best predicts people's semantic similarity judgments. (9/11)
1 reply · 0 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
To put NOVA to the test, we had subjects complete a triadic similarity judgment task where one option was ‘correct’ in terms of human-only norms and the other was correct in terms of the enhanced NOVA norms. Which norm dataset would people’s decisions be more aligned with? (8/11)
1 reply · 0 reposts · 0 likes
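One way to make the triad setup in (8/11) concrete: score each option against the reference concept using a norm dataset's feature vectors, and call the closer option that dataset's 'correct' answer. The cosine rule, the triad_choice helper, and the toy vectors below are illustrative assumptions, not necessarily how the paper scores triads.

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def triad_choice(norms, reference, option_a, option_b):
    # The norm dataset's 'correct' answer: whichever option's feature
    # vector is closer to the reference concept
    sim_a = cosine(norms[reference], norms[option_a])
    sim_b = cosine(norms[reference], norms[option_b])
    return option_a if sim_a >= sim_b else option_b

# Hypothetical feature vectors over (has_wheels, has_a_neck, is_alive, is_metal)
norms = {
    "car":   np.array([1.0, 0.0, 0.0, 1.0]),
    "truck": np.array([1.0, 0.0, 0.0, 1.0]),
    "tiger": np.array([0.0, 1.0, 1.0, 0.0]),
}
print(triad_choice(norms, reference="car", option_a="tiger", option_b="truck"))

Running the same rule under two different norm datasets (human-only vs. NOVA) and checking which prediction participants' choices track is the comparison described in the next tweet of the thread.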
@kushin_m
Kushin Mukherjee
23 days
In addition to the high d' scores, visual inspection shows that NOVA (right) exhibits a greater degree of categorical structure among concepts than the human-only ratings (left). (7/11)
1 reply · 0 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
We find that the resulting concept × feature matrix has much higher feature density (purple histograms) than the sparse human-only matrix (yellow bars), in terms of both the number of features per concept and the number of concepts that share features. (6/11)
1 reply · 0 reposts · 0 likes
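As a concrete reading of the two density summaries in (6/11), here is a minimal sketch assuming the norms are stored as a binary concept × feature matrix; the toy matrix and its labels are made up for illustration.

import numpy as np

# Toy binary concept x feature matrix: rows = concepts, columns = features
M = np.array([
    [1, 1, 0, 1],   # e.g. "car"
    [1, 0, 1, 1],   # e.g. "truck"
    [0, 1, 1, 0],   # e.g. "tiger"
])

features_per_concept = M.sum(axis=1)   # how many features each concept has
concepts_per_feature = M.sum(axis=0)   # how many concepts share each feature

print("features per concept:", features_per_concept)
print("concepts sharing each feature:", concepts_per_feature)
print("overall density:", M.mean())    # fill rate of the whole matrix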
@kushin_m
Kushin Mukherjee
23 days
We take a signal detection approach on a subset of concept-feature ratings where we have solid ground truth, and find that verifying judgments from open-source models with only a fraction of judgments from a frontier model (GPT-4o) yields solid d' measures from LLMs. (5/11)
1 reply · 0 reposts · 0 likes
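For readers who want the arithmetic behind (5/11), here is a minimal sketch of a d' computation, assuming the LLM's feature verifications are scored as a detector of ground-truth concept-feature pairs. The hit/false-alarm bookkeeping and the small-sample correction are illustrative choices, not the paper's exact pipeline.

from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # d' = z(hit rate) - z(false-alarm rate); the +0.5/+1 terms are a
    # log-linear correction that keeps rates away from 0 and 1
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Example: a model endorses 90 of 100 true concept-feature pairs
# and 20 of 100 false ones
print(d_prime(hits=90, misses=10, false_alarms=20, correct_rejections=80))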
@kushin_m
Kushin Mukherjee
23 days
So we decided to take the best of both worlds and created NOVA (Norms Optimized Via AI), a norm dataset that combines human and language model judgments to get the reliability of human ratings with the benefits of scale that LLMs confer. (4/11)
1 reply · 0 reposts · 0 likes
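A rough sketch of the kind of human + LLM combination (4/11) describes, under the assumption that LLM-proposed concept-feature pairs are added to the human norms only if they pass a verification step. merge_norms and verify are hypothetical names for illustration; this is not NOVA's actual pipeline.

def merge_norms(human_norms, llm_candidates, verify):
    # human_norms / llm_candidates: dict mapping concept -> set of features;
    # keep an LLM-proposed feature only if it survives verification
    merged = {c: set(feats) for c, feats in human_norms.items()}
    for concept, feats in llm_candidates.items():
        for feat in feats:
            if verify(concept, feat):
                merged.setdefault(concept, set()).add(feat)
    return merged

# Toy example with a stand-in verifier
human = {"tiger": {"has stripes"}}
llm = {"tiger": {"has a neck", "has wings"}}
print(merge_norms(human, llm, verify=lambda concept, feat: feat != "has wings"))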
@kushin_m
Kushin Mukherjee
23 days
Three broad challenges have hampered the creation of large-scale semantic norm datasets: (1) coverage over concepts is sparse, (2) coverage over features and the density of concept-feature ratings are sparse, and (3) LLM-only norms hallucinate and could be unreliable. (3/11)
1 reply · 0 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
Semantic feature norms – datasets of concepts and the features they possess – are notoriously difficult to curate at scale. We asked whether we can use language models in a verifiable way to create a feature norm dataset that is predictive of human behavior. (2/11)
1 reply · 0 reposts · 1 like
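Read most simply, the feature norms in (2/11) are a mapping from concepts to the features people list for them; a minimal sketch, with made-up entries rather than items from any published norm set:

# Hypothetical entries illustrating the data structure only
feature_norms = {
    "car":   {"has wheels", "has an engine", "is used for transport"},
    "tiger": {"has stripes", "is a predator"},
}

# "has a neck" is true of tigers but rarely listed in production norms,
# which is the sparsity problem the thread is about
print("has a neck" in feature_norms["tiger"])   # False under human-only norms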
@kushin_m
Kushin Mukherjee
23 days
Do cars have wheels? Of course! Do tigers have necks? Of course! While folks know both these facts, they’re not likely to mention the latter. To learn what implications this has for how we measure semantic knowledge, come to our talk T-09-1 on Thursday at 2:15 pm @ #CogSci2025 🧵
1 reply · 7 reposts · 29 likes
@kushin_m
Kushin Mukherjee
23 days
RT @allisonchen_227: Does how we talk about AI matter? Yes, yes it does! In our #Cogsci2025 paper, we explore how different messages affect…
0 replies · 13 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
RT @ErikBrockbank: "36 Questions That Lead To Love" was the most viewed article in NYT Modern Love. Excited to share new results investigat…
0 replies · 3 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
RT @verona_teo: Excited to share our new work at #CogSci2025! We explore how people plan deceptive actions, and how detectives try to see…
0 replies · 11 reposts · 0 likes
@kushin_m
Kushin Mukherjee
23 days
RT @kristinexzheng: Linking student psychological orientation, engagement & learning in intro college-level data science. New work @cogsci_…
0 replies · 4 reposts · 0 likes