Tiancheng Hu

@tiancheng_hu

Followers: 945 · Following: 377 · Media: 32 · Statuses: 221

PhD student @CambridgeLTL @Cambridge_Uni. @Apple Scholar, @Gates_Cambridge Scholar. Previously @MSP_UTD @UT_Dallas @ETH_en @EPFL_en. Interested in NLP and CSS

Joined July 2021
@tiancheng_hu
Tiancheng Hu
10 days
Centaur (a model of general cognition fine-tuned on data from 160 multi-step psychology experiments) @marcel_binz @cpilab.
0
0
0
@tiancheng_hu
Tiancheng Hu
10 days
This work complements other fantastic work and data in the space: Twin-2K-500 (2,000 individuals answering 500+ questions) and Generative Agent Simulations of 1,000 People (2-hour interviews as seeds for simulation) @joon_s_pk @msbernst.
1
0
0
@tiancheng_hu
Tiancheng Hu
10 days
Our unique focus: we're not replicating static profiles (like survey answers). We're simulating a cognitive process - how an individual processes new information and reacts emotionally.
1
0
0
@tiancheng_hu
Tiancheng Hu
10 days
Working on LLM social simulation and need data? Excited to announce our iNews paper is accepted to #ACL2025! 🥳 It's a large-scale dataset for predicting individualized affective responses to real-world, multimodal news. 🤗 Data:
@tiancheng_hu
Tiancheng Hu
4 months
Ever notice how something that makes your blood boil barely registers with your friend? Our emotional reactions aren't universal at all—they're deeply personal. And AI needs to understand that. Excited to share our new paper: "iNews" 🧵 (1/8)
[image]
2
7
31
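For anyone who wants to try the data: a minimal sketch of loading an annotator-level dataset like iNews from the Hugging Face Hub. The dataset id and field names below are hypothetical placeholders for illustration only; check the actual dataset card for the real identifiers.

```python
# Minimal sketch: loading an annotator-level affect dataset from the Hugging Face Hub.
# The dataset id "tiancheng-hu/iNews" and the field names below are assumptions for
# illustration, not the published identifiers.
from datasets import load_dataset

dataset = load_dataset("tiancheng-hu/iNews", split="train")  # hypothetical id

# Each row pairs one news post with one annotator's response and profile.
for example in dataset.select(range(3)):
    print(example.get("post_text"),      # the news post (text side of the screenshot)
          example.get("annotator_id"),   # which annotator produced this label
          example.get("emotion_label"))  # that annotator's individual affective rating
```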
@tiancheng_hu
Tiancheng Hu
1 month
RT @pals_nlp_wrkshp: Join us at @emnlpmeeting for "Tailoring AI: Exploring Active and Passive LLM Personalization" 🎯🧠. To answer, when s…
0
16
0
@tiancheng_hu
Tiancheng Hu
2 months
RT @caiqizh: 🔥 We teach LLMs to say how confident they are on-the-fly during long-form generation. 🤩 No sampling. No slow post-hoc methods…
0
22
0
@tiancheng_hu
Tiancheng Hu
3 months
RT @Gates_Cambridge: 95 new scholars will form the Class of 2025, marking a quarter century of the scholarship's existence - .
0
4
0
@tiancheng_hu
Tiancheng Hu
4 months
RT @CambridgeLTL: Extremely happy to share that our PhD student @tiancheng_hu received the Apple Scholars in AI/ML PhD Fellowship! 🎉 The fe….
0
3
0
@tiancheng_hu
Tiancheng Hu
4 months
RT @bminixhofer: We created Approximate Likelihood Matching, a principled (and very effective) method for *cross-tokenizer distillation*!….
0
28
0
@tiancheng_hu
Tiancheng Hu
4 months
RT @gvrkiran: Most emotion detection models treat affect like a universal constant. But emotions are deeply personal. This paper has a d….
0
5
0
@tiancheng_hu
Tiancheng Hu
4 months
Super nice tool for understanding exactly what preferences we're aligning to, and the differences between models!
@arduinfindeis
Arduin Findeis
4 months
🕵🏻💬 Introducing Feedback Forensics: a new tool to investigate pairwise preference data. Feedback data is notoriously difficult to interpret and has many known issues – our app aims to help! Try it at Three example use-cases 👇🧵
0
0
3
@tiancheng_hu
Tiancheng Hu
4 months
RT @li_chengzu: Hey everyone, I'm so excited to share my recent interview on Imagine while Reasoning in Space: Multimodal Visualization-of-….
0
6
0
@tiancheng_hu
Tiancheng Hu
4 months
iNews applications:
• LLM personalization
• Affective computing
• Human behavior simulation
• Social computing
• and many more! (8/8)

We are particularly grateful to @CamLangsci for funding support and special thanks to @gvrkiran.
0
1
2
@tiancheng_hu
Tiancheng Hu
4 months
Few-Shot:
• "Early ascent phenomenon": performance dips with few examples, then improves
• Persona info consistently helps, even at 32-shot (reaching 44.4% accuracy)
• Image few-shot prompting scales worse than text, despite zero-shot advantage. (7/8)
[image]
1
0
0
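The few-shot setup above can be pictured as in-context (headline, emotion) examples from the same annotator prepended to a persona description. The sketch below is a hedged illustration: the field names, wording, and shot format are assumptions, not the paper's exact prompt.

```python
# Minimal sketch of persona-conditioned few-shot prompting. Persona fields and
# prompt wording are illustrative assumptions.
def build_few_shot_prompt(persona: dict, shots: list, target_headline: str) -> str:
    persona_block = (
        f"Annotator profile: age {persona['age']}, gender {persona['gender']}, "
        f"political leaning {persona['politics']}."
    )
    # In-context examples: previous (headline, emotion) pairs from the SAME annotator,
    # so the model can pick up that person's idiosyncratic reactions.
    shot_lines = [f"Headline: {s['headline']}\nEmotion: {s['emotion']}" for s in shots]
    return (
        persona_block + "\n\n"
        + "\n\n".join(shot_lines)
        + f"\n\nHeadline: {target_headline}\nEmotion:"
    )

# Example with 2 shots; the tweet above reports scaling this up to 32 shots.
prompt = build_few_shot_prompt(
    {"age": 34, "gender": "female", "politics": "moderate"},
    [{"headline": "Local shelter saves 200 dogs", "emotion": "joy"},
     {"headline": "City council raises parking fines", "emotion": "anger"}],
    "New study links sleep loss to memory decline",
)
print(prompt)
```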
@tiancheng_hu
Tiancheng Hu
4 months
Zero-Shot LLM Prediction:
• Persona info boosts accuracy across models (up to 7% gain!)
• Image inputs generally outperform text inputs in zero-shot
• Gemini 1.5 Pro + image + persona = best zero-shot performance (still only 40% accuracy though). (6/8)
[image]
1
0
0
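The zero-shot comparison above boils down to issuing the same prediction request with and without a persona block. A minimal sketch, with illustrative field names and wording rather than the paper's exact prompts:

```python
# Sketch of the zero-shot ablation: no-persona baseline vs. persona-conditioned prompt.
def zero_shot_prompt(headline: str, persona: dict | None = None) -> str:
    prefix = ""
    if persona is not None:
        prefix = (
            f"You are predicting the reaction of a {persona['age']}-year-old "
            f"{persona['gender']} reader with {persona['politics']} political views.\n\n"
        )
    return prefix + f"Headline: {headline}\nWhich emotion does this reader feel most strongly?"

headline = "Parliament passes emergency budget overnight"
print(zero_shot_prompt(headline))                                # no-persona baseline
print(zero_shot_prompt(headline, {"age": 62, "gender": "male",
                                  "politics": "conservative"}))  # persona-conditioned
```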
@tiancheng_hu
Tiancheng Hu
4 months
These persona variables explain up to 15.2% of annotation variance—more than any existing subjective NLP dataset! Individual differences aren't noise—they're systematic patterns we can model. (5/8)
1
0
0
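One standard way to arrive at a "variance explained" figure like the 15.2% above is to regress individual annotations on the persona variables and report R². The sketch below uses toy data and generic encoding choices; it is not necessarily the exact procedure from the paper.

```python
# Generic sketch: fraction of annotation variance explained by persona variables,
# measured as the R^2 of a linear regression. Toy data stands in for the real table.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({
    "intensity": [4, 1, 5, 2, 3, 5],                              # individual emotion ratings
    "age":       [25, 61, 33, 47, 29, 52],
    "gender":    ["f", "m", "f", "m", "f", "m"],
    "politics":  ["left", "right", "left", "center", "center", "right"],
})

X = pd.get_dummies(df[["age", "gender", "politics"]], drop_first=True)  # encode categoricals
y = df["intensity"]

r2 = LinearRegression().fit(X, y).score(X, y)  # share of annotation variance explained
print(f"Variance explained by persona variables: {r2:.1%}")
```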
@tiancheng_hu
Tiancheng Hu
4 months
What makes iNews unique? We don't aggregate responses. We capture personal reactions AND collect comprehensive annotator characteristics (e.g., demographics, personality, media habits). (4/8)
1
0
0
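The non-aggregated design described above can be pictured as one record per (post, annotator) pair that carries both the individual label and that annotator's characteristics. The field names below are illustrative assumptions, not the dataset's actual schema.

```python
# Sketch of a non-aggregated record: the individual label is kept alongside the
# annotator's characteristics instead of being collapsed into a majority vote.
from dataclasses import dataclass

@dataclass
class INewsRecord:
    post_id: str            # which news post was shown
    annotator_id: str       # which annotator responded
    emotion: str            # that annotator's own affective label (not a majority vote)
    intensity: int          # self-reported intensity of the reaction
    # Annotator characteristics collected alongside the label:
    age: int
    gender: str
    political_leaning: str
    personality_big5: dict  # e.g. {"openness": 4, "neuroticism": 2}
    media_habits: dict      # e.g. {"news_frequency": "daily"}

record = INewsRecord(
    post_id="post_0017", annotator_id="ann_042", emotion="anger", intensity=4,
    age=29, gender="female", political_leaning="liberal",
    personality_big5={"openness": 4, "neuroticism": 2},
    media_habits={"news_frequency": "daily"},
)
print(record.emotion, record.political_leaning)
```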
@tiancheng_hu
Tiancheng Hu
4 months
We're introducing iNews: a large-scale dataset capturing the inherent subjectivity of how people respond emotionally to real news content. 2,899 Facebook posts (screenshots, so multimodal!) × 291 diverse annotators = rich, subjective affective data. (3/8)
[image]
1
0
0
@tiancheng_hu
Tiancheng Hu
4 months
Current AI systems are often trained with the assumption that we all feel the same about content, but psychology shows we don't. Our emotions vary by age, gender, personality, politics & countless other factors. (2/8)
1
0
0
@tiancheng_hu
Tiancheng Hu
4 months
Ever notice how something that makes your blood boil barely registers with your friend? Our emotional reactions aren't universal at all—they're deeply personal. And AI needs to understand that. Excited to share our new paper: "iNews" 🧵 (1/8)
[image]
1
2
10