
Tiancheng Hu
@tiancheng_hu
Followers: 991 · Following: 432 · Media: 32 · Statuses: 231
PhD student @CambridgeLTL @Cambridge_Uni. @Apple Scholar, @Gates_Cambridge Scholar. Previously @MSP_UTD @UT_Dallas @ETH_en @EPFL_en. Interested in NLP and CSS
Joined July 2021
SimBench: Benchmarking the Ability of Large Language Models to Simulate Human Behaviors, SRW Oral, Monday, July 28, 14:00-15:30
I will be presenting: iNews: A Multimodal Dataset for Modeling Personalized Affective Responses to News, Poster Session 1, Monday, July 28, 11:00-12:30; also at the LAW workshop
Heading to Vienna today to attend #ACL2025NLP! Let's chat if you are interested in LLM social simulation, personalization, character training and human-centered AI!
🗣️ Excited to share our new #ACL2025 Findings paper: “Just Put a Human in the Loop? Investigating LLM-Assisted Annotation for Subjective Tasks” with @jad_kabbara and @dkroy. Arxiv: https://t.co/FeWQLQxt5K Read about our findings ⤵️
LLM use in annotation is becoming widespread, and given LLMs' overall promising performance and speed, simply "reviewing" LLM annotations in interpretive tasks can be tempting. In subjective...
Centaur (a model of general cognition tuned on data from 160 multi-step psychology experiments) https://t.co/X6IFC29lbx
@marcel_binz @cpilab
This work complements other fantastic work and data in the space: Twin-2K-500 (2k individuals answering 500+ questions) https://t.co/ewmpEr1aWm, and Generative Agent Simulations of 1,000 People (2-hour interviews as seeds for simulation) https://t.co/ti24rBkco7
@joon_s_pk @msbernst
Our unique focus: we're not replicating static profiles (like survey answers). We're simulating a cognitive process - how an individual processes new information and reacts emotionally.
Working on LLM social simulation and need data? Excited to announce our iNews paper is accepted to #ACL2025! 🥳 It's a large-scale dataset for predicting individualized affective responses to real-world, multimodal news. https://t.co/w4qLZaUXKv 🤗 Data: huggingface.co
Ever notice how something that makes your blood boil barely registers with your friend? Our emotional reactions aren't universal at all—they're deeply personal. And AI needs to understand that. Excited to share our new paper: "iNews" 🧵 (1/8) https://t.co/w4qLZaUXKv
Join us at @emnlpmeeting for: "Tailoring AI: Exploring Active and Passive LLM Personalization" 🎯🧠 We ask: when should LLMs personalize, and what role do users play in LLM personalization? 📅 Deadline Aug. 1 📝 Details in thread 🧵👇 #EMNLP2025 #LLM #AI #personalization 1/5
🔥 We teach LLMs to say how confident they are on-the-fly during long-form generation. 🤩No sampling. No slow post-hoc methods. Not limited to short-form QA! ‼️Just output confidence in a single decoding pass. ✅Better calibration! 🚀 20× faster runtime. arXiv:2505.23912 👇
95 new scholars will form the Class of 2025, marking a quarter century of the scholarship's existence - https://t.co/vCu7LVm1xi
#GatesCambridge25 #scholarship @Cambridge_Uni @gatesfoundation @GatesAlumni
Extremely happy to share that our PhD student @tiancheng_hu received the Apple Scholars in AI/ML PhD Fellowship! 🎉 The fellowship will support his research on LLM-based simulation and LLM personalisation. Congratulations again, @tiancheng_hu! 🥳 https://t.co/hR3y5BPbAa
We created Approximate Likelihood Matching, a principled (and very effective) method for *cross-tokenizer distillation*! With ALM, you can create ensembles of models from different families, convert existing subword-level models to byte-level and a bunch more🧵
Most emotion detection models treat affect like a universal constant. But emotions are deeply personal. This paper has a dataset that captures affective responses to news posts along with annotator personas. A goldmine for personalization research. https://t.co/Iw8cK4XxOa
Super nice tool for understanding exactly which preferences we're aligning to, and the differences between models!
🕵🏻💬 Introducing Feedback Forensics: a new tool to investigate pairwise preference data. Feedback data is notoriously difficult to interpret and has many known issues – our app aims to help! Try it at https://t.co/4HubCg52Pi Three example use-cases 👇🧵
Hey everyone, I'm so excited to share my recent interview on Imagine while Reasoning in Space: Multimodal Visualization-of-Thought with @samcharrington for the @twimlai podcast. Check it out! https://t.co/TyFn1quHa5
Today, we're joined by @li_chengzu, PhD student at the @CambridgeLTL to discuss his recent paper, “Imagine while Reasoning in Space: Multimodal Visualization-of-Thought.” We explore the motivations behind MVoT, its connection to prior work like TopViewRS, and its relation to
iNews applications: • LLM personalization • Affective computing • Human behavior simulation • Social computing • and many more! (8/8) We are particularly grateful to @CamLangsci for funding support and special thanks to @gvrkiran.
Few-Shot: • "Early ascent phenomenon": performance dips with few examples, then improves • Persona info consistently helps, even at 32-shot (reaching 44.4% accuracy). • Image few-shot prompting scales worse than text, despite zero-shot advantage. (7/8)
Zero-Shot LLM Prediction: • Persona info boosts accuracy across models (up to 7% gain!). • Image inputs generally outperform text inputs in zero-shot. • Gemini 1.5 Pro + image + persona = best zero-shot performance (still only 40% accuracy though). (6/8)
These persona variables explain up to 15.2% of annotation variance—more than any existing subjective NLP dataset! Individual differences aren't noise—they're systematic patterns we can model. (5/8)
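The "variance explained" figure in the thread above can be made concrete. Below is a minimal, self-contained sketch of how persona variables' share of annotation variance can be quantified as the R² of a regression from persona features to ratings. All data here is synthetic and the two persona features are illustrative assumptions — this is not the iNews analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic annotators: two hypothetical persona variables (e.g. age,
# a political-leaning score) partly drive each annotator's affect rating.
n = 500
personas = rng.normal(size=(n, 2))
true_effect = personas @ np.array([0.4, -0.3])
ratings = true_effect + rng.normal(scale=1.0, size=n)  # plus idiosyncratic noise

# Ordinary least squares: ratings ~ intercept + persona features.
X = np.column_stack([np.ones(n), personas])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)
pred = X @ coef

# R^2 = share of annotation variance explained by persona variables.
ss_res = np.sum((ratings - pred) ** 2)
ss_tot = np.sum((ratings - ratings.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"variance explained: {r2:.1%}")
```

With stronger persona effects or less noise, R² rises; a figure like 15.2% means the remaining variance comes from factors the persona variables don't capture.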