
Yichen Jiang
@YichenJiang9
Followers
727
Following
219
Media
22
Statuses
135
Newly hooded PhD at UNC-Chapel Hill (@uncnlp) | @Apple AI/ML PhD Fellow | #NLProc | Working on Compositionality.
Chapel Hill, NC
Joined June 2019
RT @yilin_sung: 🚀 New Paper: RSQ: Learning from Important Tokens Leads to Better Quantized LLMs. We show that not all tokens should be trea….
0
39
0
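(Editor's aside on the retweet above: a minimal toy sketch of the "weight important tokens more" intuition behind RSQ. The symmetric round-to-nearest quantizer, the per-token importance scores, and the weighted reconstruction error below are all hypothetical stand-ins for illustration, not the paper's actual algorithm.)

import torch

def quantize_symmetric(w: torch.Tensor, n_bits: int = 4) -> torch.Tensor:
    # Toy round-to-nearest symmetric quantizer for a weight matrix.
    qmax = 2 ** (n_bits - 1) - 1
    scale = w.abs().max() / qmax
    return torch.round(w / scale).clamp(-qmax, qmax) * scale

def token_weighted_error(w, w_q, x, token_importance):
    # Reconstruction error of a linear layer, with each token's contribution
    # scaled by a nonnegative importance score -- the core intuition that
    # "not all tokens should be treated equally".
    #   w, w_q:            (d_out, d_in) original / quantized weights
    #   x:                 (n_tokens, d_in) calibration activations
    #   token_importance:  (n_tokens,) hypothetical per-token scores
    per_token_err = (x @ (w - w_q).T).pow(2).sum(dim=-1)
    weights = token_importance / token_importance.sum()
    return (weights * per_token_err).sum()

torch.manual_seed(0)
w = torch.randn(8, 16)
x = torch.randn(32, 16)
importance = torch.rand(32)   # hypothetical importance, e.g. from attention mass
print(token_weighted_error(w, quantize_symmetric(w), x, importance).item())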
Check out these other awesome works from my labmates! I will present my poster virtually on Aug 22 --> "Inducing Systematicity in Transformers by Attending to Structurally Quantized Embeddings (". Detailed thread: #ACL2024nlp.
🚨 Check out an exciting batch of papers this week at #ACL2024! Say hi to some of our awesome students & collaborators who are attending in person, and feel free to ask about our postdoc openings too 🙂 Topics:
-- multi-agent reasoning collaboration
-- structured
0
8
19
RT @swarnaNLP: 🚨 New: my last PhD paper 🚨. Introducing System-1.x, a controllable planning framework with LLMs. It draws inspiration from D….
0
67
0
RT @mohitban47: Having a great time at #LxMLS in Lisbon + meeting awesome people & exploring the beautiful city 🙂 (highly recommended ML sc….
0
26
0
RT @swarnaNLP: We are going to present MAGDi at #ICML2024. If you are attending, say hi to @EliasEskin and @mohitban47 to know more about t….
0
6
0
RT @shoubin621: Check out 2 useful updates on CREMA! 🚨. (1a) A new modality-sequential modular training for generalizable and efficient rea….
0
20
0
Welcome to UNC NLP! I’m sure you will have a lot of fun doing interesting projects and living in a warmer place full of great college sports matches 😀 Best of luck!
🥳 Some of the first papers I read at the start of my ML journey were @mohitban47's papers on multimodal language understanding, and after a great couple of years at Northeastern working on vision-language, I'm excited to be joining his lab at @uncnlp as a PhD student to work on
1
2
8
Check out this work by my labmates on how to keep LLMs from being overly confident in bad answers. Spoiler 🚨: they made the model less confident on wrong answers and more confident on correct ones.
🚨 Excited to share our new work on **confidence calibration** in LLMs! LLMs are often badly calibrated & overconfident, explicitly (e.g. "I'm 100% sure") and implicitly, e.g. by giving details in an authoritative tone. We address both w/ a pragmatic speaker-listener multi-agent method. 🧵
1
4
10
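(Editor's sketch for the calibration thread above: a minimal, hypothetical illustration of the speaker-listener intuition -- a speaker states an answer with an explicit confidence, a listener independently judges it, and the final confidence blends the two so an authoritative tone alone can't inflate it. The calibrate, toy_speaker, and toy_listener callables are invented for illustration; they are not the paper's actual method or any real LLM API.)

from typing import Callable, Tuple

def calibrate(question: str,
              speaker: Callable[[str], Tuple[str, float]],
              listener: Callable[[str, str], float],
              alpha: float = 0.5) -> Tuple[str, float]:
    # Speaker proposes an answer with its own stated confidence.
    answer, stated_conf = speaker(question)
    # Listener gives an independent estimate of how likely the answer is correct.
    judged_conf = listener(question, answer)
    # Final confidence blends the two, discounting pure overconfidence.
    return answer, alpha * stated_conf + (1 - alpha) * judged_conf

def toy_speaker(q):        # overconfident by construction
    return "Paris", 0.99

def toy_listener(q, a):    # more skeptical second opinion
    return 0.7

print(calibrate("What is the capital of France?", toy_speaker, toy_listener))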
RT @jaeh0ng_yoon: 🚨New paper👉RACCooN: remove/add/change video content effortlessly/interactively via our MLLM+Video Diffusion (V2P2V) frame….
0
33
0
RT @shoubin621: 🚨 Introducing VideoTree! Captioning + LLMs can perform well on long-video QA, but dense frame captioning leads to inefficie….
0
50
0
🎉 Excited to announce that SQ-Transformer is accepted to #ACL2024nlp! We induce systematicity & achieve stronger generalization in Transformers (w/o pretraining on complex data) by structurally quantizing word embeddings & regularizing attention outputs. @XiangZhou14 @mohitban47.
We show Transformers generalize on complex data by using shared attention patterns for similar structures. BUT how can we avoid overfitting on low-complexity data? 🚨 SQ-Transformer explicitly quantizes embeddings structurally & learns systematic attention. 🧵
1
23
54
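(Editor's sketch for the SQ-Transformer announcement above: the general mechanics of snapping word embeddings to a small shared codebook can be illustrated with a generic vector-quantization step plus a straight-through estimator. The codebook size, distance metric, and straight-through trick below are standard VQ assumptions, not the paper's exact structural quantization or its attention regularizer.)

import torch

def quantize_embeddings(emb: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # Snap each embedding to its nearest codebook vector so that structurally
    # similar words share a discrete code; the straight-through trick keeps
    # gradients flowing to the continuous embeddings during training.
    #   emb:      (n_tokens, d) continuous word embeddings
    #   codebook: (k, d) learned code vectors shared across similar words
    dists = torch.cdist(emb, codebook)        # (n_tokens, k) pairwise distances
    codes = dists.argmin(dim=-1)              # nearest code per token
    quantized = codebook[codes]
    return emb + (quantized - emb).detach()   # forward: quantized; backward: identity

torch.manual_seed(0)
emb = torch.randn(5, 8, requires_grad=True)
codebook = torch.randn(4, 8)
quantize_embeddings(emb, codebook).sum().backward()
print(emb.grad.shape)                         # gradients reach the embeddings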
RT @swarnaNLP: Agentic workflows with LLMs are now getting popular for solving complex tasks! In one of the early works on this topic -- R….
0
9
0
Also, after 10 unforgettable years at Chapel Hill, 2 of them generously sponsored by the @Apple Scholars in AIML PhD Fellowship, "I'm going to take my talents to Seattle and join Apple AIML". I will continue to do research in efficient and safe AI that generalizes compositionally.
Last weekend, I graduated from @unccs, 10 years after I wrote my first line of code in COMP 116. I'm super grateful to my advisor @mohitban47, labmates, intern mentors, and many others. Y'all can see how excited I was as I threw my cap out of the frame to the 2nd floor.
7
5
49
Last weekend, I graduated from @unccs, 10 years after I wrote my first line of code in COMP 116. I'm super grateful to my advisor @mohitban47, labmates, intern mentors, and many others. Y'all can see how excited I was as I threw my cap out of the frame to the 2nd floor.
🎉🎓 Congratulations to these awesome new+old MURGeLab graduates on their hooding ceremony --> PhDs @peterbhase @yichenjiang9 @adyasha10 @swarnanlp (+ last year's @byryuer and @xiangzhou14, who joined us for this year's commencement) & MS @abhayzala7 🥳 Was a fun celebration
8
5
45
RT @mohitban47: 🚨 Check out an exciting set of #ICLR2024 papers/spotlights✨ this week at @iclr_conf (+workshop on reliable+responsible AI)!….
0
28
0
RT @EliasEskin: 🎉 Excited that ReGAL has been accepted to @icmlconf #ICML2024! We use LLM-guided refactoring to discover reusable abstracti….
0
16
0
RT @hanlin_hl: Can we design an efficient & versatile framework to reuse+adapt existing pretrained ControlNets to accurately guide any vide….
0
55
0
RT @jmin__cho: Can we adaptively generate training environments with LLMs to help small embodied RL game agents learn useful skills that th….
0
62
0
RT @JialuLi96: Can we teach multiple skills to a text-to-image (T2I) model (w/o expensive annotations), while minimizing knowledge conflict….
0
41
0
RT @adyasha10: Can LLMs keep track of very long conversations? We evaluate 'conversational memory' of LLMs via 3 tasks on our dataset of m….
0
58
0