Lianhui Qin @ ICLR 2024

@Lianhuiq

Followers: 4,225 · Following: 399 · Media: 16 · Statuses: 397

Incoming Assistant Professor at UCSD CSE. Currently postdoc at AI2 Mosaic. NLP, ML, AI. I’m recruiting PhD students.

Seattle
Joined October 2018
Pinned Tweet
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 months
🚀 PhD Opportunities in AI @ucsd_cse for Fall '24🌞 I'm recruiting PhD students with a passion for Large Language Models (LLMs), reasoning, generation, and AI for science. I'll be attending #NeurIPS2023 and #AAAI2024 . Happy to catch up there. ☕️ #AIResearch #PhD #UCSD #LLMs
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
[Personal news] 📢So excited to share that I’ve just graduated from @uwcse and will be joining @UCSanDiego @ucsd_cse 🌊☀️ as an Assistant Professor in Fall 2024. Meanwhile, I’m doing a postdoc @allen_ai. Looking forward to working with students and colleagues on NLP, ML, etc.
53 replies · 35 retweets · 823 likes
7 replies · 90 retweets · 496 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
We're thrilled to introduce 🧊COLD decoding, a general constrained text generation approach via energy-based modeling🌞. We can plug any differentiable constraints into an energy function and apply Langevin dynamics for efficient sampling. 🥳 paper
6 replies · 94 retweets · 510 likes
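To make the idea concrete, here is a minimal sketch of energy-based sampling with Langevin dynamics over a relaxed ("soft") text representation. This is an illustration of the technique the tweet names, not the COLD codebase; the `energy_fn` API and all shapes are assumptions.

```python
import torch

def langevin_sample(energy_fn, seq_len, vocab_size, steps=200, step_size=0.1):
    # Start from random logits: a relaxed, differentiable "soft text" sample.
    y = torch.randn(seq_len, vocab_size, requires_grad=True)
    for _ in range(steps):
        # energy_fn maps a (seq_len, vocab_size) soft distribution to a
        # scalar; lower energy = constraints better satisfied (assumed API).
        energy = energy_fn(torch.softmax(y, dim=-1))
        (grad,) = torch.autograd.grad(energy, y)
        noise = torch.randn_like(y)
        # Langevin update: a gradient step on the energy plus Gaussian noise.
        y = y - step_size * grad + (2 * step_size) ** 0.5 * noise
        y = y.detach().requires_grad_(True)
    return torch.softmax(y, dim=-1)  # soft sample; discretized afterwards
```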
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 months
📢Introducing ❄️COLD-Attack⚔️, a unified framework for controllable jailbreaking of LLMs. Thanks to the controllability, COLD-Attack enables new jailbreak scenarios that are hard to detect🧐:
1⃣ revising a user query adversarially with minimal paraphrasing
2⃣ inserting stealthy…
4 replies · 48 retweets · 236 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
4 years
🤔How can a pre-trained left-to-right LM do nonmonotonic reasoning that requires conditioning on a future constraint⏲️? Our #emnlp2020 paper introduces DELOREAN🚘: an unsupervised backpropagation-based decoding strategy that considers both past context and future constraints.
2 replies · 40 retweets · 218 likes
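A rough sketch of the backward-forward intuition: a backward pass nudges a soft hypothesis toward the future constraint, and a forward pass re-grounds it in the past context. The function names and mixing scheme below are assumptions for illustration, not the paper's implementation.

```python
import torch

def delorean_step(forward_logits_fn, future_loss_fn, logits, mix=0.5, lr=0.1):
    """One backward-forward iteration on a soft hypothesis.

    logits: (seq_len, vocab) relaxed representation of the infilled text.
    future_loss_fn: differentiable loss for the future constraint (assumed).
    forward_logits_fn: LM logits for the hypothesis given past context (assumed).
    """
    logits = logits.detach().requires_grad_(True)
    # Backward pass: push the hypothesis toward satisfying the future.
    loss = future_loss_fn(torch.softmax(logits, dim=-1))
    (grad,) = torch.autograd.grad(loss, logits)
    backward_logits = logits - lr * grad
    # Forward pass: regenerate logits conditioned on the past context.
    forward_logits = forward_logits_fn(torch.softmax(backward_logits, dim=-1))
    # Mix the two so the text respects both past context and future constraint.
    return mix * forward_logits + (1 - mix) * backward_logits
```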
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 months
🧑‍🔬LLMs for complex chemistry reasoning!🧪 Interestingly, we found that LLMs (GPT-4) have already encoded lots of ⚗️chemistry knowledge. 🤔What is really missing is a structured process to elicit the right knowledge and use it to perform grounded reasoning. A very…
@Siru_Ouyang
Siru Ouyang
2 months
🚀Announcing StructChem: a simple yet effective prompting strategy, unlocking the power of LLMs for complex chemistry reasoning. This task requires:
- Extensive domain knowledge
- Precise scientific computing
- Compositional step-by-step reasoning
Paper: …
2 replies · 23 retweets · 109 likes
1 reply · 23 retweets · 149 likes
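As an illustration of what such a "structured process" might look like as a prompt, here is a hypothetical template in the spirit of the thread; the phase names and wording are my assumptions, not StructChem's actual prompts.

```python
# Hypothetical structured-chemistry prompt; the phases are assumptions.
TEMPLATE = """You are solving a chemistry problem.

Problem: {problem}

Step 1 (Formulae): list every formula, constant, and unit conversion
needed, each with a one-line justification.

Step 2 (Reasoning): apply the formulae step by step, showing every
numeric computation explicitly.

Step 3 (Review): re-check each step for unit consistency and arithmetic
errors, then restate the final answer.
"""

def build_prompt(problem: str) -> str:
    return TEMPLATE.format(problem=problem)
```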
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
COLD decoding accepted by #NeurIPS2022 ! It enables generating arbitrarily constrained text with pretrained LMs through continuous text approximation, energy-based modeling, and Langevin dynamics. Check out the latest version. Code:
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
(Quoted tweet: the original 🧊COLD decoding announcement above.)
1 reply · 18 retweets · 127 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 years
What if Harry Potter had been a Vampire? Our @emnlp2019 paper, “Counterfactual Story Reasoning and Generation”, presents the TimeTravel dataset, which tests causal reasoning capabilities over natural language narratives. 1/2 Paper: (from @uwcse and @allen_ai)
2 replies · 36 retweets · 117 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
I’ll be at #ICML 🏝️ between the 26th and 30th and will give an invited talk about differentiable and structured text reasoning at the workshop on Sampling and Optimization in Discrete Space (SODS) on the 29th. ☕️🍻 Excited to meet old and new friends! Ping me if you’re around.
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
(Quoted tweet: the [Personal news] UCSD announcement above.)
1 reply · 2 retweets · 80 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 years
Excited to announce our #ACL2019 work on "Conversing by Reading". To produce conversation responses that are grounded and contentful, we present a new end-to-end approach that jointly models response generation and on-demand machine reading. 1/2
1 reply · 16 retweets · 77 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
8 months
🚀 Join us at #AAAI2024 for an enlightening workshop on LLMs & Causality! 🏹 🔗 website: 🗣️ Don't miss our stellar lineup of speakers! @yudapearl @MihaelaVDS @emrek @AndrewLampinen @osazuwa @guyvdb #LLMs #Vancouver
@AleksanderMolak
Aleksander Molak (CausalPython.io)
8 months
[image]
2 replies · 16 retweets · 44 likes
0 replies · 9 retweets · 46 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
3 years
#emnlp2020 Check out 🚗DELOREAN in our Back to the Future paper🎥 Come say hi at the Zoom Q&A S4: Nov 16, 17:00-18:00 PST / Nov 17, 1:00-2:00 UTC⏳ Paper: Video: Code: w/ @YejinChoinka @VeredShwartz @ABosselut 👇
@Lianhuiq
Lianhui Qin @ ICLR 2024
4 years
(Quoted tweet: the DELOREAN🚘 announcement above.)
1 reply · 7 retweets · 47 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
6 months
Will drive to UCLA from UCSD today! So excited to meet old and new friends there! 🌊🏝️
@socalnlp
SoCal NLP Symposium
6 months
#SoCalNLP2023 is this Friday!!! 🏝 Check out our schedule of invited speakers and accepted posters! 👉🏽
0 replies · 5 retweets · 34 likes
1 reply · 0 retweets · 39 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
Given any off-the-shelf left-to-right 🤖language model (e.g., GPT-2), 🧊COLD can enable it to do diverse constrained generation and non-monotonic reasoning tasks 👀 by plugging in, e.g., 1⃣lexical constraints, 2⃣coherence constraints, 3⃣minimal-edit constraints, ...
1 reply · 5 retweets · 33 likes
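The "plug in constraints" recipe amounts to summing weighted differentiable penalties into a single energy. A minimal sketch, where the individual constraint functions are hypothetical stand-ins rather than the paper's code:

```python
# Each constraint maps a (seq_len, vocab) soft text to a scalar; lower is
# better. fluency / lexical / coherence are hypothetical stand-ins.
def total_energy(soft_text, fluency, lexical, coherence,
                 weights=(1.0, 0.5, 0.5)):
    w_f, w_l, w_c = weights
    return (w_f * fluency(soft_text)
            + w_l * lexical(soft_text)
            + w_c * coherence(soft_text))
```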
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
Super exciting news!!
@wellecks
Sean Welleck
2 years
Honored to receive a Best Paper award at NAACL 2022 for NeuroLogic A*esque Decoding, with an awesome team @GXiming @PeterWestTM @liweijianglw @wittgen_ball @DanielKhashabi @Ronan_LeBras @Lianhuiq @YoungjaeYu3 @rown @nlpnoah @YejinChoinka !
2 replies · 23 retweets · 99 likes
2 replies · 1 retweet · 26 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
So proud of @YejinChoinka! Also the BEST PhD advisor!!!
@uwcse
Allen School
2 years
Someone once chased down @UW #UWAllen @allen_ai #MacFellow @YejinChoinka at a conference to tell her studying commonsense #AI was a “fool’s errand.” Years later, they sought her advice re: teaching a class on that very topic.¯\_(ツ)_/¯ #NLProc #visionary
1 reply · 12 retweets · 83 likes
0 replies · 0 retweets · 26 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 years
We introduce a new large conversation dataset grounded in external web pages (2.8M turns, 7.4M sentences of grounding). Joint work w/ my MSR mentors @JianfengGao0217 , Michel Galley, collaborators @chris_brockett , @AllenLao , Xiang Gao, Bill Dolan, and my advisor @Yejin 2/2
1 reply · 5 retweets · 24 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 months
🌟Check out all the other faculty in the UCSD NLP Group at []. We welcome co-advised research projects, fostering a collaborative approach to advancing the field of natural language processing.
1 reply · 2 retweets · 21 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 years
Welcome to stop by, say hi, and take a look at our demo poster (#07)! See you at 13:50-15:30. (Location: Basilica) @ACL2019_Italy @ZhitingHu
1 reply · 7 retweets · 19 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
Though flexible, energy-based modeling for discrete text is notoriously hard to sample from. Inspired by our DeLorean work (), we develop an efficient gradient-based sampling procedure based on Langevin dynamics and differentiable representations of text.
2 replies · 2 retweets · 18 likes
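For reference, the standard Langevin-dynamics update behind such a sampler (notation mine: ỹ is the soft text sample, E the energy, η the step size):

```latex
\tilde{y}^{(t+1)} = \tilde{y}^{(t)} - \eta\, \nabla_{\tilde{y}} E\big(\tilde{y}^{(t)}\big) + \epsilon^{(t)},
\qquad \epsilon^{(t)} \sim \mathcal{N}(0, \sigma^2 I)
```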
@Lianhuiq
Lianhui Qin @ ICLR 2024
4 years
(Updated GitHub link)🧐 📢 #emnlp2020 "Back to the Future: Unsupervised Backprop-based Decoding for Counterfactual and Abductive Commonsense Reasoning" 🚘DELOREAN (DEcoding for nonmonotonic LOgical REAsoNing) Paper: GitHub:
0 replies · 3 retweets · 14 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
Once a soft text sample is obtained, we discretize it to get the desired text. 🤔To ensure fluency during discretization, we propose a simple "top-k filtering" method: at each step, the LM (GPT-2) provides its top-k candidate tokens and we pick the best one using the soft sample (logits)😀
1 reply · 1 retweet · 12 likes
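A minimal sketch of that discretization step, assuming a HuggingFace-style causal LM; the variable names are illustrative, not the paper's code.

```python
import torch

def topk_filter_discretize(soft_logits, lm, prefix_ids, k=50):
    """soft_logits: (seq_len, vocab) soft sample from Langevin sampling.
    At each position the LM proposes its top-k next tokens (for fluency),
    and we keep the candidate the soft sample scores highest."""
    tokens = list(prefix_ids)
    for pos in range(soft_logits.size(0)):
        with torch.no_grad():
            next_logits = lm(torch.tensor([tokens])).logits[0, -1]
        candidates = torch.topk(next_logits, k).indices   # fluent top-k
        best = candidates[soft_logits[pos, candidates].argmax()]
        tokens.append(int(best))
    return tokens
```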
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
Huge thanks to my advisor @YejinChoinka , and @LukeZettlemoyer @etzioni @JianfengGao0217 @ABosselut , Fei Xia, colleagues @uwnlp , collaborators @Microsoft @GoogleAI @Meta @allen_ai , and my family and friends!
1 reply · 0 retweets · 12 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
We use COLD on various challenging text generation problems: 1) lexically constrained decoding, 2) abductive reasoning, 3) counterfactual story rewriting. Comparisons w/ previous discrete search & differentiable reasoning approaches show COLD's flexibility and strong performance.
2 replies · 0 retweets · 10 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 months
[image]
0 replies · 0 retweets · 10 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
Really Cool Work!! 👍
@ZhitingHu
Zhiting Hu
1 year
🤯Machine Learning has many paradigms: (un/self)supervised, reinforcement, adversarial, knowledge-driven, active, online learning, etc. Is there an underlying 'Standard Model' that unifies & generalizes this bewildering zoo? Our @TheHDSR paper presents an attempt toward it 1/
1 reply · 45 retweets · 236 likes
0 replies · 0 retweets · 8 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
🥳🤩☃️😉Thanks to our awesome coauthors: @wellecks @DanielKhashabi @YejinChoinka
0 replies · 0 retweets · 7 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
Can Twitter be a toolken?
@ZhitingHu
Zhiting Hu
1 year
⚒️ToolkenGPT 🔥Now any LM can use massive tools: no finetuning, no prompt length limit 💡Calling a tool is as natural as generating a word token: treat tools as token (“toolken”) embeddings. Expand the toolset by plugging in more toolkens 🤔Can we embed millions of tools for LMs in the future?
2 replies · 21 retweets · 138 likes
0 replies · 0 retweets · 6 likes
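The "tools as token embeddings" idea can be pictured as appending learnable rows to a frozen LM head so that tools compete with ordinary words at decoding time. This is an illustrative sketch, not the ToolkenGPT codebase.

```python
import torch
import torch.nn as nn

class ToolkenHead(nn.Module):
    """Frozen word-token head plus learnable tool ("toolken") embeddings."""
    def __init__(self, lm_head: nn.Linear, num_tools: int):
        super().__init__()
        self.lm_head = lm_head
        for p in self.lm_head.parameters():   # only the toolkens are trained
            p.requires_grad = False
        d = lm_head.in_features
        self.toolkens = nn.Parameter(torch.randn(num_tools, d) * 0.02)

    def forward(self, hidden):                # hidden: (..., d)
        word_logits = self.lm_head(hidden)
        tool_logits = hidden @ self.toolkens.T   # tools scored like tokens
        return torch.cat([word_logits, tool_logits], dim=-1)
```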
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
Congrats Yejin!!!
@allen_ai
Allen Institute for AI
1 year
We're thrilled to learn that @YejinChoinka has been selected as an ACL Fellow for 2022, a highly prestigious recognition of her extraordinary contributions to the field of computational linguistics. Congratulations Yejin!
2 replies · 18 retweets · 221 likes
0 replies · 0 retweets · 6 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
Interesting work on language models and world models
@johnjnay
John Nay
1 year
Embodied Agent Experiences Enhance LLMs
- Deploy an LLM agent in a simulator of the physical world to acquire diverse experience via goal-oriented planning
- Finetune the LLM on that experience to teach acting in the world
- Improves over the base model on 18 downstream tasks by 64% on avg
6 replies · 77 retweets · 334 likes
0 replies · 3 retweets · 4 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 months
COLD-Attack includes three main steps:
1⃣ Energy function formulation: specify energy functions to capture the attack constraints, such as fluency, stealthiness, sentiment, and left-right coherence.
2⃣ Langevin dynamics sampling: run Langevin dynamics recursively for…
1 reply · 1 retweet · 4 likes
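Step 1⃣ can be pictured as composing a weighted attack energy that the Langevin sampler of step 2⃣ (the same recipe sketched under the COLD decoding tweet above) then minimizes. A sketch with hypothetical constraint terms:

```python
# Hypothetical attack energy over a relaxed adversarial suffix; each term
# maps the soft suffix to a scalar, and lower energy = a more effective,
# more fluent, harder-to-detect attack.
def attack_energy(soft_suffix, fluency, target_behavior, stealth,
                  lams=(1.0, 1.0, 0.5)):
    l_f, l_t, l_s = lams
    return (l_f * fluency(soft_suffix)
            + l_t * target_behavior(soft_suffix)
            + l_s * stealth(soft_suffix))
```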
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
It was a great pleasure to work with my dear coauthors @YejinChoinka @wellecks @DanielKhashabi 😊☺️
0 replies · 0 retweets · 4 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
So cute!
@ZhitingHu
Zhiting Hu
1 year
Phew.. What a game!
1 reply · 0 retweets · 71 likes
0 replies · 0 retweets · 4 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
6 months
@jungokasai I’m attending! Looking forward to seeing you there!!
0 replies · 0 retweets · 4 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
LIMA, such a cute name. ☺️
@violet_zct
Chunting Zhou
1 year
How do you turn a language model into a chatbot without any user interactions? We introduce LIMA: a LLaMA-based model fine-tuned on only 1,000 curated prompts and responses, which produces shockingly good responses. * No user data * No model distillation * No RLHF
27 replies · 233 retweets · 1K likes
0 replies · 0 retweets · 4 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
@followML_ Thanks for checking it out! Yes, this is our v1 on arXiv. You may want to check out the latest version here:
1 reply · 1 retweet · 3 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 months
While maintaining a high attack success rate across diverse LLMs, COLD-Attack achieves strong stealthiness, with lower perplexity (PPL) than previous methods.
0 replies · 0 retweets · 3 likes
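One standard way to measure the perplexity of an attack string with an off-the-shelf GPT-2, a common stealthiness proxy (the paper's exact evaluation setup may differ):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

def perplexity(text: str) -> float:
    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss   # mean token negative log-likelihood
    return float(torch.exp(loss))
```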
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
@ABosselut @uwcse @UCSanDiego @ucsd_cse @allen_ai Thank you Antoine!! Hope to see you in person in the future!
0 replies · 0 retweets · 2 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
@jmhessel @uwcse @UCSanDiego @ucsd_cse @allen_ai Thank you Jack!! Congrats on the best paper!!
0 replies · 0 retweets · 2 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
@swabhz @uwcse @UCSanDiego @ucsd_cse @allen_ai Thank you Swabha!! So nice to have you around!!
0 replies · 0 retweets · 2 likes
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 months
@rajammanabrolu which one?!!
1 reply · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
2 years
@limufar Thanks for the pointers (and congrats on the nice papers!). We'll add discussions!
1 reply · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 months
@hmd_palangi Thank you Hamid! 😄
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
@mdredze Congratulations Mark!!
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
4 years
Happening now @West 208+209
@ZhitingHu
Zhiting Hu
4 years
Come join the #NeurIPS2019 workshop on Learning with Rich Experience. Note the location: West 208+209. Look fwd to the super exciting talks by @RaiaHadsell @tommmitchell Jeff Bilmes @pabbeel @YejinChoinka & Tom Griffiths, and the contributed presentations:
0 replies · 6 retweets · 20 likes
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 months
@jmhessel Feel better soon!!
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
3 years
@VeredShwartz @UBC_CS Congratulations, Vered!!!
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
@universeinanegg @uwcse @UCSanDiego @ucsd_cse @allen_ai Thank you Ari!!! We should catch up in person sometime at conferences!!
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
5 years
Oops, I meant @YejinChoinka
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
1 year
@Meng_CS Congratulations!!
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
11 months
@fredahshi @UWCheritonCS @VectorInst Congratulations, Freda!!! 🎉🎉🎉🎊🎊🎊
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
10 months
@_kumarde @rajammanabrolu @uwcse @UCSanDiego @ucsd_cse @allen_ai Thank you Deepak! Looking forward to working with you!!
0 replies · 0 retweets · 1 like
@Lianhuiq
Lianhui Qin @ ICLR 2024
3 years
@ABosselut @ICepfl @EPFL Congratulations Antoine!!!
0 replies · 0 retweets · 1 like