USC NLP

@nlp_usc

Followers 4K · Following 432 · Media 4 · Statuses 353

The NLP group at @USCViterbi. @DaniYogatama+@_jessethomason_+@jieyuzhao11+@robinomial+@swabhz+@xiangrenNLP at @CSatUSC + researchers @USC_ICT, @USC_ISI.

Los Angeles, CA
Joined May 2018
@nlp_usc
USC NLP
3 years
The USC NLP faculty has doubled in size in two years! We're thrilled to have @DaniYogatama and @swabhz joining us this Fall (@jieyuzhao11 starting in Fall'23), along with five other new CS hires! 🙌 🔖 https://t.co/9H3M06SUYq
6
19
115
@xiangrenNLP
Sean Ren
6 months
Thrilled about the Best Paper Award runner-up at #NAACL2025! 🥳 Even when answers are incorrect, people may rely more on LLMs if they use warm and empathetic expressions! We analyze the risks of human over-reliance on LLM expressions of uncertainty: https://t.co/E6289Kx5vV w/
@KaitlynZhou
Kaitlyn Zhou
1 year
How can we best measure the consequences of LLM overconfidence? ✨New preprint✨ on measuring the risks of human over-reliance on LLM expressions of uncertainty: https://t.co/Wqhdwatvlp w/@JenaHwang2 @xiangrenNLP @nouhadziri @jurafsky @MaartenSap @stanfordnlp @allen_ai #NLPproc
11
14
82
@xiangrenNLP
Sean Ren
6 months
Proud of my student @huihan_li and intern Arnav presenting their #ICLR2025 work on attributing culture-conditioned generation to LLMs' training corpora. Fun time meeting many friends. Ping me if you want to chat about model security, interpretability, and human-LM interaction!
2
6
62
@huihan_li
Huihan Li @ COLM
11 months
Heading to #EMNLP2024, down to chat! Excited to present our work (Wed 10:30am) on systematic data generation in the long-tail (low-confidence) distribution for more challenging evaluation. 🧵👇 📰: https://t.co/NuTrjE1Znz 💻: https://t.co/Vyi9gyPPGa 🔖: https://t.co/AmmpZJjg2G
2
8
54
@SaharaLabsAI
Sahara AI | SaharaAI.com 🔆
1 year
Proud moment seeing our CEO & Co-Founder @xiangrenNLP alongside his @nlp_usc students at @aclmeeting. Supporting the next generation of thought leaders in AI is exactly what drives us forward.
21
19
120
@xiangrenNLP
Sean Ren
1 year
Join us at the co-located <AI / ALL> summit on Aug 15, with the social party in the evening! https://t.co/jDqtdpjDr7 Co-hosted with @SCB10X_OFFICIAL @SambaNovaAI, sponsored by @awscloud, with participants from @AIatMeta @google @CohereForAI @togethercompute
luma.com
Co-located with ACL 2024 The Summit: Future of Equitable and Inclusive AI aims to cultivate a deep understanding of current challenges and…
@xiangrenNLP
Sean Ren
1 year
Arriving in Bangkok for @aclmeeting! 😃 Will be sharing our recent work on logical scaffolding, model uncertainty expression & multi-hop entailment inference w/ folks @nlp_usc + @KaitlynZhou + friends @allen_ai I'm also helping on the <AI / ALL> summit w/ @SaharaLabsAI 👇👇
2
7
30
@xiangrenNLP
Sean Ren
1 year
Find us at the posters! Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs w/ @siyuan___wang @YejinChoinka et al Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty w/ @KaitlynZhou, @MaartenSap et al.
1
1
7
@KaitlynZhou
Kaitlyn Zhou
1 year
Excited to see everyone soon at #acl2024 in Bangkok! I'll be presenting our work, Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty https://t.co/3lROnoBecm Poster session 3 on Aug 12 at 16:00! W/ @MaartenSap @JenaHwang2 @xiangrenNLP
2
8
59
@qinyuan_ye
Qinyuan Ye
1 year
Introducing Lifelong ICL and Task Haystack, a new approach for evaluating long-context LMs, featuring ever-changing task streams that controllably fill the context window, and NIAH-style visualization for easy diagnosis. 📜 https://t.co/QW0Az5Fv8A 🧵
5
29
147
@xiangrenNLP
Sean Ren
1 year
Congratulations to the GDM @GoogleDeepMind team on their best paper award at #ICML2024 & appreciate @afedercooper's shout-out to our concurrent paper 🙌 If you are into the topic of recovering model info through just its output logits, check out our paper led by @mattf1n too!
5
8
35
@XisenJ
Xisen Jin
1 year
๐ŸงLMs forget upstream knowledge when continuously fine-tuned. When fine-tuned on new data, can we forecast what upstream examples will be forgotten? ๐ŸฅณExcited to share our #ICML Spotlight paper on forecasting example forgetting! ๐Ÿ”—Project page: https://t.co/1az8DPFz3X
2
9
31
@ssanyal8
Soumya Sanyal
1 year
New paper 🚨 Looking for a strong, open-source entailment-verification model to verify your model generations for consistency? ✅ You can now use the 🤗 model https://t.co/5Gau4pA7IE for this! Our FlanT5-xxl finetuned model can predict entailment errors better than GPT3.5 and
huggingface.co
1
5
29
@mattf1n
Matthew Finlayson
2 years
Wanna know gpt-3.5-turbo's embed size? We find a way to extract info from LLM APIs and estimate gpt-3.5-turbo's embed size to be 4096. With the same trick we also develop 25x faster logprob extraction, audits for LLM APIs, and more! 📄 https://t.co/NdYU8ZhuVH Here's how 1/🧵
6
81
360
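The trick described in this thread rests on a structural fact: a transformer's final logits are a linear map from a hidden vector, so every full logit vector an API returns lies in a subspace whose dimension equals the hidden (embedding) size. A minimal sketch of that idea, with random matrices standing in for the real model and API (all names and sizes here are illustrative assumptions, not the paper's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 1000, 64  # toy sizes; real models: vocab ~100k, hidden e.g. 4096

# A toy LM head: logits = W @ h, so every logit vector lies in a
# hidden-dimensional subspace of the vocab-dimensional logit space.
W = rng.normal(size=(vocab, hidden))

def toy_api_logits(n_prompts):
    """Stand-in for an LLM API: returns one full logit vector per prompt."""
    H = rng.normal(size=(hidden, n_prompts))  # hidden states for each prompt
    return W @ H                              # shape (vocab, n_prompts)

# Collect more logit vectors than the hidden size, then count the
# non-negligible singular values: the numerical rank equals `hidden`.
L = toy_api_logits(n_prompts=2 * hidden)
s = np.linalg.svd(L, compute_uv=False)
est_hidden = int((s > s[0] * 1e-8).sum())
```

In this toy setup `est_hidden` recovers the planted dimension of 64; the paper's contribution is making the same rank argument work against real, restricted LLM APIs that expose only partial logprob information.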
@xiangrenNLP
Sean Ren
2 years
Absolutely thrilled to receive this honor. Rarely does a researcher have their first PhD publication win a Test of Time Award (for 10 years of cumulative impact). I'm super grateful for the chance to collaborate with Xiao on this fun project — turns out to be a
@CSatUSC
USC Thomas Lord Department of Computer Science
2 years
Congratulations, @xiangrenNLP! Recently honored as an MIT Technology Review Innovators Under 35 (Asia-Pacific) AND received a Test of Time research paper award from the ACM International Conference on Web Search and Data Mining (WSDM) this week! 👏 https://t.co/kU5TQrb7o0
9
5
106
@johntzwei
Johnny Tian-Zheng Wei
2 years
To detect if your data was used for LLM pretraining, consider using data watermarks: https://t.co/hn2YuueDN8 Detection can be framed as hypothesis testing (statistical guarantees!), if you contributed multiple training documents and watermarked them before public release. 🧵
1
11
78
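The hypothesis-testing framing in this tweet can be sketched with a toy stand-in for the language model. The real method in the linked paper tests LM losses over many watermarked documents; everything below (the loss function, the sequence lengths, the null count) is an illustrative assumption:

```python
import random

random.seed(0)

def toy_model_loss(sequence, trained_on):
    """Stand-in for an LM's loss: memorized sequences score strictly lower."""
    base = sum(sequence) / len(sequence)  # pseudo 'perplexity' of the sequence
    return base - 2.0 if sequence == trained_on else base

# Contributor draws a random watermark and plants it in their documents
# before release; that same randomness is what defines the null distribution.
watermark = [random.random() for _ in range(16)]

# Null hypothesis: the model never trained on the watermark, so its loss
# should look like the loss on any other freshly drawn random sequence.
null_losses = [
    toy_model_loss([random.random() for _ in range(16)], trained_on=watermark)
    for _ in range(999)
]
observed = toy_model_loss(watermark, trained_on=watermark)

# One-sided p-value: fraction of null sequences at least as "memorized"
# as the true watermark (with the +1 correction for a valid test).
p_value = (1 + sum(l <= observed for l in null_losses)) / (1 + len(null_losses))
```

Because the watermark is random, a small p-value gives a statistical guarantee of the kind the thread mentions: under the null of no training contact, a loss this low is correspondingly unlikely by chance.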
@xiangrenNLP
Sean Ren
2 years
Arrived at NOLA for #NeurIPS2023 🔥 Exciting time to chat about limits/science of LLMs, "slow" reasoning & explainability. Join our posters for a fun discussion 🍻 Ads: USC CS is hiring tenure-track AI faculty + USC NLP is looking for strong PhD students. Talk to us!
1
5
47
@linluqiu
Linlu Qiu
2 years
How good are LMs at inductive reasoning? How are their behaviors similar to/contrasted with those of humans? We study these via iterative hypothesis refinement. We observe that LMs are phenomenal hypothesis proposers, but they also behave as puzzling inductive reasoners: (1/n)
2
72
262
@BrihiJ
Brihi Joshi
2 years
Throwback to when @xiangrenNLP and our lab made our wishlist and dream research directions to discuss in our lab meeting — very helpful in contextualising our work in the age of LLMs!! 🙌🏼 @nlp_usc is such a great place to do research 🫶
@m2saxon
Michael Saxon
2 years
Inspired by @xiangrenNLP opening his talk with a wish list! I'll have to do the same. Also, I agree that a grand challenge for AI right now is that it can't read the room (空気が読めない)
0
2
31
@nlp_usc
USC NLP
2 years
We're excited to attend #SocalNLP today! ICYMI, sunny southern California is a fantastic place to do #NLProc, come check out what USC NLP [https://t.co/Z16jR4liAP] has been working on lately! And did we say we're hiring PhD students this fall? 🌴🏖️☀️
nlp.usc.edu
USC NLP.
@socalnlp
SoCal NLP Symposium
2 years
#SoCalNLP2023 is this Friday!!! Check out our schedule of invited speakers and accepted posters! 👉🏽 https://t.co/Vw8W4V9QRI
0
2
18
@huihan_li
Huihan Li @ COLM
2 years
Finding it hard to generate challenging evaluation data for LLMs? Check out our work 👇! Introducing LINK 🔗, the first framework for systematically generating data in the long-tail distribution, guided by symbolic rules https://t.co/NuTrjE1Znz w/ @nlp_usc @ai2_mosaic 🧵⬇️ #NLProc [1/n]
1
24
99