
USC NLP
@nlp_usc
Followers 4K · Following 432 · Media 4 · Statuses 353
The NLP group at @USCViterbi. @DaniYogatama+@_jessethomason_+@jieyuzhao11+@robinomial+@swabhz+@xiangrenNLP at @CSatUSC + researchers @USC_ICT, @USC_ISI.
Los Angeles, CA
Joined May 2018
The USC NLP faculty has doubled in size in two years! We're thrilled to have @DaniYogatama and @swabhz joining us this Fall (@jieyuzhao11 starting in Fall '23), along with five other new CS hires! https://t.co/9H3M06SUYq
Thrilled for the Best Paper Award runner-up at #NAACL2025! 🥳 Even when answers are incorrect, people may rely more on LLMs if they use warm and empathetic expressions. We analyze the risks of human over-reliance on LLM expressions of uncertainty: https://t.co/E6289Kx5vV w/
How can we best measure the consequences of LLM overconfidence? ✨New preprint✨ on measuring the risks of human over-reliance on LLM expressions of uncertainty: https://t.co/Wqhdwatvlp w/ @JenaHwang2 @xiangrenNLP @nouhadziri @jurafsky @MaartenSap @stanfordnlp @allen_ai #NLProc
Proud of my student @huihan_li and intern Arnav presenting their #ICLR2025 work on attributing culture-conditioned generation to LLMs' training corpora. Fun time meeting many friends. Ping me if you want to chat about model security, interpretability, and human-LM interaction!
Heading to #EMNLP2024, down to chat! Excited to present our work (Wed 10:30am) on systematic data generation in the long-tail (low-confidence) distribution for more challenging evaluation. 🧵 📰: https://t.co/NuTrjE1Znz 💻: https://t.co/Vyi9gyPPGa https://t.co/AmmpZJjg2G
Proud moment seeing our CEO & Co-Founder @xiangrenNLP alongside his @nlp_usc students at @aclmeeting. Supporting the next generation of thought leaders in AI is exactly what drives us forward.
Join us at the co-located <AI / ALL> summit on Aug 15, with a social party in the evening! https://t.co/jDqtdpjDr7 Co-hosted with @SCB10X_OFFICIAL and @SambaNovaAI, sponsored by @awscloud, with participants from @AIatMeta @google @CohereForAI @togethercompute
luma.com
Co-located with ACL 2024, The Summit: Future of Equitable and Inclusive AI aims to cultivate a deep understanding of current challenges and…
Arriving in Bangkok for @aclmeeting! Will be sharing our recent work on logical scaffolding, model uncertainty expression & multi-hop entailment inference w/ folks @nlp_usc + @KaitlynZhou + friends @allen_ai. I'm also helping with the <AI / ALL> summit w/ @SaharaLabsAI
Find us at the posters! "Can LLMs Reason with Rules? Logic Scaffolding for Stress-Testing and Improving LLMs" w/ @siyuan___wang @YejinChoinka et al., and "Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty" w/ @KaitlynZhou, @MaartenSap et al.
Excited to see everyone soon at #acl2024 in Bangkok! I'll be presenting our work, Relying on the Unreliable: The Impact of Language Models' Reluctance to Express Uncertainty https://t.co/3lROnoBecm Poster session 3 on Aug 12 at 16:00! W/ @MaartenSap @JenaHwang2 @xiangrenNLP
Introducing Lifelong ICL and Task Haystack, a new approach for evaluating long-context LMs, featuring ever-changing task streams that controllably fill the context window, and NIAH-style visualization for easy diagnosis. 📰 https://t.co/QW0Az5Fv8A 🧵
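As a rough illustration of the task-stream mechanic in the Lifelong ICL / Task Haystack announcement above, here is a toy sketch that packs task demonstrations into a fixed context budget, so one can later probe whether an early task still fits once the window fills up. The tasks, whitespace "tokenizer", and budget are invented for illustration and are not from the paper:

```python
# Toy "task haystack": greedily pack small task demonstrations into a
# context of at most `budget` whitespace tokens.
def build_task_stream(tasks, budget):
    """Pack (task_id, demo) pairs in order; return the packed context
    string and the ids of the tasks that made it into the window."""
    stream, used, kept = [], 0, []
    for task_id, demo in tasks:
        cost = len(demo.split())          # crude stand-in for token count
        if used + cost > budget:
            break                         # window is full; later tasks drop
        stream.append(demo)
        used += cost
        kept.append(task_id)
    return " ".join(stream), kept

tasks = [
    ("copy", "task copy : input abc output abc"),
    ("upper", "task upper : input abc output ABC"),
    ("reverse", "task reverse : input abc output cba"),
]
context, kept = build_task_stream(tasks, budget=16)
# only the first two 7-token demos fit the 16-token budget
```

A real evaluation would then ask the model to perform one of the early tasks with the whole stream in context; this sketch only shows the controllable filling step.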
Congratulations to the GDM @GoogleDeepMind team on their best paper award at #ICML2024, and appreciation to @afedercooper for the shout-out to our concurrent paper. If you are into the topic of recovering model info through just its output logits, check out our paper led by @mattf1n too!
LMs forget upstream knowledge when continually fine-tuned. When fine-tuning on new data, can we forecast which upstream examples will be forgotten? 🥳 Excited to share our #ICML Spotlight paper on forecasting example forgetting! Project page: https://t.co/1az8DPFz3X
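The forgetting question above can be made concrete with a minimal sketch: flag an upstream example as forgotten when its loss rises past a margin after fine-tuning. The paper's contribution is *forecasting* this before the update happens; the losses, example ids, and margin below are hypothetical:

```python
# Measure (not forecast) forgetting: compare per-example loss on upstream
# data before and after a fine-tuning step.
def forgotten_examples(loss_before, loss_after, margin=0.5):
    """Return example ids whose loss rose by more than `margin`
    (a hypothetical threshold) after fine-tuning."""
    return sorted(
        ex_id for ex_id, before in loss_before.items()
        if loss_after[ex_id] - before > margin
    )

# toy per-example cross-entropy losses before/after the update
before = {"ex1": 0.9, "ex2": 1.2, "ex3": 0.4}
after = {"ex1": 2.1, "ex2": 1.3, "ex3": 0.2}
flagged = forgotten_examples(before, after)   # ["ex1"]
```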
New paper 🚨 Looking for a strong, open-source entailment-verification model to verify your model generations for consistency? ✅
You can now use the 🤗 model https://t.co/5Gau4pA7IE for this! Our fine-tuned FlanT5-xxl model can predict entailment errors better than GPT-3.5 and
Wanna know gpt-3.5-turbo's embedding size? We find a way to extract info from LLM APIs and estimate gpt-3.5-turbo's embedding size to be 4096. With the same trick we also develop 25x faster logprob extraction, audits for LLM APIs, and more! https://t.co/NdYU8ZhuVH Here's how 1/🧵
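A hedged sketch of the linear-algebra intuition behind that estimate: logit vectors of a model with hidden size d lie in a d-dimensional subspace of vocabulary space, so the numerical rank of enough collected output vectors reveals d. The simulation below stands in for real API queries, and all sizes are illustrative (real APIs expose logprobs rather than raw logits, which the papers handle with extra care):

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab, n_queries = 32, 1000, 64        # illustrative sizes, not gpt-3.5's
W = rng.normal(size=(vocab, d))           # stand-in for the output embedding
H = rng.normal(size=(d, n_queries))       # hidden states from n_queries prompts
logits = (W @ H).T                        # "collected" (n_queries, vocab) outputs

# Numerical rank via SVD: count singular values above a relative tolerance.
s = np.linalg.svd(logits, compute_uv=False)
est_dim = int((s > s[0] * 1e-8).sum())    # recovers d when n_queries > d
```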
Absolutely thrilled to receive this honor. Rarely does a researcher's first PhD publication win a Test of Time Award (for 10 years of cumulative impact). I'm super grateful for the chance to collaborate with Xiao on this fun project, which turns out to be a
Congratulations, @xiangrenNLP! Recently honored as an MIT Technology Review Innovators Under 35 (Asia-Pacific) AND received a Test of Time research paper award from the ACM International Conference on Web Search and Data Mining (WSDM) this week! https://t.co/kU5TQrb7o0
To detect if your data was used for LLM pretraining, consider using data watermarks: https://t.co/hn2YuueDN8 Detection can be framed as hypothesis testing (statistical guarantees!), if you contributed multiple training documents and watermarked them before public release. 🧵
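One way to picture the hypothesis-testing framing (a simplified stand-in, not the paper's exact test): if each of n watermarked documents carries a random mark that chance alone would make a model reproduce with probability q, then seeing the model reproduce m of them yields a binomial tail p-value against the "never trained on my data" null:

```python
from math import comb

def watermark_p_value(n, m, q):
    """P(X >= m) for X ~ Binomial(n, q): a small value is evidence the
    watermarked documents were in the training data."""
    return sum(comb(n, k) * q**k * (1 - q) ** (n - k) for k in range(m, n + 1))

# e.g. 20 released documents, 5 reproduced marks, 1% chance each by luck:
p = watermark_p_value(20, 5, 0.01)
```

This is where the "statistical guarantees" come from: the p-value is exact under the null, regardless of what the model is.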
Arrived at NOLA for #NeurIPS2023 🔥 Exciting time to chat about the limits/science of LLMs, "slow" reasoning & explainability. Join our posters for a fun discussion 💻 Ads: USC CS is hiring tenure-track AI faculty + USC NLP is looking for strong PhD students. Talk to us!
How good are LMs at inductive reasoning? How are their behaviors similar to/contrasted with those of humans? We study these via iterative hypothesis refinement. We observe that LMs are phenomenal hypothesis proposers, but they also behave as puzzling inductive reasoners: (1/n)
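A toy rendering of the iterative hypothesis refinement loop the thread above studies: propose a rule, keep it while it fits the observed examples, and fall through to a refinement once a counterexample arrives. The candidate rules here are invented for illustration:

```python
# Candidate hypotheses, ordered from most general to most specific.
CANDIDATES = [
    ("even", lambda x: x % 2 == 0),
    ("multiple of 3", lambda x: x % 3 == 0),
    ("multiple of 6", lambda x: x % 6 == 0),
]

def refine(examples):
    """Return the first candidate rule consistent with every example,
    or None when all candidates are falsified."""
    for name, rule in CANDIDATES:
        if all(rule(x) for x in examples):
            return name
    return None

guess = refine([6, 12, 18])        # 6, 12, 18 are all even → stops at "even"
better = refine([6, 12, 18, 9])    # 9 falsifies "even" → "multiple of 3"
```

An LM-based version replaces the fixed candidate list with freshly proposed hypotheses each round, which is where the paper finds LMs excel as proposers yet stumble as rule-followers.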
Throwback to when @xiangrenNLP and our lab made our wishlist and dream research directions to discuss in our lab meeting; very helpful in contextualising our work in the age of LLMs!! @nlp_usc is such a great place to do research 🫶
Inspired by @xiangrenNLP opening his talk with a wish list! I'll have to do the same. Also, I agree that a grand challenge for AI right now is that it can't read the room (空気が読めない).
We're excited to attend #SocalNLP today! ICYMI, sunny southern California is a fantastic place to do #NLProc; come check out what USC NLP [https://t.co/Z16jR4liAP] has been working on lately! And did we say we're hiring PhD students this fall? 🌴☀️
#SoCalNLP2023 is this Friday!!! Check out our schedule of invited speakers and accepted posters! https://t.co/Vw8W4V9QRI
Finding it hard to generate challenging evaluation data for LLMs? Check out our work! Introducing LINK 🔗, the first framework for systematically generating data in the long-tail distribution, guided by symbolic rules: https://t.co/NuTrjE1Znz w/ @nlp_usc @ai2_mosaic 🧵⬇️ #NLProc [1/n]