
Yining Lu
@Yining__Lu
Followers: 207 · Following: 334 · Media: 15 · Statuses: 103
First-year CS PhD student @NotreDame | Intern: @amazon | Prev: @JHUCLSP 🦋: https://t.co/gPXvdPuMi6
South Bend, IN · Joined December 2019
Thrilled to share that I'll start my Ph.D. at @ND_CSE this fall, working with @Meng_CS. I am so grateful for the sincere guidance from my current advisor, @DanielKhashabi, and for the unconditional support I received from my family, friends, and collaborators over the past few years!
9 · 3 · 50
RT @gliu0329: Introducing🔥torch-molecule🔥: A single line of code for molecular property prediction, generation & representation learning:…
0 · 47 · 0
RT @lucy_institute: Exciting news at the @lucy_institute! The Foundation Models and Applications Lab has launched with co-directors @Meng_C…
0 · 5 · 0
RT @Dongwei__Jiang: Now accepted by #ACL2025! Thrilled to see our paper also referenced in @lilianweng's latest blog post on reasoning in…
lilianweng.github.io
Special thanks to John Schulman for a lot of super valuable feedback and direct edits on this post. Test time compute (Graves et al. 2016, Ling, et al. 2017, Cobbe et al. 2021) and Chain-of-thought...
0 · 11 · 0
Pleased to share that two papers were accepted to #ACL2025 main! Huge congratulations to all collaborators for the hard work and time we put in together! Both works study multi-model collaboration. I'll leave it to @Dongwei__Jiang to share more about his first-author paper:
📣 New Preprint 📣 Did you realize there is a hidden misalignment between decomposer and verifier in long-form text factuality evaluation—an NP-hard puzzle for current methods? 🤔 We tackle this with an online RL solution called Dynamic Decomposition 👇
2 · 3 · 17
RT @tli104: Excited to be presenting our paper on training language models under heavily imbalanced data tomorrow at #NAACL2025! If you wan…
arxiv.org
Data abundance across different domains exhibits a long-tailed distribution: few domains have abundant data, while most face data scarcity. Our work focuses on a multilingual setting, where...
0 · 7 · 0
RT @jackjingyuzhang: Excited to present two papers today and tomorrow at #NAACL2025! Look out for our oral sessions: TurkingBench: https:/…
0 · 4 · 0
RT @jackjingyuzhang: Current copyright mitigation methods for LLMs typically focus on average-case risks, but overlook worst-case scenarios…
0 · 9 · 0
I will be at #NAACL2025 to present our LLM creativity benchmark work. Feel free to drop by if interested (Poster Session 8, Fri, May 2)! I'd love to chat about RL and its interpretability, data influence for post-training, CogSci for LLMs, and any other NLP-related topics. Feel…
"Benchmarking Language Model Creativity: A Case Study on Code Generation" TLDR— Proposed a framework for benchmarking LLMs' 𝒄𝒓𝒆𝒂𝒕𝒊𝒗𝒊𝒕𝒚.
1 · 3 · 23
RT @davidweichiang: Midwest Speech and Language Days is in full swing at @NotreDame! #NLProc #MSLD2025
0 · 9 · 0
Had a great time chatting with Prof. Xifeng Yan about some of my ongoing research ideas. I learned so much from his valuable insights. Thanks to Prof. @Meng_CS and @ND_CSE for organizing this!
We were so happy having Prof. Xifeng Yan (UCSB) at Notre Dame today. He chatted with our @ND_CSE graduate students, met many old and new friends, and gave a wonderful talk about his recent thoughts on language models. Thanks, Xifeng! 😃
0 · 0 · 1
RT @DanielKhashabi: Can a simulated society of AI agents be used to assess the effectiveness of social policies? See Abe Hou @abe_hou's s…
arxiv.org
Can we simulate a sandbox society with generative agents to model human behavior, thereby reducing the over-reliance on real human trials for assessing public policies? In this work, we...
0 · 4 · 0
We are inspired by prior works on factuality evaluation (@sewon__min et al., @zhengping_jiang et al., Qisheng Hu et al.) and RL for NLP (@YunmoChen et al.). Please refer to our related…
0 · 0 · 0