Joykirat Profile
Joykirat

@joykiratsingh

Followers: 403 · Following: 2K · Media: 33 · Statuses: 461

CS PhD Student @unc_ai_group @UNC, advised by @mohitban47 | ex-RF @MSFTResearch

Joined April 2017
@joykiratsingh
Joykirat
1 month
🚨 Excited to announce TRAAC, an online difficulty-adaptive, attention-based method that handles the tradeoff of under & overthinking in reasoning models to improve both accuracy and efficiency. Underthinking ❌: Models terminate reasoning too early on harder problems, leading
3 replies · 48 reposts · 94 likes
@EliasEskin
Elias Stengel-Eskin
5 days
🚨 Excited to share Gistify! Often the easiest way to understand large/complicated repos is by playing around with test cases and tracing back through the code that is executed. Gistify tasks models with turning a codebase and an entry-point (e.g. command, unit test) into a
@hyunji_amy_lee
hyunji amy lee
6 days
🚨 Excited to announce Gistify!, where a coding agent must extract the gist of a repository: generate a single, executable, and self-contained file that faithfully reproduces the behavior of a given command (e.g., a test or entrypoint). ✅ It is a lightweight, broadly applicable
0 replies · 9 reposts · 15 likes
@cyjustinchen
Justin Chih-Yao Chen
5 days
I'll be presenting ✨MAgICoRe✨ virtually tonight at 7 PM ET / 8 AM CST (Gather Session 3)! I'll discuss 3 key challenges in LLM refinement for reasoning, and how MAgICoRe tackles them jointly: 1⃣ Over-correction on easy problems 2⃣ Failure to localize & fix its own errors 3⃣
@mohitban47
Mohit Bansal
6 days
🚨 Check out our awesome students/postdocs' papers at #EMNLP2025 and say hi to them 👋! Also, I will give a keynote (virtually) on "Attributable, Conflict-Robust, and Multimodal Summarization with Multi-Source Retrieval" at the NewSumm workshop. -- Jaehong (in-person) finished
0 replies · 11 reposts · 20 likes
@mohitban47
Mohit Bansal
6 days
🚨 Check out our awesome students/postdocs' papers at #EMNLP2025 and say hi to them 👋! Also, I will give a keynote (virtually) on "Attributable, Conflict-Robust, and Multimodal Summarization with Multi-Source Retrieval" at the NewSumm workshop. -- Jaehong (in-person) finished
2 replies · 30 reposts · 63 likes
@hyunji_amy_lee
hyunji amy lee
6 days
🚨 Excited to announce Gistify!, where a coding agent must extract the gist of a repository: generate a single, executable, and self-contained file that faithfully reproduces the behavior of a given command (e.g., a test or entrypoint). ✅ It is a lightweight, broadly applicable
4 replies · 40 reposts · 98 likes
@joykiratsingh
Joykirat
7 days
Thrilled to have our paper “Data-scarce Behavior Editing of Language Models” accepted at #EMNLPFindings2025! 🎉 We propose TaRot, a gradient-free method to edit LLM behavior efficiently — no retraining or large datasets needed. Super fun collab with Subhabrata Dutta &
aclanthology.org
Joykirat Singh, Subhabrata Dutta, Tanmoy Chakraborty. Findings of the Association for Computational Linguistics: EMNLP 2025. 2025.
@lcs2lab
LCS2 Lab
7 days
🚀 LCS2 Sneak Peek Series for #EMNLPFindings2025 🚀 📝 Data-scarce Behavior Editing of Language Models 👥 @joykiratsingh, Subhabrata Dutta, @Tanmoy_Chak 📌 Paper: https://t.co/QmyNyZ0h4Z 🎥 Video: https://t.co/yjXaJM9T43
1 reply · 0 reposts · 15 likes
@lcs2lab
LCS2 Lab
7 days
🚀 LCS2 Sneak Peek Series for #EMNLPFindings2025 🚀 📝 Data-scarce Behavior Editing of Language Models 👥 @joykiratsingh, Subhabrata Dutta, @Tanmoy_Chak 📌 Paper: https://t.co/QmyNyZ0h4Z 🎥 Video: https://t.co/yjXaJM9T43
1 reply · 1 repost · 2 likes
@mohitban47
Mohit Bansal
12 days
It was an honor and pleasure to give a keynote at the 28th European Conference on Artificial Intelligence (#ECAI2025) in beautiful Bologna, and engage in enthusiastic discussions about trustworthy + calibrated agents, collaborative reasoning + privacy, and controllable multimodal
1 reply · 27 reposts · 69 likes
@EliasEskin
Elias Stengel-Eskin
26 days
🚨 Excited to share new work on inferring symbolic world models from observations! OneLife can infer world models in stochastic, complex environments by proposing rules via LLM and reweighting code-based environment laws from observations collected in a single interaction
@codezakh
Zaid Khan
26 days
How can an agent reverse engineer the underlying laws of an unknown, hostile & stochastic environment in “one life”, without millions of steps + human-provided goals / rewards? In our work, we: 1️⃣ infer an executable symbolic world model (a probabilistic program capturing
0 replies · 17 reposts · 29 likes
@ArchikiPrasad
Archiki Prasad
26 days
🚨 Excited to share our new work ✨ OneLife ✨, which investigates how an agent can infer executable symbolic world models 🌐 from a single unguided trajectory in a stochastic environment. I’m especially excited about our planning + evaluation contributions: 1️⃣ We support
@codezakh
Zaid Khan
26 days
How can an agent reverse engineer the underlying laws of an unknown, hostile & stochastic environment in “one life”, without millions of steps + human-provided goals / rewards? In our work, we: 1️⃣ infer an executable symbolic world model (a probabilistic program capturing
0 replies · 18 reposts · 31 likes
@codezakh
Zaid Khan
26 days
How can an agent reverse engineer the underlying laws of an unknown, hostile & stochastic environment in “one life”, without millions of steps + human-provided goals / rewards? In our work, we: 1️⃣ infer an executable symbolic world model (a probabilistic program capturing
2 replies · 42 reposts · 89 likes
@shoubin621
Shoubin Yu @ EMNLP
1 month
🚨 New Paper Alert! Introducing SciVideoBench — a comprehensive benchmark for scientific video reasoning! 🔬SciVideoBench: 1. Spans Physics, Chemistry, Biology & Medicine with authentic experimental videos. 2. Features 1,000 challenging MCQs across three reasoning types:
3 replies · 29 reposts · 39 likes
@ZunWang919
Zun Wang
1 month
🚨 Thrilled to introduce Self-Improving Demonstrations (SID) for Goal-Oriented Vision-and-Language Navigation — a scalable paradigm where navigation agents learn to explore by teaching themselves. ➡️ Agents iteratively generate and learn from their own successful trajectories ➡️
3 replies · 34 reposts · 73 likes
@hanqi_xiao
Hanqi Xiao
1 month
Landed in Montreal 🇨🇦 for #COLM2025 to present my first-author work on task-conditioned mixed-precision quantization: “Task-Circuit Quantization” (Thursday 11am, Poster Session 5). I'm applying to PhD programs this cycle and am excited to chat about this or other interests (LLM
@mohitban47
Mohit Bansal
1 month
🚨 Check out our awesome students/postdocs' papers at #COLM2025 and say hi to them (several are on the job market or hiring) --> -- Archiki, David are on the post-PhD job market! -- Elias finished his postdoc & is now faculty at UT-Austin CS and looking to admit PhD students!
0 replies · 13 reposts · 29 likes
@ArchikiPrasad
Archiki Prasad
1 month
I am attending #COLM2025 🇨🇦 this week to present our work on: Unit Test Generation: 📅 Oct 8th (Wed), 4:30 PM, #79 RAG with conflicting evidence: 📅 Oct 9th (Thu), 11 AM, #71 PS: I'm on the industry job market for RS roles, so you can reach me via DM or in-person to chat! 😄
@mohitban47
Mohit Bansal
1 month
🚨 Check out our awesome students/postdocs' papers at #COLM2025 and say hi to them (several are on the job market or hiring) --> -- Archiki, David are on the post-PhD job market! -- Elias finished his postdoc & is now faculty at UT-Austin CS and looking to admit PhD students!
0 replies · 16 reposts · 40 likes
@EliasEskin
Elias Stengel-Eskin
1 month
✈️ Arrived at #COLM2025 where I'll be helping to present the following 4 papers. I'm also recruiting multiple PhD students for my new lab at UT Austin -- happy to chat about research, PhD applications, or postdoc openings in my former postdoc lab at UNC! -- Learning to Generate
@mohitban47
Mohit Bansal
1 month
🚨 Check out our awesome students/postdocs' papers at #COLM2025 and say hi to them (several are on the job market or hiring) --> -- Archiki, David are on the post-PhD job market! -- Elias finished his postdoc & is now faculty at UT-Austin CS and looking to admit PhD students!
1 reply · 23 reposts · 44 likes
@mohitban47
Mohit Bansal
1 month
🚨 Check out our awesome students/postdocs' papers at #COLM2025 and say hi to them (several are on the job market or hiring) --> -- Archiki, David are on the post-PhD job market! -- Elias finished his postdoc & is now faculty at UT-Austin CS and looking to admit PhD students!
3 replies · 42 reposts · 114 likes
@mohitban47
Mohit Bansal
1 month
🚨 "Think the right amount" for improving both reasoning accuracy and efficiency! --> Large reasoning models under-adapt = underthink on hard problems and overthink on easy ones --> ✨TRAAC✨ is an online RL, difficulty-adaptive, attention-based compression method that prunes
@joykiratsingh
Joykirat
1 month
🚨 Excited to announce TRAAC, an online difficulty-adaptive, attention-based method that handles the tradeoff of under & overthinking in reasoning models to improve both accuracy and efficiency. Underthinking ❌: Models terminate reasoning too early on harder problems, leading
1 reply · 17 reposts · 77 likes
@EliasEskin
Elias Stengel-Eskin
1 month
🚨 TRAAC uses an online difficulty-adaptive, attention-based compression method to address a core problem in long thinking: an inability to adapt to problem difficulty! Leads to underthinking on hard problems, overthinking on easy ones, reducing accuracy and efficiency. TRAAC
@joykiratsingh
Joykirat
1 month
🚨 Excited to announce TRAAC, an online difficulty-adaptive, attention-based method that handles the tradeoff of under & overthinking in reasoning models to improve both accuracy and efficiency. Underthinking ❌: Models terminate reasoning too early on harder problems, leading
0 replies · 14 reposts · 38 likes
@ArchikiPrasad
Archiki Prasad
1 month
Models often think too much on easy problems and not enough on harder reasoning problems. Our new method ✨TRAAC✨ fixes this by teaching models to adaptively compress their "thinking budget" to the difficulty of the task during GRPO rollouts. Result? The model uses
@joykiratsingh
Joykirat
1 month
🚨 Excited to announce TRAAC, an online difficulty-adaptive, attention-based method that handles the tradeoff of under & overthinking in reasoning models to improve both accuracy and efficiency. Underthinking ❌: Models terminate reasoning too early on harder problems, leading
2 replies · 18 reposts · 60 likes