Mohit Bansal

@mohitban47

Followers
11K
Following
15K
Media
116
Statuses
5K

Parker Distinguished Prof @UNC. PECASE/AAAI Fellow. Director https://t.co/5qlPVgnrlN (@unc_ai_group). Past @Berkeley_AI @TTIC_Connect @IITKanpur #NLP #CV #AI

Joined September 2012
@mohitban47
Mohit Bansal
8 months
Thank you @RealAAAI for the honor & the fun ceremonies -- humbled to be inducted as an AAAI Fellow in esteemed company 🙏 PS. I am still around today in Philadelphia if anyone wants to meet up at #AAAI2025 :-) Thanks once again to everyone (students+postdocs+collaborators,
27
31
302
@mohitban47
Mohit Bansal
12 hours
arxiv.org
Recent advances in Chain-of-Thought (CoT) reasoning have improved complex video understanding, but existing methods often struggle to adapt to domain-specific skills (e.g., event detection,...
@danadaeun
Daeun Lee
5 months
Excited to share Video-Skill-CoT🎬🛠️ – a new framework for domain-adaptive video reasoning with skill-aware Chain-of-Thought (CoT) supervision! ⚡️Key Highlights: ➡️ Automatically extracts domain-specific reasoning skills from questions and organizes them into a unified taxonomy,
0
1
3
@mohitban47
Mohit Bansal
13 hours
arxiv.org
Combining pre-trained expert models offers substantial potential for scalable multimodal reasoning, but building a unified framework remains challenging due to the increasing diversity of input...
@shoubin621
Shoubin Yu @ EMNLP
5 months
New paper alert 🚨 Introducing MEXA: A general and training-free multimodal reasoning framework via dynamic multi-expert skill selection, aggregation and deep reasoning! MEXA: 1. Selects task- and modality-relevant experts based on the query and various required multimodal
1
2
6
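The MEXA tweet above describes a training-free "select experts, aggregate outputs, then reason" flow. Below is a minimal toy sketch of that shape; the expert records, modality matching, and aggregation format are all illustrative assumptions, not the paper's actual implementation.

```python
# Toy sketch of a select-aggregate-reason pipeline, loosely inspired by the
# MEXA description above. All names and data shapes here are hypothetical.

def select_experts(query_modalities, experts):
    """Keep only experts whose modality is relevant to the query."""
    return [e for e in experts if e["modality"] in query_modalities]

def aggregate(expert_outputs):
    """Combine (name, text) expert outputs into one context block."""
    return "\n".join(f"[{name}] {text}" for name, text in expert_outputs)

def mexa_style_pipeline(query_modalities, experts, reasoner):
    """Route the query to relevant experts, pool their outputs, and hand
    the pooled context to a final reasoning function."""
    chosen = select_experts(query_modalities, experts)
    outputs = [(e["name"], e["run"]()) for e in chosen]
    return reasoner(aggregate(outputs))
```

In a real system the `run` callables would be pre-trained expert models and `reasoner` a large reasoning model; here plain functions stand in so the control flow is visible.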
@mohitban47
Mohit Bansal
15 hours
@pingzli
Pingzhi Li
1 year
🚀 Introducing GLIDER: Global and Local Instruction-Driven Expert Router! Our new approach combines LLM-generated semantic task instructions for global task-level routing with learned local token-level routing for improved performance on both held-in and held-out tasks. 1️⃣
1
0
2
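The GLIDER tweet above combines global task-level routing (from a task instruction) with local token-level routing. A toy sketch of that two-level idea follows; the keyword-overlap scores and the linear mixing rule are stand-in assumptions for the learned components, not the paper's method.

```python
# Hypothetical two-level router: a global score from the task instruction
# plus a local per-token score, mixed by a weight alpha.

from dataclasses import dataclass

@dataclass
class Expert:
    name: str
    # Keywords standing in for a learned embedding of the expert's skill.
    skills: frozenset

def global_scores(instruction, experts):
    """Score each expert by overlap between the instruction and its skills."""
    words = set(instruction.lower().split())
    return {e.name: len(words & e.skills) for e in experts}

def local_scores(token, experts):
    """Score each expert for a single token (stand-in for learned routing)."""
    return {e.name: (1 if token.lower() in e.skills else 0) for e in experts}

def route(instruction, tokens, experts, alpha=0.5):
    """Pick one expert per token by mixing global and local scores."""
    g = global_scores(instruction, experts)
    routed = []
    for tok in tokens:
        loc = local_scores(tok, experts)
        best = max(experts,
                   key=lambda e: alpha * g[e.name] + (1 - alpha) * loc[e.name])
        routed.append((tok, best.name))
    return routed
```

With a small `alpha` the local signal can override the instruction-level choice for individual tokens, which is the held-out-task flexibility the tweet alludes to.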
@mohitban47
Mohit Bansal
16 hours
arxiv.org
Recent video generative models primarily rely on carefully written text prompts for specific tasks, like inpainting or style editing. They require labor-intensive textual descriptions for input...
@jaeh0ng_yoon
Jaehong Yoon
1 year
🚨New paper👉RACCooN: remove/add/change video content effortlessly/interactively via our MLLM+Video Diffusion (V2P2V) framework with auto-generated descriptions! ▶️ 1. Video-to-Paragraph (V2P): RACCooN first generates well-structured/detailed descriptions of videos with MLLM
1
1
3
@mohitban47
Mohit Bansal
18 hours
@EliasEskin
Elias Stengel-Eskin
2 months
🚨 Excited to share new work on LLMs and loopholes, accepted to #EMNLP2025 main! When models are faced with conflicting goals and ambiguous instructions that would let them exploit a loophole, many of the strongest models (Qwen, GPT4o, Claude, Gemini) do. This is a new risk and
1
0
3
@mohitban47
Mohit Bansal
19 hours
@cyjustinchen @ArchikiPrasad @swarnaNLP @EliasEskin -- Video-RTS: Rethinking Reinforcement Learning and Test-Time Scaling for Efficient and Enhanced Video Reasoning @ZiyangW00 @jaeh0ng_yoon @shoubin621 @mmiemon @gberta227 https://t.co/THxKAhgCPX https://t.co/c6s8hnrKFH
@ZiyangW00
Ziyang Wang
4 months
🚨Introducing Video-RTS: Resource-Efficient RL for Video Reasoning with Adaptive Video TTS! While RL-based video reasoning with LLMs has advanced, the reliance on large-scale SFT with extensive video data and long CoT annotations remains a major bottleneck. Video-RTS tackles
1
3
7
@mohitban47
Mohit Bansal
19 hours
(detailed links/websites + summary 🧵's of these papers attached below FYI 👇) -- MAgICoRe: Multi-Agent, Iterative, Coarse-to-Fine Refinement for Reasoning @cyjustinchen @ArchikiPrasad @swarnaNLP @EliasEskin https://t.co/i5PpdcSCSg https://t.co/zRHwIplma1
@cyjustinchen
Justin Chih-Yao Chen
1 year
Aggregation & refinement improve LLM reasoning, but aggregation saturates, while refinement has 3 issues: 1) over-correction for easy problems 2) fails to localize+fix its own errors 3) insufficient number of refinement iterations for hard problems 🚨Multi-Agent, Iterative,
1
1
4
@mohitban47
Mohit Bansal
2 days
FYI, info/tags of folks presenting at @emnlpmeeting --> in-person: @jaeh0ng_yoon, @shoubin621 virtual: @ZiyangW00 @cyjustinchen @EliasEskin @danadaeun
0
3
9
@mohitban47
Mohit Bansal
2 days
🚨 Check out our awesome students/postdocs' papers at #EMNLP2025 and say hi to them 👋! Also, I will give a keynote (virtually) on "Attributable, Conflict-Robust, and Multimodal Summarization with Multi-Source Retrieval" at the NewSumm workshop. -- Jaehong (in-person) finished
2
28
62
@hyunji_amy_lee
hyunji amy lee
2 days
🚨 Excited to announce Gistify!, where a coding agent must extract the gist of a repository: generate a single, executable, and self-contained file that faithfully reproduces the behavior of a given command (e.g., a test or entrypoint). ✅ It is a lightweight, broadly applicable
2
37
90
@CanyuChen3
Canyu Chen
4 days
🔥The deadline (Nov 3, 2025 AoE) for NeurIPS 2025 Workshop on Socially Responsible and Trustworthy Foundation Models (ResponsibleFM) is approaching!🔥 📍 Hybrid (Hilton Mexico City Reforma +
0
16
36
@meetdavidwan
David Wan
3 days
🚨 Proud to share our #TACL work on localizing factual inconsistencies in attributable text generation! To find where LLMs hallucinate, we need to get granular. We introduce QASemConsistency, a new method that decomposes text into simple question-answer pairs to precisely
@ArieCattan
Arie Cattan
3 days
LLMs love to hallucinate, but *where* exactly? 🤔 We're thrilled to announce that our paper "Localizing Factual Inconsistencies in Attributable Text Generation" has been accepted to #TACL #nlproc ! 🎉 🧵👇
0
12
23
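The QASemConsistency tweet above is about decomposing generated text into simple question-answer pairs and checking each against a source. The sketch below illustrates only that overall shape; the template-based decomposition and substring support check are naive stand-ins for the paper's semantic (QA-based) approach, and all names are hypothetical.

```python
# Toy illustration of QA-pair-based inconsistency localization: break a
# claim into QA pairs, then flag the pairs the source does not support.

def decompose(subject, facts):
    """Turn (predicate, answer) facts about a subject into QA pairs."""
    return [(f"What did {subject} {pred}?", ans) for pred, ans in facts]

def supported(answer, source):
    """Naive support check: the answer string must appear in the source."""
    return answer.lower() in source.lower()

def localize_inconsistencies(subject, facts, source):
    """Return the QA pairs whose answers the source does not support."""
    return [(q, a) for q, a in decompose(subject, facts)
            if not supported(a, source)]
```

The point of the granularity is visible even in this toy: instead of labeling a whole generation "inconsistent", each unsupported QA pair pinpoints one specific claim.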
@jaeh0ng_yoon
Jaehong Yoon
3 days
🎉 Excited to share that 5/5 of my papers (3 main, 2 findings) have been accepted at #EMNLP2025, in video/multimodal reasoning, instructional video editing, and efficient LLM adaptation & reasoning! 🚨 I'm recruiting Ph.D. students to join the Multimodal AI Group at NTU College
15
31
304
@mohitban47
Mohit Bansal
7 days
Social dinner + gala in a beautiful 800-year-old Bologna palace (Palazzo Re Enzo) right next to the famous Neptune's Fountain 🙂
0
2
10
@mohitban47
Mohit Bansal
8 days
More info: https://t.co/hKvH2k1BLF Some other interesting facts: -- ECAI started in 1974 and has been running since then. -- This year it was held at the University of Bologna, which is the oldest university in continuous operation in the world, and the first degree-awarding
1
1
6
@mohitban47
Mohit Bansal
8 days
It was an honor and pleasure to give a keynote at the 28th European Conference on Artificial Intelligence (#ECAI2025) in beautiful Bologna, and engage in enthusiastic discussions about trustworthy + calibrated agents, collaborative reasoning + privacy, and controllable multimodal
1
26
68
@mohitban47
Mohit Bansal
10 days
Check out her work here → https://t.co/r1M8XYFiJZ Google announcement blog (congrats to all the other fellows too) →
blog.google
Today, we are announcing the recipients of the 2025 Google PhD Fellowship Program.
0
0
5
@mohitban47
Mohit Bansal
10 days
🎉 Big congratulations to Vaidehi on being awarded a Google PhD Fellowship in Machine Learning and ML Foundations for her important research contributions in machine unlearning for LLMs/VLMs, defenses against adversarial attacks, and multi-agent privacy! #ProudAdvisor 👇👇
@vaidehi_patil_
Vaidehi Patil
10 days
🥳🥳 Honored and grateful to be awarded a 2025 Google PhD Fellowship in Machine Learning and ML Foundations for my research on machine unlearning, defenses against adversarial attacks, and multi-agent privacy! ✨ Deep gratitude to my advisor @mohitban47 for his constant
3
13
143
@peterbhase
Peter Hase
15 days
I would encourage technical AI types to consider working in grantmaking! Schmidt Sciences is hiring for a unique position where you get to continue your own research at the same time. Link:
jobs.lever.co
Summary Schmidt Sciences invites recent PhD graduates in AI and computer science to apply for a 12-18 month fellows-in-residence program. Reporting to the Director of the AI Institute at Schmidt...
4
29
145