
Zaid Khan
@codezakh
Followers: 532 · Following: 1K · Media: 24 · Statuses: 425
@uncnlp with @mohitban47 working on grounded reasoning + multimodal agents // currently @allen_ai formerly @neclabsamerica // BS+MS CompE @northeastern
Boston, USA
Joined June 2023
RT @ZiyangW00: 🚨 Introducing Video-RTS: Resource-Efficient RL for Video Reasoning with Adaptive Video TTS! While RL-based video reasoning…
RT @hanqi_xiao: 🎉 Excited to share that TaCQ (Task-Circuit Quantization), our work on knowledge-informed mixed-precision quantization, has…
RT @prateeky2806: I've officially joined Meta Superintelligence Labs (MSL) org in the Bay Area. I'll be working on critical aspects of pre-…
RT @EliasEskin: 🎉 Very excited to see TaCQ — our work on task-conditioned mixed-precision quantization that draws on interpretability metho…
RT @ArchikiPrasad: 🥳 Our work UTGen & UTDebug on teaching LLMs to generate effective unit tests & improve code debugging/generation has been…
RT @mohitban47: 🎉 Yay, welcome @hyunji_amy_lee -- super excited to have you join us as a postdoc! 🤗 Welcome to our MURGe-Lab + @unc_ai_gro…
RT @hyunji_amy_lee: 🥳 Excited to share that I'll be joining @unccs as a postdoc this fall. Looking forward to working with @mohitban47 & amazing…
RT @EliasEskin: 🎉 Excited to share that CAPTURe has been accepted to #ICCV2025! CAPTURe is a new benchmark for VLM reasoning that requires…
RT @RenZhongzheng: 🥳 Excited to share that I'll be joining the CS Department at UNC-Chapel Hill (@unccs @unc_ai_group) as an Assistant Prof…
RT @mohitban47: 🎉 Yay, welcome to the @unc @unccs @unc_ai_group family and beautiful Research Triangle area, Jason! Looking forward to th…
RT @shoubin621: 🚀 Excited to introduce a new member of the LRM (Large Reconstruction Models) family — 4D-LRM! 1. What is 4D-LRM? It's a la…
RT @shoubin621: 🎉 Excited to announce VEGGIE has been accepted to #ICCV2025! VEGGIE is a unified MLLM + Diffusion framework for instruction…
RT @EliasEskin: 🚨 Excited to announce MF2, a new + challenging long-video understanding dataset! MF2 covers open-license movies and focuses o…
RT @shoubin621: New paper alert 🚨 Introducing MEXA: A general and training-free multimodal reasoning framework via dynamic multi-expert ski…
RT @MercatJean: We evaluated more than 1000 reasoning LLMs on 12 reasoning-focused benchmarks and made fascinating observations about cross…
RT @meetdavidwan: Excited to share GenerationPrograms! 🚀 How do we get LLMs to cite their sources? GenerationPrograms is attributable by d…
RT @jaew00_lee: 🎉 Excited to share that I'll be starting my CS PhD journey at @UNC @unccs this fall! 🎓 I'll be working with the renowned @mo…
RT @mohitban47: Welcome Jaewoo to the MURGe-Lab + @uncnlp + @unccs family & the beautiful Chapel Hill + Research Triangle area! 🎉 Looking…
RT @meetdavidwan: Excited to share our new work, CLaMR! 🚀 We tackle multimodal content retrieval by jointly considering video, speech, OCR…
RT @EliasEskin: 🚨 CLATTER treats entailment as a reasoning process, guiding models to follow concrete steps (decomposition, attribution/ent…