Anurag Kumar
@AcouIntel
Followers: 2K · Following: 309 · Media: 23 · Statuses: 214
Research Scientist, @GoogleDeepMind | Prev: @AIatMeta | CMU @SCSatCMU | @IITKanpur | Audio/Speech, Multimodal AI
Cambridge, MA
Joined June 2016
Looking forward to @NeurIPSConf #NeurIPS2024 next week; I'll be there Dec 11–15. Join our Audio Imagination Workshop on Dec 14th for engaging discussions on all things in the audio generation space. We have an exciting list of papers and speakers. https://t.co/AwFxXg4kZ8
Replies: 2 · Reposts: 0 · Likes: 15
We are looking for reviewers for @ieeeICASSP 2026 in the AASP areas. We received quite a few more papers this cycle. If you don't currently review for ICASSP, please consider doing so. Fill out the form below.
docs.google.com
Please submit new reviewer nominations for ICASSP 2026 using the form below. In order to be eligible to serve as a reviewer, candidate reviewers should either a) satisfy at least two of the following...
Replies: 1 · Reposts: 5 · Likes: 7
🚀 Join the ICASSP 2026 URGENT Challenge! Advance Universal, Robust & Generalizable Speech Enhancement. 🗣 Track 1: Universal Speech Enhancement 🎧 Track 2: Speech Quality Assessment 🔗 https://t.co/bZ3edVhIGM
#ICASSP2026 #SpeechEnhancement #AI #AudioProcessing
Replies: 0 · Reposts: 7 · Likes: 13
An advanced version of Gemini with Deep Think has officially achieved gold medal-level performance at the International Mathematical Olympiad. 🥇 It solved 5️⃣ out of 6️⃣ exceptionally difficult problems, involving algebra, combinatorics, geometry and number theory. Here’s how 🧵
Replies: 157 · Reposts: 736 · Likes: 4K
(2) XRIR: Hearing Anywhere in Any Environment. A key problem in neural RIR estimation has been cross-room generalization. We make an attempt to address this and introduce ACOUSTICROOMS, a large-scale dataset of 300,000 high-fidelity RIRs simulated from 260 diverse rooms.
Replies: 1 · Reposts: 0 · Likes: 0
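For readers less familiar with RIRs: a room impulse response is the filter that turns dry audio into what a listener hears in a room, applied by convolution. A minimal Python sketch of that operation (the helper name, the synthetic decaying-noise RIR, and all parameters are illustrative assumptions, not XRIR or ACOUSTICROOMS code):

```python
import numpy as np
from scipy.signal import fftconvolve

def apply_rir(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a dry (anechoic) signal with a room impulse response."""
    wet = fftconvolve(dry, rir, mode="full")[: len(dry)]
    return wet / (np.max(np.abs(wet)) + 1e-8)  # peak-normalize to avoid clipping

# Toy example: white noise as the "dry" signal, exponentially decaying
# noise as a stand-in RIR (real RIRs come from measurement or simulation).
sr = 16000
dry = np.random.randn(sr).astype(np.float32)
t = np.arange(sr // 2) / sr
rir = (np.random.randn(sr // 2) * np.exp(-t / 0.15)).astype(np.float32)
wet = apply_rir(dry, rir)
```

Cross-room generalization, the problem the tweet highlights, amounts to estimating `rir` for rooms never seen in training.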
of well-balanced audio with intended video. Chao Huang will be around to talk about the paper at @CVPR on Sun, Jun 15, in ExHall D, morning session. Paper:
Replies: 1 · Reposts: 0 · Likes: 0
RL is not all you need, nor attention, nor Bayesianism, nor free energy minimisation, nor an age of first-person experience. Such statements are propaganda. You need thousands of people working hard on data pipelines, scaling infrastructure, HPC, apps with feedback to drive
Replies: 31 · Reposts: 196 · Likes: 1K
(2) Reexamining the Efficacy of MetricGAN for Speech Enhancement. Led by @realHaibinWu. Showcases some crucial limitations of MetricGAN and proposes training tricks to address them. (already presented, but check out the paper) https://t.co/lNouyTwyka (3/3)
Replies: 0 · Reposts: 0 · Likes: 0
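For context on the MetricGAN idea the tweet refers to: a discriminator is trained to mimic a non-differentiable perceptual metric (e.g., PESQ rescaled to [0, 1]), and the enhancement network is then trained to maximize the discriminator's predicted score. A minimal PyTorch sketch under those assumptions; the toy linear modules and `metric_fn` are illustrative, not the paper's models:

```python
import torch
import torch.nn as nn

# Toy stand-ins: G predicts a spectral mask, D predicts a metric score in [0, 1].
G = nn.Sequential(nn.Linear(257, 257), nn.Sigmoid())
D = nn.Sequential(nn.Linear(257, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

def train_step(noisy_mag, clean_mag, metric_fn):
    # 1) Train D to regress the true metric of the enhanced output
    #    and to assign clean speech the maximum score of 1.
    enhanced = G(noisy_mag) * noisy_mag
    with torch.no_grad():
        target = metric_fn(enhanced, clean_mag)  # e.g., normalized PESQ
    d_loss = ((D(enhanced.detach()).mean() - target) ** 2
              + (D(clean_mag).mean() - 1.0) ** 2)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Train G so that D scores its output as perfect; the metric signal
    #    reaches G through the learned differentiable surrogate D.
    g_loss = (D(G(noisy_mag) * noisy_mag).mean() - 1.0) ** 2
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# Usage with random magnitude frames and a toy metric:
noisy, clean = torch.rand(8, 257), torch.rand(8, 257)
train_step(noisy, clean, lambda e, c: 1.0 - (e - c).pow(2).mean())
```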
(1) Advancing Active Speaker Detection for Egocentric Videos. Led by @huh_jaesung. SOTA for active speaker detection in challenging egocentric videos. Session: Machine Learning for Multimodal Data I, Apr 11, 11:30 am–1:00 pm. https://t.co/CCvoJ53nT6 (2/3)
Replies: 1 · Reposts: 0 · Likes: 2
Career Update: Excited to join Google DeepMind @GoogleDeepMind to continue working on audio/speech/multimodal AI. I left Meta @Meta after more than 6 years, and I will definitely miss working with some amazing friends and colleagues. Super thankful for all the fun collaborations.
Replies: 45 · Reposts: 26 · Likes: 2K
So happy to share that our work has been accepted to @SIGIRConf. Thank you to my amazing collaborators! @NegarEmpr, Andrea Tupini, Yuxuan Sun, @Tviskaron, @artemZholus, @Cote_Marc and @julia_kiseleva Pre-print:
What a way to wrap up @IgluContest! Our paper “IDAT: A Multi-Modal Dataset and Toolkit for Building and Evaluating Interactive Task-Solving Agents” was accepted to @SIGIRConf, including: 1) a rich multi-modal dataset, 2) a data collection tool, 3) an online eval framework. #SIGIR2025
Replies: 2 · Reposts: 2 · Likes: 12
``Efficient Audiovisual Speech Processing via MUTUD: Multimodal Training and Unimodal Deployment,'' Joanna Hong, Sanjeel Parekh, Honglie Chen, Jacob Donley, Ke Tan, Buye Xu, Anurag Kumar,
Replies: 0 · Reposts: 2 · Likes: 8
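The title describes a train-multimodal, deploy-unimodal pattern. One common way to realize it, sketched below with illustrative modules (an assumption about the general pattern, not the MUTUD architecture): during training, an estimator learns to predict visual features from audio features, so at deployment the visual encoder and the video stream can be dropped entirely.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

audio_enc = nn.GRU(input_size=80, hidden_size=128, batch_first=True)   # log-mel in
video_enc = nn.GRU(input_size=512, hidden_size=128, batch_first=True)  # lip feats in
estimator = nn.Linear(128, 128)  # predicts video features from audio features
head = nn.Linear(256, 80)        # task head on fused (audio + estimated video) feats

def training_forward(audio_feats, video_feats):
    a, _ = audio_enc(audio_feats)
    with torch.no_grad():                      # video branch acts as a teacher
        v, _ = video_enc(video_feats)
    v_hat = estimator(a)
    est_loss = F.mse_loss(v_hat, v)            # teach audio to mimic video feats
    return head(torch.cat([a, v_hat], dim=-1)), est_loss

def deployment_forward(audio_feats):           # no video input needed at test time
    a, _ = audio_enc(audio_feats)
    return head(torch.cat([a, estimator(a)], dim=-1))

# Shapes are (batch, time, features):
out, loss = training_forward(torch.rand(2, 50, 80), torch.rand(2, 50, 512))
```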
The paper explores how LLMs can be used to effectively contextualize excerpts from conversations to improve understandability and readability and to reduce misinterpretation.
Replies: 0 · Reposts: 0 · Likes: 2
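As a rough illustration of the pattern the tweet describes (not the paper's prompts or models), contextualizing an excerpt reduces to prompting an LLM with the surrounding turns; the `complete` callable below is a hypothetical placeholder for any chat-completion API:

```python
def build_context_prompt(excerpt: str, surrounding_turns: list[str]) -> str:
    # Pack the nearby conversation turns and the excerpt into one instruction.
    history = "\n".join(f"- {turn}" for turn in surrounding_turns)
    return (
        "Conversation so far:\n"
        f"{history}\n\n"
        f'Excerpt to contextualize: "{excerpt}"\n'
        "Rewrite the excerpt so it is understandable on its own, preserving the "
        "speaker's intent and reducing the risk of misinterpretation."
    )

# contextualized = complete(build_context_prompt(excerpt, turns))  # `complete` is hypothetical
```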
Exciting new work focusing on comprehension of long-form social conversations @coling2025 #COLING2025. https://t.co/hpO8TFn4Bg. All thanks to the hard work of @shremoha.
Replies: 3 · Reposts: 0 · Likes: 9
Excited to share our work at @coling2025! While I couldn’t attend in person, @jad_kabbara will be presenting today at the 1:30 PM poster session. Come by to learn how we’re using LLMs to improve understanding in social conversations! #COLING2025 #NLProc
Replies: 2 · Reposts: 4 · Likes: 17
``SyncFlow: Toward Temporally Aligned Joint Audio-Video Generation from Text,'' Haohe Liu, Gael Le Lan, Xinhao Mei, Zhaoheng Ni, Anurag Kumar, Varun Nagaraja, Wenwu Wang, Mark D. Plumbley, Yangyang Shi, Vikas Chandra,
arxiv.org
Video and audio are closely correlated modalities that humans naturally perceive together. While recent advancements have enabled the generation of audio or video from text, producing both...
Replies: 0 · Reposts: 3 · Likes: 12
It was exciting to see the amazing turnout at our Audio Imagination Workshop @NeurIPSConf #NeurIPS2024. Grateful to everyone: invited speakers, panelists, authors, and participants, for the interesting presentations, discussions, and engagement. https://t.co/0l1gsDx39M
Replies: 2 · Reposts: 2 · Likes: 30
Vikas Chandra @vikasc on Audio Generation for AR/VR/MR
Join us tomorrow at the Audio Imagination Workshop @NeurIPSConf #neurips2024. We will start at 8:15 am in West Hall Meeting Rooms 114, 115. #audioimagination2024
@YapengTian, Zhaoheng Ni, @shinjiw_at_cmu, Wenwu Wang, @berraksismann
Replies: 0 · Reposts: 0 · Likes: 1
Yao Xie on Generative Models for Statistical Inference: Advancing Probabilistic Representations. #audioimagination2024 #NeurIPS2024
Replies: 2 · Reposts: 0 · Likes: 2