
William Chen
@chenwanch1
Followers: 783 · Following: 873 · Media: 38 · Statuses: 282
PhD Student @LTIatCMU @SCSatCMU | Masters @LTIatCMU | Formerly @TXInstruments | @UCF ’21
Joined June 2021
RT @cterdam: The week ahead: 20250721-20250727. [Tweet] OpenAI’s new model achieves gold medal-level performance in the IMO. [Blog] Calvin’s t…
llz.info
Personal page for 李良澤 Liangze Li.
One of my favorite moments at #ICML2025 was being able to witness @_albertgu and the @cartesia_ai team’s reaction to Mamba being on the coffee sign. Felt surreal seeing someone realize their cultural impact.
RT @liweiche77: Presenting our #ICML2025 poster today! Discover our continuous, end-to-end approach that helps speech language models proc…
I’ll be presenting this Thursday at 4:30pm in the West Hall, poster 418. Drop by to learn more about our latest experience in burning compute!
What happens if you scale Whisper to billions of parameters? Our #ICML2025 paper develops scaling laws for ASR/ST models, training models with up to 18B params on 360K hours of data across 100+ languages. Joint work b/w @LTIatCMU and @nvidia.
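For readers new to the term, a "scaling law" here is a power-law fit of error against model size and data. Below is a minimal sketch of fitting a Chinchilla-style curve with scipy; the functional form, constants, and data points are invented for illustration and are not the paper's actual parameterization or results.

```python
# Illustrative sketch only: a Chinchilla-style power law fitted to made-up
# ASR numbers. The OWLS paper's exact functional form and data may differ.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(x, E, A, alpha, B, beta):
    """WER(N, D) = E + A / N^alpha + B / D^beta, with N = params, D = hours."""
    N, D = x
    return E + A / N**alpha + B / D**beta

# Hypothetical observations: (model params, hours of speech) -> WER (%).
N = np.array([0.25e9, 1e9, 2e9, 4e9, 9e9, 18e9])
D = np.array([10e3, 45e3, 60e3, 90e3, 180e3, 360e3])
wer = np.array([18.0, 12.5, 11.0, 9.8, 8.1, 7.2])

popt, _ = curve_fit(scaling_law, (N, D), wer,
                    p0=[5.0, 1e3, 0.3, 1e2, 0.3], maxfev=50000)
E, A, alpha, B, beta = popt
print(f"irreducible error ~ {E:.2f}, param exponent ~ {alpha:.2f}, "
      f"data exponent ~ {beta:.2f}")
# Extrapolate to an unseen scale, e.g. 40B params on 1M hours:
print(f"predicted WER: {scaling_law((40e9, 1e6), *popt):.2f}")
```

Once fitted, the two exponents indicate whether extra parameters or extra hours of audio buy more error reduction, which is the kind of trade-off a scaling-law study is built to answer.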
RT @liweiche77: Thrilled to share our #ICML2025 paper! We introduce a variational approach for speech language models, automating speech a…
arxiv.org
The success of large language models in text processing has inspired their adaptation to speech modeling. However, since speech is continuous and complex, it is often discretized for...
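Context for the discretization the abstract mentions: a common recipe (HuBERT-style units) clusters continuous encoder features with k-means and feeds the cluster IDs to a language model like text tokens. The sketch below shows that baseline recipe with random stand-in features; it is the approach the paper contrasts itself against, not the paper's variational method.

```python
# Baseline discretization recipe (HuBERT-style k-means units), for context.
# The features are random stand-ins for real speech-encoder outputs.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# (frames, feature_dim): e.g. ~50 frames/sec of 768-d encoder vectors.
features = rng.normal(size=(500, 768)).astype(np.float32)

kmeans = KMeans(n_clusters=100, n_init=4, random_state=0).fit(features)
tokens = kmeans.predict(features)  # one discrete unit ID per frame
print(tokens[:20])  # these IDs are then modeled like text tokens
```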
Not advertised yet, but we figured out how to do this too. And we release exactly how you can do it 👀. With the right training techniques, you can inject audio understanding and generation into an LLM with almost no loss in text perf. Details at
arxiv.org
This paper presents Open Unified Speech Language Models (OpusLMs), a family of open foundational speech language models (SpeechLMs) up to 7B. Initialized from decoder-only text language models,...
the best part about the Mistral release is that the models don't lose as much on text - this has been the biggest pain point for audioLMs for a long while
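The recipe isn't spelled out in the snippet above, but one ingredient these text-LLM-to-SpeechLM initializations typically share is extending the text model's vocabulary with discrete audio tokens so both modalities share one sequence space. A hypothetical sketch (gpt2 is a stand-in; token counts and initialization details are illustrative, not taken from the OpusLM paper):

```python
# Hypothetical sketch: grow a decoder-only text LM's vocabulary with audio
# tokens. Model choice, token count, and init scheme are illustrative only.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in for any decoder-only text LM
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

n_audio_tokens = 1024  # e.g. one token per discrete speech unit
tok.add_tokens([f"<audio_{i}>" for i in range(n_audio_tokens)])
model.resize_token_embeddings(len(tok))  # new rows are randomly initialized

# Mixed text+audio sequences then train with the ordinary next-token loss:
ids = tok("Transcribe: <audio_3><audio_17><audio_17>", return_tensors="pt").input_ids
out = model(input_ids=ids, labels=ids)
print(out.loss)
```

How much text performance survives then comes down to the training mix and schedule, which is the part the tweet says the paper details.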
RT @awawawhoami: how do y'all think current-day Google Translate works?? everyone's just stupid now i guess.
RT @jiatongshi: 🔊 New release: #ARECHO -> Autoregressive Evaluation via Chain-based Hypothesis Optimization. • 87-metric coverage in one mo…
RT @mmiagshatoy: 🚀 Happy to share our #INTERSPEECH2025 paper: Using speaker & acoustic context, we dynamically adjust model paths, resulti…
arxiv.org
Speech foundation models achieve strong generalization across languages and acoustic conditions, but require significant computational resources for inference. In the context of speech foundation...
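The mechanism behind "dynamically adjust model paths" isn't described in the truncated snippet. One plausible reading is a small gate that uses a speaker/acoustic context embedding to decide which encoder layers to run per utterance; the sketch below is speculative, written under that assumption, and is not the paper's actual architecture.

```python
# Speculative sketch: context-conditioned layer skipping in an encoder.
# A tiny gate turns layers on/off from a speaker/acoustic embedding.
import torch
import torch.nn as nn

class GatedEncoder(nn.Module):
    def __init__(self, dim=256, n_layers=6, ctx_dim=64):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
            for _ in range(n_layers)
        )
        self.gate = nn.Linear(ctx_dim, n_layers)  # one on/off score per layer

    def forward(self, x, ctx):
        keep = torch.sigmoid(self.gate(ctx)) > 0.5  # hard gate (batch of 1 assumed)
        for i, layer in enumerate(self.layers):
            if keep[0, i]:  # skip layers the gate switches off
                x = layer(x)
        return x

enc = GatedEncoder()
speech = torch.randn(1, 100, 256)  # (batch, frames, dim)
ctx = torch.randn(1, 64)           # speaker/acoustic context embedding
print(enc(speech, ctx).shape)      # fewer layers run -> cheaper inference
```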
RT @jiatongshi: 🚀 Introducing Uni-VERSA: a unified model for multi-dimensional speech evaluation: naturalness, intelligibility, noise, proso…
huggingface.co
7/7 papers accepted to #Interspeech2025 🎉 Lots of interesting work from my fantastic co-authors on long-form processing, multilingualism, and multi-modal foundation models. See y’all in Rotterdam 🇳🇱.
RT @cromz22: Excited to share our survey paper accepted to #ACL2025NLP Findings: When Large Language Models Meet Speech: A Survey on Integr…
RT @arouditchenko: Do you really need audio to fine-tune your Audio LLM? 🤔 Answer below: Introducing Omni-R1, a simple GRPO fine‑tuning me…
arxiv.org
We propose Omni-R1 which fine-tunes a recent multi-modal LLM, Qwen2.5-Omni, on an audio question answering dataset with the reinforcement learning method GRPO. This leads to new State-of-the-Art...
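For context on GRPO, the RL method named in the abstract: rewards are computed for a group of sampled answers per question and normalized within the group to form advantages, with no learned value network. A minimal sketch of that advantage step (reward values and shapes are illustrative):

```python
# Minimal sketch of GRPO's group-normalized advantages. Rewards here are
# toy values; in Omni-R1 they would come from scoring audio-QA answers.
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """rewards: (num_prompts, group_size) scalar reward per sampled answer."""
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + 1e-6)

# e.g. 2 questions, 4 sampled answers each, reward 1.0 if the answer is correct.
rewards = torch.tensor([[1., 0., 0., 1.],
                        [0., 0., 1., 0.]])
print(grpo_advantages(rewards))
# Every token of answer (i, j) is then trained with a PPO-style clipped
# ratio objective weighted by advantage (i, j).
```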
RT @huckiyang: We are happy that 🦉 OWLS, 18B-to-0.25B open ASR/AST limited-data scaling laws, has been accepted to @icmlconf 2025, led by @c…
More analyses can be found in our pre-print: All models will be released on @huggingface: Many thanks to my wonderful co-authors and mentors: @shinjiw_at_cmu, @huckiyang, @brianyan918, @MXzBFhjFpS1jyMI. See y'all in Vancouver!
huggingface.co