Weixin Liang

@liang_weixin

Followers: 1K · Following: 107 · Media: 29 · Statuses: 155

CS Ph.D. @Stanford | @StanfordAILab | TA for CS224C: NLP for Computational Social Science | Exploring AI & NLP | https://t.co/pOjcCS4gUk

Palo Alto, CA
Joined November 2019
@liang_weixin
Weixin Liang
2 months
🎉 Excited to share: "Mixture-of-Transformers (MoT)" has been officially accepted to TMLR (March 2025), and the code is now open-sourced!
📌 GitHub repo:
📄 Paper:
How can we reduce pretraining costs for
Replies: 3 · Reposts: 84 · Likes: 435
@liang_weixin
Weixin Liang
25 days
RT @ShirleyYXWu: Even the smartest LLMs can fail at basic multiturn communication. Ask for grocery help → without asking where you live…
Replies: 0 · Reposts: 47 · Likes: 0
@liang_weixin
Weixin Liang
1 month
Thank you, @VictoriaLinML, for the write-up.
@VictoriaLinML
Victoria X Lin
1 month
Let's talk about Mixture-of-Transformers (MoT) and heterogeneous omni-model training. 1. Inspired by prior architectures consisting of modality-specific parameters (such as Flamingo, CogVLM, BEIT-3, and MoMA), MoT pushes this idea further by using…
Replies: 1 · Reposts: 0 · Likes: 10
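Context for the mechanism: MoT unties a transformer's non-embedding parameters by modality while self-attention still runs globally over the mixed token sequence. Below is a minimal PyTorch sketch of that routing pattern; the class name and the FFN/LayerNorm-only decoupling are illustrative simplifications, not the released code.

import torch
import torch.nn as nn

class MoTBlockSketch(nn.Module):
    """Sketch: per-modality FFN + LayerNorm, one shared global self-attention."""
    def __init__(self, d_model: int, n_heads: int, n_modalities: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # One norm and FFN per modality: each token only uses its own expert.
        self.norms = nn.ModuleList(nn.LayerNorm(d_model) for _ in range(n_modalities))
        self.ffns = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_modalities)
        )

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); modality: (batch, seq) ints, e.g. 0=text, 1=image.
        attn_out, _ = self.attn(x, x, x)   # global attention mixes all modalities
        x = x + attn_out
        ffn_out = torch.zeros_like(x)
        for m, (norm, ffn) in enumerate(zip(self.norms, self.ffns)):
            mask = modality == m           # route each token to its modality's FFN
            if mask.any():
                ffn_out[mask] = ffn(norm(x[mask]))
        return x + ffn_out

Per the paper, the full design also makes the attention projections modality-specific; the sketch unties only the FFN and norm to keep the routing pattern visible.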
@liang_weixin
Weixin Liang
1 month
RT @xuandongzhao: 🚀 Excited to share the most inspiring work I've been part of this year: "Learning to Reason without External Rewards"…
Replies: 0 · Reposts: 510 · Likes: 0
@liang_weixin
Weixin Liang
5 months
🌍 On United Nations (UN) adoption: even the world's most prominent international bodies are embracing LLMs! The LLM-assisted share of UN press releases surged from 3.1% to 10.1% in early 2023, then climbed steadily to 13.7% by Q3 2024.
Replies: 1 · Reposts: 1 · Likes: 13
@liang_weixin
Weixin Liang
5 months
Work done in collaboration w/@yaohuiz3, @m_codreanu, Jiayu Wang, @CaoHancheng, @james_y_zou.
Replies: 0 · Reposts: 0 · Likes: 3
@liang_weixin
Weixin Liang
5 months
🔍 Key findings:
- Lower-education areas showed higher LLM adoption in consumer complaints
- Urban areas have higher LLM usage (18.2% vs. 10.9%)
- Science & tech companies lead in corporate adoption
- Younger firms (post-2015) use LLMs 3x more than older ones (pre-1980)
Replies: 0 · Reposts: 0 · Likes: 9
@liang_weixin
Weixin Liang
5 months
🚨 New research: We analyzed 1.5M+ documents to track LLM-assisted writing adoption across society from 2022-2024. The results? 📊 By late 2024, LLMs assist in writing:
- 18% of financial consumer complaints
- 24% of corporate press releases
- Up to 15% of job postings (esp. in
Replies: 5 · Reposts: 35 · Likes: 121
@liang_weixin
Weixin Liang
5 months
RT @VoyageAI: Thanks @liang_weixin! We all enjoyed reading the paper! And we appreciate your paper for helping the community gain a deeper…
Replies: 0 · Reposts: 1 · Likes: 0
@liang_weixin
Weixin Liang
5 months
RT @VoyageAI: We are excited to announce that Voyage AI is officially joining @MongoDB! Joining @MongoDB enables us to bring our cutting-…
Replies: 0 · Reposts: 11 · Likes: 0
@liang_weixin
Weixin Liang
5 months
RT @kefandong: Update: check out for our code, data, and model!
Replies: 0 · Reposts: 7 · Likes: 0
@liang_weixin
Weixin Liang
5 months
RT @JunhongShen1: We introduce Mixture-of-Mamba, a multi-modal SSM that leverages modality-aware sparsity for efficient multi-modal pretrai…
Replies: 0 · Reposts: 5 · Likes: 0
@liang_weixin
Weixin Liang
5 months
🚀 Want 2x faster pretraining for your multi-modal LLM? 🧵 Following up on Mixture-of-Transformers (MoT), we're excited to share Mixture-of-Mamba (MoM)! 🔥 Why it matters: MoM applies modality-aware sparsity across image, text, and speech, making
Replies: 0 · Reposts: 1 · Likes: 18
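Mechanically, "modality-aware sparsity" means the dense projections inside each block get one parameter set per modality while the shared sequence-mixing scan is untouched. A hedged PyTorch sketch of such a per-modality projection; the name ModalityAwareLinear is made up for illustration, and a practical implementation would batch tokens by modality rather than gather a full weight matrix per token.

import torch
import torch.nn as nn

class ModalityAwareLinear(nn.Module):
    """Drop-in replacement for nn.Linear with one weight set per modality."""
    def __init__(self, d_in: int, d_out: int, n_modalities: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(n_modalities, d_out, d_in) * d_in ** -0.5)
        self.bias = nn.Parameter(torch.zeros(n_modalities, d_out))

    def forward(self, x: torch.Tensor, modality: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_in); modality: (batch, seq) integer ids.
        w = self.weight[modality]          # (batch, seq, d_out, d_in), per-token weights
        b = self.bias[modality]            # (batch, seq, d_out)
        return torch.einsum("bsoi,bsi->bso", w, x) + b

x = torch.randn(2, 8, 16)                  # toy batch: 2 sequences of 8 tokens
modality = torch.randint(0, 3, (2, 8))     # 0=text, 1=image, 2=speech
proj = ModalityAwareLinear(16, 32, n_modalities=3)
print(proj(x, modality).shape)             # torch.Size([2, 8, 32])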
@liang_weixin
Weixin Liang
5 months
📢 Can LLMs program themselves to run faster? 🏃⏱️ An LLM teaches itself to code for next-gen AI hardware! 1/ Programming AI accelerators is a major bottleneck in ML. Our self-improving LLM agent learns to write optimized code for new hardware, achieving 3.9x
Replies: 2 · Reposts: 6 · Likes: 36
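The thread is truncated here, but the usual shape of a self-improving code agent is a generate-benchmark-refine loop: propose a kernel, measure it on the target hardware, and fold the measurement back into the next prompt. A minimal sketch under that assumption; llm_generate and run_benchmark are hypothetical placeholders, not the paper's actual API.

def llm_generate(prompt: str) -> str:
    """Placeholder: call an LLM of your choice, return candidate kernel code."""
    raise NotImplementedError

def run_benchmark(code: str) -> float:
    """Placeholder: compile/run the candidate on the target accelerator and
    return measured runtime in seconds (raise on compile failure)."""
    raise NotImplementedError

def self_improve(task: str, baseline: str, iterations: int = 5) -> str:
    best_code, best_time = baseline, run_benchmark(baseline)
    feedback = ""
    for _ in range(iterations):
        prompt = (f"Task: {task}\nCurrent best ({best_time:.4f}s):\n{best_code}\n"
                  f"{feedback}\nWrite a faster version.")
        candidate = llm_generate(prompt)
        try:
            t = run_benchmark(candidate)
        except Exception as err:            # compile/runtime failure becomes feedback
            feedback = f"Previous attempt failed: {err}"
            continue
        if t < best_time:                   # keep only measured improvements
            best_code, best_time = candidate, t
            feedback = "That worked; try to go further."
        else:
            feedback = f"Attempt was slower ({t:.4f}s); try a different strategy."
    return best_code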
@liang_weixin
Weixin Liang
5 months
RT @zhang677: 🔍 ML library development is crucial but requires expertise in ML algorithms & architecture-specific programming languages (AS…
Replies: 0 · Reposts: 6 · Likes: 0
@liang_weixin
Weixin Liang
6 months
RT @Zhang_Yu_hui: 🔍 Vision language models are getting better - but how do we evaluate them reliably? Introducing AutoConverter: transformi…
Replies: 0 · Reposts: 73 · Likes: 0
@liang_weixin
Weixin Liang
7 months
RT @WeijiaShi2: Introducing LlamaFusion: empowering Llama 🦙 with diffusion 🎨 to understand and generate text and images in arbitrary sequen…
Replies: 0 · Reposts: 175 · Likes: 0
@liang_weixin
Weixin Liang
7 months
RT @Zhang_Yu_hui: 🤔 Why are VLMs (even GPT-4V) worse at image classification than CLIP, despite using CLIP as their vision encoder? Presen…
Replies: 0 · Reposts: 21 · Likes: 0
@liang_weixin
Weixin Liang
7 months
RT @SiyouPei: I'm open to academia & industry in 2025. My work in #XR 🥽 + #HCI 👩‍💻 enables low-friction XR experience thru #EmbodiedInterac…
Replies: 0 · Reposts: 23 · Likes: 0
@liang_weixin
Weixin Liang
7 months
Honored that @Nature has highlighted our work again in their latest piece examining #ChatGPT's transformative impact on scientific research and academia over the past two years. h/t @Nature.
Replies: 1 · Reposts: 1 · Likes: 17