Zexue He
@ZexueHe
Followers 481 · Following 87 · Media 17 · Statuses 51
Trustworthy NLP PhD @McAuleyLabUCSD. IBM Ph.D. Fellowship. Prev intern @msftresearch. Researcher @MITIBMLab. Affiliated researcher @MIT.
San Diego, CA
Joined September 2021
The MemVis @ICCVConference workshop was a wonderful success! Huge thanks to all our speakers and panelists for the inspiring talks and discussions: Kristen Grauman @ManlingLi_ @albertobietti @RanjayKrishna @Ben_Hoov. And thanks to all participants for the engaging…
Happening now at Room 304B! Also, submit your questions and discuss them in the panel:
Our Memory and Vision (MemVis) Workshop is happening this Sunday 8:30am-1pm, Oct 19 at #ICCV2025 in Honolulu, Hawaii! Room #304B. Full schedule is live. Join us and our amazing speakers to explore how memory connects with vision models through inspiring talks, panels,…
Deadline Extended! Submit your work!
Good news! The MemVis @ICCVConference submission deadline is extended to 10 August -- more time to send us your best work on memory & vision!
Call for Papers: MemVis @ ICCV | Honolulu, Hawaii. Topics: memory-augmented models; temporal & long-context vision; multimodal & scalable systems; and more on memory + vision … OpenReview Submission:
How do you build a system that is factual yet creative? That question surrounds memory and creativity in modern ML systems. My colleagues from @IBMResearch and @MITIBMLab are hosting the @MemVis_ICCV25 workshop at #ICCV2025, which explores the intersection between memory and generative…
3/3 Big shout-out to my incredible co-organizers @gaotianyu1350 @abertsch72 @HowardYen1 @YuandongTian @danqi_chen @gneubig @RogerioFeris. And special thanks to our wonderful on-site host @HowardYen1 for keeping everything running smoothly. See you again at the next LCFM!!
2/3 Also, many thanks to all authors who submitted their great work and gave oral talks & poster presentations! Grateful to our dedicated reviewers: this workshop wouldn't have been possible without you!
1/3 Our LCFM workshop at #ICML2025 wrapped up successfully! Huge thanks to our speakers for sharing cutting-edge insights: @tri_dao @PangWeiKoh @bmwshop @jiajunwu_cs @volokuleshov. And to our panelists for the inspiring discussion: @YuandongTian @MohitIyyer @bmwshop @Xinyu2ML
Fantastic collaboration with my awesome coauthors @__YuWang__ @DimaKrotov @YzhuML @Yifan__Gao @wangchunshu, Dan Gutfreund, @RogerioFeris !! And many thanks to my labs for supporting our research @McAuleyLabUCSD @MITIBMLab
M+ is at #ICML2025 now! We combine long-term memory (on CPU) with short-term memory (on GPU) for LLMs, pushing efficient long-context modeling to 160k+ tokens. My co-author @YzhuML will present M+ in person tomorrow at 4:30pm. Come chat about scaling memory for LLMs!
Our paper "M+: Extending MemoryLLM with Scalable Long-Term Memory" is accepted to ICML 2025!
- Co-trained retriever + latent memory
- Retains info across 160k+ tokens
- Much lower GPU cost compared to the backbone LLM
https://t.co/UZTSBINSij
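The two tweets above describe M+'s split between a large long-term memory kept in cheap CPU RAM and a small short-term working set on the GPU, with a co-trained retriever pulling relevant long-term entries back in. A minimal toy illustration of that tiering idea (the class name, FIFO eviction policy, and dot-product retriever here are all my assumptions, not the paper's actual design):

```python
import numpy as np

class TieredMemory:
    """Toy two-tier memory: a large long-term pool (cheap storage,
    e.g. CPU RAM) plus a small short-term cache (fast storage,
    e.g. GPU). A sketch of the idea, not the M+ implementation."""

    def __init__(self, dim, short_term_size=4):
        self.short_term_size = short_term_size
        self.long_term = np.empty((0, dim))   # grows without bound
        self.short_term = np.empty((0, dim))  # bounded working set

    def write(self, vec):
        # New entries land in short-term memory first.
        self.short_term = np.vstack([self.short_term, vec[None, :]])
        # Evict the oldest entries to long-term memory when full.
        while len(self.short_term) > self.short_term_size:
            self.long_term = np.vstack([self.long_term, self.short_term[:1]])
            self.short_term = self.short_term[1:]

    def retrieve(self, query, k=2):
        # Score the long-term pool by dot product and return the
        # top-k entries alongside the whole short-term working set.
        if len(self.long_term) == 0:
            return self.short_term
        scores = self.long_term @ query
        top = self.long_term[np.argsort(scores)[::-1][:k]]
        return np.vstack([top, self.short_term])
```

Writing evicts the oldest short-term entries into the unbounded long-term pool, so the fast-memory footprint stays constant no matter how long the context grows.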
Introducing The Most Advanced Memory System for LLM Agents MIRIX is by far the most advanced memory system in the world, designed to make AI truly remember, learn, and help you over time. Website: https://t.co/KXVIrJ54x3 Paper: https://t.co/zvZNfFAZsl Github:
arxiv.org
Although memory capabilities of AI agents are gaining increasing attention, existing solutions remain fundamentally limited. Most rely on flat, narrowly scoped memory components, constraining...
Mark the date: 19 Jul, 8:30 a.m. PDT. Workshop schedule is now available! Find more details on our website: https://t.co/5Dt6uBATcN
#ICML2025 #AI #MachineLearning
Curious about long-context foundation models (LCFM)? We're hosting a panel at the LCFM workshop at #ICML2025 on "How to evaluate long-context foundation models?" We'd love to feature your question! Anything on long-context evaluation or modeling: drop it below / DM me.
Excited to announce our new ICCV 2025 workshop: Memory and Vision (MemVis) in Honolulu, Hawaii!
First edition! Bridging memory and vision, two key cognitive functions, for AI systems that SEE, STORE, and RECALL like humans! Call for Papers is on!
MemVis @ #ICCV2025 -- 1st Workshop on Memory & Vision! Call for papers now open: Hopfield & energy nets, state-space + diffusion models, retrieval & lifelong learning, long-context FMs, multimodal memory, & more. Submit by 1 Aug 2025: https://t.co/ttDaNdd7qA #MemVis
News! Our 2nd Workshop on Long-Context Foundation Models (LCFM) will be held at ICML 2025 in Vancouver! If you're working on long-context models, consider submitting your work! Deadline: May 22, 2025 (AOE). Web: https://t.co/5Dt6uBATcN OpenReview: https://t.co/FaxTWdeGnr
Excited to join the New Frontiers in Associative Memories workshop at ICLR 2025 as a panelist. Thanks to the organizers for making this great event happen and for the invitation. Join us to discuss the fascinating intersection of memory and AI in Hall 4 #5!
I am heading to #ICLR2025 in Singapore. If you want to chat about associative memories, energy-based models, and related topics, let's connect! The highlight for me this year is the New Frontiers in Associative Memories workshop on Sunday April 27. We have an exciting lineup of…
Stop by our poster if you are at ICLR: Poster Session 6 #578
NEW at ICLR 2025 Singapore: LARGE SCALE KNOWLEDGE WASHING. We propose LAW, a method to unlearn massive knowledge in LLMs while preserving reasoning. No reverse loss, no task data: just MLP layer updates. Poster Session 6 #578. Come chat unlearning at scale!
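The LAW tweet above says unlearning works purely through MLP-layer updates, with no reverse loss and no task data. As a hedged sketch of that general recipe (a hypothetical closed-form edit, not LAW's actual update rule), one can solve for a minimal ridge-regularized change to a single MLP projection that zeroes its response to chosen key vectors:

```python
import numpy as np

def wash_mlp_weights(W, K, lam=1e-3):
    """Toy knowledge-washing edit on one MLP projection.

    W: (d_out, d_in) weight matrix of an MLP layer.
    K: (n, d_in) key vectors whose associated outputs should be erased.
    Returns W' with W' @ k ~ 0 for each key k, via the smallest
    (Frobenius-norm, ridge-regularized) change to W. Hypothetical
    illustration only; not the LAW paper's exact update.
    """
    # Minimize ||delta||_F subject to (W + delta) @ K.T = 0.
    # Ridge-regularized minimum-norm solution:
    #   delta = -W K^T (K K^T + lam I)^{-1} K
    KKt = K @ K.T
    delta = -W @ K.T @ np.linalg.solve(KKt + lam * np.eye(len(K)), K)
    return W + delta
```

Because the edit is the minimum-norm solution to the constraint, directions orthogonal to the erased keys are left essentially untouched, which is one way to remove targeted knowledge while preserving general behavior.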
Integrating GenAI advancements with cognitive insights! Check out our paper for more details:
(1/4) Excited to share that our position paper "Towards LifeSpan Cognitive Systems" has been accepted to TMLR! As "agents" dominate the AI landscape in 2025, we push the boundaries by envisioning LSCS: AI systems that go beyond personal assistants to lifespan cognitive systems.