
Zitong Yang (@ZitongYang0) · 752 followers · 502 following · 20 media · 306 statuses
I gave a blackboard talk about the Hopfield network at Emmanuel Candès’ group meeting a couple of weeks ago. It was super fun to drop laptop slides and get back to chalk and dust.
During his Nobel Prize lecture, physics laureate John Hopfield spoke about his fascination with the human brain and how that inspired his development of the Hopfield network, an associative memory that can store and reconstruct images and other types of patterns in data. Get…
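The associative-memory behavior described above can be sketched in a few lines (an illustrative toy, not the content of the talk): store ±1 patterns with the Hebbian outer-product rule, then recover a stored pattern from a corrupted cue by iterating sign updates until the state stops changing.

```python
import numpy as np

def train(patterns):
    # patterns: shape (num_patterns, num_units), entries +1/-1
    n = patterns.shape[1]
    W = patterns.T @ patterns / n   # Hebbian outer-product weights
    np.fill_diagonal(W, 0)          # no self-connections
    return W

def recall(W, state, steps=20):
    # synchronous sign updates until a fixed point is reached
    state = state.copy()
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1           # break ties deterministically
        if np.array_equal(new, state):
            break
        state = new
    return state

rng = np.random.default_rng(0)
pattern = rng.choice([-1, 1], size=32)
W = train(pattern[None, :])

noisy = pattern.copy()
noisy[:4] *= -1                     # corrupt a few bits
recovered = recall(W, noisy)
print(np.array_equal(recovered, pattern))  # the corrupted cue falls back to the stored pattern
```

With a single stored pattern and only a few flipped bits, the weighted sum at every unit still points toward the stored value, so one update round restores it; capacity degrades as more patterns are stored.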
RT @StevenyzZhang: Soon, AI agents will act for us—collaborating, negotiating, and sharing data. But can they truly protect our privacy? W….
RT @yanndubs: 🔥 So excited to share GPT-5! For thinking mode and API models, we’ve improved performance across key axes: factuality, st….
RT @Song__Mei: Today’s the day — GPT-5 is here! One of the reasons I joined OpenAI was to train the next generation of GPT. It became my fi….
RT @yubai01: Today is the day -- we are excited to bring gpt5 to you. Fortunate to have led several workstreams in GPT5 Thinking and Mini….
RT @Song__Mei: I’m excited to start at OpenAI this May and help ship the oss model. More to come soon!
RT @ml_angelopoulos: 🎆 Alert: Huge data release 🎆. We released a dataset of 140k conversations from LMArena. It is the richest preference….
RT @lmarena_ai: 🧑🔬 Research Update: Today, we are releasing a new dataset with over 140k conversations from the text arena collected betwe….
RT @dittycheria: We just launched Voyage-context-3, a new embedding model that gives AI a full-document view while preserving chunk-level p….
RT @zffc: In this report, we describe the 2025 Apple Foundation Models ("AFM"). We also introduce the new Foundation Models framework, whic….
machinelearning.apple.com
We introduce two multilingual, multimodal foundation language models that power Apple Intelligence features across Apple devices and…
RT @ruomingpang: In this report we describe the 2025 Apple Foundation Models ("AFM"). We also introduce the new Foundation Models framework….
Sad that many profs don’t think about this question.
I believe all professors in the field of AI and machine learning at top universities need to face a soul-searching question: What can you still teach your top (graduate) students about AI that they cannot learn by themselves or elsewhere? It has bothered me for quite some years.
RT @kenziyuliu: heading to @icmlconf #ICML2025 next week! come say hi & i'd love to learn about your work :). i'll present this paper (http….
RT @ruomingpang: Proud to share our report on AXLearn, the code base for building Apple Foundation Models: https:….
arxiv.org
We design and implement AXLearn, a production deep learning system that facilitates scalable and high-performance training of large deep learning models. Compared to other state-of-the-art deep...
RT @BanghuaZ: Really excited to work with @AndrewYNg and @DeepLearningAI on this new course on post-training of LLMs—one of the most creat….
RT @KexinHuang5: 🤝Excited to announce @ProjectBiomni × @AnthropicAI! AI agents are set to transform how biologists do everyday research….
SOTA open-sourced multimodal embedding available for download
huggingface.co
1/6 Introduce MoCa, a new method for continual pre-training of multimodal embeddings! 🚀 MoCa is the first to effectively scale with unlabeled interleaved image-text data, marking a paradigm shift in multimodal embeddings. Paper, code, & checkpoints! 👇 #AI #Multimodal #ML #NLP
RT @ChengleiSi: Are AI scientists already better than human researchers? We recruited 43 PhD students to spend 3 months executing research….