Jiachen Zhu Profile
Jiachen Zhu
@JiachenAI

Followers: 448 · Following: 116 · Media: 2 · Statuses: 23

Computer Science PhD Student | New York University | Computer Vision and Self-Supervised Learning

New York, NY
Joined October 2018
Jiachen Zhu @JiachenAI · 2 months
RT @liuzhuang1234: our ICLR 2025 work on making a classic optimization technique practically effective in deep learning training. I'm by no….
0 · 4 · 0
Jiachen Zhu @JiachenAI · 3 months
RT @TongPetersb: Vision models have been smaller than language models; what if we scale them up? Introducing Web-SSL: A family of billion-….
0 · 85 · 0
Jiachen Zhu @JiachenAI · 3 months
RT @DavidJFan: Can visual SSL match CLIP on VQA? Yes! We show with controlled experiments that visual SSL can be competitive even on OCR/C….
0 · 94 · 0
Jiachen Zhu @JiachenAI · 4 months
RT @ylecun: New paper: turns out you can train deep nets without normalization layers by replacing them with a parameterized tanh().
0 · 568 · 0
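Below is a minimal sketch of what such a "parameterized tanh()" replacement for a normalization layer might look like, written as a PyTorch module. The parameter names (alpha, gamma, beta), the scalar alpha, and the initialization are assumptions for illustration, not necessarily the paper's exact formulation.

```python
import torch
import torch.nn as nn

class DynamicTanh(nn.Module):
    """Normalization-free layer: y = gamma * tanh(alpha * x) + beta (illustrative sketch)."""

    def __init__(self, dim: int, init_alpha: float = 0.5):
        super().__init__()
        # Parameter names and init values are assumptions for this sketch, not the paper's exact design.
        self.alpha = nn.Parameter(torch.full((1,), init_alpha))  # learnable scalar scale
        self.gamma = nn.Parameter(torch.ones(dim))               # per-channel scale
        self.beta = nn.Parameter(torch.zeros(dim))               # per-channel shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squash activations elementwise instead of normalizing with batch/layer statistics.
        return self.gamma * torch.tanh(self.alpha * x) + self.beta
```

In a Transformer block, a module like this would sit where LayerNorm normally does (before attention and the MLP), which is what makes it a drop-in, statistics-free substitute.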
Jiachen Zhu @JiachenAI · 4 months
RT @liuzhuang1234: New paper: Transformers, but without normalization layers (1/n)
0 · 600 · 0
Jiachen Zhu @JiachenAI · 4 months
RT @liuzhuang1234: How different are the outputs of various LLMs, and in what ways do they differ? Turns out, very very different, up to t….
0 · 85 · 0
Jiachen Zhu @JiachenAI · 3 years
Regularized methods help us prevent JEM's trivial solutions. ➖ VICReg maximizes the information content ℹ️ of the embeddings. SwAV uses the Sinkhorn 📯 algorithm to produce high-entropy clustering assignments. Both techniques produce non-flat energy surfaces. 🌊
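For the VICReg side of the tweet above, here is a rough sketch of a variance-covariance regularizer that keeps embeddings informative and prevents collapse; the coefficients and exact form follow the VICReg paper only loosely, and the SwAV/Sinkhorn route is not shown.

```python
import torch

def variance_covariance_regularizer(z: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Anti-collapse penalty on a batch of embeddings z of shape (batch, dim). Illustrative only."""
    z = z - z.mean(dim=0)

    # Variance term: hinge that keeps the std of every embedding dimension near 1,
    # so the embeddings cannot collapse to a constant (a flat energy surface).
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = torch.relu(1.0 - std).mean()

    # Covariance term: penalize off-diagonal covariance so dimensions stay decorrelated
    # and information is spread across the whole embedding.
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / d

    return var_loss + cov_loss
```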
Jiachen Zhu @JiachenAI · 3 years
Joint Embedding Methods are powerful ways to learn visual 🖼️ representations. They don't require training a decoder or coming up with meaningful pretext tasks. They can be trained with contrastive (e.g. MoCo and SimCLR) or regularized (next lesson) techniques. 📚
0 · 6 · 38
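As an illustration of the contrastive route mentioned above (in the spirit of SimCLR/MoCo rather than their exact losses), here is a simplified InfoNCE sketch: the other augmented view of each image is the positive, and the rest of the batch serves as negatives.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss between two embedding views z1, z2 of shape (batch, dim). Illustrative sketch."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)

    # Cosine similarity of every sample in view 1 against every sample in view 2.
    logits = (z1 @ z2.T) / temperature
    # The matching pair (i, i) is the positive; all other pairs act as negatives.
    targets = torch.arange(z1.size(0), device=z1.device)

    return F.cross_entropy(logits, targets)
```

Pulling matching views together while pushing the rest of the batch apart is what lets these joint-embedding methods learn a useful representation without a decoder or a handcrafted pretext task.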