
Zach Nussbaum
@zach_nussbaum
1K Followers · 5K Following · 34 Media · 500 Statuses
https://t.co/t0VFctEaZi | prev @nomic_ai 🗺️📍
Manhattan · Joined April 2019
RT @max_spero_: Pangram was featured in a @DebunkEU investigation identifying thousands of bots on X spreading AI-generated pro-Kremlin dis…
RT @orionweller: 🤔 Have you ever wondered how good ModernBERT is compared to decoders like Llama? We made an open-data version of ModernBE…
daniel's put a lot of what i've been thinking about into words: when and how much should we automate in the face of things like claude code? i *really* like the conscious architects framing.
I am trying out this Thought-Boi Thing. Give it a read. The Hidden Cost of Augmentation: Every Tool You Use Changes You.
@adityakusupati in related news:
📢Now open, Gemma 3n weights & it is natively flexible, first of its kind, thanks to MatFormer🪆. Any model between E4B & E2B with ZERO training near Pareto -- we found a bunch! Find a better E3B than what we released, I will send you a 🪆😉. Find the colab for extraction 🧵👇🪆
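The nested-model extraction that thread describes can be sketched roughly: in a MatFormer-style FFN, a smaller model's weights are a prefix slice of the larger one's hidden dimension, so intermediate sizes come out with zero training. Everything below (shapes, the slicing rule, the plain-list "matrices") is an illustrative assumption, not Gemma 3n's actual layout.

```python
def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def ffn(x, W1, W2):
    """Two-layer FFN: y = W2 @ relu(W1 @ x)."""
    h = [max(0.0, v) for v in matvec(W1, x)]  # ReLU
    return matvec(W2, h)

def extract_submodel(W1, W2, m):
    """MatFormer-style slice: keep only the first m hidden units,
    i.e. the first m rows of W1 and first m columns of W2."""
    return W1[:m], [row[:m] for row in W2]

# Toy "large" model: 4 hidden units, 2 inputs, 2 outputs.
W1 = [[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0], [0.25, 0.0]]
W2 = [[1.0, 2.0, 0.0, 1.0], [0.0, -2.0, 1.0, 0.0]]

# Extract a smaller model with 2 hidden units -- no retraining, just slicing.
W1s, W2s = extract_submodel(W1, W2, 2)
y = ffn([2.0, 1.0], W1s, W2s)  # [4.0, -3.0]
```

Any `m` between the smallest and largest trained sizes yields a usable sub-model, which is the "any model between E4B & E2B" claim in miniature.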
RT @jxmnop: In the beginning, there was BERT. Eventually BERT gave rise to RoBERTa. Then, DeBERTa. Later, ModernBERT. And now, NeoBERT…
RT @nomic_ai: so exciting to get a chance to collaborate with @Wikipedia & @Wikimedia on the first full multilingual wikipedia map! even mo…
enterprise.wikimedia.com
Nomic AI’s NOMAD Projection research visualizes multilingual Wikipedia, leveraging Wikimedia Enterprise datasets for powerful AI insights.
jack is not only a 10/10 researcher, but also a 10/10 person. any org would be lucky to have him!
hello twittersphere! i am planning to graduate in a few months, so i am officially ✨ Looking For A Job ✨. if you know of a role that'd be a good fit, or just want to chat, please reach out! here are some projects i've worked on that i'm most proud of 👇
RT @antoine_chaffin: Modern retrievers can perform reasoning internally, yet they benefit from using reasoning traces from LLMs! So how doe…
GradCache was first introduced by @luyu_gao in Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup:
arxiv.org
Contrastive learning has been applied successfully to learn vector representations of text. Previous research demonstrated that learning high-quality representations benefits from batch-wise...
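The trick GradCache introduces can be illustrated with a toy scalar "encoder": compute all embeddings first without a gradient graph, cache the loss gradient with respect to each embedding, then re-encode in small chunks and chain the cached gradients through. This is a sketch under simplifying assumptions (scalar encoder e_i = w·x_i, pairwise loss Σ(e_i − e_j)², hand-derived gradients), not the paper's implementation.

```python
def embed(w, xs):
    """Encoder forward pass: e_i = w * x_i (stand-in for a real model)."""
    return [w * x for x in xs]

def grad_wrt_embeddings(es):
    """dL/de_k for L = sum_{i,j} (e_i - e_j)^2  ->  4 * (n*e_k - sum(e))."""
    n, s = len(es), sum(es)
    return [4 * (n * e - s) for e in es]

def gradcache_grad(w, xs, chunk=2):
    # Step 1: full-batch embeddings, no gradient graph kept in memory.
    es = embed(w, xs)
    # Step 2: cache the per-embedding gradients from the full-batch loss.
    de = grad_wrt_embeddings(es)
    # Step 3: re-encode chunk by chunk, chaining cached grads via de_i/dw = x_i.
    g = 0.0
    for start in range(0, len(xs), chunk):
        for x, d in zip(xs[start:start + chunk], de[start:start + chunk]):
            g += d * x
    return g

def full_batch_grad(w, xs):
    """Closed form: L(w) = w^2 * sum_{i,j}(x_i - x_j)^2 -> dL/dw = 2w * sum."""
    s = sum((xi - xj) ** 2 for xi in xs for xj in xs)
    return 2 * w * s

xs = [0.5, -1.0, 2.0, 3.5, 0.1]
assert abs(gradcache_grad(1.3, xs) - full_batch_grad(1.3, xs)) < 1e-9
```

The chunked gradient matches the full-batch one exactly, so peak memory scales with the chunk size while the contrastive loss still sees the whole batch.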
code is open-sourced here:
github.com
BRIGHT: A Realistic and Challenging Benchmark for Reasoning-Intensive Retrieval - nomic-ai/BRIGHT
Agentic RAG is the ✨hot✨new✨thing. so I was curious how current LLMs performed, with no training, on BRIGHT, a reasoning-intensive benchmark for retrieval and reranking. Surprisingly, Qwen3 32B and Qwen QwQ set a new SoTA on BRIGHT! zero training, just reranking BM25!
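The zero-training setup described above can be sketched as a two-stage pipeline: a lexical first stage retrieves candidates, then an LLM scores each query-document pair and the candidates are re-sorted. Both stages here are hypothetical stand-ins, not the actual BRIGHT pipeline: `bm25_candidates` uses naive term overlap rather than real BM25, and `llm_relevance` is a deterministic placeholder where a prompted model like Qwen3 32B would go.

```python
def bm25_candidates(query, corpus, k=3):
    """Stand-in first-stage retriever: rank by term overlap (not real BM25)."""
    q = set(query.lower().split())
    scored = [(len(q & set(doc.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [doc for _, doc in scored[:k]]

def llm_relevance(query, doc):
    """Hypothetical LLM judge; here a deterministic term-frequency placeholder."""
    return sum(doc.lower().count(w) for w in query.lower().split())

def rerank(query, corpus, k=3):
    """Retrieve k candidates, then re-sort them by the LLM's relevance score."""
    cands = bm25_candidates(query, corpus, k)
    return sorted(cands, key=lambda d: llm_relevance(query, d), reverse=True)

corpus = [
    "contrastive batch size",
    "contrastive contrastive contrastive batch",
    "unrelated cooking notes",
]
# First-stage top hit and reranked top hit differ: the reranker changes the order.
print(bm25_candidates("contrastive batch size", corpus, 2)[0])  # corpus[0]
print(rerank("contrastive batch size", corpus, 2)[0])           # corpus[1]
```

Swapping the placeholder judge for a real LLM call is the whole "no training" recipe: only the second-stage scorer changes.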