
Yoav Artzi
@yoavartzi
Followers 17K · Following 19 · Media 13 · Statuses 105
Research/prof @cs_cornell + @cornell_tech🚡 / https://t.co/9YnWry7yHs / asso. faculty director @arxiv / building https://t.co/f9QkzO5kaC and @COLM_conf
New York, NY
Joined June 2011
It's now public! My postdoc call is for the inaugural postdoc as part of this $10.5M gift for a new AI fellows program at Cornell. There's a lot more in this program, so more exciting things will happen here real soon! Application:
I am looking for a postdoc. A serious-looking call is coming soon, but this is to get it going. Topics include (but are not limited to): LLMs (🫢!), multimodal LLMs, interaction+learning, RL, intersection with cogsci. See our work to get an idea. Plz RT 🙏
RT @COLM_conf: The list of accepted papers for COLM 2025 is now available here: The papers will be made available…
RT @COLM_conf: COLM 2025 is now accepting applications for: Financial Assistance Application -- Volunteer Applicat…
docs.google.com
Goal of the Childcare Financial Assistance Program. We at COLM believe our community should be diverse and inclusive. We recognize that parents might be less likely to attend because their children...
Check out our LMLM, our take on what what is now being called a "cognitive core" (as far as branding goes, this one is not bad) can look like, how it behaves, and how you train for it.
arxiv.org
Neural language models are black-boxes -- both linguistic patterns and factual knowledge are distributed across billions of opaque parameters. This entangled encoding makes it difficult to...
The race for the LLM "cognitive core" - a few-billion-param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystallizing: - Natively multimodal…
RT @tallinzen: I'm hiring at least one post-doc! We're interested in creating language models that process language more like humans than m….
RT @COLM_conf: COLM 2025 will include 6 plenary sessions. The details about the format (panel vs. keynote) and topic will come up soon. We….
RT @COLM_conf: We are making progress on discussions, but also running out of time. Discussion ends tomorrow. Reviewers and ACs, please get….
RT @COLM_conf: We are doing our best to encourage engagement during the discussion period. It's moving, even if we wish folks would engage….
RT @COLM_conf: The 2nd stage of the discussion period has now started. The intermediate response deadline was very effective, so now we hav….
RT @COLM_conf: Our discussion period just started. Authors, please read our instructions carefully. We require responses by June 2. But,….
Really happy about this new work. Trying to think a lot about disentangling knowledge and reasoning/linguistic skill in LLMs, and this is a promising method in this direction. There are a lot of exciting things happening here, and a lot to build on.
🚀Excited to share our latest work: LLMs entangle language and knowledge, making it hard to verify or update facts. We introduce LMLM 🐑🧠 — a new class of models that externalize factual knowledge into a database and learn during pretraining when and how to retrieve facts.
RT @COLM_conf: The full list of COLM 2025 workshops is now online! Most deadlines are June 23, but check the specific CFP of each workshop…