
Amir Feder
@amir_feder
568 Followers · 440 Following · 0 Media · 113 Statuses
causality and human behavior in language models · assistant prof. @HebrewU; research scientist @Google; ex: {@blei_lab, @TechnionLive}
Tel Aviv · Joined July 2016
RT @victorveitch: Semantics in language is naturally hierarchical, but attempts to interpret LLMs often ignore this. Turns out: baking se….
RT @dan_biderman: We secure all communications with a cloud-hosted LLM, running on an H100 in confidential mode. Latency overhead goes aw….
RT @JiaqiZhangVic: ⌨️ 😇 Drafting for NeurIPS? Submit to #ICML2025 workshop on Scaling Up Intervention Models (SIM) too! Let’s enjoy some f….
RT @ellliottt: new version of paper on worker rights in union contracts using NLP, with empirical work showing that these rights are valued….
RT @zorikgekhman: 🚨 It's often claimed that LLMs know more facts than they show in their outputs, but what does this actually mean, and how….
RT @JiaqiZhangVic: 📢 Excited to announce the #ICML2025 workshop on *Scaling Up Intervention Models (SIM)*! Let’s bring together state-of-th….
RT @dan_biderman: How can we use small LLMs to shift more AI workloads onto our laptops and phones? In our paper and open-source code, we….
RT @_galyo: Excited for this work to be out 😀. Self consistency is great but v expensive (especially when you care about those last few ac….
RT @TaubenfeldAmir: New Preprint 🎉. LLM self-assessment unlocks efficient decoding ✅. Our Confidence-Informed Self-Consistency (CISC) metho….
RT @zorikgekhman: At #EMNLP2024? Join me in the Language Modeling 1 session tomorrow, 11:00-11:15, for a talk on how fine-tuning with new k….
RT @YuvalShalev1: 🧠🤖 How do LLMs think? What kind of thought processes can emerge from artificial intelligence? Our latest paper about mult….
RT @dan_biderman: ✨Paper out in final form: exciting results from our semi-supervised pose estimation package, Lightning Pose, which is now….
github.com: Accelerated pose estimation and tracking using semi-supervised learning - paninski-lab/lightning-pose
RT @zorikgekhman: Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations? New preprint! 📣 - LLMs struggle to integrate new factua….
RT @ninoscherrer: Super excited to present this work as spotlight tomorrow (Wed) at #NeurIPS23 alongside @causalclaudia & @amir_feder. 🗓️ 10….
RT @dkaushik96: Attending @NeurIPSConf #NeurIPS2023 next week? Join us for an enthralling discussion with Max Katz (from @SenatorHeinrich’….
RT @AchilleNazaret: 1/🧵 Excited to share #Decipher 🔍, a game-changing method for integrating #singlecell RNA-seq data 🧬 from multiple condi….
RT @roireichart: Due to their great success, LLMs have been increasingly used for scientific prediction and for uncovering the mechanisms b….
RT @NitCal: 1/15 📣 preprint 📣 TL;DR: We (@YairGat1 @amir_feder Alex Chapanin @amt_shrma @roireichart) show (theoretically and empirically) th….
RT @ninoscherrer: Very happy to share that this work got accepted to #NeurIPS2023 as a spotlight 🥳. It's my personal first ever acceptance….