Yonatan Bitton
@YonatanBitton
3K Followers · 25K Following · 222 Media · 11K Statuses
Research Scientist @GoogleAI | Multimodal ML & Vision-Language | Account restored after hack (July 2025).
Israel
Joined January 2020
🚨 Good news: My original account @YonatanBitton is back! Huge thanks to the amazing community (and colleagues) who helped me restore it after the hack. If you followed my temporary account (@YonatanBittonX), please return here for updates.
1 reply · 0 reposts · 17 likes
Thanks @_akhaliq for sharing our work "DEER-3D: Error-Driven Scene Editing for 3D Grounding in Large Language Models"! 🙏 For those interested, here is the detailed thread → https://t.co/T9IXxQdVCu
0 replies · 7 reposts · 15 likes
Sharing our new work, led by @zhan1624. We present DEER-3D 🦌, a framework that uses explicit 3D scene editing to generate visual counterfactuals for 3D grounding. This method, which targets visual context instead of relying on textual augmentations, corrects model biases more
0 replies · 8 reposts · 10 likes
🚨 Thrilled to introduce DEER-3D: Error-Driven Scene Editing for 3D Grounding in Large Language Models - Introduces an error-driven scene editing framework to improve 3D visual grounding in 3D-LLMs. - Generates targeted 3D counterfactual edits that directly challenge the
3 replies · 33 reposts · 47 likes
Officially a Doctor of Philosophy! 🎓🎉 Huge thanks to everyone who supported me in this wild ride, and especially to my supervisor Ido Dagan who has taught me so much! So excited for the next chapter!
10 replies · 1 repost · 29 likes
Thrilled to share that two papers got into #NeurIPS2025 🎉 ✨ FlowMo (my first last-author paper 🤩) ✨ Revisiting LRP I’m immensely proud of the students, who not only led great papers but also grew and developed so much throughout the process 👇
Beyond excited to share FlowMo! We found that the latent representations by video models implicitly encode motion information, and can guide the model toward coherent motion at inference time Very proud of @ariel__shaulov @itayhzn for this work! Plus, it’s open source! 🥳
4 replies · 13 reposts · 157 likes
Excited to share this has now been accepted at #NeurIPS2025 as a position paper (<6% acceptance)!🎉 We advocate for systematically studying entire model populations via weight-space learning, and argue that this requires charting them in a Model Atlas. @NeurIPSConf #NeurIPS 🧵👇
🚨 New paper alert! 🚨 Millions of neural networks now populate public repositories like Hugging Face 🤗, but most lack documentation. So, we decided to build an Atlas 🗺️ Project: https://t.co/1JpsC6dCeg Demo: https://t.co/4Xy7yLdIZY 🧵👇🏻 Here's what we found:
0 replies · 21 reposts · 64 likes
Glad to share that 3DLLM-Mem is accepted by #NeurIPS2025. Looking forward to meeting everyone in my undergrad city, San Diego!!!
🤔How to maintain a long-term memory for a 3D embodied AI agent across dynamic spatial-temporal environment changes in complex tasks? 🚀Introducing 3DLLM-Mem, a memory-enhanced 3D embodied agent that incrementally builds and maintains a task-relevant long-term memory while it
0 replies · 2 reposts · 15 likes
Happy to share that 3DLLM-Mem, our work on long-term memory for 3D embodied agents, is accepted to NeurIPS 2025! 🎉 🔗 https://t.co/hJISj748Si | 📄 https://t.co/IjxMBdYBdZ Congrats @gordonhu608 @yining_hong
arxiv.org
Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and...
0 replies · 3 reposts · 14 likes
Glad to share 3DLLM-Mem has been accepted to NeurIPS! Congrats Wenbo!
2 replies · 3 reposts · 39 likes
[1/6] 🎬 New paper: Story2Board We guide diffusion models to generate consistent, expressive storyboards--no training needed. By mixing attention-aligned tokens across panels, we reinforce character identity without hurting layout diversity. 🌐 https://t.co/aRG81nu5qK
5 replies · 11 reposts · 30 likes
Presenting my poster: 🕊️ DOVE, a large-scale multi-dimensional predictions dataset towards meaningful LLM evaluation. Monday 18:00, Vienna, #ACL2025. Come chat about LLM evaluation, prompt sensitivity, and our collection of 250M model outputs!
2 replies · 11 reposts · 47 likes
Thrilled that our paper on Confidence-Informed Self-Consistency (CISC) has been accepted to #ACL2025 Findings! 🎉 Paper: https://t.co/N5AFzgG5Je (1/2)
arxiv.org
Self-consistency decoding enhances LLMs' performance on reasoning tasks by sampling diverse reasoning paths and selecting the most frequent answer. However, it is computationally expensive, as...
1 reply · 4 reposts · 32 likes
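The abstract above describes self-consistency decoding: sample several reasoning paths and keep the most frequent final answer. A minimal sketch of plain majority voting, plus a hypothetical confidence-weighted variant in the spirit of CISC (the function names and the weighting scheme here are illustrative assumptions, not the paper's exact formulation):

```python
from collections import Counter

def self_consistency(answers):
    # Standard self-consistency: majority vote over the final answers
    # extracted from independently sampled reasoning paths.
    return Counter(answers).most_common(1)[0][0]

def confidence_informed_self_consistency(answers, confidences):
    # Hypothetical CISC-style voting: instead of counting each sample
    # equally, weight each answer by the model's own confidence score,
    # so fewer samples can suffice when some paths are highly confident.
    scores = {}
    for ans, conf in zip(answers, confidences):
        scores[ans] = scores.get(ans, 0.0) + conf
    return max(scores, key=scores.get)

# Majority voting picks "42" (2 votes vs 1).
print(self_consistency(["42", "42", "17"]))
# Confidence weighting picks "42" (0.9 vs 0.2 + 0.3), even though
# "17" wins a plain majority vote on these three samples.
print(confidence_informed_self_consistency(["42", "17", "17"], [0.9, 0.2, 0.3]))
```

The weighted variant illustrates why confidence information can cut the number of samples needed: a single high-confidence path can outvote several low-confidence ones.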
Happy to share that two #UCLA papers (w/ @kaiwei_chang @adityagrover_) won 🏆 Best Paper at recent workshops! 1️⃣ VideoPhy-2 @ World Models Workshop #ICML2025 Congrats @clarkipeng @hbXNov! https://t.co/o9GqyYj9E6 2️⃣ 3DLLM-Mem @ Foundation Models Meet Embodied Agents #CVPR2025
0 replies · 0 reposts · 10 likes
Happy to share our new work, EditInspector, which is also accepted to #ACL2025! Congrats @ron_yosef on leading it! Check out https://t.co/sYXFjPkBfE for more details
editinspector.github.io
0 replies · 1 repost · 6 likes
Happy to announce that our paper “EditInspector: A Benchmark for Evaluation of Text-Guided Image Edits” was accepted to #ACL2025 🎉 📄 https://t.co/mwugXz1H5q 🌐
2 replies · 5 reposts · 22 likes