Yonatan Bitton

@YonatanBitton

Followers 3K · Following 25K · Media 222 · Statuses 11K

Research Scientist @GoogleAI | Multimodal ML & Vision-Language | Account restored after hack (July 2025).

Israel
Joined January 2020
@YonatanBitton
Yonatan Bitton
5 months
🚨 Good news: My original account @YonatanBitton is back! Huge thanks to the amazing community (and colleagues) who helped me restore it after the hack. If you followed my temporary account (@YonatanBittonX), please return here for updates.
1 · 0 · 17
@_akhaliq
AK
2 days
Error-Driven Scene Editing for 3D Grounding in Large Language Models
3 · 8 · 32
@zhan1624
Yue Zhang
2 days
Thanks @_akhaliq for sharing our work "DEER3D: Error-Driven Scene Editing for 3D Grounding in Large Language Models"!🙏 For those who are interested, here is the detailed thread-> https://t.co/T9IXxQdVCu
0 · 7 · 15
@YonatanBitton
Yonatan Bitton
2 days
Sharing our new work, led by @zhan1624. We present DEER-3D 🦌, a framework utilizing explicit 3D scene editing to generate visual counterfactuals for 3D grounding. This method, which targets visual context instead of relying on textual augmentations, corrects model biases more
0 · 8 · 10
@zhan1624
Yue Zhang
2 days
🚨 Thrilled to introduce DEER-3D: Error-Driven Scene Editing for 3D Grounding in Large Language Models - Introduces an error-driven scene editing framework to improve 3D visual grounding in 3D-LLMs. - Generates targeted 3D counterfactual edits that directly challenge the
3 · 33 · 47
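To make the DEER-3D thread above a bit more concrete, here is a minimal, hypothetical sketch of what error-driven scene editing could look like in code: when a 3D grounding model selects the wrong object, the scene itself is edited (the text stays fixed) so that the distractor no longer satisfies the referring expression. The Scene/Object3D classes and the edit_for_error helper are illustrative assumptions, not the authors' actual implementation.

```python
# A minimal, hypothetical sketch of error-driven counterfactual scene editing,
# loosely following the idea described in the DEER-3D tweets: when a 3D grounding
# model picks the wrong object, edit the scene (not the text) so that the
# distractor no longer satisfies the referring expression. All names here
# (Scene, Object3D, edit_for_error) are illustrative assumptions.
from dataclasses import dataclass, replace
from typing import List

@dataclass(frozen=True)
class Object3D:
    obj_id: int
    category: str      # e.g. "chair"
    color: str         # e.g. "red"
    position: tuple    # (x, y, z) in scene coordinates

@dataclass
class Scene:
    objects: List[Object3D]

def edit_for_error(scene: Scene, target_id: int, predicted_id: int) -> Scene:
    """Build a visual counterfactual: change an attribute of the wrongly
    predicted distractor so it no longer matches the target's description."""
    edited = []
    target = next(o for o in scene.objects if o.obj_id == target_id)
    for obj in scene.objects:
        if obj.obj_id == predicted_id and obj.color == target.color:
            # Recolor the distractor so color now disambiguates the target.
            new_color = "blue" if target.color != "blue" else "green"
            edited.append(replace(obj, color=new_color))
        else:
            edited.append(obj)
    return Scene(edited)

# Usage: if the model grounded "the red chair near the table" to object 7
# instead of object 3, the edited scene keeps the text fixed but recolors
# object 7, yielding a targeted counterfactual training/eval example.
scene = Scene([
    Object3D(3, "chair", "red", (1.0, 0.5, 0.0)),
    Object3D(7, "chair", "red", (3.0, 0.5, 0.0)),
])
counterfactual = edit_for_error(scene, target_id=3, predicted_id=7)
print([o.color for o in counterfactual.objects])  # ['red', 'blue']
```

The key property is that the referring expression stays fixed while the visual evidence changes, which is what "targeting visual context instead of textual augmentations" amounts to in this toy version.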
@lovodkin93
Aviv Slobodkin
2 months
Officially a Doctor of Philosophy! 🎓🎉 Huge thanks to everyone who supported me in this wild ride, and especially to my supervisor Ido Dagan who has taught me so much! So excited for the next chapter!
10 · 1 · 29
@hila_chefer
Hila Chefer
2 months
Thrilled to share that two papers got into #NeurIPS2025 🎉 ✨ FlowMo (my first last-author paper 🤩) ✨ Revisiting LRP I’m immensely proud of the students, who not only led great papers but also grew and developed so much throughout the process 👇
@hila_chefer
Hila Chefer
6 months
Beyond excited to share FlowMo! We found that the latent representations of video models implicitly encode motion information and can guide the model toward coherent motion at inference time. Very proud of @ariel__shaulov @itayhzn for this work! Plus, it’s open source! 🥳
4 · 13 · 157
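For readers curious how an inference-time motion signal like the one described in the FlowMo tweet might be wired up, here is a minimal, hypothetical sketch: temporal differences of the video latents act as an implicit motion signal, and one gradient step nudges the latents toward smoother motion. The coherence measure, step size, and function names are assumptions for illustration, not FlowMo's actual method.

```python
# A minimal, hypothetical sketch of latent-based motion guidance at inference
# time: treat temporal differences of a video model's latents as an implicit
# motion signal and nudge the latents toward more coherent motion.
import torch

def motion_incoherence(latents: torch.Tensor) -> torch.Tensor:
    """latents: (T, C, H, W). Penalize erratic frame-to-frame changes by
    measuring the variance of temporal-difference magnitudes across time."""
    diffs = latents[1:] - latents[:-1]          # (T-1, C, H, W)
    per_step = diffs.flatten(1).norm(dim=1)     # magnitude of each step
    return per_step.var()                       # low variance ~ smooth motion

def guide_latents(latents: torch.Tensor, step_size: float = 0.1) -> torch.Tensor:
    """One guidance step: move latents down the gradient of the incoherence."""
    latents = latents.clone().requires_grad_(True)
    loss = motion_incoherence(latents)
    loss.backward()
    with torch.no_grad():
        return latents - step_size * latents.grad

# Usage inside a sampling loop (the denoiser itself is omitted here):
noisy = torch.randn(16, 4, 32, 32)   # 16 frames of 4x32x32 latents
guided = guide_latents(noisy)
```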
@EliahuHorwitz
Eliahu Horwitz
2 months
Excited to share this has now been accepted at #NeurIPS2025 as a position paper (<6% acceptance)!🎉 We advocate for systematically studying entire model populations via weight-space learning, and argue that this requires charting them in a Model Atlas. @NeurIPSConf #NeurIPS 🧵👇
@EliahuHorwitz
Eliahu Horwitz
8 months
🚨 New paper alert! 🚨 Millions of neural networks now populate public repositories like Hugging Face 🤗, but most lack documentation. So, we decided to build an Atlas 🗺️ Project: https://t.co/1JpsC6dCeg Demo: https://t.co/4Xy7yLdIZY 🧵👇🏻 Here's what we found:
0 · 21 · 64
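As a rough illustration of what studying a model population in weight space can mean, here is a minimal, hypothetical sketch: each checkpoint is summarized by simple weight statistics and the population is projected to 2D for an atlas-style view. The featurization and the PCA projection are illustrative choices, not the Model Atlas paper's method.

```python
# A minimal, hypothetical sketch of charting a population of checkpoints by
# simple weight-space features. Not the paper's actual pipeline.
import numpy as np

def weight_features(state_dict: dict) -> np.ndarray:
    """Summarize a checkpoint by per-tensor mean, std, and L2 norm."""
    feats = []
    for tensor in state_dict.values():
        t = np.asarray(tensor, dtype=np.float64).ravel()
        feats.extend([t.mean(), t.std(), np.linalg.norm(t)])
    return np.array(feats)

def chart_population(checkpoints: list) -> np.ndarray:
    """Project a population of checkpoints to 2D via PCA for an 'atlas' view."""
    X = np.stack([weight_features(sd) for sd in checkpoints])
    X = X - X.mean(axis=0)
    # Top-2 principal components from the SVD of the centered feature matrix.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return X @ vt[:2].T

# Usage with toy checkpoints sharing the same architecture:
rng = np.random.default_rng(0)
population = [{"w1": rng.normal(size=(8, 8)), "w2": rng.normal(size=(8,))} for _ in range(5)]
coords = chart_population(population)
print(coords.shape)  # (5, 2)
```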
@gordonhu608
Wenbo Hu
2 months
Glad to share that 3DLLM-Mem is accepted by #NeurIPS2025 Looking forward to meeting everyone in my undergrad city San Diego!!!
@gordonhu608
Wenbo Hu
6 months
🤔How to maintain a long-term memory for a 3D embodied AI agent across dynamic spatial-temporal environment changes in complex tasks? 🚀Introducing 3DLLM-Mem, a memory-enhanced 3D embodied agent that incrementally builds and maintains a task-relevant long-term memory while it
0 · 2 · 15
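Here is a minimal, hypothetical sketch of the kind of incremental long-term memory the quoted 3DLLM-Mem tweet describes: the agent writes observation embeddings as it explores and retrieves the entries most similar to the current task query. The class and method names are assumptions for illustration, not the 3DLLM-Mem interface.

```python
# A minimal, hypothetical sketch of a task-relevant long-term memory for an
# embodied agent: incrementally write observation embeddings, retrieve the
# top-k most similar entries for the current task query.
import numpy as np

class LongTermMemory:
    def __init__(self):
        self.keys = []     # normalized observation embeddings
        self.values = []   # arbitrary payloads (e.g. object snapshots)

    def write(self, embedding: np.ndarray, payload: dict) -> None:
        """Incrementally add a new observation as the agent explores."""
        self.keys.append(embedding / (np.linalg.norm(embedding) + 1e-8))
        self.values.append(payload)

    def retrieve(self, query: np.ndarray, k: int = 3) -> list:
        """Return the k stored entries most similar to the task query."""
        if not self.keys:
            return []
        q = query / (np.linalg.norm(query) + 1e-8)
        sims = np.stack(self.keys) @ q
        top = np.argsort(-sims)[:k]
        return [self.values[i] for i in top]

# Usage: write observations over time, then fetch task-relevant memories.
rng = np.random.default_rng(0)
memory = LongTermMemory()
for step in range(10):
    memory.write(rng.normal(size=64), {"step": step, "object": f"obj_{step}"})
relevant = memory.retrieve(rng.normal(size=64), k=3)
print([m["step"] for m in relevant])
```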
@YonatanBitton
Yonatan Bitton
2 months
Happy to share 3DLLM-Mem on long-term memory for 3D embodied agents is accepted to NeurIPS 2025! 🎉 🔗 https://t.co/hJISj748Si | 📄 https://t.co/IjxMBdYBdZ Congrats @gordonhu608 @yining_hong
arxiv.org
Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and...
0 · 3 · 14
@yining_hong
Yining Hong
2 months
Glad to share 3DLLM-Mem has been accepted to NeurIPS! Congrats Wenbo!
2 · 3 · 39
@DavidDinkevich
David Dinkevich
3 months
[1/6] 🎬 New paper: Story2Board. We guide diffusion models to generate consistent, expressive storyboards--no training needed. By mixing attention-aligned tokens across panels, we reinforce character identity without hurting layout diversity. 🌐 https://t.co/aRG81nu5qK
5 · 11 · 30
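To illustrate the "mixing attention-aligned tokens across panels" idea in the Story2Board tweet above, here is a minimal, hypothetical sketch: token features aligned with a recurring character are averaged across panels into a shared identity code and blended back into each panel, leaving the remaining tokens untouched. The tensor layout, the alignment masks, and the blend weight are assumptions, not the paper's mechanism.

```python
# A minimal, hypothetical sketch of sharing character-aligned token features
# across storyboard panels so identity is reinforced without overwriting the
# rest of the layout.
import torch

def mix_character_tokens(panel_tokens: torch.Tensor,
                         char_masks: torch.Tensor,
                         blend: float = 0.5) -> torch.Tensor:
    """panel_tokens: (P, N, D) token features per panel.
    char_masks:   (P, N) booleans marking character-aligned tokens."""
    mixed = panel_tokens.clone()
    # Average the character tokens across panels to get a shared identity code.
    masked = panel_tokens * char_masks.unsqueeze(-1)
    counts = char_masks.sum(dim=(0, 1)).clamp(min=1)
    identity = masked.sum(dim=(0, 1)) / counts          # (D,)
    # Blend the shared identity back into each panel's character tokens only.
    for p in range(panel_tokens.shape[0]):
        idx = char_masks[p]
        mixed[p, idx] = (1 - blend) * panel_tokens[p, idx] + blend * identity
    return mixed

# Usage with toy panels: 4 panels, 77 tokens each, 64-dim features.
tokens = torch.randn(4, 77, 64)
masks = torch.zeros(4, 77, dtype=torch.bool)
masks[:, 5:12] = True   # pretend tokens 5..11 attend to the protagonist
out = mix_character_tokens(tokens, masks)
print(out.shape)  # torch.Size([4, 77, 64])
```

Only the masked positions are touched, which is why, in this toy version, layout diversity elsewhere in each panel is preserved.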
@EliyaHabba
Eliya Habba @EMNLP 🇨🇳
4 months
Presenting my poster: 🕊️ DOVE - A large-scale multi-dimensional predictions dataset towards meaningful LLM evaluation, Monday 18:00 Vienna, #ACL2025. Come chat about LLM evaluation, prompt sensitivity, and our 250M COLLECTION OF MODEL OUTPUTS!
2 · 11 · 47
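As a toy illustration of the prompt-sensitivity analyses the DOVE poster invites discussion about, here is a minimal, hypothetical sketch: the same model answers the same questions under several prompt variants, and the spread of accuracy across variants summarizes how prompt-sensitive it is. The record format and field names are assumptions, not DOVE's actual schema.

```python
# A minimal, hypothetical sketch of measuring prompt sensitivity from a
# collection of model outputs. The record layout is illustrative only.
from collections import defaultdict
from statistics import mean, pstdev

records = [
    # (prompt_variant, question_id, is_correct)
    ("v1", "q1", True), ("v1", "q2", True), ("v1", "q3", False),
    ("v2", "q1", False), ("v2", "q2", True), ("v2", "q3", False),
    ("v3", "q1", True), ("v3", "q2", False), ("v3", "q3", True),
]

def prompt_sensitivity(records):
    """Accuracy per prompt variant, plus the spread across variants."""
    per_variant = defaultdict(list)
    for variant, _, correct in records:
        per_variant[variant].append(correct)
    accs = {v: round(mean(map(float, xs)), 2) for v, xs in per_variant.items()}
    return accs, pstdev(accs.values())

accs, spread = prompt_sensitivity(records)
print(accs)    # {'v1': 0.67, 'v2': 0.33, 'v3': 0.67}
print(spread)  # larger spread = more prompt-sensitive model
```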
@YonatanBitton
Yonatan Bitton
4 months
Happy to share two #UCLA (w/ @kaiwei_chang @adityagrover_) papers won 🏆 Best Paper at recent workshops! 1️⃣ VideoPhy-2 @ World Models Workshop #ICML2025 Congrats @clarkipeng @hbXNov! https://t.co/o9GqyYj9E6 2️⃣ 3DLLM-Mem @ Foundation Models Meet Embodied Agents #CVPR2025
0 · 0 · 10
@YonatanBitton
Yonatan Bitton
5 months
Happy to share our new work, EditInspector, that is also accepted to #ACL2025! Congrats @ron_yosef for leading! Check out https://t.co/sYXFjPkBfE for more details
editinspector.github.io
0 · 1 · 6
@ron_yosef
Ron Yosef
5 months
Happy to announce that our paper “EditInspector: A Benchmark for Evaluation of Text-Guided Image Edits” was accepted to #ACL2025 🎉 📄 https://t.co/mwugXz1H5q 🌐
2 · 5 · 22