Brian Gordon Profile
Brian Gordon

@Brian_Gordon13

Followers
47
Following
183
Media
8
Statuses
35

Research Intern @ Google | https://t.co/YF6cq9yyny @ Tel-Aviv University

Joined November 2021
@Brian_Gordon13
Brian Gordon
13 days
RT @jaron1990: 1/ What if you could animate a face directly from text? 🎭 Meet Express4D - a dataset of expressive 4D facial motions capture…
0
17
0
@Brian_Gordon13
Brian Gordon
3 months
RT @YonatanBitton: And finally, our work Unblocking Fine-Grained Caption Evaluation: AutoRater & Critic-and-Revise (…
0
3
0
@Brian_Gordon13
Brian Gordon
4 months
RT @sigal_raab: 🔔 Excited to announce that #AnyTop has been accepted to #SIGGRAPH2025! 🥳 ✅ A diffusion model that generates motion for arbitr…
0
25
0
@Brian_Gordon13
Brian Gordon
6 months
RT @andaristidou: 🚀 New preprint! 🚀 Check out AnyTop 🤩 ✅ A diffusion model that generates motion for arbitrary skeletons 🦴 ✅ Using only a…
0
42
0
@Brian_Gordon13
Brian Gordon
6 months
RT @DanielCohenOr1: Thrilled to see this plot in a recent survey on 'personalized image generation' ( — highlightin…
0
19
0
@Brian_Gordon13
Brian Gordon
7 months
RT @rotemsh3: Excited to introduce our new work: ImageRAG 🖼️✨ We enhance off-the-shelf generative models with Ret…
0
12
0
@Brian_Gordon13
Brian Gordon
7 months
RT @GuyTvt: 🚀 Meet DiP: our newest text-to-motion diffusion model! ✨ Ultra-fast generation ♾️ Creates endless, dynamic motions 🔄 Seamlessly…
0
89
0
@Brian_Gordon13
Brian Gordon
1 year
RT @ArarMoab: Check out our work "GameNGen": a game engine powered by a diffusion model that simulates DOOM in real time! Find out more:…
0
17
0
@Brian_Gordon13
Brian Gordon
1 year
RT @SanLorenzoRedes: Raffle on September 18. Original @SanLorenzo home jersey, size XL. To enter: 📌 You have to follow us and give…
0
440
0
@Brian_Gordon13
Brian Gordon
1 year
RT @nitzanguetta: Can you answer these riddles? We are happy to present our new paper "Visual Riddles: a Commonsense and World Knowledge C…
0
14
0
@Brian_Gordon13
Brian Gordon
1 year
We are happy to share that Mismatch Quest has been accepted to #ECCV2024 @eccvconf! 🥳 Check out additional details on the project website. Congrats to the team: @YonatanBitton, @shafir_yoni, @roopalgarg, Xi Chen, @DaniLischinski, @DanielCohenOr1, Idan Szpektor.
mismatch-quest.github.io
Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment
@Brian_Gordon13
Brian Gordon
2 years
1/ 📄 Excited to introduce our paper "Mismatch Quest: Visual and Textual Feedback for Image-Text Misalignment"! 🖼️👀 Website: w. @YonatanBitton, @shafir_yoni, @roopalgarg, Xi Chen, @DaniLischinski, @DanielCohenOr1, Idan Szpektor. 🧵
0
7
20
@Brian_Gordon13
Brian Gordon
1 year
RT @AbermanKfir: "Monkey See, Monkey Do"! 🐵 A cool new work demonstrating how manipulating self-attention features in diffusion models enab…
0
9
0
@Brian_Gordon13
Brian Gordon
1 year
RT @GuyTvt: #MDM is now 40X faster 🤩🤩🤩 (~0.4 sec/sample). How come?! (1) We released the 50 diffusion steps model (instead of 1000 steps)…
0
19
0
@Brian_Gordon13
Brian Gordon
2 years
RT @_akhaliq: Google announces PALP: Prompt Aligned Personalization of Text-to-Image Models. Paper page: … Content c…
0
104
0
@Brian_Gordon13
Brian Gordon
2 years
10/ 🏁 Conclusions: We present an end-to-end approach that provides visual and textual feedback for text-to-image models, identifying alignment discrepancies with visual annotations for targeted model refinement. Check out the paper and project website for more details! 🎉
0
0
0
@Brian_Gordon13
Brian Gordon
2 years
9/ More results from the SeeTRUE-Feedback test set! 🚀 The PaLI 55B model, tuned on TV-Feedback, provides precise feedback, spotlighting textual and visual misalignment sources. The figure captures the essence: accuracy and insights bundled in one!
Tweet media one
1
0
1
@Brian_Gordon13
Brian Gordon
2 years
8/ 🪄 Creating the TV-Feedback training set: gathering aligned image-text pairs, we construct a dataset for detecting and interpreting misalignments. Then, using the 'ConGen-Feedback' method to generate misalignments and labels, we obtain a diverse training set.
Tweet media one
1
0
0
@Brian_Gordon13
Brian Gordon
2 years
7/ 🏅 Introducing our SeeTRUE-Feedback evaluation benchmark test set, featuring 2,008 text-image misaligned pairs. Each pair includes three human-annotated misalignment descriptions. Additionally, we provide unified feedback covering both textual and visual misalignments.
Tweet media one
1
0
0
@Brian_Gordon13
Brian Gordon
2 years
6/ 🔍 Comparison of model outputs on two examples from our SeeTRUE-Feedback test set. The PaLI 55B model, fine-tuned on TV-Feedback, effectively identifies the misalignments, demonstrating its refined feedback ability. 👁️‍📝
Tweet media one
1
0
0
@Brian_Gordon13
Brian Gordon
2 years
5/ 🏕️ In the wild: We evaluate our model's generalisation capabilities with generations created using Adobe Firefly, Composable Diffusion, and Stable Diffusion versions 1.0 and 2.1.
Tweet media one
1
0
0