David Seunghyun Yoon
@david_s_yoon
Followers: 184 · Following: 205 · Media: 2 · Statuses: 80
Research Scientist @AdobeResearch #NLProc, #ML
San Jose, CA, US
Joined December 2010
Key contributions to the captioning model SlimVLM-3B, which takes a kid's sketch and generates text to send to Firefly for image and 3D generation. Thanks to amazing collaborator Viet and leader Trung.
Thrilled to see the launch of Project Aqua! It's incredibly rewarding to see a product I contributed to make its way into the world. https://t.co/llCNtkDv1t
blog.adobe.com
Project Aqua, launching today, is a free iOS app to inspire kids and parents through activities that grow creative confidence.
Join us today at 4:30 in Exhibit Hall C, D, E (#4311) to check out our poster "Localizing Knowledge in Diffusion Transformers".
Another presentation is happening today! Come stop by our poster session: Program Synthesis via Test-Time Transduction. Paper: https://t.co/T8HIViPSIn
Thu, Dec 4, 2025, 4:30 PM–7:30 PM PST, Exhibit Hall C, D, E, #2005
#NeurIPS2025
Come to our poster session for a discussion! Offline RL by Reward-Weighted Fine-Tuning for Conversation Optimization. Paper: https://t.co/OSfCAKr13x
Thu, Dec 4, 2025, 11:00 AM–2:00 PM PST, Exhibit Hall C, D, E, #210
#NeurIPS2025
Come to our session!
Adobe researchers are sharing groundbreaking work this week at the premier AI technology conference, NeurIPS 2025! We are proud to be presenting 24 papers that connect AI to the future of creativity, design, and data intelligence. We explore generative model innovation, efficient
Adobe Research @ #NeurIPS2025! Lots of exciting work will be presented this week. If you're interested in a deep discussion, potential collaborations, or internship opportunities, feel free to reach out; I'm around as well. Papers:
research.adobe.com
Adobe Researchβs NeurIPS 2025 contributions explore new territory in machine learning and AI, including layered image generation, design-aware templates, and subject-driven video models. This teaser...
Excited to share our work on understanding streaming video. Check out our paper and dataset!
We rely on gaze to guide our actions, but can current MLLMs truly understand it and infer our intentions? Introducing StreamGaze, the first benchmark that evaluates gaze-guided temporal reasoning (past, present, and future) and proactive understanding in streaming video.
World record performance: SambaNova is running Llama 3.1 405B at 114 t/s with full precision accuracy, in only one rack. Verified by @ArtificialAnlys! This speed unlocks so many use cases for enterprises and developers that we cannot wait to see them built on our platform.
Check out Adobe Research's new work in Computer Vision at #CVPR2024!
research.adobe.com
Look into the future of innovation at Adobe Research
Adobe Research is proud to be one of the sponsors of #CVPR2024! We look forward to the innovative work our researchers will present next week at the conference in Seattle!
If you are at #NAACL2024 today, come by our poster #49 to check out our explainable, **editable**, part-based image classifier! Users can intervene in the classification process or even modify the classifier by simply editing text descriptors.
Our #NAACL2024 work introduces #PEEB, an explainable image classifier that (1) detects object parts; (2) matches them to textual descriptors exclusively (**no class names** are used) to make predictions. Editing the descriptors immediately changes the classifier (no re-training).
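For readers curious what "matching detected parts to textual descriptors, with no class names" looks like mechanically, here is a minimal illustrative sketch. It is not the PEEB implementation: the part detector and the image/text encoders are stand-ins (random unit vectors), and the descriptor lists are made up. The point is only the scoring rule, where a class's score is the sum of part-crop-to-descriptor similarities, so editing the descriptor dictionary changes the classifier without any retraining.

```python
# Hedged sketch of a PEEB-style descriptor-matching classifier (illustrative only).
# Assumes a part detector and a CLIP-like joint image-text embedding space;
# random unit vectors stand in for the real encoders here.
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # embedding dimension (assumption)

def embed_text(descriptor: str) -> np.ndarray:
    """Stand-in for a text encoder; returns a unit-norm embedding."""
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

def embed_part_crop(part_name: str, image) -> np.ndarray:
    """Stand-in for detecting `part_name` in `image` and embedding the crop."""
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)

# Classes are defined purely by editable part descriptors; no class names are
# ever shown to the model, so editing this dict "re-programs" the classifier.
class_descriptors = {
    "painted_bunting": {"head": "blue head", "belly": "red belly", "wings": "green wings"},
    "cardinal":        {"head": "red crested head", "belly": "red belly", "wings": "red wings"},
}

def classify(image) -> str:
    # Score each class by summing similarities between its part descriptors
    # and the embeddings of the corresponding detected part crops.
    scores = {
        cls: sum(
            float(embed_part_crop(part, image) @ embed_text(desc))
            for part, desc in parts.items()
        )
        for cls, parts in class_descriptors.items()
    }
    return max(scores, key=scores.get)

print(classify(image=None))  # with real encoders, pass an actual image
```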
Poster presentation of our exciting work :) Great job! @ArchikiPrasad
Thanks for the shoutout + covering our #ACL2023nlp work on MeetingQA, @JayAlammar @cohere @forai_ml! It was great interacting with you. PS: For those interested, details at https://t.co/t706r2ZxLL cc/ Trung, @david_s_yoon, Hanieh, @FranckDernoncou and @mohitban47
New #ACL2023nlp paper: MeetingQA! We explore how good LMs are at answering questions in complex conversational meeting settings (with rhetorical + discussion-seeking questions & multi-span + multi-speaker answers) https://t.co/CjrG3CYTpT
@AdobeResearch @uncnlp
Adobe researchers are presenting new #ComputerVision work at this week's #CVPR2023. In addition to the publications, @Adobe authors have also contributed to the conference in many different ways. Check out the blog post to learn more! https://t.co/CarCfJr2QI
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
WebGPU in Chromium 113, excited for the web and great to see this ship to stable!
This feels unreal! After more than 6 years working on WebGPU, it's getting released in Chromium 113, in stable and without flags! It only took a bit longer than the 2-year adventure we initially thought it would be.
Read more about it here
Adobe researchers are presenting new work at #EMNLP2022, one of the top research conferences on #NaturalLanguageProcessing. Check out the blog post to learn more! https://t.co/ZxQebCGtnm
Check out the list of @Adobe co-authored papers at this week's #COLING2022, one of the top research conferences on #NaturalLanguageProcessing. https://t.co/vjXqzEThZq
Check out the full list of @Adobe co-authored papers and other contributions at this year's @Siggraph, the premier academic conference in computer graphics and interactive techniques! https://t.co/prsqzffyPt
While reviewing conference papers, I was quite surprised that I could easily identify many papers from the authors' websites or academic pages. They had not posted the papers to arXiv, but they had still revealed the information. That's a kind of anonymity violation, right?