Cusuh Profile
Cusuh
@cusuh_
Followers: 878 · Following: 888 · Media: 15 · Statuses: 133

Research Scientist @ Adobe | ML PhD from Georgia Tech | previously Google Research, MPI

Joined April 2010
Cusuh @cusuh_ · 2 years
Defended my PhD today!!! 🎓 Thank you to everyone who supported me along the way, especially my advisor @jhhays for taking a chance on my clueless little undergrad self and putting up with me ever since. I'll be joining Adobe next month as a research scientist under @aseemaa!
17 replies · 7 retweets · 213 likes
Cusuh @cusuh_ · 2 months
RT @gcduncombe: I've had so much fun experimenting with our latest @Adobe Firefly Video model - coming from a background in film, I've love…
0 replies · 3 retweets · 0 likes
Cusuh @cusuh_ · 10 months
RT @AdobeVideo: We’re excited to share progress on the all-new Firefly Video Model with Text to Video & Image to Video capabilities across…
0 replies · 8 retweets · 0 likes
Cusuh @cusuh_ · 10 months
RT @aseemaa: After a nearly 20 month journey we are finally going public with our work on video generative models at Adobe! This work start…
0 replies · 1 retweet · 0 likes
Cusuh @cusuh_ · 1 year
I'll be presenting our work tomorrow (6/19), 5–6:30pm in Arch 4A-E (poster #329)! #CVPR2024
Cusuh @cusuh_ · 1 year
In our latest work, we introduce personalized residuals for efficient personalization of T2I diffusion models and localized attention-guided sampling that dynamically blends the learned concept into new contexts. #CVPR2024
1 reply · 3 retweets · 22 likes
Cusuh @cusuh_ · 1 year
In collaboration w/ Matt, @jhhays, Nick, Yuchen, @rzhang88, Tobias. @mlatgt @AdobeResearch
0 replies · 0 retweets · 1 like
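A minimal sketch of the idea behind personalized residuals, assuming a PyTorch-style setup: the pretrained weights stay frozen and only a small low-rank offset is trained per concept, while a soft cross-attention mask localizes where that offset takes effect at sampling time. The class name, rank, and blending helper below are illustrative assumptions, not the paper's actual API.

```python
import torch
import torch.nn as nn

class LowRankResidualLinear(nn.Module):
    """Frozen base linear layer plus a learnable low-rank residual.

    Only A and B are trained during personalization, so the concept is
    captured by a small weight offset (B @ A) while the pretrained model
    itself is never updated.
    """
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                  # keep pretrained weights frozen
        out_f, in_f = base.weight.shape
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_f, rank))  # zero-init: no change at start

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.t() @ self.B.t()

def blend(eps_personal, eps_base, attn_mask):
    # Hypothetical helper for localized attention-guided blending: a soft mask
    # derived from the concept token's cross-attention map decides where the
    # personalized prediction is used instead of the base model's.
    return attn_mask * eps_personal + (1 - attn_mask) * eps_base

layer = LowRankResidualLinear(nn.Linear(320, 320), rank=4)
print(layer(torch.randn(2, 77, 320)).shape)  # torch.Size([2, 77, 320])
```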
Cusuh @cusuh_ · 1 year
RT @_akhaliq: Personalized Residuals for Concept-Driven Text-to-Image Generation. We present personalized residuals and localized attention…
0 replies · 32 retweets · 0 likes
Cusuh @cusuh_ · 2 years
PS I'm looking to hire a research intern for summer 2024 to work on image/video generation or multimodal learning. If you're interested, please apply online and send me an email :)
1 reply · 5 retweets · 20 likes
Cusuh @cusuh_ · 2 years
MCM will be part of the "Diffusion for Geometry" session tomorrow (Wed 8/9) at 9am in room 502 AB. Hope to see you there! :) #SIGGRAPH2023
Cusuh @cusuh_ · 2 years
I'll be presenting our work on multimodal conditioning modules (MCM) at #SIGGRAPH2023 this week! MCM is a small network that enables multimodal image synthesis using pretrained diffusion models without any direct updates to the diffusion model parameters.
0 replies · 1 retweet · 13 likes
Cusuh @cusuh_ · 2 years
Thank you to my collaborators (@jhhays, Cynthia, Krishna, Zhifei, and Tobias)! @mlatgt @AdobeResearch
0 replies · 2 retweets · 4 likes
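A rough sketch of what a small conditioning module in this spirit could look like, assuming SPADE/FiLM-style feature modulation in PyTorch: a tiny network predicts scale and shift maps from the extra modality and applies them to an intermediate feature of the frozen diffusion U-Net. Layer sizes, names, and shapes here are assumptions for illustration, not the published MCM design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModulationBlock(nn.Module):
    def __init__(self, cond_channels: int, feat_channels: int):
        super().__init__()
        # Small trainable network; the diffusion model itself stays frozen.
        self.net = nn.Sequential(
            nn.Conv2d(cond_channels, 64, 3, padding=1), nn.SiLU(),
            nn.Conv2d(64, 2 * feat_channels, 3, padding=1),
        )

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Resize the condition (e.g., a sketch or segmentation map) to the
        # feature resolution, then predict per-pixel scale and shift.
        cond = F.interpolate(cond, size=feat.shape[-2:], mode="bilinear",
                             align_corners=False)
        scale, shift = self.net(cond).chunk(2, dim=1)
        return feat * (1 + scale) + shift  # modulate the frozen U-Net feature

mod = ModulationBlock(cond_channels=1, feat_channels=320)
feat = torch.randn(1, 320, 32, 32)    # intermediate U-Net activation
sketch = torch.randn(1, 1, 256, 256)  # conditioning input
print(mod(feat, sketch).shape)        # torch.Size([1, 320, 32, 32])
```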
Cusuh @cusuh_ · 2 years
RT @_akhaliq: Modulating Pretrained Diffusion Models for Multimodal Image Synthesis. abs: … project page: https://t.…
0 replies · 20 retweets · 0 likes
Cusuh @cusuh_ · 3 years
Stop by poster #85 in the afternoon session tomorrow and say hi! #ECCV2022.
Cusuh @cusuh_ · 3 years
Excited to share "CoGS: Controllable Generation and Search from Sketch and Style" at #ECCV2022! (w/ Gemma Canet Tarrés, @tuvbui, @jhhays, @JCollomosse, Zhe Lin). Project page:
0 replies · 0 retweets · 7 likes
Cusuh @cusuh_ · 3 years
CoGS also offers an optional refinement step via unified embeddings for search and synthesis. We train a VAE that maps the codebook representations into a metric latent space, thus enabling retrieval and interpolation using the transformer's output.
0 replies · 0 retweets · 0 likes
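A toy sketch of this unified-embedding idea, assuming pooled codebook representations and a standard Gaussian VAE in PyTorch; all dimensions, the class name, and the retrieval step are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class CodebookVAE(nn.Module):
    """Maps a codebook representation into a metric latent space."""
    def __init__(self, codebook_dim: int = 256, latent_dim: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(codebook_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, codebook_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar

# Retrieval: embed the transformer's output and find nearest neighbors in
# the latent space; interpolation: lerp two latents and decode back to a
# codebook representation for synthesis.
vae = CodebookVAE()
q = vae.mu(vae.enc(torch.randn(1, 256)))          # query latent (mean)
gallery = vae.mu(vae.enc(torch.randn(100, 256)))  # precomputed gallery latents
nearest = torch.cdist(q, gallery).argmin(dim=1)   # nearest-neighbor search
print(nearest.shape)  # torch.Size([1])
```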
Cusuh @cusuh_ · 3 years
To enable training & evaluation, we create a large-scale paired dataset of images and "pseudosketches," which are derived from the images via an automated process and filtered via AMT crowdsourcing. A subset corresponds to the Sketchy Database, which contains free-hand sketches.
1 reply · 0 retweets · 1 like
Cusuh @cusuh_ · 3 years
CoGS enables decoupled control over the structure and style of images across a diverse set of classes. We train a transformer to encode an input sketch, exemplar style image, and class label into a composite codebook representation, which is decoded by a pre-trained VQGAN.
1 reply · 0 retweets · 0 likes
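A simplified sketch of this pipeline in PyTorch, assuming patch-embedded sketch tokens and a single pooled style feature; the token counts, dimensions, and module names are illustrative assumptions, and in the real system the predicted indices would be decoded by a frozen, pretrained VQGAN.

```python
import torch
import torch.nn as nn

class CompositeCodebookTransformer(nn.Module):
    """Encodes sketch + style + class into a grid of VQGAN codebook indices."""
    def __init__(self, vocab: int = 1024, dim: int = 512,
                 n_classes: int = 125, grid: int = 16):
        super().__init__()
        self.grid = grid
        self.sketch_embed = nn.Linear(256, dim)   # embedded sketch patches
        self.style_embed = nn.Linear(512, dim)    # pooled style-image feature
        self.class_embed = nn.Embedding(n_classes, dim)
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_codes = nn.Linear(dim, vocab)     # logits over the VQGAN codebook

    def forward(self, sketch_patches, style_feat, class_id):
        tokens = torch.cat([
            self.sketch_embed(sketch_patches),          # (B, 256, dim)
            self.style_embed(style_feat).unsqueeze(1),  # (B, 1, dim)
            self.class_embed(class_id).unsqueeze(1),    # (B, 1, dim)
        ], dim=1)
        h = self.encoder(tokens)[:, : self.grid * self.grid]  # spatial tokens
        return self.to_codes(h).argmax(-1)                    # codebook indices

model = CompositeCodebookTransformer()
codes = model(torch.randn(1, 256, 256), torch.randn(1, 512), torch.tensor([3]))
print(codes.shape)  # torch.Size([1, 256]); decode with a pretrained VQGAN
```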
Cusuh @cusuh_ · 3 years
So excited to be attending #CVPR2022 in person! Looking forward to seeing familiar faces (especially some of my virtual collaborators) and meeting new ones 😁
0 replies · 0 retweets · 13 likes
Cusuh @cusuh_ · 3 years
RT @ak92501: CoGS: Controllable Generation and Search from Sketch and Style. abs:
0 replies · 11 retweets · 0 likes