Samyadeep Basu Profile
Samyadeep Basu

@BasuSamyadeep

Followers
584
Following
79
Media
9
Statuses
67

Research Scientist @ Adobe Research, CS PhD from UMD

Washington DC
Joined August 2022
@BasuSamyadeep
Samyadeep Basu
5 months
Excited to start as a Research Scientist at @Adobe after a great time at UMD! I am going to be working on topics in language model reasoning and multimodality. Reach out if you are interested in collaborating!
11
4
154
@koustavagoswami
Koustava Goswami
17 days
🚀 New research drop! We reimagine attribution not as retrieval, but as a reasoning problem. Introducing DECOMPTUNE 🧩 → a novel RL-driven training framework that teaches small models to reason through decomposition. 📄 https://t.co/G7IA2GXe0v #AI #Reasoning
2
2
5
@ReliableAI
RELAI
27 days
🚀 RELAI is live — a platform for building reliable AI agents
🔍 We complete the learning loop for agents: simulate → evaluate → optimize
- Simulate with LLM personas, mocked MCP servers/tools and grounded synthetic data
- Evaluate with code + LLM evaluators; turn human
9
28
54
@BasuSamyadeep
Samyadeep Basu
1 month
Our team at @AdobeResearch is looking for summer research interns in the area of MLLM / LLM reasoning and MLLM post-training. Email me (samyadeepb@adobe.com) or DM your CV and research interests if you are interested in interning at Adobe Research for Summer '26!
15
57
675
@RezaeiKeivan
Keivan Rezaei
2 months
🎉 Excited to share that our paper Localizing Knowledge in Diffusion Transformers has been accepted to #NeurIPS2025! In this work, we extend our previous study on localizing knowledge within T2I models to DiTs such as FLUX, PixArt, and SANA. Paper: https://t.co/uouXmJWqx6
1
2
19
@FeiziSoheil
Soheil Feizi
3 months
Introducing Maestro: the holistic optimizer for AI agents. Maestro optimizes the agent graph and tunes prompts/models/tools, fixing agent failure modes that prompt-only or RL weight tuning can't touch. Maestro outperforms leading prompt optimizers (e.g., MIPROv2, GEPA) on
18
57
327
@BasuSamyadeep
Samyadeep Basu
5 months
Check out our paper on how to use mechanistic interpretability to perform data attribution for extractive QA tasks. Appearing at #COLM2025 now!
@BasuSamyadeep
Samyadeep Basu
9 months
Check out our preprint on mechanistic circuits for extractive QA in language models! 🧵 We demonstrate that circuits *exist* for real-world tasks like extractive QA, and their components can be leveraged for applications: data attribution (for free!) and model steering. 🚀🔍
0
1
18
@BasuSamyadeep
Samyadeep Basu
5 months
Check out our recent work on evaluating whether popular VLMs really reason "faithfully" through the lens of various explicit and implicit biases (especially visual ones)! For more details, check the thread by @b_shrir.
@b_shrir
Sriram B
5 months
Do AI models really think the way they say they do? In our latest paper, we examine the faithfulness of the chain-of-thought (CoT) produced by LLMs and LVLMs when exposed to a wide range of biases, with a special focus on visual biases and subtler, implicit biases.
0
2
12
@chengez1114
Yize Cheng ✈️ NeurIPS 2025
6 months
🔥 What if you could humanize any AI-generated text to fool ANY detector?
🚨 We present Adversarial Paraphrasing — a universal attack that breaks a wide range of detectors without fine-tuning or detector knowledge. Just pure evasion.
🔗 https://t.co/zA1000eBA7
👇 Thread below.
1
2
10
@BasuSamyadeep
Samyadeep Basu
6 months
Check out our paper on knowledge localization in state-of-the-art DiTs (e.g., Flux). Using our interpretability insights, we provide *localized* fine-tuning methods which show improvements in applications such as *unlearning* and *personalization*.
@arman_zareii
Arman Zarei
6 months
🚀 New Paper: Localizing Knowledge in Diffusion Transformers
🌐 Project page: https://t.co/NXFJm9Twkp
📄 Paper: https://t.co/UY0oPDhRTp
Joint work with: @BasuSamyadeep, @RezaeiKeivan, Zihao Lin, @nagsayan112358, @FeiziSoheil
0
2
17
@BasuSamyadeep
Samyadeep Basu
7 months
Check out our #iclr2025 paper on copyright infringement in diffusion models!
@MLMazda
Mazda Moayeri
7 months
Remember the Ghibli memes? Never direct replicas, but always unmistakably in that style? Would you call that copying? How can you tell, and crucially, in a way that art + legal folks would understand + adopt? We grapple with these questions in ArtSavant 🎨 #iclr poster 535 this PM 🧵
0
1
4
@FeiziSoheil
Soheil Feizi
7 months
🚀 Introducing Data Agents — generate accurate, reasoning-based AI benchmarks from your own data in minutes!
⚡ With Data Agents, we've created 100+ benchmarks with 100K+ samples using docs from tools like React, PyTorch, Kubernetes, LangChain, and more.
📂 All benchmarks are
6
31
122
@RyanSullyvan
Ryan Sullivan
9 months
I'm heading to AAAI to present our work on multi-objective preference alignment for DPO from my internship with @GoogleAI. If anyone wants to chat about RLHF, RL in games, curriculum learning, or open-ended environments, please reach out!
2
1
29
@BasuSamyadeep
Samyadeep Basu
9 months
Can mechanistic insights lead to tangible applications for multimodal models? Check out our recent survey on this topic! We highlight the practical aspects of interpretability methods and lay out various open problems in the area.
0
3
28
@BasuSamyadeep
Samyadeep Basu
9 months
Check out our preprint on mechanistic circuits for extractive QA in language models! 🧵 We demonstrate that circuits *exist* for real-world tasks like extractive QA, and their components can be leveraged for applications: data attribution (for free!) and model steering. 🚀🔍
2
2
24
@FeiziSoheil
Soheil Feizi
10 months
Wow, I am speechless and deeply honored to receive the Presidential Early Career Award for Scientists and Engineers (PECASE), the highest honor bestowed by the U.S. government on outstanding scientists and engineers early in their careers. I'm grateful for the recognition of our
46
6
301
@RezaeiKeivan
Keivan Rezaei
11 months
🚨 Preprint from internship at @allen_ai
🤖 We propose restorative unlearning: not just forgetting knowledge from specific documents but retaining the knowledge the model would have had if those documents had never been part of the training corpus. Paper: https://t.co/mgluIMZpXF
3
24
140
@FeiziSoheil
Soheil Feizi
1 year
LLMs are powerful but prone to 'hallucinations' — false yet plausible outputs. In our #NeurIPS2024 paper, we introduce a compute-efficient method for detecting hallucinations in single responses using hidden states, attention maps, and output probabilities. Our approach achieves
4
17
105
@RyanSullyvan
Ryan Sullivan
1 year
Have you ever wanted to add curriculum learning (CL) to an RL project but decided it wasn't worth the effort? I'm happy to announce the release of Syllabus, a library of portable curriculum learning methods that work with any RL code! https://t.co/K1AbUfYL7Q
github.com
RyanNavillus/Syllabus: Synchronized Curriculum Learning for RL Agents
5
10
77
@FeiziSoheil
Soheil Feizi
1 year
How do vision language models process information in factual visual question answering tasks? In our #NeurIPS2024 paper, we use a constraint-based formulation to study this problem. We introduce VQA-Constraints, a rich test-bed with 9.7K annotated visual questions for deep
@BasuSamyadeep
Samyadeep Basu
1 year
Interested in how MLLMs (e.g., LLaVA) process information "mechanistically" for VQA tasks? Check out our #neurips2024 paper, in which we study this. tl;dr: LLMs under a visual prompt process info quite differently! @FeiziSoheil @dannimassi @besanushi
0
4
36