
Leonard Salewski
@L_Salewski
105 Followers · 65 Following · 21 Media · 35 Statuses
Working with Transformers before it was cool. (V)LLMs + RAG. Research intern at @Nvidia | PhD student at @uni_tue @ExplainableML @cgtuebingen | ex @Bosch_AI
Santa Clara
Joined May 2023
#scholarGPT can now perform complex filtering operations, like selecting a specific year, making research even more efficient! Built with self-query from @LangChainAI. Sign up on the waitlist to be the first to try it out: https://t.co/iRS47YelE4 #ChatGPT #Research
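The self-query approach mentioned above lets an LLM turn a natural-language request ("papers from 2023") into a structured metadata filter that is applied before semantic search. A minimal sketch of the filtering idea in plain Python, with a regex standing in for the LLM-based query parser (the `papers` corpus and the `parse_year_filter` / `self_query` helpers are illustrative, not actual #scholarGPT or @LangChainAI code):

```python
import re

# Toy corpus of paper metadata, standing in for a real vector store.
papers = [
    {"title": "Attention Is All You Need", "year": 2017},
    {"title": "CLIP", "year": 2021},
    {"title": "In-Context Impersonation", "year": 2023},
]

def parse_year_filter(query: str):
    """Crude stand-in for the LLM that extracts a structured
    filter (here: a single year) from the user's query."""
    match = re.search(r"\b(19|20)\d{2}\b", query)
    return int(match.group()) if match else None

def self_query(query: str, docs: list) -> list:
    """Apply the extracted metadata filter before any semantic search."""
    year = parse_year_filter(query)
    if year is None:
        return docs
    return [d for d in docs if d["year"] == year]

print(self_query("papers from 2023 on impersonation", papers))
```

In a real self-query retriever, the extracted filter would be applied to document metadata in a vector store before (or alongside) the similarity search itself.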
I'm incredibly grateful to my supervisors @zeynepakata and Hendrik Lensch for their invaluable guidance and support throughout this journey. I am also very thankful for all of the support from my two amazing groups, @ExplainableML and @CG_Tuebingen.
I am very happy to announce that I successfully defended my PhD thesis with the title "Advancing Multimodal Explainability: From Visual Reasoning to In-Context Impersonation".
PhD Spotlight: Leonard Salewski. Celebrate @L_Salewski, who will defend his PhD on 24th June! Leonard has been a PhD student in the EML and @CG_Tuebingen groups at @uni_tue since May 2020. He is part of the IMPRS-IS program (@MPI_IS), jointly supervised by @zeynepakata
We are hiring a PhD intern (in-person at an NVIDIA office in the USA; California preferred) to work on multi-modal foundation models + agents / digital avatars. The ideal candidate has a track record of leading impactful research and the ability to obtain a work permit in the US.
I'm happy to share that from today I'm starting a new position as a Research Intern at @nvidia. I will be working with @ekta_prashnani, Joohwan Kim and Iuri Frosio.
Do you want to know how ZerAuCap generates audio captions without training on paired data? Come to our #NeurIPS2023 ML for Audio talk at 11:10am in Room 228-230 or visit us at our poster from 1:30pm to 3:00pm!
Are you interested in #LLMs and their ability to impersonate? Visit us during the upcoming poster session 2 at #1426, where Stephan Alaniz and I will present our #NeurIPS2023 Spotlight. Paper: https://t.co/cz5K9xoZ2Y
Thrilled to share that my research on In-Context Impersonation in Large Language Models has been accepted as a spotlight paper at #NeurIPS2023! Joint work with Stephan Alaniz, Isabel Rio-Torto, @cpilab and @zeynepakata. Check out the paper here:
Excited to be at #NeurIPS2023! Hit me up if you want to chat about my Spotlight paper on LLM impersonation, my ML for Audio workshop oral paper on zero-shot audio captioning, or RAG for scientific #Chatbots.
Amazing collaboration with Stephan Alaniz, Isabel Rio-Torto, Eric Schulz (@cpilab) and Zeynep Akata (@zeynepakata)!
These findings earned us an "Excellent rebuttal" comment from the most skeptical reviewer and, finally, a Spotlight at #NeurIPS2023! Read the revised paper on arXiv (https://t.co/Dk92FLtfN4) and visit our poster in #NewOrleans on December 12th in poster session 2.
For the vision-language tasks, we evaluated in-context impersonation on two additional fine-grained datasets (FGVC Aircraft and Oxford Flowers), studied more bias groups, and examined the effect of composing personas, revealing #LLMs' biases.
For reasoning, we extended our analysis to all MMLU subtasks (STEM, Humanities, Social Sciences and Other) and added a neutral baseline, once more confirming our finding that impersonating task experts improves over impersonating domain experts and non-experts.
We also extended our bandit analysis to more specific ages within the 2–20 range and to older ages (20–60). For ages 2–20, we again found #LLMs replicating human-like developmental findings; above that range, there is no such effect. This extends our findings to a broader age group.
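In the bandit analysis above, impersonating personas of different ages changes how much the model explores versus exploits. The task itself can be pictured as a two-armed bandit; a self-contained sketch with an epsilon-greedy agent standing in for the LLM (the mapping of exploration rate to persona age is only an analogy, and all names here are illustrative, not the paper's actual setup):

```python
import random

def run_bandit(epsilon: float, payoffs=(0.3, 0.7), steps=100, seed=0) -> float:
    """Two-armed bandit: a higher epsilon (more exploration) mimics the
    more exploratory behaviour reported for younger personas."""
    rng = random.Random(seed)
    counts = [0, 0]
    values = [0.0, 0.0]
    total = 0.0
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(2)                      # explore
        else:
            arm = max((0, 1), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < payoffs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # running mean
        total += reward
    return total / steps

print(run_bandit(epsilon=0.5))   # "child-like": explores a lot
print(run_bandit(epsilon=0.05))  # "adult-like": mostly exploits
```

In the paper the LLM plays this game via text prompts rather than an explicit epsilon; the sketch only illustrates the exploration-exploitation trade-off that the age personas modulate.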
Many concerns centered on the robustness and significance of our findings. We created 5 extra prompt variations through meta-prompting and reran our experiments with them, which confirmed our findings.
In case you didn't read the paper, here is a tl;dr: large language models can impersonate different personas, and this affects their performance in bandit, reasoning, and vision-and-language tasks. Find the full updated paper here:
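The impersonation itself boils down to prompt construction: the task prompt is prefixed with an instruction to answer as a given persona. A minimal sketch in the spirit of the paper's "If you were a {persona}" framing (the exact template wording here is illustrative, not the paper's):

```python
def impersonation_prompt(persona: str, task: str) -> str:
    """Prefix a task with an in-context impersonation instruction
    (wording is illustrative, not the paper's exact template)."""
    return f"If you were a {persona}, how would you answer the following?\n{task}"

# Contrasting personas, in the spirit of the paper's experiments:
print(impersonation_prompt("4 year old", "Which box gives more reward?"))
print(impersonation_prompt("task expert", "Which box gives more reward?"))
```

The same task text is held fixed while only the persona varies, which is what lets the paper attribute performance differences to the impersonated persona.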
Do you want to know how we earned a #NeurIPS2023 Spotlight for our "In-Context Impersonation Reveals Large Language Models' Strengths and Biases" paper? Buckle up and join for a ride on the rebuttal roller coaster!
This was joint work with Stefan Fauth, A. Sophia Koepke and Zeynep Akata (@zeynepakata). You can find the paper here https://t.co/Rs5FdneumY and we will present our work in #NewOrleans on December 16th at 11:10 am in Room 228 - 230.
Excited to share that our project "Zero-shot audio captioning with audio-language model guidance and audio context keywords" is now live on @arxiv and has been accepted as an Oral in the ML for Audio Workshop @ #NeurIPS2023! (Paper link below.)
If you enjoyed the #GaussianSplatting of "Diana of Versailles" from the @MuseeLouvre and this overview, follow me for similar tweets!