Daniela Massiceti

@dannimassi

Followers: 2K | Following: 873 | Media: 46 | Statuses: 591

Machine learning researcher at Microsoft Research | based in Sydney | PhD @UniofOxford | South African 🇿🇦 | @DeepIndaba 🐘 | maker of things

Sydney, Australia
Joined November 2009
@dannimassi
Daniela Massiceti
4 months
A bit late to the party, but happy to share our #NeurIPS2024 paper: We use a causal tracing methodology to study how multi-modal LLMs like LLaVA and Phi retrieve the information required to answer factual visual questions. 🔎❓👩🏼‍💻
arxiv.org
Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress. Recent work has studied these mechanisms for...
@dannimassi
Daniela Massiceti
4 months
Find My Things, an object recogniser our team built that helps people who are blind find their personal objects, has been featured on @Microsoft's blog on "6 ways AI is making a difference in the world". Check it out, along with some other amazing work by @MSFTResearch and others!
@Microsoft
Microsoft
5 months
AI is enabling change in remarkable ways, from improving health care and education to making life easier for people with disabilities. Read about six ways Microsoft AI is bringing positive change around the world.
@dannimassi
Daniela Massiceti
4 months
Excellent work led by @BasuSamyadeep in collaboration with @FeiziSoheil, @besanushi, Cecily Morrison, and Martin Grayson 🤓
@dannimassi
Daniela Massiceti
4 months
We also introduce a new model-editing algorithm that leverages these findings to correct errors and insert new information into MLLMs by targeting these early causal blocks.
@dannimassi
Daniela Massiceti
4 months
Our experiments reveal that 1) MLLMs rely on earlier layers for information storage compared to LLMs, and 2) a small subset of visual tokens plays a crucial role in transferring visual information to these causal blocks.
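The causal-tracing recipe behind these findings can be sketched on a toy model: run a clean input and cache every layer's activation, run a corrupted input, then restore one layer's clean activation at a time and measure how much of the clean output is recovered. A minimal numpy sketch, where the random layer stack and dimensions are illustrative stand-ins, not LLaVA or Phi:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a stack of fixed random linear layers with tanh.
# (Illustrative stand-in for a transformer, not the paper's models.)
weights = [rng.standard_normal((8, 8)) * 0.5 for _ in range(4)]

def forward(x, patch_layer=None, patch_value=None):
    """Run the stack, optionally overwriting one layer's activation
    with a cached clean activation (the causal-tracing intervention)."""
    acts = []
    h = x
    for i, w in enumerate(weights):
        h = np.tanh(h @ w)
        if i == patch_layer:
            h = patch_value          # restore the clean hidden state
        acts.append(h.copy())
    return h, acts

clean_x = rng.standard_normal(8)
corrupt_x = clean_x + rng.standard_normal(8)   # "corrupted" input

clean_out, clean_acts = forward(clean_x)
corrupt_out, _ = forward(corrupt_x)

# Causal effect of a layer = how much restoring its clean activation
# moves the corrupted run's output back toward the clean output.
base_err = np.linalg.norm(corrupt_out - clean_out)
for i in range(len(weights)):
    patched_out, _ = forward(corrupt_x, patch_layer=i, patch_value=clean_acts[i])
    recovery = base_err - np.linalg.norm(patched_out - clean_out)
    print(f"layer {i}: recovery {recovery:.3f}")
```

Layers whose restored activations recover most of the clean output are the "causal blocks"; in a real MLLM the same patching is done per token, which is how the role of individual visual tokens is measured.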
@dannimassi
Daniela Massiceti
9 months
Check out our #EMNLP paper on how we can improve CLIP's visio-linguistic capabilities by adding a loss term that distills this capability from a diffusion model (which has stronger visio-linguistic abilities). Great work led by @BasuSamyadeep.
@BasuSamyadeep
Samyadeep Basu
9 months
This #emnlp2024, we have 3 papers on VLM compositionality, prompt tuning and language model interpretability! (i) answers how we can distill good properties of generative models (e.g., compositionality) into contrastive CLIP-like models! [Main]
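The "extra loss term" idea can be sketched as a weighted sum of CLIP's contrastive objective and a divergence toward a stronger teacher's image-text scores. A toy numpy sketch, where the random embeddings, the placeholder teacher scores, and the weight `lam` are all illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)

# Toy L2-normalised image/text embeddings (stand-ins for CLIP encoders).
img = rng.standard_normal((4, 16)); img /= np.linalg.norm(img, axis=1, keepdims=True)
txt = rng.standard_normal((4, 16)); txt /= np.linalg.norm(txt, axis=1, keepdims=True)

logits = img @ txt.T / 0.07          # CLIP-style similarity logits

# Contrastive (InfoNCE) loss: matched pairs sit on the diagonal.
labels = np.arange(4)
contrastive = -np.log(softmax(logits)[labels, labels]).mean()

# Distillation term: KL divergence from a (hypothetical) diffusion-model
# teacher's image-text compatibility distribution to the student's.
teacher_scores = softmax(rng.standard_normal((4, 4)))   # placeholder teacher
student_scores = softmax(logits)
kl = (teacher_scores * (np.log(teacher_scores) - np.log(student_scores))).sum(axis=1).mean()

lam = 0.5                            # weighting, a free hyperparameter
total_loss = contrastive + lam * kl
print(f"contrastive={contrastive:.3f} distill={kl:.3f} total={total_loss:.3f}")
```

Minimising the combined loss pulls the student's image-text distribution toward the teacher's while preserving the contrastive training signal; `lam` trades off the two.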
@dannimassi
Daniela Massiceti
1 year
❗ Note, this job advert had a screening question which excluded applicants who were currently completing their PhD. The advert has been updated to consider applicants who will complete their PhD by the end of 2024. If you were affected by this, please do re-apply.
@dannimassi
Daniela Massiceti
1 year
… and a strong publication record in any of the following areas: multi-modal generative models, AI fairness/bias, transparency/interpretability, AI ethics, accessibility, data-centric AI, model robustness, OOD generalisation, long-tailed learning, & generative evaluation methods.
@dannimassi
Daniela Massiceti
1 year
Final 2 weeks to apply for an ML post-doc role in Equitable AI with our team at @MSFTResearch Cambridge UK! We're looking for candidates with a PhD in Computer Science, Machine Learning/AI or a related field, and a passion for building inclusive AI tech.
@dannimassi
Daniela Massiceti
1 year
Check out this role with the Gray Systems Lab at @Microsoft. It's an applied research position that will work closely with the Azure Data product team. Role is based in the US (Seattle, Silicon Valley, or Madison). Apply here:
microsoft.com
The GSL team designs, develops, and evaluates novel database system technologies, with a focus on transitioning the best ideas into Azure Data product lines.
@dannimassi
Daniela Massiceti
1 year
Our team at @MSFTResearchCam is hiring a 2-year AI resident to drive progress in equitable multi-modal AI. Candidates should have a PhD in ML or a related field, with experience in multi-disciplinary research and a passion for equitable tech. Apply here:
@dannimassi
Daniela Massiceti
1 year
Thank you to an amazing group of organizers who have been working in the background to bring this workshop to life for the 5th year running! @DrG_inCS, @jeffbigham, @edcutrell, @AbigaleStangl, Everley (Yu-Yun) Tseng and @Joshmyersdean1 💚
@dannimassi
Daniela Massiceti
1 year
Join us to hear about the latest advances in visual question answering, answer grounding, single answer grounding recognition, few-shot video object recognition, few-shot private object localization, and zero-shot image classification 🤓🤩
@dannimassi
Daniela Massiceti
1 year
Alongside our stellar invited speakers, at the VizWiz Grand Challenge workshop @CVPR2024 we will also be announcing the winners of 6 #AI challenges and hearing spotlight talks and seeing posters from researchers working at the intersection of #computervision and #accessibility/@a11y.
@dannimassi
Daniela Massiceti
1 year
We are looking forward to welcoming you and these fantastic speakers to #CVPR2024 next week. See you in Summit Room 435 next Tuesday, 18 June 2024, 8am-12pm PST!
@dannimassi
Daniela Massiceti
1 year
Talk 4️⃣: Elisa Kreiss (@ElisaKreiss), Assistant Professor @UCLA, Linguistics PhD, and Lab Director of @CoalasLab, who will be speaking on "How Communicative Principles (Should) Shape Human-Centred AI for Nonvisual Accessibility".
@dannimassi
Daniela Massiceti
1 year
Talk 3️⃣: Brian Fischler (@blindgator), comedian and host/producer of "That Real Blind Tech Show" podcast (@BlindTechShow). Brian will share perspectives on "Will Computer Vision and A.I. Revolutionize the Web for People Who Are Blind?".
@dannimassi
Daniela Massiceti
1 year
Talk 2️⃣: Raul Puri (@TheRealRPuri) and Rowan Zellers (@rown), researchers from @OpenAI, who have been involved in developing GPT-4o and exploring its use for the blind/low vision community via OpenAI's @BeMyEyes partnership.
@dannimassi
Daniela Massiceti
1 year
Talk 1️⃣: Soravit Beer Changpinyo (@schangpi), Software Engineer at @Google Research, who has been involved in developing and testing Google's Gemini model amongst other things. Soravit will be speaking on "Towards Vision and Richer Language(s)".