
Daniela Massiceti
@dannimassi
2K Followers · 873 Following · 46 Media · 591 Statuses
Machine learning researcher at Microsoft Research | based in Sydney | PhD @UniofOxford | South African | @DeepIndaba | maker of things
Sydney, Australia
Joined November 2009
A bit late to the party, but happy to share our #NeurIPS2024 paper: We use a causal tracing methodology to study how multi-modal LLMs like LLaVA and Phi retrieve the information required to answer factual visual questions.
arxiv.org
Understanding the mechanisms of information storage and transfer in Transformer-based models is important for driving model understanding progress. Recent work has studied these mechanisms for...
Find My Things, an object recogniser our team built that helps people who are blind find their personal objects, has been featured by @Microsoft's blog on "6 ways AI is making a difference in the world". Check it out along with some other amazing work by @MSFTResearch and others!
AI is enabling change in remarkable ways, from improving health care and education to making life easier for people with disabilities. Read about six ways Microsoft AI is bringing positive change around the world.
You can find code for our work here:
github.com
The implementation of the interpretability and model editing experiments from our NeurIPS 2024 paper: https://arxiv.org/abs/2406.04236 - microsoft/MLLMInterpret
Excellent work led by @BasuSamyadeep in collaboration with @FeiziSoheil, @besanushi, Cecily Morrison, and Martin Grayson.
Check out our #EMNLP paper on how we can improve CLIP's visio-linguistic capabilities by adding a loss term which distills this information from a diffusion model (which has stronger visio-linguistic abilities). Great work led by @BasuSamyadeep.
This #emnlp2024, we have 3 papers on VLM compositionality, prompt tuning, and language model interpretability! (i) answers how we can distill good properties of generative models (e.g., compositionality) to contrastive CLIP-like models! [Main]
Final 2 weeks to apply for an ML post-doc role in Equitable AI with our team at @MSFTResearch Cambridge UK! We're looking for candidates with a PhD in Computer Science, Machine Learning/AI or a related field, and a passion for building inclusive AI tech.
Check out this role with the Gray Systems Lab (GSL) at @Microsoft. It's an applied research position working closely with the Azure Data product team. The role is based in the US (Seattle, Silicon Valley, or Madison). Apply here:
microsoft.com
The GSL team designs, develops, and evaluates novel database system technologies, with a focus on transitioning the best ideas into Azure Data product lines.
Our team at @MSFTResearchCam is hiring a 2-year AI resident to drive progress in equitable multi-modal AI. Candidates should have a PhD in ML or a related field, with experience in multi-disciplinary research and a passion for equitable tech. Apply here:
Thank you to an amazing group of organizers who have been working in the background to bring this workshop to life for the 5th year running! @DrG_inCS, @jeffbigham, @edcutrell, @AbigaleStangl, Everley (Yu-Yun) Tseng and @Joshmyersdean1.
Alongside our stellar invited speakers, at the VizWiz Grand Challenge workshop @CVPR2024 we will also be announcing the winners of 6 #AI challenges and featuring posters and spotlights from researchers working at the intersection of #computervision and #accessibility (@a11y).
We are looking forward to welcoming you and these fantastic speakers to #CVPR2024 next week. See you in Summit Room 435 next Tuesday 18 June 2024, 8am-12pm PST!
Talk 4: Elisa Kreiss (@ElisaKreiss), Assistant Professor @UCLA, Linguistics PhD, and Lab Director of @CoalasLab, who will be speaking on "How Communicative Principles (Should) Shape Human-Centred AI for Nonvisual Accessibility".
Talk 3: Brian Fischler (@blindgator), comedian and host/producer of "That Real Blind Tech Show" podcast (@BlindTechShow). Brian will share perspectives on "Will Computer Vision and A.I. Revolutionize the Web for People Who Are Blind?".
Talk 2: Raul Puri (@TheRealRPuri) and Rowan Zellers (@rown), researchers from @OpenAI, who have been involved in developing GPT-4o and exploring its use for the blind/low vision community via OpenAI's @BeMyEyes partnership.