Vishwa Shah
@vishwayvs
214 Followers · 192 Following · 1 Media · 8 Statuses
ML Research @Apple | Prev @Meta | NLP/ML @LTIatCMU, CS @BITSPilaniGoa
Seattle, WA
Joined January 2019
Excited to attend #NeurIPS2025 with Apple! Find out more about our accepted papers, talks, booth and more here
machinelearning.apple.com
Apple is presenting new research at the annual conference on Neural Information Processing Systems (NeurIPS), which takes place in person in…
Cannot attend #ICLR2025 in person (will be at NAACL and Stanford soon!), but do check out 👇 ▪️Apr 27: "Exploring the Pre-conditions for Memory-Learning Agents" led by @viishruth and Vishwa Shah, at the SSI-FM workshop ▪️Apr 28: our @DL4Code workshop with a fantastic lineup of works &
Just 6 days until #DL4C! 🗓️ Daniel Fried (CMU / Meta AI) @dan_fried @AIatMeta will be sharing insights on how inducing functions from code makes LLM agents smarter and more efficient. Don't miss it! See you Sunday! #ICLR2025 #iclr
🧵We’ve spent the last few months at @datologyai building a state-of-the-art data curation pipeline and I’m SO excited to share our first results: we curated image-text pretraining data and massively improved CLIP model quality, training speed, and inference efficiency 🔥🔥🔥
Bittersweet as our 1st batch of Datologists graduates today🎓4/5 interns heading back to school. Working with these rockstars @datologyai has been an incredibly rewarding journey—right from hiring them to seeing them grow in agency, capability & independence in such a short time!
New paper on LLMs+culture! 🎊🎉 Thrilled to share our work on NormAd, a dataset evaluating whether LLMs can adapt to the diversity of cultural norms worldwide! (Spoiler: they can't!) ArXiv: https://t.co/vZUSsHC34u w/ @akhila_yerukola @vishwayvs @_doctor_kat @MaartenSap [1/n]
arxiv.org
To be effectively and safely deployed to global user populations, large language models (LLMs) may need to adapt outputs to user values and cultures, not just know about them. We introduce NormAd,...
🚨New paper🚨 Contrastively trained Vision-Language Models (like CLIP) are poor at compositional reasoning. Our new paper improves this: "Coarse-to-Fine Contrastive Learning in Image-Text-Graph Space for Improved Vision-Language Compositionality" https://t.co/LccQI4ojST
Had a great time meeting enthusiasts from AI and open-source! Got to know the amazing story of @huggingface from @ClementDelangue himself! Thanks for hosting this event.
Kudos to my collaborators, especially the students and faculty at BITS Goa: three papers at the upcoming IJCLR conference, including this one. We are looking for pre-doc/post-doc applicants in neurosymbolic reasoning. @TCSResearch @SforAiDL
Knowledge-based Analogical Reasoning in Neuro-symbolic Latent Spaces https://t.co/uzufNcs8Ma by Vishwa Shah et al. including @gmshroff
#NeuralNetwork #DistributedRepresentations