Arnav Verma
@ArnavVerma0_0
117 Followers · 491 Following · 2 Media · 39 Statuses
Studying how people understand and communicate with visualizations and graphics. CogSci + Viz. PhD @MIT_CSAIL @mitvis. Prev. @StanfordPsych; BS/MS @UofTCompSci.
Boston, US
Joined July 2021
Personally I think my scrap paper chart looks better
Thrilled to welcome members of @cogsci_soc to the SF/Bay area for #CogSci2025 this week! Here's a preview of what the Cognitive Tools Lab 🧠🛠️ @Stanford @StanfordPsych will be presenting!
On Wed 07/30, we will be hosting the in-person component of our Minds in the Making ( https://t.co/0oMmYQ2Twy) workshop ft. Barbara Tversky, Grace Hawthorne, & many other speakers from our virtual seminars! Also a poster session to highlight work relevant to design+cognition!
How do people reason while still staying coherent – as if they have an internal ‘world model’ for situations they’ve never encountered? A new paper on open-world cognition (preview at the world models workshop at #ICML2025!)
Thrilled to join the UMich faculty in 2026! I'll also be recruiting PhD students this upcoming cycle. If you're interested in AI and formal reasoning, consider applying!
We’re happy to announce that @GabrielPoesia will be joining our faculty as an assistant professor in Fall 2026. Welcome to CSE! ▶️Learn more about Gabriel here: https://t.co/WD0dcIDWVR
#UMichCSE #GoBlue
📣📣📣 Neural Inverse Rendering from Propagating Light 💡 just won Best Student Paper award at #CVPR!!!
📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light
I've recently started my job as an asst professor at NTU, Singapore. If you are ever in town come say hi :)
Come check out our amazing online and in-person workshop, Minds in the Making, this summer ( https://t.co/oPxovdUrQy)! We're bringing together an interdisciplinary community of researchers in CogSci 🧠, HCI 🖌️, & Graphics 🫖!
Delighted to announce our CogSci '25 workshop at the interface between cognitive science and design 🧠🖌️! We're calling it: Minds in the Making🏺 https://t.co/dP3eMNTxuc Register now! June – July 2024, free & open to the public. (all career stages, all disciplines)
We’re working on developing those now! Stay tuned for updates from us at Project Nightingale ( https://t.co/zXAcNm2JbK)—a new collaborative effort to advance the science of how people reason about data!
Hi friends, excited to share a new paper on data visualization literacy 🧠📈 w/ @judyefan, @ArnavVerma0_0, @hollyahuey, Hannah Lloyd, and Lace Padilla! 📝 preprint: https://t.co/aHH9fhzK82 💻 code: https://t.co/LJtv753aWt For those on bluesky, join the conversation there!
I am on the job market, seeking tenure-track or industry research positions starting in 2025. My research combines human-computer interaction and robotics—please visit https://t.co/POmSPUd2H9 for updated publications and CV. Feel free to reach out if interested. RT appreciated!
🌟 I am looking to hire research interns at #Adobe for Summer 2025. If you are a student in HCI/AI and passionate about inventing the future of human-AI co-creation for video or audio (e.g., speech/music), please email me or chat with me at #UIST2024 next week! RT appreciated!
Update: I’ve joined @AdobeResearch as a Research Scientist! I’ll continue working in human-AI interaction, focusing on building next-gen AI tools that empower creativity and storytelling. Excited about the new chapter—thanks to everyone who supported my PhD journey in Toronto🤗
DM 1.5 is the best model we have released! Way better text-to-video, better text rendering, better prompt following 🥰 So excited for the future!! ❤️❤️❤️ Try out v1.5 at lumalabs.ai
Dream Machine 1.5 is here 🎉 Now with higher-quality text-to-video, smarter understanding of your prompts, custom text rendering, and improved image-to-video! Level up. https://t.co/G3HUEBE2ng
#LumaDreamMachine
I left my job at NVIDIA last year... and now I co-founded @outerbasis and started a position as ceo sararīman!! (also we're part of the current Y Combinator S24 batch 😎) With new tech come new formats, and with new formats come new distribution tech. We've seen this with
📢📢 Wanted to share something I've been working on: an open-source library that makes it dead simple to run 100s of experiments from a simple Python script / notebook (soon on remote GPUs). You can try it with `pip install haipera`! https://t.co/gDQ2Vyd8tz
Come check out COGGRAPH this summer ( https://t.co/NhelvLwYPi)!! It's an awesome workshop that's bringing together an interdisciplinary community of Graphics 🫖, CogSci 🧠, Visualization 📈, and HCI 🖥️ researchers 🤝!! Super excited for all the vibrant discussions!
Hi friends — I'm delighted to announce a new summer workshop on the emerging interface between cognitive science 🧠 and computer graphics 🫖! We're calling it: COGGRAPH! https://t.co/XGbMQWW7By June – July 2024, free & open to the public (all career stages, all disciplines) 🧶
📢📢📢 A pulse of light takes ~3ns to pass through a Coke bottle—100 million times less than it takes you to blink. Our work lets you fly around this 3D scene at the speed of light, revealing propagating wavefronts of light that are invisible to the naked eye—from any viewpoint!
(1/9) Robots today are not expressive, yet they have to perform tasks in environments where communicating with humans is critical. We appropriate LLMs, which have shown the capability to create task plans and write policy code, to make robots more expressive.