Jennifer J. Sun

@JenJSun

Followers 1K · Following 512 · Media 28 · Statuses 241

AI for Scientists, assistant professor @CornellCIS, part-time @GoogleDeepMind

New York
Joined August 2020
@JenJSun
Jennifer J. Sun
2 years
Thanks to my co-advisors Pietro Perona & @yisongyue & my committee (@Antihebbiann, @swarat, @klbouman)!! I'm grateful for the opportunity and everyone's support through the challenging & rewarding journey. I'm looking forward to next steps in collaborative AI for Scientists!
@yisongyue
Yisong Yue
2 years
Congratulations Dr. @JenJSun 🎉🎉
14
5
117
@yoavartzi
Yoav Artzi
17 days
.@Cornell is recruiting for multiple postdoctoral positions in AI as part of two programs: Empire AI Fellows and Foundational AI Fellows. Positions are available in NYC and Ithaca. Deadline for full consideration is Nov 20, 2025! https://t.co/Cp5710BauU
2
38
116
@yanahasson
Yana Hasson
4 months
Thrilled to share our latest work on SciVid, to appear at #ICCV2025! 🎉 SciVid offers cross-domain evaluation of video models in scientific applications, including medical CV, animal behavior, & weather forecasting 🧪🌍📽️🪰🐭🫀🌦️ #AI4Science #FoundationModel #CV4Science [1/5]🧵
1
9
30
@yoavartzi
Yoav Artzi
5 months
Check out our LMLM: our take on what the thing now being called a "cognitive core" (as far as branding goes, this one is not bad) can look like, how it behaves, and how you train for it. https://t.co/gxrDVSkcZE
arxiv.org
Neural language models are black-boxes--both linguistic patterns and factual knowledge are distributed across billions of opaque parameters. This entangled encoding makes it difficult to reliably...
@karpathy
Andrej Karpathy
5 months
The race for LLM "cognitive core" - a few billion param model that maximally sacrifices encyclopedic knowledge for capability. It lives always-on and by default on every computer as the kernel of LLM personal computing. Its features are slowly crystallizing: - Natively multimodal
2
7
34
@atharva_sehgal
Atharva Sehgal
5 months
I’m presenting Escher ( https://t.co/oNYHQeUaBH) at #CVPR2025 Saturday morning (Poster Session #3; #236). Escher builds a visual concept library with a vision‑language critic (no human labels needed). Swing by if you’d like to chat about program synthesis & multimodal reasoning!
2
5
18
@JenJSun
Jennifer J. Sun
5 months
VideoPrism is now available at: https://t.co/9eRQuaaUTu :)
github.com
Official repository for "VideoPrism: A Foundational Visual Encoder for Video Understanding" (ICML 2024) - google-deepmind/videoprism
@GoogleAI
Google AI
2 years
Introducing VideoPrism, a single model for general-purpose video understanding that can handle a wide range of tasks, including classification, localization, retrieval, captioning and question answering. Learn how it works at https://t.co/vAVqXo8g4j
1
4
19
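As a hedged illustration of what a foundational video encoder like VideoPrism provides, the sketch below shows the frames-in, embedding-out interface with a placeholder class. All names and sizes here are stand-ins, not the repo's real API; the actual loading code is documented at github.com/google-deepmind/videoprism.

```python
import numpy as np

# Hypothetical stand-in for a pretrained video encoder; illustrative only.
# The real VideoPrism loading API lives in the linked repository.
class VideoEncoder:
    def __init__(self, embed_dim=768):  # "Base"-sized embedding is an assumption
        self.embed_dim = embed_dim

    def encode(self, clip: np.ndarray) -> np.ndarray:
        """clip: (num_frames, height, width, 3) float32 in [0, 1].
        Returns a single clip-level embedding for downstream tasks."""
        assert clip.ndim == 4 and clip.shape[-1] == 3
        # Stand-in for the frozen transformer forward pass.
        return np.zeros(self.embed_dim, dtype=np.float32)

encoder = VideoEncoder()
clip = np.random.rand(16, 224, 224, 3).astype(np.float32)
emb = encoder.encode(clip)  # -> (768,) feature usable for classification, retrieval, etc.
```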
@_tingliu
Ting Liu
5 months
After over 15 months, we are excited to finally release VideoPrism! The model comes in two sizes, Base and Large, and the video encoders are available today at https://t.co/imLrPYAnEk. We are also working on adding more support to the repository; please stay tuned.
0
1
8
@linxizhao4
Linxi Zhao
6 months
🚀Excited to share our latest work: LLMs entangle language and knowledge, making it hard to verify or update facts. We introduce LMLM 🐑🧠 — a new class of models that externalize factual knowledge into a database and learn during pretraining when and how to retrieve facts
1
15
44
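As a hedged sketch of the mechanism described in the tweet above (the token format, database interface, and names below are assumptions for illustration, not LMLM's actual implementation), the idea is that the model learns to emit a lookup instead of memorizing the fact, and an external database answers:

```python
# Minimal sketch of externalized factual knowledge (hypothetical interface,
# not the actual LMLM code from the paper).

# External fact database: (subject, relation) -> object
FACTS = {
    ("Eiffel Tower", "located_in"): "Paris",
    ("Python", "designed_by"): "Guido van Rossum",
}

def generate_with_lookup(model_step, prompt):
    """Decode tokens; when the model emits a lookup marker, answer it
    from the database instead of from the model's parameters."""
    out = []
    for token in model_step(prompt):
        if token.startswith("<lookup:"):
            # token like "<lookup:Eiffel Tower|located_in>"
            subj, rel = token[len("<lookup:"):-1].split("|")
            out.append(FACTS.get((subj, rel), "<unknown>"))
        else:
            out.append(token)
    return " ".join(out)

# Toy "model" that has learned *when* to retrieve, not the fact itself.
def toy_model(prompt):
    for t in ["The", "Eiffel", "Tower", "is", "in",
              "<lookup:Eiffel Tower|located_in>"]:
        yield t

print(generate_with_lookup(toy_model, "Where is the Eiffel Tower?"))
# -> "The Eiffel Tower is in Paris"
```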
@rogerioagjr
Rogério Guimarães
2 years
We're excited to share our latest work! We achieve SOTA results in segmentation, detection, and depth estimation, in both single- and cross-domain settings, by exploiting image-aligned text prompts in a pretrained diffusion backbone repurposed for vision tasks. See https://t.co/fGI2UfvJwS 🧵👇
4
30
179
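A hedged sketch of the general technique the tweet above describes: pulling intermediate features from a pretrained Stable Diffusion UNet under a text prompt, the kind of signal that can be decoded into segmentation or depth heads. This is not the authors' code; the checkpoint name, timestep, and hook location are assumptions, and a GPU is assumed available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Any SD v1.x checkpoint works here; this identifier is an assumption.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.to("cuda")

feats = {}
def hook(module, inputs, output):
    feats["mid"] = output  # capture the UNet mid-block activation

pipe.unet.mid_block.register_forward_hook(hook)

# Text prompt aligned with the image content.
prompt = "a photo of a street scene"
tokens = pipe.tokenizer(prompt, padding="max_length",
                        max_length=pipe.tokenizer.model_max_length,
                        return_tensors="pt").input_ids.to("cuda")
text_emb = pipe.text_encoder(tokens)[0]

# Encode an image to latents, add a little noise, and run one UNet pass.
image = torch.rand(1, 3, 512, 512, device="cuda") * 2 - 1  # dummy input in [-1, 1]
latents = pipe.vae.encode(image).latent_dist.sample() * 0.18215
t = torch.tensor([50], device="cuda")                      # low-noise timestep, assumed
noisy = pipe.scheduler.add_noise(latents, torch.randn_like(latents), t)
with torch.no_grad():
    pipe.unet(noisy, t, encoder_hidden_states=text_emb)

print(feats["mid"].shape)  # spatial features a task head could decode
```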
@Antihebbiann
Ann Kennedy
2 years
Won't you be my neighbor? Northwestern Neuroscience in downtown Chicago is running a broad faculty search: https://t.co/KYH63sE27e Come join a large and growing neuroscience community!
0
45
84
@JenJSun
Jennifer J. Sun
2 years
Huge thanks to additional co-authors: Andrew Ulmer who helped develop our benchmark, @__dipam__ for developing the eval framework, and MABe22 Challenge winners Ed Hayes, Heng Jia, Sebastian Oleszko, Zach Partridge, Milan Peelman, Chao Sun, Param Uttarwar, and Eric Werner!😊
0
0
3
@JenJSun
Jennifer J. Sun
2 years
Thanks to @Antihebbiann and @KristinMBranson for co-organizing the MABe 2022 Workshop & Challenge and their work on the MABe 2022 benchmark! And to @yisongyue, Pietro Perona, and @_tingliu for advising our workshop hosted this year at CVPR. https://t.co/d18NS30KRb
sites.google.com
Announcements: Our workshop is in Room 212 in the New Orleans Convention center on June 20th! See you there! Travel grants to CVPR available for students & postdocs! See call for papers for more...
1
0
6
@JenJSun
Jennifer J. Sun
2 years
🪰 The fly dataset, from Janelia, consists of trajectories of 8 to 11 flies interacting in a group, with 50 tasks such as optogenetic activation detection, behavior classification, and fly type identification. Thanks to @KristinMBranson, Alice Robie, and @ceschretter for this dataset!
1
0
5
@JenJSun
Jennifer J. Sun
2 years
The beetle dataset consists of videos of a rove beetle interacting with another insect (e.g., an ant 🐜), with 14 tasks such as interactor type identification, interaction duration, and behavior classification. Thanks to @JMyrmecoWagner and @Pselaphinae for this dataset!
1
0
5
@JenJSun
Jennifer J. Sun
2 years
🐭 The mouse dataset, from JAX Labs, consists of videos and trajectories of triplets of mice, with 8 tasks such as behavior classification, strain identification, and time-of-day prediction. Thanks to @Vivekdna, Brian Geuther, Keith Sheppard, and Tom Sproule for this dataset!
1
1
7
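Trajectory data of the kind described in these dataset tweets is typically a per-frame array of animal keypoints. The layout below is only a hedged illustration with assumed sizes, not MABe22's exact file format; see the paper for the real specification.

```python
import numpy as np

# Illustrative layout only; sizes and field names are assumptions.
num_frames, num_mice, num_keypoints = 1800, 3, 12
trajectories = np.zeros((num_frames, num_mice, num_keypoints, 2))  # (x, y) per keypoint

# Downstream tasks attach frame-level or sequence-level labels, e.g.:
behavior_labels = np.zeros(num_frames, dtype=np.int64)  # per-frame behavior class
strain_label = 0                                        # one label per sequence
```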
@JenJSun
Jennifer J. Sun
2 years
Co-first-author Markus Marks and I will be at ICML poster session 2 today (7/25) - we look forward to meeting you there! We would also love to hear from the community about MABe22 or future years of MABe! https://t.co/5o55PnEdGJ More on dataset composition below:
1
1
3
@JenJSun
Jennifer J. Sun
2 years
We are presenting our MABe22 dataset at ICML! Our dataset studies representation learning of video and trajectory data - the representations are evaluated on a large set of downstream tasks. MABe22 organisms include mice, flies, and beetles! Paper: https://t.co/QV1Kyny4yZ
3
13
54
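The evaluation style described in the tweet above (frozen representations scored on many downstream tasks) is commonly implemented as a linear probe per task. The sketch below shows that pattern with synthetic data; the embedding dimension, task setup, and metric are assumptions, not MABe22's exact protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

# Synthetic stand-ins for frozen embeddings produced by the model under test.
rng = np.random.default_rng(0)
train_emb = rng.normal(size=(5000, 128))   # assumed embedding dim
test_emb = rng.normal(size=(1000, 128))
train_y = rng.integers(0, 4, size=5000)    # labels for one downstream task
test_y = rng.integers(0, 4, size=1000)

# Linear readout on top of frozen features; repeat once per downstream task.
probe = LogisticRegression(max_iter=1000).fit(train_emb, train_y)
print(f1_score(test_y, probe.predict(test_emb), average="macro"))
```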
@_AmilDravid
Amil Dravid
2 years
Presenting BKinD-3D tomorrow (Wed., June 21) with @JenJSun and Lili Karashchuk in the morning session at poster #74. Feel free to stop by!
@JenJSun
Jennifer J. Sun
3 years
All animals behave in 3D - we discover 3D poses directly from multi-view videos without requiring annotations. Essentially videos -> 3D keypoints + connections We will be @CVPR on June 21! BKinD-3D Paper: https://t.co/smkf3IgFQ6 Co-first-authors Lili Karashchuk & @_AmilDravid
2
2
22
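For the "videos -> 3D keypoints" pipeline in the tweet above, the classic geometric step of lifting matched 2D keypoints from calibrated views into 3D is DLT triangulation, sketched below. This only illustrates the multi-view lifting; BKinD-3D's actual contribution is discovering the keypoints without annotations, which is not shown here.

```python
import numpy as np

def triangulate(points_2d, projections):
    """DLT triangulation. points_2d: one (x, y) per view;
    projections: one 3x4 camera matrix P per view."""
    A = []
    for (x, y), P in zip(points_2d, projections):
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]  # homogeneous -> 3D point

# Two toy cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
x1 = P1 @ np.array([0, 0, 5, 1.0])
x2 = P2 @ np.array([0, 0, 5, 1.0])
print(triangulate([(x1[0] / x1[2], x1[1] / x1[2]),
                   (x2[0] / x2[2], x2[1] / x2[2])], [P1, P2]))
# -> approximately [0. 0. 5.]
```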
@JenJSun
Jennifer J. Sun
2 years
The Multi-Agent Behavior workshop will happen on Monday morning @CVPR! There will be talks from our amazing speakers, followed by a panel discussion, ending with our poster session. The program is available here: https://t.co/SDvHPenPaO Hope to see you there! #CVPR2023
sites.google.com
SPEAKERS
2
12
100
@JenJSun
Jennifer J. Sun
3 years
This project was a collaborative effort across institutions. Huge thanks to Lili Karashchuk, @_AmilDravid, Serim Ryou, Sonia Fereidooni, @casa_tuthill, Aggelos Katsaggelos, @bingbrunton, @georgiagkioxari, @Antihebbiann, @yisongyue, Pietro Perona! 😊
1
0
8