Rima (Yining) Cao
@YiningCao3
Followers
392
Following
52
Media
8
Statuses
23
Human-Computer Interaction. Now fifth-year Ph.D. student @UCSanDiego @DesignLabUCSD. Master @UMich @umsi. Undergrad @Tsinghua_Uni
San Diego, USA
Joined October 2019
🎥 New Talk: “Generative, Malleable, and Personal User Interfaces”. This talk describes our work toward the long-held vision of generative, malleable, and personal user interfaces in human-computer interaction. Talk Link: https://t.co/uotMwQnLIZ We are committed to making this vision as real as we can.
4
26
149
I will be presenting DataParticles today at the "Visualization Grammars and Design" session (12:06 - 12:20, X11+12) #CHI2023. Come to my talk and let's chat 😊
Creating an animated data story is tedious and fragmented. Our #CHI2023 paper presents DataParticles - a language-driven system for visually exploring, reasoning about, and communicating data insights. Shout out to my collaborators @HaijunXia @janee424 @ZhutianChen.
2
1
35
The Motion feature in #AdobeFresco empowers artists and creators to add motion to individual elements in their artwork. @AdobeResearch’s @liyiwei and @rubaiat share the “behind the scenes” of this impactful research work! https://t.co/ewlmCVlmF5
1
4
18
Stop reading the text!!! Have a graphic dialogue with OpenAI’s #GPT4 with #Graphologue. It turns GPT4-generated text into interactive node-link diagrams in real time. New Human-Computer Interaction #HCI research makes #AI easier to use. More at https://t.co/2M90KhqNar
59
235
1K
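As a rough, hypothetical illustration of the idea in the Graphologue tweet above (not the system's actual prompt format or parser), the sketch below assumes the model is asked to annotate entities and relations inline, then parses those annotations into a node-link structure; the annotation syntax and the extract_graph helper are invented for this example.

```python
import re

# Hypothetical inline annotation format an LLM could be prompted to emit:
#   [source entity]{relation}[target entity]
# This is NOT Graphologue's actual notation; it only illustrates turning
# annotated text into nodes and edges for a node-link diagram.
ANNOTATION = re.compile(r"\[([^\]]+)\]\{([^}]+)\}\[([^\]]+)\]")

def extract_graph(annotated_text: str):
    """Parse inline annotations into node and edge lists."""
    nodes, edges = set(), []
    for source, relation, target in ANNOTATION.findall(annotated_text):
        nodes.update([source, target])
        edges.append({"source": source, "label": relation, "target": target})
    return sorted(nodes), edges

# Example response text with hypothetical annotations embedded by the model.
response = (
    "In this view, [GPT-4]{generates}[streamed text] while "
    "[Graphologue]{renders}[streamed text] as an interactive diagram."
)

nodes, edges = extract_graph(response)
print("nodes:", nodes)
print("edges:", edges)
```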
What if #ChatGPT were 3D? Structure #GPT4's responses in 3D with #Sensecape to better understand large amounts of text. New Human-Computer Interaction #HCI research makes #AI easier to use. More at https://t.co/2M90KhqNar.
25
103
493
Our paper was awarded Best Paper #CHI2023! Looking back, I am so grateful for this journey with @HaijunXia @janee424 @ZhutianChen. From the fuzzy idea we began with to the final project, I’m thankful for all the support, guidance, and inspiration that helped shape it (and me 🐣)
Creating an animated data story is tedious and fragmented. Our #CHI2023 paper presents DataParticles - a language-driven system for visually exploring, reasoning about, and communicating data insights. Shout out to my collaborators @HaijunXia @janee424 @ZhutianChen.
5
9
104
How can Curry outshine LeBron? Check out our #CHI2023 paper on iBall🏀 - A next-gen interactive game watching system with AR Vis + gaze/touch/text interactions! w/ @ticahere J.Shan @QisenY J.Beyer @HaijunXia @hpfister Live Demo, Code, Paper: https://t.co/4xD4jVzOBM
#NBA #sports
1
8
46
😆 Check out our video and full paper: 📽️ https://t.co/kSmF81sGdT 📰 https://t.co/vpkmiqrUc5
0
2
10
From an evaluation with experts, we found that modularized planning with natural language allows creators to be story-focused and exploratory. Interestingly, we observed that creators tend to reason about their story narratives through the generated animations rather than the text itself!
1
0
1
To leverage the latent connections between text, data, and visualizations, our system uses constituency parsing to understand the pragmatic structure of narrations, which informs data selection, encoding, multiple data operations, and animation effects.
1
0
0
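A minimal sketch of what constituency parsing over a narration could look like, using spaCy + benepar as an assumed toolchain (not necessarily what DataParticles itself uses); noun phrases from the parse serve as candidate hooks for data selection, while verbs and modifiers hint at operations and animation effects.

```python
# A minimal sketch of constituency parsing over a story narration, using
# spaCy + benepar (pip install spacy benepar;
# python -m spacy download en_core_web_sm). This toolchain is an assumption
# for illustration, not necessarily what DataParticles itself uses.
import benepar
import spacy

benepar.download("benepar_en3")          # one-time parser model download
nlp = spacy.load("en_core_web_sm")
nlp.add_pipe("benepar", config={"model": "benepar_en3"})

narration = "The top five countries produced over half of the total emissions."
sent = list(nlp(narration).sents)[0]

print(sent._.parse_string)               # full constituency tree

# Noun phrases are natural candidates for data selection (fields / filters);
# verbs and modifiers hint at data operations and animation effects.
noun_phrases = [c.text for c in sent._.constituents if "NP" in c._.labels]
print(noun_phrases)
```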
We studied 44 popular animated data stories and found a strong correspondence between the segments of a story's narration and their desired visualizations. Moreover, we observed that the narrations are usually descriptive of the desired visual effects.
1
0
1
Creating an animated data story is tedious and fragmented. Our #CHI2023 paper presents DataParticles - a language-driven system for visually exploring, reasoning about, and communicating data insights. Shout out to my collaborators @HaijunXia @janee424 @ZhutianChen.
6
14
84
Console logs are handy for debugging and inspecting code. However, it’s hard for programmers to track and make sense of them - long interleaved streams, deep data structures, numbers without context… Log-it #CHI2023 is here to help! w/ interactive, structured, and visualized logs 🧵
4
17
76
Excited to be at #CHI2022 in person! 🌼🫶
Taking Creativity Lab to #CHI2022 in person. They're doing cool stuff and excited to meet ppl! @YiningCao3 (animation, InfoVis) @peilingjiang (creativity tools) @ZhutianChen (InfoVis, AR) @janee424 (photo/videography UI) Matthew Beaudouin-Lafon (fundamental UI) See you there!
1
0
18
VideoSticker supports a variety of interactive experiences with video content, including learning a new skill, sports analytics, and integrating concepts from multiple videos.
1
0
2
Our system implements automated object extraction, tracking, and text detection, which are built on top of state-of-the-art deep learning models.
1
0
2
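For illustration only, here is a toy version of the per-frame detection plus cross-frame tracking described above, using an off-the-shelf torchvision detector and naive IoU matching; the specific models and matching logic are assumptions, not VideoSticker's actual implementation (text detection/OCR is omitted).

```python
# Toy sketch of per-frame object detection + greedy IoU tracking.
# Requires torchvision >= 0.13 for the weights="DEFAULT" API.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

def detect(frame: torch.Tensor, score_threshold: float = 0.8):
    """Run the detector on one frame (C x H x W, float values in [0, 1])."""
    with torch.no_grad():
        output = model([frame])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep]

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-6)

def track(prev_boxes, new_boxes, threshold: float = 0.5):
    """Greedily link detections across consecutive frames by IoU (toy tracker)."""
    links = []
    for i, prev in enumerate(prev_boxes):
        scores = [iou(prev.tolist(), new.tolist()) for new in new_boxes]
        if scores and max(scores) > threshold:
            links.append((i, scores.index(max(scores))))
    return links
```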
Instead of passive viewing, viewers can actively engage with educational videos and create semi-animated notes in just a few clicks.
2
1
5
VideoSticker supports two types of stickers, ‘Frame Sticker’ and ‘Object Sticker’, to capture animated visual and text content from videos. Website: https://t.co/ZIwxIprKQp
1
0
2
🎥 -- 📔 I will present our paper *VideoSticker* tomorrow at #IUI2022 at 2:30 PM (Applications and Tools Session). We propose an AI-powered approach to taking notes from videos. cc: @HariSubramonyam @eytanadar @DesignLabUCSD @UMSI @StanfordHCI @StanfordEd @StanfordHAI
4
6
42
Super excited to share our work "VideoSticker" - a tool for taking notes from videos in a semi-automated manner. This work started during my master's @UMich with amazing @HariSubramonyam @eytanadar. #IUI2022 📎Paper: https://t.co/OTCkBxdhrv 📽️Video: https://t.co/BnMMqriP03
0
2
36