CAIR Lab at RIT

@CAIRLab_RIT

Followers: 466 · Following: 11 · Media: 53 · Statuses: 213

CAIR is the Center for Accessibility and Inclusion Research at RIT. Faculty and students conduct accessibility and assistive technology research.

Rochester, NY, USA
Joined April 2016
@GarrethTigwell
Garreth Tigwell
1 year
The #ASSETS2025 early bird mentoring deadline is Feb 20. Sign-up info is at https://t.co/GxjlVVaIdo. We have an interesting blog post from mentoring chair @YasmineElGlaly describing her experiences with early ASSETS papers and why mentoring is helpful.
medium.com
TL;DR: Accessibility is deeply intertwined with disability, and disability exists within historical, cultural, and political contexts…
@GarrethTigwell
Garreth Tigwell
1 year
Announcing ASSETS 2025 Call for Papers and Mentoring. Technical papers are due April 16, experience reports on June 11, posters and demos on June 25, and the DC is due June 25. If you are new to ASSETS, you can also receive mentorship. Details at https://t.co/dzEE3DKEDm. Pls RT
@CAIRLab_RIT
CAIR Lab at RIT
2 years
Come to our Special Interest Group discussion on Spatial Computing and help to define the vision of the future. We have 80 minutes of activities and discussions planned! Let's work on accessibility, collaboration, ethics, and trust, among other topics.
@CAIRLab_RIT
CAIR Lab at RIT
2 years
CAIR Lab at #CHI2024, 13 of 13 / Special Interest Group / Title: Spatial Computing: Defining the Vision for the Future / Monday, May 13, 16:00-17:20
@CAIRLab_RIT
CAIR Lab at RIT
2 years
4/4 Design implications, such as the integration of onscreen visuals and speech-to-text transcription, are discussed based on the analysis of collected data and insights from participants. Discover more details and the full paper here:
dl.acm.org
@CAIRLab_RIT
CAIR Lab at RIT
2 years
3/4 The findings highlight the use of multimodal communication, both verbal and non-verbal, before and during the game, which significantly influences the level of collaboration among participants.
@CAIRLab_RIT
CAIR Lab at RIT
2 years
2/4 Through the use of "Urban Legends," a multiplayer co-located CS-AR game, the study investigates the experiences and challenges of 17 DHH participants with varying levels of hearing abilities.
@CAIRLab_RIT
CAIR Lab at RIT
2 years
1/4 This paper is an initial exploration of communication, collaboration, and coordination behaviors in co-located collaborative shared AR (CS-AR) environments, aiming to address gaps in the current literature.
@CAIRLab_RIT
CAIR Lab at RIT
2 years
Tuesday, May 14, at 14:30 / Assistive Interactions: Audio Interactions and d/Deaf and Hard of Hearing Users / Authors on Twitter: @SanzidaLuna, @Cyberpumpkin29, @GarrethTigwell, @Michael_Saker, @Alan_yn_Aber, @LaatoSamuli, @jfdunham, and @yihong55681731
@CAIRLab_RIT
CAIR Lab at RIT
2 years
CAIR Lab at #CHI2024, 12 of 13 / Full Paper Title: Communication, Collaboration, and Coordination in a Co-located Shared Augmented Reality Game: Perspectives From Deaf and Hard of Hearing People
@CAIRLab_RIT
CAIR Lab at RIT
2 years
4/4 The final phase of research compared these styles against a non-styled baseline through an emotion-recognition task, identifying two preferred styles. Discover the detailed outcomes and design recommendations in the full paper here:
dl.acm.org
@CAIRLab_RIT
CAIR Lab at RIT
2 years
3/4 The second study combined highly rated styles from the initial findings to assess how effective each was in simultaneously depicting valence and arousal, as judged by participants.
@CAIRLab_RIT
CAIR Lab at RIT
2 years
2/4 Conducted with 39 DHH participants, the research spanned three studies, each exploring different elements such as text color, boldness, and size. The initial study focused on preferences for conveying emotional valence or arousal separately.
@CAIRLab_RIT
CAIR Lab at RIT
2 years
1/4 🧵 Affective captions offer a way to convey emotions in speech through text, improving its accessibility for Deaf and Hard-of-Hearing (DHH) folks. The paper compares caption styles that convey valence (if words are positive or negative) and arousal (how excited they sound).
@CAIRLab_RIT
CAIR Lab at RIT
2 years
Authors on Twitter: @calua, @SaadHassann, and @roshan118
@CAIRLab_RIT
CAIR Lab at RIT
2 years
CAIR Lab at #CHI2024, 11 of 13 / Full Paper Title: Caption Royale: Exploring the Design Space of Affective Captions from the Perspective of Deaf and Hard-of-Hearing Individuals Tuesday, May 14, at 14:00 / Supporting Accessibility of Text, Image and Video B
@CAIRLab_RIT
CAIR Lab at RIT
2 years
This suggests a significant step forward in how dancers engage with instructional content. Interested in how AI can transform dance learning? Read the full study for a deeper dive into these tools and their implications for dance education:
dl.acm.org
@CAIRLab_RIT
CAIR Lab at RIT
2 years
...traditional glossaries of dance moves. Findings indicate that the new video comprehension tool not only reduces the cognitive load but also improves the quality of notes dancers take during practice...