Jon Froehlich

@jonfroehlich

Followers: 6K · Following: 22K · Media: 338 · Statuses: 4K

Professor of HCI at @uwcse | Director, Makeability Lab https://t.co/827DbR7534 | Founder, https://t.co/q6kYXS1D9t | #HCI+AI #AR #UrbanScience #Access

Seattle, WA
Joined March 2007
@jonfroehlich
Jon Froehlich
1 year
I used to ❤️ Twitter as a place to genuinely connect & learn from others. It's not that place anymore. I uninstalled Twitter in April '22 & have decreasingly used it. Other platforms haven't worked as a replacement; hoping Bluesky will be diff. Join me here:
@ymatias
Yossi Matias
2 months
StreetReaderAI leverages context-aware AI to solve a critical accessibility barrier, making immersive street-level imagery interpretable by screen readers. This work is the path to fully inclusive digital exploration.
@GoogleResearch
Google Research
2 months
Introducing StreetReaderAI: A new and more accessible street-level imagery prototype using context-aware, real-time AI and accessible navigation controls. We're redefining immersive streetscape experiences to be inclusive for all with multimodal AI. More: https://t.co/CRlZH3Svhh
@jonfroehlich
Jon Froehlich
2 months
Delighted that our work is being featured. This was an incredible team effort by a set of hugely talented, passionate Googlers: @shaunkane, @AlexFiannaca, Victor Tsaran, Nimer Jaber, & Phil Nelson. An accessible street view has enormous potential for blind travel planning and O&M
@GoogleResearch
Google Research
2 months
Introducing StreetReaderAI: A new and more accessible street-level imagery prototype using context-aware, real-time AI and accessible navigation controls. We're redefining immersive streetscape experiences to be inclusive for all with multimodal AI. More: https://t.co/CRlZH3Svhh
@jonfroehlich
Jon Froehlich
4 months
Interested in learning more? See:
📄 UIST'25 preprint: https://t.co/wgztdIBEgr
📽️ UIST'25 video: https://t.co/N3B9xabqrV
🚀 Project page:
@jonfroehlich
Jon Froehlich
4 months
On a personal level, this project was like a sabbatical dream 🦄, I got to reunite with @shaunkane (we first worked together in 2006), collaborate with some incredible Googlers like Alex Fiannaca, Nimer Jaber, & Victor Tsaran, and even write (lots of) code in Google's monorepo 🧑🏽‍💻
@jonfroehlich
Jon Froehlich
4 months
Research challenges ahead:
🧠 Mental models of pedestrian navigation vs. street-level imagery
⚠️ Bias towards trusting AI output (even when wrong)
🎧 Interaction design difficulties in creating concise audio feedback
🌐 Improving spatial reasoning & multimodal AI
@jonfroehlich
Jon Froehlich
4 months
User feedback from our lab study with 11 blind users:
🚀 "This is a huge leap forward in navigation"
✨ "Incredible!"
😊 "Going to make a lot of blind people very happy"
A key finding: users overwhelmingly preferred conversing with an AI Chat Agent vs. other AI modes.
@jonfroehlich
Jon Froehlich
4 months
I have had the incredible privilege of spending my sabbatical at Google Research. What have I been up to? Attempting to make Street View accessible to all! 🌍✨ StreetViewAI is a new, accessible street view prototype using context-aware AI & voice interaction. https://t.co/VwpK2YKJe7
@projsidewalk
Project Sidewalk
4 months
We're launching RampNet, an open-source AI that helps detect curb ramps with near-human accuracy. The most amazing part? The entire project was conceived of and led by high school student John O'Meara. 🧵 A thread on what we built and why it matters.
@projsidewalk
Project Sidewalk
5 months
📢 We've completely redesigned our API pages! It's never been easier to:
✔️ Download data in CSV, GeoJSON, or Shapefile formats.
✔️ Access data programmatically through our revised API.
✔️ Build tools that champion accessibility & improve urban mobility.
https://t.co/b7Ge1JSiYE
@makeabilitylab
Makeability Lab
1 year
Today, we kick off #ASSETS2024 in beautiful St. John's, Newfoundland. Would love to meet up with you if you're also here!
@uwcse
Allen School
1 year
Can your robot vacuum do this? Researchers in @UW @uwengineering #UWAllen’s @makeabilitylab adapted one to create MobiPrint, a 3D printer on wheels that maps a room and prints objects on location, on demand based on a user's needs. #UWinnovates #NSFfunded
washington.edu
University of Washington researchers created MobiPrint, a mobile 3D printer that can automatically measure a room and print objects onto the floor. The team’s graphic interface lets users design...
@arnavic
Arnavi Chheda-Kothary
1 year
Thrilled to be heading to #ASSETS24 to present “Engaging with Children’s Artwork in Mixed Visual-Ability Settings”, done with my wonderful collaborators and advisors @wobbrockjo and @jonfroehlich! I will be presenting on Mon, Oct 28 as a part of Session 1A: Creativity 🖼️🎨 (1/7)
@jonfroehlich
Jon Froehlich
1 year
Chu Li is a fantastic, multi-talented PhD student seeking an internship for Summer '25. She will level up any group that she joins. See:
@jonfroehlich
Jon Froehlich
1 year
One step towards this future is Chu Li's (@Chimichurrichu) new VIS'24 tool AltGeoViz, which attempts to provide interactive high-level spatial analytic summaries verbally via screen reader I/O controls. https://t.co/GWwOXcYtqg
makeabilitylab.cs.washington.edu
We present AltGeoViz, a new system we designed to facilitate geovisualization exploration for these users. AltGeoViz dynamically generates alt-text descriptions based on the user's current map view,...
@jonfroehlich
Jon Froehlich
1 year
Drawing on recent prior work in accessible visualization (e.g., from @arvindsatya1, @FrankElavsky, @domoritz, @athersharif, @ohnobackspace, & others), we are working on making geovisual analytics accessible to blind and low vision users.
@jonfroehlich
Jon Froehlich
1 year
These interactive spatial analytic tools are critical to informing urban planners, advocacy groups, and policy makers; however, because of their intrinsically visual nature, they are not accessible to screen reader users.
@jonfroehlich
Jon Froehlich
1 year
For the last decade+, my group has developed new interactive tools to advance understanding of the accessibility of the physical world for people with disabilities—projects like AccessScore, AccessVis, and Sidewalk Equity.
@makeabilitylab
Makeability Lab
1 year
On this last day of #UIST2024, PhD student @XiaSu09 will be presenting our joint work w/Chang Xiao and Eunyee Koh at Adobe Research on SonifyAR, which uses LLMs to generate contextual sound effects in AR environments. Talk is 2-3:15PM in Allegheny 2. https://t.co/NuJXkWehlS
makeabilitylab.cs.washington.edu
SonifyAR is a custom AR sound authoring pipeline that generates context-matching sounds for AR events in situ using generative AI.
@jonfroehlich
Jon Froehlich
1 year
And @jaewook_jae will demo EARLL, an embodied and context-aware language learning application for AR glasses. Imagine touching an object & seeing its word in multiple languages.
makeabilitylab.cs.washington.edu
EARLL (Embodied AR Language Learning) is an embodied language learning application for AR glasses that continuously segments and localizes objects in a user’s vicinity, checks for grabbing gestures,...