sayan mitra
@Mitrasayn
Followers: 793
Following: 2K
Media: 97
Statuses: 613
Prof @ECEILLINOIS, outdoors enthusiast, parent of twins. Author of verification book https://t.co/0c2ZdF00mi. Alumni: @MITEECS, @Caltech, @iiscbangalore, Jadavpur.
Urbana, IL
Joined November 2009
The work is led by PhD students Chenxi Ji, Yangge Li, and Ziangru Zhong, in collaboration with my amazing colleague Huan Zhang at @ECEILLINOIS.
Abstract Rendering lets us formally verify statements such as "no stop sign is ever detected by this classifier as the camera moves along a 5 m path," or identify the range of viewing angles over which a neural pose estimator stays within a required error tolerance.
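Below is a minimal sketch of the kind of check this enables, assuming an abstract renderer has already produced per-pixel lower/upper bounds over the 5 m camera path. The bounds, the tiny random two-class classifier, and the decision rule are hypothetical stand-ins for illustration, not the paper's renderer or verifier.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "abstract image": per-pixel lower/upper bounds that
# over-approximate every image the camera can see along the 5 m path.
H = W = 8
img_lo = rng.uniform(0.0, 0.4, size=H * W)
img_hi = img_lo + rng.uniform(0.0, 0.2, size=H * W)

# Toy linear-ReLU-linear classifier; logit 0 = "stop sign", logit 1 = "background".
W1, b1 = rng.normal(size=(16, H * W)), rng.normal(size=16)
W2, b2 = rng.normal(size=(2, 16)), rng.normal(size=2)

def interval_affine(lo, hi, A, b):
    """Propagate the box [lo, hi] through x -> A @ x + b using interval arithmetic."""
    Ap, An = np.maximum(A, 0.0), np.minimum(A, 0.0)
    return Ap @ lo + An @ hi + b, Ap @ hi + An @ lo + b

lo, hi = interval_affine(img_lo, img_hi, W1, b1)
lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)   # ReLU is monotone
lo, hi = interval_affine(lo, hi, W2, b2)

# Sound check: if the stop-sign logit's upper bound stays below the
# background logit's lower bound, no image along the path can be
# classified as a stop sign.
certified = hi[0] < lo[1]
print("stop-sign detection excluded along the whole path:", certified)
```

Tighter bound-propagation methods (e.g., CROWN-style linear relaxations) can replace the plain interval arithmetic without changing the structure of the check.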
I find the concept of abstract images interesting in its own right, and there are many possible improvements and applications.
Our #AbstractRendering work is appearing as a #NeurIPS2025 Spotlight. Fundamental question: how can we compute all the images a scene can produce as the camera moves? We show how these Abstract Images can be computed and used to certify visual models.
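One natural way to formalize the question (my notation, not necessarily the paper's): given a renderer $R$ and a set $C$ of camera poses, the object of interest is the image set $\mathcal{I}(C)$, and an abstract image is any per-pixel interval enclosure of it:

```latex
\mathcal{I}(C) = \{\, R(c) \mid c \in C \,\}, \qquad
\underline{I}(p) \;\le\; I(p) \;\le\; \overline{I}(p)
\quad \text{for every } I \in \mathcal{I}(C) \text{ and every pixel } p.
```

Any property certified on the enclosure $[\underline{I}, \overline{I}]$ then holds for every image the camera can actually produce over $C$.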
Paper deadline in mid-November. St. Malo, France is going to be amazing!
Big changes announced for CPS-IoT Week 2026, with ICCPS and HSCC coming together and a new AI-Autonomy track.
Chicago River ready for swimming after a century
nytimes.com
Decades of work to clean the Chicago River culminated with the first swim in almost 100 years. It was the latest sign of how the city’s relationship with its river has changed.
This is a first step toward a theory of indistinguishability for real-world autonomy. It helps us ask: what's unknowable for robots? https://t.co/b39Wiujj6V
We show that indistinguishability can be:
– automatically checked under mild conditions,
– viewed as a bisimulation relation,
– approximated with an iterative algorithm, and
– analyzed via an observability-inspired method that converges in finitely many steps.
#Robotics #Observability
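A minimal sketch of the standard bisimulation-style fixpoint view this refers to, in generic notation with dynamics $f$ and observation map $h$ (the paper's exact formulation may differ): two states are related exactly when they produce the same observation now and every input leads to states that are again related,

```latex
(x, x') \in \mathcal{R} \;\iff\; h(x) = h(x')
\;\text{ and }\; \forall u:\ (f(x,u),\, f(x',u)) \in \mathcal{R}.
```

Indistinguishability is the greatest such relation, which is why it can be approximated by iterating this condition starting from "same observation now."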
In our HSCC'25 paper with Daniel Liberzon, we study localization & control with such coarse measurements. We define indistinguishable states as agent-landmark pairs that yield the same observations under all control inputs. These pairs define fundamental limits on state estimation & SLAM.
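A toy illustration of the definition, assuming a 1-D world, a two-element input set, and a coarse binary "landmark visible" observation (all hypothetical, just to make "same observations under all control inputs" concrete). The bounded-horizon brute-force check below is only an approximation of the definition, which quantifies over all input sequences.

```python
from itertools import product

# State = (agent position, landmark position) on an unbounded 1-D line.
# Inputs move the agent; the observation is coarse: 1 if the landmark
# is within sensing range, else 0.
INPUTS = (-1, +1)
SENSING_RANGE = 1

def step(state, u):
    agent, landmark = state
    return (agent + u, landmark)

def observe(state):
    agent, landmark = state
    return int(abs(agent - landmark) <= SENSING_RANGE)

def indistinguishable(s1, s2, horizon=6):
    """Bounded-horizon check: do all input sequences of length <= horizon
    produce identical observation traces from s1 and s2?"""
    if observe(s1) != observe(s2):
        return False
    for seq in product(INPUTS, repeat=horizon):
        a, b = s1, s2
        for u in seq:
            a, b = step(a, u), step(b, u)
            if observe(a) != observe(b):
                return False
    return True

# Pairs with the same agent-landmark offset produce identical observations
# under every input sequence, so no controller can ever tell them apart:
print(indistinguishable((1, 2), (5, 6)))    # True  (offset +1 in both)
# Mirrored offsets look the same initially but diverge after one move:
print(indistinguishable((0, 2), (0, -2)))   # False
```

The first pair illustrates exactly the kind of fundamental limit described above: with this observation model, no choice of control inputs can recover the absolute positions.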
Classical control theory uses linear models and smooth outputs. But real autonomous systems observe the world through discrete, finite, and often noisy measurements—think landmarks with unknown locations, or limited-resolution sensors.
Einstein’s “most beautiful thought” was about indistinguishability: a falling person can’t perceive their own weight. In CS, this idea underpins impossibility results in distributed systems. What does indistinguishability mean for autonomous robots with real-world sensors?
Highlights of our Safe Autonomy projects from the Class of Spring '25: https://t.co/Xx3ScqINeK The machines are becoming more capable, and their makers too!
Congratulations, Dr. Kristina Miller! May the Space Force be with you!
Neural scenes (NeRFs, Gaussian splats, etc.) are straight-up rewiring autonomy research. Try our FalconGym for creating Real2Sim2Real control policies.
It was great to host the NSF workshop bringing together many folks working to make AI safe and trustworthy. #SupportNSF
@pennasset's @nsf Workshop on the Science of Safe AI brought together researchers to explore the future of AI safety. Watch the highlights in this video. @rajeevalur @cis_penn @pennengineers #SafeAI