Shruthi Chari
@shruthichari
Followers: 421 | Following: 10K | Media: 33 | Statuses: 820
ML Scientist - Bristol Myers Squibb | PhD - RPI | Applied Explainable AI. Pronouns: She/Her (Views are my own)
Woburn, MA
Joined June 2012
Happy to share that version 2.0 of our Explanation Ontology has been accepted as a journal paper in the Special Issue on XAI at the Semantic Web Journal (SWJ). Link to paper: https://t.co/okrriZWbZz 1/6
A really nice article in the WSJ today about the various use cases of AI in hospitals across the US: https://t.co/6oCsJZDzH3. It helps both to understand trends in the application of AI in healthcare and to identify where we can improve. Thanks for sharing @dangruen.
wsj.com: Doctors aren't relying solely on artificial intelligence, but some are using it to help reach diagnoses or spot risks.
Paper finally out in a journal! https://t.co/ZbRnOAuiXm
@shruthichari, @oshaniws, @mo_ghalwash, @dangruen, @pchakrabt1, @dlmcguinness
Glad to finally see a paper on "explainability" https://t.co/mEwdpGbnMx that explains why explainability cannot be achieved through data-centric thinking, but requires world knowledge about what is to be explained.
Many congratulations to my advisor, Prof. Deborah McGuinness, on being elected an AAAI Fellow for 2023!
Excited to share that the @RealAAAI fellows have been announced for this year and I am a recipient "for significant contributions to the semantic web, knowledge representation and reasoning environments, and deployed AI applications." @RPIScience @csatrpi @rpi
Thanks to the efforts of all my collaborators and co-authors from @csatrpi and @IBMResearch - @oshaniws, @mo_ghalwash, @dangruen, Sola Shirai, @pchakrabt1, @jeriscience and @dlmcguinness. 6/6
You can download our ontology and explore the resource on our website: https://t.co/g5y0jaxqBS. Looking forward to feedback on using the EO, or to queries about applying it to your own use cases. 5/6
Here is a list of the explanation types we support. Encoding explanations using our model also allows them to be inferred as one or more of the explanation types supported in the EO. 4/6
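As a rough illustration of that inference step (not code from the EO paper), here is a minimal rdflib + owlrl sketch. It assumes a local copy of the ontology, and every file, namespace, class, and property name below (explanation-ontology.owl, eo:Explanation, eo:basedOn, eo:SystemRecommendation) is a hypothetical placeholder, not the EO's actual vocabulary:

import rdflib
import owlrl

# A minimal sketch, assuming a hypothetical local download of the EO and
# placeholder names for the namespace, classes, and properties.
EO = rdflib.Namespace("http://example.org/eo#")

g = rdflib.Graph()
g.parse("explanation-ontology.owl", format="xml")  # hypothetical local copy

# Encode an explanation instance via the attributes it depends on.
exp = rdflib.URIRef("http://example.org/exp1")
g.add((exp, rdflib.RDF.type, EO.Explanation))
g.add((exp, EO.basedOn, EO.SystemRecommendation))  # hypothetical property/value

# An OWL RL reasoner can then classify the instance into the matching
# explanation-type subclass(es) defined in the ontology.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# List every type inferred for the instance.
for t in g.objects(exp, rdflib.RDF.type):
    print(t)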
Below is an image of our updated model of the EO. The EO is a resource for modeling explanations based on their dependencies on user, interface, and system attributes. 3/6
In V2 of the EO, we support more explanation types, taking our tally to 15, and also introduce better compatibility with explainer models, mainly from the IBM AIX-360 toolkit. 2/6
Never had a failed macOS upgrade before Ventura, and it is painful to get your Mac to start again -.- Maybe it's finally time I appreciate more transparent options, aka Linux.
I can't believe in 2022 we're still sending reporters out in hurricanes like this
Hope such efforts highlight the need to focus more on reporting results and releasing models outside of general-purpose domains. Also hope that AI/ML papers start asking for more rigorous and illustrative analyses beyond just quantitative numbers. 2/2
Happy to see an application of language models (LMs) in a high-stakes setting like government. I like how the author puts it in this thread: it is to 'show, not tell'. Working on LMs, I have been frustrated by how little there is outside of large corpora like Wikipedia. 1/2
Today, I testified to the U.S. Senate Committee on Commerce, Science, & Transportation @commercedems. I used an @AnthropicAI language model to write the concluding part of my testimony. I believe this marks the first time a language model has 'testified' in the U.S. Senate.
Unpopular Opinion: Almost all academic papers do a decent job of describing *what* they found, many do pretty well at describing *how* they did it, but very few do a stellar job of explaining *why* they are doing it and *why* the community needs it. @PhDVoice @OpenAcademics #AcademicTwitter #research
A common point raised by ML reviewers is that a method is too simple or is made of existing parts. But simplicity is a strength, not a weakness. People are much more likely to adopt simple methods, and simple ones are also typically more interpretable and intuitive. 1/2
Very excited to introduce OpenXAI - https://t.co/gSPMNgdlqq, an open-source framework we have been building for the past year to evaluate/benchmark the faithfulness, stability, and fairness of post hoc explanation methods using an easy-to-use API & just a few lines of code. [1/N] #XAI
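For context, the core idea behind a faithfulness metric can be sketched in a few lines of plain scikit-learn. This is an illustration of the concept only, not OpenXAI's actual API or metric definitions: ablate each feature and check how well the prediction change tracks that feature's attribution.

from scipy.stats import pearsonr
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Sketch of a faithfulness check for a post hoc explanation; the attribution
# choice (coefficient * feature value) is a simple stand-in for illustration.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0]
base = model.predict_proba(x.reshape(1, -1))[0, 1]

# A simple post hoc attribution for the instance.
attribution = model.coef_[0] * x

# Prediction drop when each feature is replaced by its dataset mean.
drops = []
for j in range(X.shape[1]):
    x_ablated = x.copy()
    x_ablated[j] = X[:, j].mean()
    drops.append(base - model.predict_proba(x_ablated.reshape(1, -1))[0, 1])

# Faithfulness: correlation between attributions and ablation effects.
corr, _ = pearsonr(attribution, drops)
print(f"faithfulness (correlation) = {corr:.3f}")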
A comprehensive thread of XAI papers presented at this year's XAI workshop at #IJCAI2022. Do give them a read if you are doing research in the XAI space or are looking to improve the explainability of your AI methods.
And the XAI2022 workshop @IJCAIconf is underway. Tobias Huber kicks off with a talk on alterfactual reasoning. A good turnout! #IJCAI2022Workshops