Shruthi Chari

@shruthichari

Followers
421
Following
10K
Media
33
Statuses
820

ML Scientist - Bristol Myers Squibb | PhD - RPI | Applied Explainable AI. Pronouns: She/Her (Views are my own)

Woburn, MA
Joined June 2012
@shruthichari
Shruthi Chari
3 years
Happy to share that version 2.0 of our Explanation Ontology has been accepted as a journal paper to the Special Issue in XAI at the Semantic Web Journal (SWJ). Link to paper: https://t.co/okrriZWbZz 1/6
2
2
13
@shruthichari
Shruthi Chari
2 years
Happy to share that I am now a PhD candidate and I am looking forward to a defense next year on my dissertation research titled, "An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems" #phdlife @csatrpi
1
1
28
@shruthichari
Shruthi Chari
3 years
A really nice article in the WSJ today about the various use cases of AI in hospitals across the US: https://t.co/6oCsJZDzH3. It helps to both understand the trends of application of AI in healthcare and identify where we can improve. Thanks for sharing @dangruen.
wsj.com
Doctors aren’t relying solely on artificial intelligence, but some are using it to help reach diagnoses or spot risks.
0
1
2
@jeriscience
Pablo Meyer
3 years
@yudapearl
Judea Pearl
5 years
Glad to finally see a paper on "explainability" https://t.co/mEwdpGbnMx that explains why explainability cannot be achieved through data-centric thinking, but requires world knowledge about what is to be explained.
0
2
5
@shruthichari
Shruthi Chari
3 years
Many congratulations to my advisor, Prof. Deborah McGuinness, on being elected an AAAI Fellow 2023!
@dlmcguinness
Deborah McGuinness
3 years
Excited to share that the @RealAAAI fellows have been announced for this year and I am a recipient "for significant contributions to the semantic web, knowledge representation and reasoning environments, and deployed AI applications." @RPIScience @csatrpi @rpi
0
0
7
@shruthichari
Shruthi Chari
3 years
Thanks to the efforts of all my collaborators and co-authors from @csatrpi and @IBMResearch - @oshaniws , @mo_ghalwash, @dangruen, Sola Shirai, @pchakrabt1, @jeriscience and @dlmcguinness. 6/6
0
0
3
@shruthichari
Shruthi Chari
3 years
You can download our ontology and explore our resource from our website: https://t.co/g5y0jaxqBS. Looking forward to feedback on using the EO or queries about applying it to your own use cases. 5/6
1
0
3
@shruthichari
Shruthi Chari
3 years
Here is a list of the explanation types we support. Encoding explanations using our model also allows them to be inferred into one or more of the explanation types we support in the EO. 4/6
1
0
1
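The inference mentioned in tweet 4/6 can be sketched as a simple rule over RDF-style triples. This is a minimal illustration only: the class and property names below (`eo:Explanation`, `eo:basedOn`, `eo:CounterfactualExplanation`) are hypothetical placeholders, not the EO's actual vocabulary, and real EO inference would use an OWL reasoner over the published ontology.

```python
# Toy sketch of explanation-type inference as a rule over triples.
# All eo:/ex: terms here are illustrative, NOT the EO's real vocabulary.

triples = {
    ("ex:exp1", "rdf:type", "eo:Explanation"),
    ("ex:exp1", "eo:basedOn", "eo:CounterfactualReasoning"),
}

def infer_types(triples):
    """Infer an explanation subtype from the reasoning it is based on."""
    inferred = set(triples)
    for (s, p, o) in triples:
        if p == "eo:basedOn" and o == "eo:CounterfactualReasoning":
            inferred.add((s, "rdf:type", "eo:CounterfactualExplanation"))
    return inferred

result = infer_types(triples)
```

In the actual ontology, such classification would fall out of OWL class definitions (e.g. restrictions on the properties an explanation instance has) rather than a hand-written rule like this.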
@shruthichari
Shruthi Chari
3 years
Below is an image of our updated EO model. The EO is a resource for modeling explanations based on their dependencies on user, interface, and system attributes. 3/6
1
0
1
@shruthichari
Shruthi Chari
3 years
In V2 of the EO, we support more explanation types, taking our tally to 15, and also introduce better compatibility with explainer models, mainly from the IBM AIX360 toolkit. 2/6
1
0
0
@tunguz
Bojan Tunguz
3 years
.@Stanford’s PubMedGPT LLM passes US Medical Licensing Exam (MedQA-USMLE) with more than 50% correct answers.
15
35
208
@shruthichari
Shruthi Chari
3 years
Nice thread on the shortcomings of LLMs and why we must build guards around them.
@AndrewYNg
Andrew Ng
3 years
1/Large language models like Galactica and ChatGPT can spout nonsense in a confident, authoritative tone. This overconfidence - which reflects the data they’re trained on - makes them more likely to mislead.
0
0
1
@shruthichari
Shruthi Chari
3 years
Never had a failed macOS upgrade before Ventura, and it is painful to get your Mac to start again -.- Maybe it's finally time I appreciate more transparent options, aka Linux.
0
0
0
@JPHilllllll
Read Raising Expectations (and Raising Hell)
3 years
I can't believe in 2022 we're still sending reporters out in Hurricanes like this
4K
40K
327K
@shruthichari
Shruthi Chari
3 years
Hope such efforts bring to light the need to focus more on reporting results and releasing models more often outside of general-purpose domains. Also, hope that AI/ML papers start asking for more rigorous and illustrative analyses outside of just quantitative numbers. 2/2
0
0
1
@shruthichari
Shruthi Chari
3 years
Happy to see an application of language models (LMs) in a high-stakes setting like the government. I like how the post author says it is to 'show, not tell' in this thread. Working on LMs, I was frustrated with how little there is outside of large corpora like Wikipedia. 1/2
@jackclarkSF
Jack Clark
3 years
Today, I testified to the U.S. Senate Committee on Commerce, Science, & Transportation @commercedems. I used an @AnthropicAI language model to write the concluding part of my testimony. I believe this marks the first time a language model has 'testified' in the U.S. Senate.
1
0
1
@UpolEhsan
Upol Ehsan
3 years
Unpopular Opinion: Almost all academic papers do a decent job of *what* they found, many do pretty well to describe *how* they did it, very few do a stellar job of *why* they are doing it and *why* the community needs it. @PhDVoice @OpenAcademics #AcademicTwitter #research
3
6
33
@micahgoldblum
Micah Goldblum
3 years
A common point raised by ML reviewers is that a method is too simple or is made of existing parts. But simplicity is a strength, not a weakness. People are much more likely to adopt simple methods, and simple ones are also typically more interpretable and intuitive. 1/2
27
95
835
@hima_lakkaraju
π™·πš’πš–πšŠ π™»πšŠπš”πš”πšŠπš›πšŠπš“πšž
3 years
Very excited to introduce, OpenXAI - https://t.co/gSPMNgdlqq, an open-source framework we have been building for the past year to evaluate/benchmark the faithfulness, stability, fairness of post hoc explanation methods using easy-to-use API & just a few lines of code. [1/N] #XAI
open-xai.github.io
3
86
405
@shruthichari
Shruthi Chari
3 years
A comprehensive thread of XAI papers presented at this year’s XAI workshop at #IJCAI2022. Do give them a read if you are doing research in the XAI space or are looking to improve the explainability of your AI methods.
@tmiller_uq
Tim Miller
3 years
And the XAI2022 workshop @IJCAIconf is underway. Tobias Huber kicks off with a talk on alterfactual reasoning. A good turnout! #IJCAI2022Workshops
2
1
11