
Subhabrata Mukherjee
@subho_mpi
545 Followers · 396 Following · 21 Media · 224 Statuses
Co-Founder & Chief Scientific Officer, @HippocraticAI. PhD. Head of AI. Former Principal Researcher @MicrosoftResearch.
Seattle, WA
Joined April 2017
When we started building a safety-focused LLM for healthcare a year ago, a result like this was beyond imagination. We are excited to share some of the technical and many of the clinical considerations that went into building #Polaris in our 53-page technical report, now available
The last few months have been a real rollercoaster ride as we see incredible commercial traction and customer validation of our efforts to leverage generative AI to bring healthcare abundance to all. @hippocraticai.
We are excited to announce a $141 million Series B financing round, bringing @hippocraticai's valuation to $1.64 billion. The round was led by @kleinerperkins, with backing from existing investors including @generalcatalyst, @a16z, @nvidia, @premjiinvest, @svangel, UHS, and @WellSpan.
RT @jonsakoda: We hosted our annual AI Pioneers Summit this week to celebrate the technical leaders at the forefront of deploying AI and LL….
RT @Stanford_AI_Bio: We are super excited to have @subho_mpi, Chief Scientific Officer & Co-founder at @hippocraticai to join us next Tuesd….
We are truly excited to see @EricTopol summarize #Polaris in his report. Read about our LLM constellation work for real-time patient-AI voice conversations in the preprint: #GenerativeAI #healthcare @hippocraticai
This was a big week in healthcare #AI, summarized in the new Ground Truths (link in profile). Important new reports by @pranavrajpurkar @AI4Pathology @hippocraticai @PierreEliasMD @ItsJonStokes @james_y_zou @KyleWSwanson @ogevaert and their colleagues
We are honored and humbled to be featured in @FortuneMagazine 50 #AI Innovators 2023 list! I am incredibly proud of what my team is building at the forefront of #generativeai and #healthcare.
RT @arankomatsuzaki: SkipDecode: Autoregressive Skip Decoding with Batching and Caching for Efficient LLM Inference. Obtains 2-5x inference….
RT @billxbf: Bothered by the expensive runs on Auto-GPT and LangChain agents? Check out our recent work, ReWOO, that eliminates token redun….
RT @AutomlSeminar: Do you want to make your transformers more efficient? Check out @subho_mpi talk on ‘AutoMoE: Neural Architecture Search….
RT @AutomlSeminar: We kick off the new year with a talk by @subho_mpi about ‘AutoMoE: Neural Architecture Search for Efficient Sparsely Ac….
RT @_akhaliq: AutoMoE: Neural Architecture Search for Efficient Sparsely Activated Transformers. abs:
RT @hugo_larochelle: We (@BeEngelhardt, @NailaMurray and I) are proud to announce the creation of a Journal-to-Conference track, in collabo….
RT @fchollet: To put the "scale" narrative into perspective. The brain runs on 15 watts, at 8-35 hertz. And while we have ~90B neurons, u….