
Sasi Kumar Murakonda
@mskumars1729
Followers: 37 · Following: 107 · Media: 0 · Statuses: 18
Designing dataflows that respect people's privacy. Research Scientist at Privitar. Previously researcher at the National University of Singapore.
Joined November 2014
RT @SciTechgovuk: Privacy-enhancing technology protects user data and has massive potential to improve people’s lives and tackle financial….
0 · 7 · 0
Glad to be part of the winning team from the UK! Our effort at the PETs prize challenge is an embodiment of everything that makes me love working in the labs team at @privitarglobal.
The results are in! Yesterday at the #summitfordemocracy we announced winners of the US-UK privacy-enhancing technologies prize challenges⤵️
1 · 4 · 7
RT @masterarijit: "We are in the same storm, but not in the same boat." Prithibir Pathshala was made for those who are worst hit by the ed….
telegraphindia.com
There is a gap everyone will tip-toe around. An education gap widened by the pandemic and Amphan. A group of students took it upon itself to attempt a solder
0 · 1 · 0
RT @openminedorg: LET'S SOLVE PRIVACY #PriCon2020. Sept 26/27. ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy….
0 · 5 · 0
RT @rzshokri: Sasi will present our open source tool for quantifying the privacy risks of machine learning models, and how it can be used t….
github.com
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms. - privacytrustlab/ml_privacy_meter
0 · 11 · 0
Absolutely! Platform data tells us only about the behaviour induced through carefully crafted nudges, not our natural (conscious) behaviour. I always wonder whether our reactions to nudges reveal more about the nudge or about our own hidden (subconscious) behaviour.
I've long struggled with computational social science as a field, deeply discomforted by what can and can't be learned from people's performances on social media. @angelaxiaowu nails this tension with this new essay (+ paper w/ Harsh Taneja):
0 · 1 · 2
Felt so nice to give a talk at #hotpets20! Do check out the tool at.
@PET_Symposium #hotpets20 ML Privacy Meter: Aiding Regulatory Compliance by Quantifying the Privacy Risks of ML. Perform #GDPR Data Protection Impact Assessment: analyze, identify and minimize ML privacy risks. @mskumars1729 Slides:
0 · 0 · 2
RT @rzshokri: Fair and fragile? We analyze the robustness of group fairness, notably equalized odds, in machine learning wrt adversarial bi….
arxiv.org
Optimizing prediction accuracy can come at the expense of fairness. Towards minimizing discrimination against a group, fair machine learning algorithms strive to equalize the behavior of a model...
0 · 9 · 0
RT @zackcooperYale: Extremely strong investigative @propublica piece by @iarnsdorf on how TeamHealth makes their money. In short, they do….
propublica.org
TeamHealth, a medical staffing firm owned by private-equity giant Blackstone, charges multiples more than the cost of ER care. All the money left over after covering costs goes to the company, not...
0 · 103 · 0
Monitoring and quantifying are the first steps towards controlling. Such systems will only widen the existing social/power divide. At worst, this might end up being a tool for algorithmic bullying at work!
AI to give productivity score: “Imagine you’re managing somebody and you could stand and watch them all day long, and give them recommendations on how to do their job better,” @NandoDF @jeffkagan @paulroetzer @jeremyphoward #ai #STARTUPS.
0 · 0 · 3
RT @math_rachel: Frictionlessness is antithetical to autonomy. Frictionless design robs users of precisely those moments that may give the….
0 · 17 · 0
RT @v0max: I just posted the camera-ready version of our upcoming PETS paper here: The Price is (Not) Right: Compa….
0 · 52 · 0