Theodore Vasiloudis
@thvasilo
Followers
521
Following
3K
Media
152
Statuses
2K
Machine Learning scientist @amazon. Passed through @ververicaData, @pandoramusic, @spotify. Opinions and retweets my own.
Seattle, WA
Joined August 2014
After a long and winding road, our work on uncertainty estimation for online regression forests is now published in @JmlrOrg! Big thanks to my co-authors @gdfm7 and Henrik for sticking with it and getting this over the finish line! https://t.co/cb5PQfSI0f
1
2
14
Excited to be speaking at the Stanford Graph Learning workshop about GNNs for Spotify RecSys applications! The event is great, with lots of interesting talks! Livestream here:
1
5
60
A summer endeavor: developing MLC, the first open lecture series on ML compilation. Machine learning compilation is an emerging field for the systematic optimization and deployment of AI workloads. Hope to share adventures and fun with the community https://t.co/SQ6nKvw0XL
5
56
241
In our latest episode of "How to Fix the Internet," comedian and host of @WTFpod @marcmaron and @ProducerMcD tell never-before-shared details of the battle to save podcasting from a patent troll, in a case that EFF took all the way to the Supreme Court.
eff.org
Imagine getting a letter in the mail, and then another, and then another, telling you that if you don't pay $25,000 to a company you've never heard of, you'll have to shut down the small business that
5
11
42
ICYMI: The best science career advice we heard in 2021 from @alliekmiller, @DalianaLiu, @sneha_rajana, @crislopeslara, @thvasilo, and many others. Read the full article here: https://t.co/eMxk2YGma4
3
9
22
I'm starting a series of blog posts on "optimization nuggets": short and beautiful proofs in optimization (let me know what you think!). First one shows that SGD converges exponentially fast to a neighborhood of the solution: https://t.co/WFVI5KSiS3
fa.bianp.net
This is the first of a series of blog posts on short and beautiful proofs in optimization (let me know what you think in the comments!). For this first post in the series I'll show that stochastic...
13
77
448
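A hedged statement of the result the linked post proves, under standard assumptions (a mu-strongly convex, L-smooth objective, and gradient noise with variance sigma^2 at the optimum; the exact constants vary by source):

```latex
\mathbb{E}\,\lVert x_k - x^* \rVert^2
  \;\le\; (1 - \gamma\mu)^k \,\lVert x_0 - x^* \rVert^2
  \;+\; \frac{2\gamma\sigma^2}{\mu},
\qquad \gamma \le \tfrac{1}{2L}.
```

The first term decays exponentially in the iteration count k; the second term is the "neighborhood" of the solution, whose radius shrinks as the constant step size gamma is reduced.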
Well this is depressing.
quantamagazine.org
No matter how much we'd like to eradicate SARS-CoV-2, it may be better to settle for other forms of control.
0
0
1
This is a very well-written intro to the topic of automated reasoning, and I'm glad to see the techniques making their way into real products. Logic geeks will be happy to see this :)
To launch our new research area, @byroncook, head of the AWS Automated Reasoning group, wrote a "gentle introduction" to the field, with some simple code examples (including a two-line illustration of the halting problem). #reInvent #AutomatedReasoning
1
1
7
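The "two-line illustration of the halting problem" lives in the linked post; as a toy companion, here is the diagonal argument behind it simulated in Python. All names here are illustrative assumptions, not code from the AWS post:

```python
# "Running" a program is modeled as returning one of two behavior markers,
# so the paradoxical program can execute without an actual infinite loop.

HALTS, LOOPS = "halts", "loops"

def paradox_for(oracle):
    """Build a program that does the opposite of whatever the oracle predicts."""
    def d():
        return LOOPS if oracle(d) == HALTS else HALTS
    return d

# Whatever a claimed halting oracle answers about d, d's actual behavior
# contradicts the prediction, so no total, correct oracle can exist.
for oracle in (lambda p: HALTS, lambda p: LOOPS):
    d = paradox_for(oracle)
    assert d() != oracle(d)
```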
Don't enjoy the doomscroll news on the right of the Twitter website? Current filter to get rid of them (you're already using uBlock Origin, right?) https://t.co/LYQnRvSJnm
##div.r-1udh08x.r-1ifxtd0.r-rs99b7.r-1phboty.r-1867qdf.r-k0dy70.r-1ysxnx4.css-1dbjc4n:nth-of-type(3)
0
0
0
Join Amazon Scholars Michael I. Jordan, @mkearnsupenn, and Amazon VP and distinguished scientist Bernhard Schölkopf, as they discuss the history of machine learning in the past decade, its social impacts, the role of causal reasoning, and more. Register: https://t.co/Fp9KFBRmrI
0
21
39
Excited to announce that we've finally released @XGBoostProject v1.0.0! This could not have happened without contributions from over 390 contributors and support from the community. Thank you! #MachineLearning Release notes: https://t.co/JMbbaIwArr GitHub repo: https://t.co/Z0ukD2gsum
5
171
509
Graph neural networks are driving lots of progress in machine learning by extending deep learning approaches to complex graph data and applications. Let's take a look at a few methods.
10
288
1K
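As a companion to the thread, a minimal numpy sketch of one common method, GCN-style mean-aggregation message passing: each node's new features are a learned transform of the average of its neighborhood's features. The shapes, names, and ReLU choice are illustrative assumptions, not any specific paper's layer:

```python
import numpy as np

def message_passing_step(A, H, W):
    """One GCN-style layer: average neighbor features, transform, ReLU.

    A: (n, n) adjacency matrix, H: (n, d) node features, W: (d, d_out) weights.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)    # neighborhood sizes
    H_agg = (A_hat @ H) / deg                 # mean over each neighborhood
    return np.maximum(H_agg @ W, 0.0)         # linear transform + ReLU

# A 3-node path graph: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
np.random.seed(0)
H = np.random.randn(3, 4)                     # initial node features
W = np.random.randn(4, 8)                     # weights (random stand-ins)
H_next = message_passing_step(A, H, W)        # new features, shape (3, 8)
```

Stacking k such layers lets information propagate k hops across the graph, which is what makes these models useful for relational data like the RecSys graphs mentioned above.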
What if I told you that fine-tuning T5-Large (0.8B params) on a couple hundred examples could outperform GPT-3 (175B params) on a bunch of tasks?
I'm thrilled to share our new work, "A Few More Examples May Be Worth *Billions* of Parameters". Joint w/ @PSH_Lewis @riedelcastro @omerlevy_ Paper: https://t.co/HPD3Afku2w 1/4
5
41
217
I am excited to share my latest work: 8-bit optimizers, a replacement for regular optimizers. Faster, 75% less memory, same performance, no hyperparameter tuning needed. 🧵/n Paper: https://t.co/V5tjOmaWvD Library: https://t.co/JAvUk9hrmM Video: https://t.co/TWCNpCtCap
18
271
1K
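A toy numpy sketch of the core idea behind 8-bit optimizer states, blockwise absmax quantization: each block of a float32 tensor is stored as int8 codes plus one float32 scale, cutting per-element storage to a quarter. This is an illustrative simplification, not the paper's dynamic quantization or the actual library implementation:

```python
import numpy as np

def quantize_blockwise(x, block_size=64):
    """Store each block as int8 codes plus one float32 absmax scale."""
    x = x.reshape(-1)
    pad = (-len(x)) % block_size          # pad so length divides evenly
    blocks = np.pad(x, (0, pad)).reshape(-1, block_size)
    scales = np.abs(blocks).max(axis=1, keepdims=True).astype(np.float32)
    scales[scales == 0] = 1.0             # avoid division by zero
    codes = np.round(blocks / scales * 127).astype(np.int8)
    return codes, scales

def dequantize_blockwise(codes, scales):
    return codes.astype(np.float32) / 127 * scales

np.random.seed(0)
state = np.random.randn(1000).astype(np.float32)   # stand-in optimizer state
codes, scales = quantize_blockwise(state)
recon = dequantize_blockwise(codes, scales).reshape(-1)[:1000]
err = np.abs(recon - state).max()                  # worst-case reconstruction error
```

Quantizing per block rather than per tensor keeps a single outlier from blowing up the error everywhere: each element's reconstruction error is bounded by its own block's absolute maximum divided by 254.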
Happy to see @ApacheArrow performance and memory use benefits coming to @XGBoostProject
github.com
Apache Arrow defines a columnar memory format, which allows zero-copy reads and lightning-fast data access (link). This PR adds support for building an xgboost DMatrix from the Apache Arrow data format....
2
13
102
I'm delighted to share that our paper "Fairness in Ranking under Uncertainty" has been accepted to #NeurIPS2021. This is joint work with my advisor Thorsten Joachims and David Kempe from USC. Looking forward to sharing the camera-ready version soon!
5
2
111