When you finish a PhD in computer science, they take you to a special room and explain that you must never use recursion in real life. Its only purpose is to make programming hard for undergrads.
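For anyone who skipped the special room: a minimal sketch of the forbidden construct, the textbook recursive factorial (function names and the choice of example are mine, not from the tweet).

```python
def factorial(n: int) -> int:
    """Compute n! the way they warn you about: the function calls itself."""
    if n <= 1:          # base case: stops the recursion
        return 1
    return n * factorial(n - 1)  # recursive case: shrink the problem

print(factorial(5))  # → 120
```

In real life, of course, you'd write the loop.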
Got promoted with tenure at University of Toronto :-) Thankful to my brilliant students, collaborators, and mentors.
I can't think of a better place to have landed than Toronto. I took risks without existential fear and struck gold over and over with students in Stats and CS.
It's easy to reject papers. You can manufacture issues and sink them based on amorphous/vague notions such as novelty, impact, clarity, etc.
Every paper has minor problems that you can amplify. It takes a lot more courage to argue to accept something.
I am inspired by my wife, Dr Gintare Karolina Dziugaite (
@KDziugaite
) who JUST TODAY defended her thesis at Cambridge, and, not only completed her PhD research in four years, but also brought two wonderful children into this world. Who else could complete a PhD between naps?!
So... now that we are giving online talks that are recorded, maybe we can invite junior scholars rather than listen to senior researchers give the same talk for the 20th time.
Professional update:
I've been named Research Co-Director of the Vector Institute.
This is an exciting opportunity but also a big responsibility: Vector has grown tremendously since 2017, with now over 700 researchers.
One top priority: attracting next-gen top AI talent. 1/2
👏Exciting news!
@roydanroy
has been named the Vector Research Co-Director, leading the way in cutting-edge AI research along with current Research Director, Graham Taylor. Learn more about his appointment here:
For many years, I enjoyed working on problems no one else was thinking about. There was no rush to publish and I could slowly make my way towards the correct formalizations. Like hiking in the backcountry. Now I feel like I'm hiking Yosemite Falls Trail in August.
🎉🙏👏🎉🙏👏 Big life announcement. I’m excited to announce that I’ll be joining the effort for XAI starting September 2023. I’m eager to seek the truth and uncover the true nature of the universe. More info to come.
If you write a paper about X because you read Y, give some love to Y. Don't bury that fact in some bullshit literature review. You might think we're competing but we're not.
A theoretical computer scientist was being seriously considered for a prestigious math prize, when a mathematician on the committee asked whether the work was really mathematics. A very well-known mathematical physicist interrupted, "their theorems are just as useless as yours!"
Einstein slept 12 hours a day, played video games for 4 hours a day, regularly procrastinated on Twitter for 3 hours a day, and sat on Zoom for 5 hours a day.
Just got word that a journal article has been accepted. Not usually something I would tweet, but this one is special, because I submitted it on Dec 15, 2011.
135 papers submitted, a record! Congratulations to me. Thanks in advance to all of you who will be reviewing these during your summer. Many are just undergrad ML course projects that I was too embarrassed to kill earlier. Sorry not sorry 🙇🤦♂️🪺
Dear ICML author,
Had a good paper rejected? Reconsider submitting it to NeurIPS. The NeurIPS/ICML/etc review system is more wasteful than bitcoin.
What should you do with your work?
Submit your work to TMLR. The community is better at judging impact.
Regards,
Yours Truly
Claim: Splitting a paper into TeX files for each section is a mistake, needlessly complicating, among other things, search (and replace).
Convince me otherwise. (Unless you use Dropbox for version control, because then I don't care about your opinion. ;)
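For what it's worth, the search-and-replace pain is recoverable even with split files; a minimal sketch (the directory layout and helper name are hypothetical, not from the tweet):

```python
from pathlib import Path

def replace_everywhere(root: str, old: str, new: str) -> int:
    """Search-and-replace a literal string across every .tex file
    under root. Returns the number of files actually changed."""
    changed = 0
    for tex in Path(root).rglob("*.tex"):
        text = tex.read_text()
        if old in text:
            tex.write_text(text.replace(old, new))
            changed += 1
    return changed
```

One monolithic main.tex makes this a single editor command instead.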
I'm launching a new Twitter service.
You tweet me the name of a paper you're going to cite and for what reason, and I'll respond with a better paper to cite, if one exists.
No no no no no no no no no.
Thankfully, this advice was ignored by the authors. But this widespread but unspoken belief is why NeurIPS/ICML/ICLR reviewing for empirical papers is totally broken.
My new approach to seminar invites in the time of zoom: I've been co-presenting with students. No extra costs (because zoom) and students get the opportunity to talk at fantastic institutions. Two down, hopefully many more like this to go! I recommend sharing the love.
I asked Toronto students how much compute they thought the average NeurIPS author used per paper and 25% of them thought it was over six GPU years. One of them thought it was 800 GPU years. Really not sure what to make of this (The real number is 1 week before the deadline)
I will not be accepting AC roles going forward. They are a waste of my time. They are a waste of the field's time. In fact, conference proceedings should be drop kicked in favor of some new system. Typical review quality is so poor, we are now a cargo cult.
I'm excited to announce that I'm moving to Twitter to take over the Engagement team. I'd like to thank all my collaborators for making this possible. My first step will be to introduce the Edit button.
I think it's a mistake for the US not to think about accelerating global vaccination. Variants created by out of control spread are going to come back to haunt all of us.
It’s an honor to be named a Canada CIFAR AI Chair and to be part of a rapidly growing ecosystem centered around AI here in Canada. Whether you’re an up-and-coming AI researcher, or a hard-working high schooler, you should set your sights on Canada.
As part of the Pan-Canadian AI Strategy, we are announcing an expansion of the Canada CIFAR AI Chairs program, bringing the total number of chairs to 46, from 29 announced last December.
Meet Canada’s AI leaders:
#CIFARAI
The real problem with writing papers with more than 1 idea is that no one reads past the 1st idea and then you have to keep telling people... "no, we did that already, have you read section 5 or appendix J?".
I remember reading this paper when it first showed up on arXiv. (Was called l'arXiv back then.) Phenomenal work. I immediately recognized its importance and began working on its application to selling ads.
Oldies but goldies: Joseph Fourier, Théorie analytique de la chaleur, 1822. Introduces sine and cosine series as an approximation method and derives the heat equation PDE.
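For readers who want the actual statement: in modern notation, Fourier's book studies heat flow on a rod and solves it by expanding the initial temperature in a sine series (a sketch, with standard boundary conditions assumed):

```latex
% Heat equation on a rod of length L, ends held at zero temperature:
\partial_t u(x,t) = \alpha\, \partial_x^2 u(x,t), \qquad u(0,t) = u(L,t) = 0.
% Expand the initial profile in a sine series,
u(x,0) = \sum_{n=1}^{\infty} b_n \sin\frac{n\pi x}{L},
\qquad b_n = \frac{2}{L}\int_0^L u(x,0)\,\sin\frac{n\pi x}{L}\,dx,
% and each mode then decays independently:
u(x,t) = \sum_{n=1}^{\infty} b_n\, e^{-\alpha (n\pi/L)^2 t} \sin\frac{n\pi x}{L}.
```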
My ICLR reviewers just responded to my rebuttal and they agree totally and love my revisions and I won a billion dollars and why yes I would like a unicorn!!
We're way beyond the point that attending a single NeurIPS/ICML/etc is more costly than an Oculus/HoloLens + high speed fiber internet. I'd love to see a VR/AR conference for ML. Imagine the savings in time, pollution, etc.
A new undergrad AI club is asking me for paper recommendations for their journal club. Help me give them a semesters worth of ideas by responding to this message.
Your SOTA code may only be SOTA for some random seeds. Nonsense or new reality? I suppose there are trivial ways to close the gap using restarts and validation data.
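The "trivial way" alluded to above can be sketched as follows; `train_and_eval` is a hypothetical stand-in for an actual training run, and the seed range is arbitrary:

```python
import random

def train_and_eval(seed: int) -> float:
    """Hypothetical stand-in: pretend to train a model with this seed
    and return its validation accuracy (deterministic per seed)."""
    rng = random.Random(seed)
    return rng.uniform(0.7, 0.9)

def best_of_restarts(seeds):
    """Run one training job per seed and keep the seed that wins on
    validation data -- i.e., make the 'lucky seed' part of the method."""
    scores = {s: train_and_eval(s) for s in seeds}
    best = max(scores, key=scores.get)
    return best, scores[best]

seed, val_acc = best_of_restarts(range(5))
```

Whether reporting the max over restarts rather than the mean counts as SOTA is exactly the question in the tweet.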
I need a list of reputable venues for ML researchers. Anyone have one? Clearly it should have NeurIPS/ICML/ICLR/CVPR. Where I'm missing familiarity are journals I've never heard of and subfield conferences and journals.
I’m delighted to announce the delightful news that I can announce my delight and excitement at announcing delight and did I mention delight and wow NeurIPS OOH YEAH D-LIGHT!
Can anyone verify that ChatGPT wasn't trained on the exam it is taking here or in any of the 20 papers on similar topics? And how would you prove this to me?
Capabilities of GPT-4 on Medical Challenge Problems
GPT-4, without any specialized prompt crafting, exceeds the passing score on USMLE by over 20 points and outperforms domain-expert models like Med-PaLM.
Before you go work with some professor, ask yourself: do they just promote themselves? or do they also work for their students? Looking at someone's website is a pretty good tell.
Here's what success means for my postdocs.
You join my group. Borrow my resources. You start something, either by spinning off something going on or maybe doing something brand new. You get a job and THEN YOU TAKE WHAT YOU CREATED WITH YOU.
Don't accept a postdoc otherwise.
I feel that reviewers who suggest more experiments ought to explain what we might learn from... more experiments. What hypothesis are they testing? As far as I can tell, none.
We seem to be suffering as a community from not teaching our students how to do empirical work.
@jen_keesmaat
@A_Aspuru_Guzik
Disagree strongly. Many people were prepared and had masks well in advance. Very sensible to wear a mask, like everyone is doing in large cities in much of Asia. Hoarding them for profit quite different. Ultimately, this is a government failure.
Happy news for me. A journal paper that's been under review since, essentially, January 2016, has finally been accepted, and just in time for that former student (now Berkeley postdoc) to go on the market! Just sharing my joy now. Details in a week or so.
My two-part tutorial on "Bayesian learning" from the Logic and Learning FoPSS Summer School is now on Youtube. Check them out. (1) (2) (slides) (workshop)
Proud Toronto advisor moment: my first student,
@victorveitch
, has just accepted his first tenure track position. 🎉 Here’s to a long and successful career, including many academic grandchildren. 😊🥂
Though it's an odd moment for good news, I'm incredibly excited that
1. I'm joining the University of Chicago as an assistant professor of stats/data science on January 1st, and
2. I'll be joining Google as a research scientist until then
My advisor at MIT was Leslie Kaelbling and she encouraged me to explore and ask big questions. She was a fantastic sounding board. Grad school was such a great time and a lot of credit goes to Leslie.
2010: Nonconvex/Deep learning is [wrong].
2022: Thank you for inviting me to deliver this keynote on deep learning.
Funny how we forget how outspoken some people were. All of them saved by some grad students.
Yesterday, I announced that I was moving to Twitter. Apparently, Jack forgot to sign my contract, and the new CEO doesn't like my Edit button ideas, and so I'm going to stay a professor and pick up crocheting.
My and
@GCLinderman
's comment to the Becht et al. 2018 (
@EtienneBecht
) paper has finally appeared in
@NatureBiotech
after over a year of editorial considerations.
There is no response from the authors, so I assume we are all in agreement :-)
This observation is one reason to argue for replacing (or augmenting) best paper awards with multiple Test of Time Awards for 1, 2, 4, 8, 16, 32 years back.
Six years ago today, "Attention is All You Need" went up on arXiv! Happy birthday Transformer! 🎂
Fun facts:
- Transformer did not invent attention, but pushed it to the extreme. The first attention paper was published 3 years prior (2014) and had an unassuming title: "Neural Machine