Daniel Litt
@littmath
52K Followers · 64K Following · 2K Media · 29K Statuses
Assistant professor (of mathematics) at the University of Toronto. Algebraic geometry, number theory, forever distracted and confused, etc. He/him.
Toronto, Ontario
Joined August 2010
I want to explain in down-to-earth terms what this paper is about, since it ultimately boils down to what I think are some really concrete and fundamental questions. 1/n
Yeuk Hay Joshua Lam, Daniel Litt: Algebraicity and integrality of solutions to differential equations https://t.co/jPYdOxyCgp
https://t.co/jvG3NLphim
If you’re in a relationship, why can’t you stop thinking about Clemens-Griffiths’ proof of the irrationality of the cubic 3-fold?
Planning to discuss this in week 4 of my upcoming algebraic geometry class.
I’ll never forget the day my professor changed my whole perspective on love. The class was loud until he asked, “If you’re in a relationship, do you still get crushes?” Silence washed over us. He drew a heart, wrote “Loyalty” and “Faithfulness” inside. “So if love contains these,
I think a lot of commentators view the goal of their writing on AI as persuading the reader that it's a really big deal. That's not my goal, though it is indeed a really big deal.
I promise, I'll get excited about the amazing new results produced with the help of new AI tools when those results get produced!
Every once in a while I get a response to one of my tweets that suggests the respondent thinks of me as some kind of luddite skeptic. But what I mostly want to do is talk about capabilities of existing tools, rather than anticipating future capabilities.
For the record I'm fully on board with the idea that AI will dramatically change the way science is done, and that will probably include very substantial new results, eventually, and maybe even soon.
Actually extremely pumped to play with this; as much as I find some of the language around this stuff ("mathematical superintelligence"...) a bit hard to handle, these kinds of tools are clearly a big part of the future of math research.
My general view is that we should evaluate work on its own merits, not e.g. based on the tools used to produce it, expected future work produced by descendants of those tools, etc.
Very interesting thread. The paper in question is, I think, also a good illustration of some of the incentives around promoting LLM-aided academic work.
OpenAI leadership (@gdb, @markchen90) are promoting a paper in Physics Letters B where GPT-5 proposed the main idea — possibly the first peer-reviewed paper where an LLM generated the core contribution. One small problem: GPT-5's idea tests the wrong thing. 1/
This is incredibly awesome.
I made it into Terry Tao’s blog! https://t.co/n466FaLpTT One cool part of this experience is that I *would not have made the Claude Deep Research query resulting in the connection to Erdos 106 if not for Aristotle’s exact implementation*. i.e. Aristotle, an AI tool, contributed
Also “more of the same” here means “a lot more.” And smallish improvements to capabilities can translate to large improvements to usefulness I think.
TBC one still has to carefully check such arguments—they frequently have serious errors IME. Looking forward to seeing what can be accomplished with more involved scaffolding!
I think the situation now is “more of the above”; frontier models are very useful for literature search, and can often run “routine” arguments, especially with some hints. Especially in areas outside one’s immediate expertise this can be very useful.
6 months ago I wrote to a friend that: “Frontier models can now solve almost any [math] question you’d reasonably give an undergrad (at the level of a fairly strong undergrad, maybe with some bluffing). Genuinely useful for some research tasks (e.g. counterexample generation).”
@littmath I've honestly been impressed with how much the Greeks were able to accomplish despite having a writing system comprised entirely of frat symbols
Just had a very vivid memory of— in the early days of COVID, when there were some shortages of basic supplies—getting Chinese takeout, and the restaurant including a roll of toilet paper with each order.
I've noticed this too. Related phenomenon is dependence on search over thinking from first principles. From reading chain-of-thought summaries I get the sense that a common pattern is: search, try for a bit to deduce a result in one or two steps from fancy results in existing
An interesting phenomenon is that recent AI engines seem to have become reluctant to attempt a problem if it's known to be an open conjecture/unsolved beyond a certain level of notoriety. This may depend on prompting, as might searching for a simpler solution to a solved problem.