
Justin Curl (@curl_justin)
Followers: 118 · Following: 118 · Media: 1 · Statuses: 38
tech + law. @harvard_law jd '26 | prev. @schwarzmanorg, @msftresearch, @princetoncs
Joined April 2018
Should judges use LLMs like ChatGPT to determine the meaning of legal text? Whatever your answer, it’s already happening… @PeterHndrsn, @kartkand, Faiz Surani, and I explain why this is a dangerous idea in a recent article for Lawfare. 🧵 (1/10)
RT @sayashk: AI as Normal Technology is often contrasted with AI 2027. Many readers have asked if AI evaluations could help settle the deba….
RT @ohlennart: If the goal is dependency and revenue from China's AI market, why sell? Cloud/remote access delivers both without giving aw…
RT @sayashk: How does GPT-5 compare against Claude Opus 4.1 on agentic tasks? Since their release, we have been evaluating these models o…
RT @kevinlwei: As of today, submissions for @HarvardJOLT's spring issue are open! We're looking for law review articles related to law an…
Yet despite tasking NIST with the difficult work of implementing substantial portions of its AI Action Plan, the Admin is cutting its budget by 43% (from $1.46B to $839M). It's hard to imagine NIST/CAISI are well-positioned to succeed, but only time will tell. (4/5)
NIST, for example, is expected to lead 7 policy actions, and CAISI (within NIST) is expected to lead 9. Some of the heavier-lift items include developing new technical standards for high-security data centers and investing in automated cloud labs. (3/5)
The Plan is an ambitious attempt to promote AI progress through a series of common-sense policy proposals. But given its sweeping scope, the big question is whether the various agencies tapped will have the capacity to pursue it. (2/5)
I organized every recommended policy action in the AI Action Plan by the Agency/Department responsible for implementing it. 🧵 (1/5)
RT @sayashk: The mainstream view of AI for science says AI will rapidly accelerate science, and that we're on track to cure cancer, double….
RT @random_walker: We ourselves are enthusiastic users of AI in our scientific workflows. On a day-to-day basis, it all feels very exciting….
RT @TheEconomist: America’s view of AI is often abstract and hyperbolic. Rather than the Western concept of a superhuman or self-improving….
economist.com
China’s leaders believe they can outwit American cash and utopianism
RT @PeterHndrsn: Tempted to use AI to help interpret statutes or draft opinions? 📜🤖 Take pause. As we explained in @lawfare, closed models….
RT @lawfare: Justin Curl, @PeterHndrsn, Kart Kandula, and Faiz Surani warn that transferring influence to unaccountable private interests th…
RT @RichardMRe: Curl et al, “Judges Shouldn’t Rely on AI for the Ordinary Meaning of Text” | Lawfare
lawfaremedia.org
Large language models are inherently shaped by private interests, making them unreliable arbiters of language.
Read more in our article published on Lawfare here. We're also planning to write a longer follow-on law review article, so share any thoughts or comments you might have! (10/10)
lawfaremedia.org
Large language models are inherently shaped by private interests, making them unreliable arbiters of language.
Most judges, we think, would be displeased to find their clerks taking instructions from OpenAI, regardless of whether the model had shown explicit bias towards the company. (9/10)
Some analogize LLMs to law clerks (an analogy few people take serious issue with). But while clerks are vetted and employed by judges, commercial LLMs are fully controlled by the companies that create them. (8/10)
What matters here is NOT the specific values chosen but that companies are selecting and enshrining values into their models at all. Judges are supposed to interpret the law. But by consulting LLMs, they're effectively letting third parties help decide what the law means. (7/10)
2. Anthropic’s early models were trained to follow the principles it selected (Constitutional AI). 3. When asked for examples of laws that could help guide regulation of tech companies, o3 refused to respond to queries mentioning OpenAI yet offered suggestions for Anthropic. (6/10)