justin Profile
justin

@curl_justin

Followers
76
Following
75
Media
0
Statuses
20

tech + law. HLS JD ‘26. previously: schwarzman, microsoft research, princeton CS

Joined April 2018
@curl_justin
justin
2 months
Should judges use LLMs like ChatGPT to determine the meaning of legal text? Whatever your answer, it’s already happening… @PeterHndrsn, @kartkand, Faiz Surani, and I explain why this is a dangerous idea in a recent article for Lawfare. 🧵 (1/10).
3
9
27
@curl_justin
justin
2 months
RT @TheEconomist: America’s view of AI is often abstract and hyperbolic. Rather than the Western concept of a superhuman or self-improving….
0
62
0
@curl_justin
justin
2 months
RT @PeterHndrsn: Tempted to use AI to help interpret statutes or draft opinions? 📜🤖 Take pause. As we explained in @lawfare, closed models….
0
4
0
@curl_justin
justin
2 months
RT @lawfare: Justin Curl, @PeterHndrsn, Kart Kandula, and Faiz Surani warn that transferring influence to unaccountable private interests th….
0
5
0
@curl_justin
justin
2 months
RT @RichardMRe: Curl et al, “Judges Shouldn’t Rely on AI for the Ordinary Meaning of Text” | Lawfare
0
5
0
@curl_justin
justin
2 months
Read more in our article published on Lawfare. We're also planning to write a longer follow-on law review article, so share any thoughts or comments you might have! (10/10).
0
2
3
@curl_justin
justin
2 months
Most judges, we think, would be displeased to find their clerks taking instructions from OpenAI, regardless of whether they had shown explicit bias towards the company. (9/10).
1
0
1
@curl_justin
justin
2 months
Some analogize LLMs to law clerks (which few people take serious issue with). But while clerks are vetted and employed by judges, commercial LLMs are fully controlled by the companies that create them. (8/10).
1
0
1
@curl_justin
justin
2 months
What matters here is NOT the specific values chosen but that companies are selecting and enshrining values into their models at all. Judges are supposed to interpret the law. But by consulting LLMs, they're effectively letting third parties help decide what the law means. (7/10).
1
2
2
@curl_justin
justin
2 months
2. Anthropic’s early models were trained to follow the principles it selected (Constitutional AI). 3. When asked for example laws that could help guide regulation of tech companies, o3 refused to respond to queries mentioning OpenAI yet offered suggestions for Anthropic. (6/10).
1
1
1
@curl_justin
justin
2 months
LLMs are built, prompted, fine-tuned, and filtered by private companies with their own agendas. For example… 1. DeepSeek refuses to answer questions related to sensitive topics in China. (5/10).
1
0
1
@curl_justin
justin
2 months
Because LLMs are trained on billions of pages of text, some judges have viewed asking an LLM as a clever shortcut for finding a word's everyday meaning. But there's a catch: LLMs aren't neutral observers of language. (4/10).
1
2
1
@curl_justin
justin
2 months
Why are judges consulting LLMs? First, context: to resolve many cases, judges must decide the meaning of key words and phrases. In modern textual interpretation, words are given their “ordinary meaning” — essentially, they mean whatever the average person thinks they mean. (3/10).
1
0
0
@curl_justin
justin
2 months
This is happening: An 11th Circuit federal judge asked LLMs whether “landscaping” covers putting in a backyard trampoline and whether threatening someone at gunpoint counts as “physical restraint.” And he’s not alone. Judges across the country are citing AI in their opinions. (2/10).
1
0
1
@curl_justin
justin
2 years
RT @NatMachIntell: Note to self: Inspired by the psychological concept of self-reminders, researchers at @MSFTResearch Asia developed a defe….
0
1
0
@curl_justin
justin
2 years
RT @random_walker: I keep thinking about the early days of the mainstream Internet, when worms caused massive data loss every few weeks. It….
0
202
0
@curl_justin
justin
2 years
RT @Huang_Sihao: Writing in @PrincetonCITP, @curl_justin and I explore China’s draft regulations on generative AI that were released by the….
0
6
0