scott cunningham

@causalinf

Followers: 52K | Following: 169K | Media: 5K | Statuses: 99K

Economics professor paying it forward with 55 burgers, 55 fries, 55 tacos, 55 pies, 55 Cokes, 100 tater tots, 100 pizzas, 100 tenders, 100 meatballs

Waco, Texas
Joined October 2011
@causalinf
scott cunningham
3 months
Hear ye, hear ye. Two workshops are on the horizon -- one next weekend, the other in 6 weeks. The first is at your house on your computer. The other one is in Madrid, Spain. Come to both!
1
2
7
@causalinf
scott cunningham
3 months
I'm not sure what I'll do this weekend. I had initially covered the role that weights play in target parameters, which was fun. I might do repeated cross sections and compositional change next, but we'll see. I like this extra video using the ChatGPT mic to create the simulations.
0
0
0
@causalinf
scott cunningham
3 months
I try to post 2 new ones every Sunday, but last week I was sick again for the fourth consecutive week, so I missed it. I'll get back to it. I had gone through, several times, the way you derive the bias in diff-in-diff using potential outcomes and the 2x2 formula (a sketch of that decomposition follows below).
1
0
1
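For anyone who wants the punchline of that derivation, here is the standard 2x2 decomposition in potential-outcomes notation -- a sketch of the usual textbook argument (with k the treated group and U the untreated comparison group), not necessarily the exact steps from the videos:

```latex
% k = treated group, U = untreated comparison group; Y^0, Y^1 are potential outcomes
\begin{aligned}
\widehat{\delta}^{\,2\times 2}_{kU}
  &= \Big(E[Y_k \mid \text{Post}] - E[Y_k \mid \text{Pre}]\Big)
   - \Big(E[Y_U \mid \text{Post}] - E[Y_U \mid \text{Pre}]\Big) \\
  &= \underbrace{E[Y^1_k \mid \text{Post}] - E[Y^0_k \mid \text{Post}]}_{\text{ATT}}
   + \underbrace{\Big(\Delta E[Y^0_k] - \Delta E[Y^0_U]\Big)}_{\text{non-parallel-trends bias}}
\end{aligned}
```

The second line comes from adding and subtracting the treated group's untreated post-period mean, with the Delta terms denoting each group's pre-to-post change in its untreated potential outcome; if parallel trends hold, the bias term is zero and the 2x2 DiD identifies the ATT.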
@causalinf
scott cunningham
3 months
For the last two months (and it's ongoing), I've been trying to migrate my workshop material to my substack, into something called Mixtape University. It's slow, but it's a perk for the paying subscribers to my substack. The videos are each around 15-20 minutes.
1
0
0
@causalinf
scott cunningham
3 months
I made some videos showing, in different ways, the source of the bias in diff-in-diff when using an already-treated comparison group. One of the videos has me using ChatGPT to create a simulation of this in R, Stata and Python with the mic only (a rough sketch of that kind of simulation is below).
2
0
3
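Here is a minimal sketch of that kind of simulation -- my own toy Python version (not the code from the video), with made-up parameter names: when treatment effects keep growing after adoption, the already-treated group's pre/post change contains its own effect dynamics, and the 2x2 DiD subtracts that growth from the late group's estimate.

```python
# A minimal, hypothetical simulation (my own sketch, not the video's code) of the
# bias from using an already-treated comparison group in a 2x2 diff-in-diff when
# treatment effects keep growing after adoption.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000                   # units per group
tau0, growth = 2.0, 1.0      # effect on impact, and growth per extra period treated
trend = 1.0                  # common untreated trend (parallel trends hold by construction)

def simulate(exposure_by_t):
    """Outcomes at t=0 and t=1 given how many periods the group has been treated."""
    y = {}
    for t in (0, 1):
        base = trend * t + rng.normal(0, 1, n)
        k = exposure_by_t[t]                               # periods of exposure (0 = untreated)
        effect = tau0 + growth * (k - 1) if k > 0 else 0.0
        y[t] = base + effect
    return y

late  = simulate({0: 0, 1: 1})   # treated between t=0 and t=1
early = simulate({0: 2, 1: 3})   # already treated well before t=0
never = simulate({0: 0, 1: 0})   # clean, never-treated comparison

def did(treat, ctrl):
    return (treat[1].mean() - treat[0].mean()) - (ctrl[1].mean() - ctrl[0].mean())

print("true ATT for the late group at t=1:", tau0)                       # 2.0
print("DiD with never-treated control:   ", round(did(late, never), 2))  # ~2.0
print("DiD with already-treated control: ", round(did(late, early), 2))  # ~1.0 = tau0 - growth
```

With a never-treated control the DiD recovers the true effect; with the already-treated control it is biased downward by exactly the growth in that group's own treatment effect between the two periods.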
@causalinf
scott cunningham
3 months
RT @BeatrizGietner: We have a new post! 🙋‍♀️ Which was a delight to write 🥰. With papers/guides by @instrumenthull, @packlesshepherd, @xuyiqi…
0
23
0
@causalinf
scott cunningham
3 months
Help us all know the difference between what we can and cannot control, focus on what we can control (maybe it's larger than we think, maybe it's smaller), and then earnestly focus on those things. Wishing all of us this today.
1
2
1
@causalinf
scott cunningham
3 months
I wrote up that Acemoglu paper, "A Simple Macroeconomic Theory of Artificial Intelligence", into a "simple explainer". I also added a song by Alice in Chains from Jar of Flies to brighten your day.
1
2
11
@causalinf
scott cunningham
3 months
The YouTube algorithm sends me three things. 1. Luka videos. 2. Captain America holding Thor's hammer. 3. Other stuff.
0
0
5
@causalinf
scott cunningham
3 months
I think I heard you can't post links here anymore, so maybe that was it. Anyway, I wrote up my lecture on my substack about that paper, which I found pretty interesting.
0
0
0
@causalinf
scott cunningham
3 months
I taught that new Anthropic paper, "On the Biology of LLMs," today and I just wrote a huge thread explaining it. Unfortunately that thread got deleted. But Twitter assured me it wasn't my fault. Anyway, here's a substack post that explains it.
2
1
7
@causalinf
scott cunningham
3 months
Then that gets into its refusal features. Remember, Anthropic has a social-good mandate because it's a Public Benefit Corporation. It has bound itself to creating harmless, helpful and honest AI. So it refuses some prompts. Specifically, there are "I don't know" and "I won't say" features.
0
0
1
@causalinf
scott cunningham
3 months
The weird thing is that the same "lookup table" of memorized facts (e.g., 6+7=13) then gets used in non-mathematical contexts. Basically, it uses the same cheat sheet for math as it does for anything involving numbers -- like journal cites. That could help explain those citation issues (a toy illustration follows below).
1
0
0
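As a toy analogy of my own (not the paper's actual circuitry), the "cheat sheet" idea is something like a dictionary of memorized last-digit facts that answers by pattern matching rather than by a carry-based addition algorithm:

```python
# A toy analogy of my own (not the paper's actual circuitry): a "cheat sheet" of
# memorized last-digit facts that answers by pattern matching rather than by
# running a carry-based addition algorithm.
LAST_DIGIT = {(a, b): (a + b) % 10 for a in range(10) for b in range(10)}  # memorized pairs

def last_digit_of_sum(x: int, y: int) -> int:
    # In the analogy, this same table fires for *any* numbers in context --
    # sums, years, volume/issue numbers, page counts -- which is the reuse the
    # tweet is pointing at.
    return LAST_DIGIT[(x % 10, y % 10)]

print(last_digit_of_sum(6, 7))     # 3, from the memorized 6 + 7 = 13 fact
print(last_digit_of_sum(36, 59))   # 5, i.e. "ends in 5" without ever computing 95
```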
@causalinf
scott cunningham
3 months
Then there is the mathematics. I think we all knew it wasn't doing actual mathematics, because it does not use algorithms -- it will weirdly get easy things wrong and hard things right. So I think they confirmed it is using a cheat sheet of memorized heuristics. But that's not the whole of it.
1
0
0
@causalinf
scott cunningham
3 months
So one theory might be that Claude just writes it stream-of-consciousness, but they show what it does is a two-stage thing, using that skill of holding things in its head. First, it collects pairs of rhyming words. Rabbit and habit. Then it writes backwards, last line first? (A toy sketch follows below.)
1
0
0
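A toy sketch of my own of that two-stage idea (nothing like the model's actual mechanism): commit to the rhyme pair first, then write each line toward its planned ending.

```python
# A toy sketch of my own of the two-stage idea (nothing like the model's actual
# mechanism): commit to the rhyme pair first, then write each line toward its
# planned ending.
def write_couplet() -> str:
    end_a, end_b = "rabbit", "habit"   # stage 1: pick the rhyming end words up front
    line_1 = f"Out in the garden I spotted a {end_a}"
    line_2 = f"chewing my roses, a hard-to-break {end_b}"
    return line_1 + "\n" + line_2      # stage 2: each line lands on its planned ending

print(write_couplet())
```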
@causalinf
scott cunningham
3 months
Robert Frost said writing poems that don't rhyme is like playing tennis with the net down. It's harder to play tennis with the net up, though. So how does Claude write rhyming poems? First, note that rhyming poems are a minimum of two lines. And the rhymes happen at the end.
1
0
0
@causalinf
scott cunningham
3 months
The finding is that it does all that reasoning "in its head," so to speak. It does it prior to generating any text at all. That seems to be key to understanding some of the other things. For instance, writing rhyming poems.
1
0
0
@causalinf
scott cunningham
3 months
What they do is give it prompts like: "The capital of the state containing Dallas is…" Claude then internally does this: Dallas -> Texas -> Austin. They then block the "Texas" feature and confirm that "Austin" fades. That's the idea, but that's not the finding. (A toy version of the intervention follows below.)
1
0
0
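To make the intervention logic concrete, here is a toy stand-in of my own (not Anthropic's interpretability tooling): represent the internal steps as a tiny directed graph, block a node, and check whether the downstream answer survives.

```python
# A toy stand-in of my own (not Anthropic's interpretability tooling): the internal
# steps as a tiny directed graph; "blocking" a node mimics ablating a feature.
GRAPH = {
    "Dallas": ["Texas"],    # the prompt activates a "Texas" feature
    "Texas":  ["Austin"],   # the state feature activates its capital
    "Austin": [],           # the output feature, i.e. the word the model would say
}

def active_nodes(start: str, blocked: set[str]) -> set[str]:
    """Forward pass through the toy graph, skipping any blocked (ablated) features."""
    seen, frontier = set(), [start]
    while frontier:
        node = frontier.pop()
        if node in blocked or node in seen:
            continue
        seen.add(node)
        frontier.extend(GRAPH.get(node, []))
    return seen

print("Austin" in active_nodes("Dallas", blocked=set()))      # True: Dallas -> Texas -> Austin
print("Austin" in active_nodes("Dallas", blocked={"Texas"}))  # False: block "Texas", "Austin" fades
```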
@causalinf
scott cunningham
3 months
And the stuff on the right is what it says. I'm not 100% sure whether they know the actual attribution graph or whether they're trying to deduce it. But nevertheless, on to the next part -- the experiments. They do know the nodes and can turn them off, though, which is key to the experiments.
1
0
0
@causalinf
scott cunningham
3 months
Third (was I numbering this?), the experiments. Interestingly, I think this is a DAG, but it's called something else -- an "attribution graph". I tried to get ChatGPT to make an example. The stuff on the left is what you type in, the stuff inside is the brain in the jar.
[Attached image: the ChatGPT-made attribution graph example]
1
0
0