Joe Simmons Profile
Joe Simmons

@jpsimmon

Followers: 7K · Following: 1K · Media: 27 · Statuses: 599

Prof. at Wharton. I research/teach decision making & research methods. Blog: https://t.co/XOI7aJXW1d ; Easy Pre-registration: https://t.co/qQwukhfPZm

Joined November 2012
@jpsimmon
Joe Simmons
2 years
The lawsuit has some interesting Exhibits.
5
54
176
@jpsimmon
Joe Simmons
9 months
Gino's case against us has been dismissed. Scientists cannot effectively sue other scientists for exposing fraud/errors in their work. Those who work to correct the scientific record can sleep better tonight. Those who don’t want it corrected, well, I don’t care how they sleep.
57
468
2K
@jpsimmon
Joe Simmons
4 years
I used to teach this finding to my MBA students. It has more than 400 citations on Google Scholar. It's about dishonesty. Turns out it's fraudulent. "Evidence of Fraud in an Influential Field Experiment About Dishonesty"
Tweet media one
27
258
987
@jpsimmon
Joe Simmons
2 years
In 2021, a team of us sent a report to Harvard University detailing evidence of fraud in 4 papers (spanning 11 yrs) co-authored by HBS professor Francesca Gino. This is the 1st in a series of Data Colada posts describing that evidence.
13
129
549
@jpsimmon
Joe Simmons
2 years
Fraud has so many victims, and we don’t see or hear from most of them. You can’t compete with fraud in a system that doesn’t stop it. And so we have to stop it.
9
58
336
@jpsimmon
Joe Simmons
9 months
Thanks to everyone who supported us, both emotionally and financially. Thanks to our schools for the generous financial support. And thanks to our amazing lawyer, Jeffrey Pyle. We never felt alone in this, and we never felt like we had to stay silent. Thank you.
5
15
314
@jpsimmon
Joe Simmons
2 years
Hypothetical: Imagine a world in which some scientists cheat and some don’t and that the cheaters can’t/don’t get caught bc nobody checks. Who’s more likely to get papers/jobs/editorial positions: cheaters or non-cheaters? Who’s more likely to have to quit?
16
52
285
@jpsimmon
Joe Simmons
11 months
New Data Colada post. Harvard's Gino Report allows us to reconstruct what Harvard says is one of Gino's "original" data sets. We can compare that to the posted data to see how the data were altered.
7
61
260
@jpsimmon
Joe Simmons
2 years
Thank you.
1
26
221
@jpsimmon
Joe Simmons
2 years
In the file that Ariely sent to coauthor Nina Mazar back in 2011 (3 yrs after the study), the effect was in the wrong direction. When Nina said so, Dan told her that he had accidentally flipped the conditions when preparing the file. See Footnote 14 in
@nickfountain
Nick Fountain
2 years
Ariely, in a statement, now says: "Getting the data file was the extent of my involvement with the data."
Tweet media one
2
26
196
@jpsimmon
Joe Simmons
3 years
Doesn’t look like any of the pre-registered effects are significant in the analyses that correct for multiple comparisons. The biggest effect - which was also not sig. after the correction - was not pre-registered. It's great that the authors reported everything. But the evidence is weak.
2
15
135
@jpsimmon
Joe Simmons
2 years
Part 2 of Data Falsificada is now up on Data Colada: "My Class Year Is Harvard".
7
32
110
@jpsimmon
Joe Simmons
1 year
An update on Gino's lawsuit against us (Data Colada). We had a hearing about our motion to dismiss, and we've learned some legal things along the way. No big news yet, but some details:
2
18
114
@jpsimmon
Joe Simmons
5 years
VERY excited to announce ResearchBox, a new and easy way to share data, code, pre-regs, & materials. I love using it, both as a researcher and a reader. We hope you do too.
1
30
106
@jpsimmon
Joe Simmons
2 years
Part 3 of Data Falsificada is now up on Data Colada: "The Cheaters Are Out of Order".
4
35
105
@jpsimmon
Joe Simmons
8 years
Open science isn’t just about posting materials/data. It's about allowing actual humans to *easily* access, comprehend, and reproduce them.
1
32
103
@jpsimmon
Joe Simmons
8 years
In response to some unpleasant accusations following the @susandominus NYT piece, I've decided to continue to be a rigorous scientist.
5
22
103
@jpsimmon
Joe Simmons
4 years
Participants were not randomly assigned to condition. n = 12 in the morning condition. n = 20 in the afternoon condition. All significant p-values were between .02 and .05. I'm going to keep running in the morning.
@nytimes
The New York Times
4 years
Depending who you are, exercising in the afternoon may be better than in the morning.
5
8
91
@jpsimmon
Joe Simmons
6 years
Reviewers cannot know what actually happened in a study w/o having access to the study materials. Going forward, I will respond to any review request by asking for the materials & data. If not provided (w/o a good reason), I will decline to review. I encourage everyone to do this.
9
24
92
@jpsimmon
Joe Simmons
6 years
In my MBA class, I ask my students to generate a random integer from 1 to 100. In every year, I find that numbers ending in 3 or 7 are the most common. So, when people are trying to type in numbers that look random, they disproportionately type in numbers that end in 3 or 7.
Tweet media one
4
10
84
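The last-digit pattern described in the tweet above is easy to check by tallying. A minimal sketch in Python, using made-up classroom responses (illustrative numbers, not Simmons's actual data):

```python
from collections import Counter

# Hypothetical responses to "pick a random integer from 1 to 100"
# (illustrative only, not the actual classroom data).
responses = [37, 73, 17, 23, 50, 7, 43, 87, 13, 64, 77, 33, 3, 91, 67]

last_digits = Counter(n % 10 for n in responses)
share_3_or_7 = (last_digits[3] + last_digits[7]) / len(responses)

print(dict(last_digits))
print(f"share ending in 3 or 7: {share_3_or_7:.0%}")  # vs. 20% if truly uniform
```

If people typed truly uniform numbers, only about 20% would end in 3 or 7; a tally like this makes any excess immediately visible.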
@jpsimmon
Joe Simmons
2 years
Fraud has its visible victims. But there are countless invisible ones.
2
8
83
@jpsimmon
Joe Simmons
2 years
The insurance company says they didn't fake the data (see Data Colada [98]). They say they ran the study in 2007-2008 (with way fewer observations) and found nothing. Ariely was already publicly saying the study worked in July 2008. See minute 38:20:
@nickfountain
Nick Fountain
2 years
SCOOP: For years, Dan Ariely has suggested that the fabricated data in one of his most famous studies was doctored before he got it. This week, the insurance company he collaborated with came back swinging -- with lots of details:
2
11
82
@jpsimmon
Joe Simmons
5 years
In my MBA class yesterday, a student asked, "Aren't scientists required to make their data available?" In response I just started screaming.
1
5
76
@jpsimmon
Joe Simmons
2 years
My dept at Wharton is hiring Assistant Professors in the area of decision-making, defined very broadly, including psych, marketing, management, econ, computational social science. Please apply ASAP and definitely by Oct 13.
0
31
68
@jpsimmon
Joe Simmons
2 years
Part 4 (of 4) of Data Falsificada is now up on Data Colada: "Forgetting The Words"
0
12
67
@jpsimmon
Joe Simmons
3 years
In this Data Colada post with @uri_sohn, we examine a field experiment recently published in Nature. Hope you read it and like it. In this thread, just want to talk a bit about the section of our post on “illusory robustness.”
2
18
67
@jpsimmon
Joe Simmons
3 years
I do think that giving (enough) cash to poor mothers is very likely to have beneficial behavioral and developmental effects. And I think those effects should be studied. Just not convinced of this one piece of evidence, or that brain activity is what we should be focused on.
3
3
55
@jpsimmon
Joe Simmons
2 years
The fourth and final installment of the Data Falsificada series will, god willing, be out later this week, probably Friday. The delay is due to miscellaneous life chaos and (my own) personal shortcomings. (These are also the reasons why I haven't responded to your email.)
2
3
57
@jpsimmon
Joe Simmons
2 years
That self-appointed data police vigilante is Steve Haroz (@sharoz). VERY much appreciate his catching this mistake and notifying us.
@uri_sohn
Uri Simonsohn
2 years
Some vigilante from the self-appointed data police downloaded the posted data, tried to reproduce our figure and found an error. Arrow was pointing at the observation in the wrong condition for 'harvard' lower case. We have corrected it. Very sorry.
3
7
61
@jpsimmon
Joe Simmons
2 years
New post showing how meta-analytic averages of dishonesty research include (1) erroneous effect sizes, (2) a study of nuns, and (3) 48 variations of the same study. Averages of invalid estimates are invalid. Averages of disparate studies are meaningless.
3
11
59
@jpsimmon
Joe Simmons
5 years
Without preregistering it is very hard for a researcher to not p-hack. Me included. P-hacking is not fraud. No academic researcher has ever been fired or asked to resign for p-hacking. Researchers who are asked to resign have been caught engaging in intentional misconduct.
6
8
57
@jpsimmon
Joe Simmons
5 years
I continue to think that running prereg replications is the best way to learn (1) how some findings hinge on small (and/or undisclosed) details, (2) what pre-registered results look like, and (3) how hard it is to do good research. Example of #3:
4
12
62
@jpsimmon
Joe Simmons
8 months
New retraction at OBHDP. Gino was an author. She had nothing to do with the anomalies. Study 1B had data "that were unlikely to have occurred naturally". The study was done at Wharton but "they do not know who handled the data."
2
13
59
@jpsimmon
Joe Simmons
3 years
Finally got around to reading this. It is excellent. Informative, useful, easy to read. This article should be assigned in PhD methods courses. Highly recommend.
@andre_quentin
Quentin André
4 years
Happy and excited to see this article appear online! I hope it will help correct common misconceptions in how outliers should be removed, and ultimately lead to lower false-positive rates in the literature.
Tweet media one
1
11
49
@jpsimmon
Joe Simmons
8 years
Our forthcoming article in the Annual Review of Psychology: Psychology's Renaissance (w/ Leif Nelson and @uri_sohn)
1
15
51
@jpsimmon
Joe Simmons
3 years
If you use R, and you like it when your code works, then you should use groundhog. It's pretty amazing. Check out Uri's post about it here:
Tweet media one
2
8
49
@jpsimmon
Joe Simmons
9 years
What I Want Our Field To Prioritize
3
22
43
@jpsimmon
Joe Simmons
4 years
Sorry if someone has said this already, but the incorrect statistics in the recent Ariely expression of concern may simply be a mistake of reporting t-values as F-values. E.g., whereas F(1,84)=2.41 implies p=.124, t(84)=2.41 implies p=.018, as reported in the original article.
Tweet media one
2
3
43
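The t-versus-F arithmetic in the tweet above can be checked directly. A sketch using scipy, with the statistics and degrees of freedom quoted in the tweet:

```python
from scipy import stats

# The tweet's claim: F(1,84) = 2.41 gives p ≈ .124, but t(84) = 2.41
# gives p ≈ .018 — consistent with a t statistic mislabeled as an F.
p_if_F = stats.f.sf(2.41, 1, 84)     # upper tail of F(1, 84) at 2.41
p_if_t = 2 * stats.t.sf(2.41, 84)    # two-sided p for t(84) = 2.41

print(f"p if F-value: {p_if_F:.3f}")  # ≈ 0.124
print(f"p if t-value: {p_if_t:.3f}")  # ≈ 0.018
```

Note that F(1, df) is the square of t(df), so an F of 2.41 corresponds to t ≈ 1.55, which is why the mislabeling changes the p-value so much.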
@jpsimmon
Joe Simmons
4 years
A recent Journal of Marketing Research paper finds that consumers find "progression" weight loss ads (like the one on the right) more credible than before/after ads (like the one on the left). We tried to replicate this finding.
Tweet media one
1
5
42
@jpsimmon
Joe Simmons
5 years
P(rotest)-Hacking.
@ddale8
Daniel Dale
5 years
Tweet media one
Tweet media two
1
3
38
@jpsimmon
Joe Simmons
3 years
“this means that as a default, authors will be required to post their data, materials, and code to a trusted repository before accepted papers are published.” Hell. Yes.
@donandrewmoore
Don Moore
3 years
Check out Lucas's awesome editorial:
1
4
37
@jpsimmon
Joe Simmons
3 years
There were three measures of performance. Two were non-significant and a third was p = .023 (and it looks like there were a few different ways to score that one). The evidence here isn't overwhelming.
@emollick
Ethan Mollick
3 years
Coffee is a mind-enhancing drug that actually works! A randomized, placebo-controlled, between-subject, double-blind study shows the caffeine equivalent of one cup of ☕️ increases problem-solving ability, especially for problems requiring insight.
Tweet media one
Tweet media two
1
3
35
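The multiple-comparisons point above is simple arithmetic. A sketch of a Bonferroni correction: with three performance measures, each p-value is multiplied by three, so the reported p = .023 no longer clears .05 (the two non-significant p-values below are made up for illustration):

```python
# Three performance measures; only one reached p = .023.
# The first two p-values are hypothetical placeholders for the
# non-significant measures mentioned in the tweet.
p_values = [0.41, 0.27, 0.023]

# Bonferroni: multiply each p by the number of tests, capped at 1.
corrected = [min(1.0, p * len(p_values)) for p in p_values]

print(corrected)  # the .023 becomes .069 — no longer below .05
```

Bonferroni is conservative, but even milder corrections (Holm, FDR) would leave .023-out-of-three looking marginal rather than compelling.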
@jpsimmon
Joe Simmons
7 years
Extremely useful Data Colada post by @uri_sohn: Eight things I do to make my open research more findable and understandable
0
10
34
@jpsimmon
Joe Simmons
6 years
Those accused of p-hacking often act as if they have been accused of murder. But p-hacking = not knowing exactly how to analyze your data and so trying more than one way. Not quite murder. Good (and bad) people are p-hackers. If I don't preregister my studies, I will p-hack them.
4
5
27
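The mechanism described above can be sketched with a small simulation: under the null hypothesis each analysis yields a uniform p-value, and a researcher who tries k analyses and reports the best one has a false-positive rate of 1 − 0.95^k rather than .05 (the k values below are illustrative):

```python
import random

random.seed(1)

def false_positive_rate(k_analyses, n_studies=100_000):
    """Simulate n_studies null studies; in each, try k analyses
    (each p ~ Uniform(0,1) under the null) and keep the smallest p.
    Return the fraction of studies that 'find' p < .05."""
    hits = sum(
        min(random.random() for _ in range(k_analyses)) < 0.05
        for _ in range(n_studies)
    )
    return hits / n_studies

print(false_positive_rate(1))  # ≈ 0.05, the nominal rate
print(false_positive_rate(5))  # ≈ 1 - 0.95**5 ≈ 0.23
```

This is why preregistration matters: fixing the analysis in advance sets k back to 1, with no assumption of bad faith needed.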
@jpsimmon
Joe Simmons
4 years
This is such an impressive effort and a great way to audit a literature. I strongly recommend reading this paper. I also recommend watching Leif's Data Colada Seminar presentation, during which he describes this work:
@donandrewmoore
Don Moore
4 years
In 2019, Leif Nelson and I co-taught a class on open science and replication. Everyone in class replicated a published finding examining the psychological effects of scarcity. The results were published today in PNAS
1
4
29
@jpsimmon
Joe Simmons
3 years
It is awesome that so many scientists elect to be transparent. But we can't rely on that in the long term. Being a scientist means showing your work so others can check it. (It also means taking the time to check others' work.) Transparency should be a requirement, not a choice.
0
5
28
@jpsimmon
Joe Simmons
4 years
It was great to talk with two such incredibly thoughtful people about a topic that has not been talked about enough.
@fourbeerspod
Two Psychologists Four Beers
4 years
Episode 73 of Two Psychologists Four Beers is live! Check out “We Need to Talk About Fraud (with Joe Simmons)” at
1
6
29
@jpsimmon
Joe Simmons
2 years
Research assistants need oversight.
@sTeamTraen
Nïck Brown🌻
2 years
New (Gino-related) blog post: Attack of the 50-foot research assistants.
1
0
27
@jpsimmon
Joe Simmons
4 years
Analyses of ALL p-values/z-values can be very misleading, as @uri_sohn wrote about in 2015. To make proper inferences, you need to extract the p-values of primary interest to the original researchers. This is hard. It takes a lot of time and care.
Tweet media one
0
3
23
@jpsimmon
Joe Simmons
3 years
Devastating loss for Wharton. Killer acquisition for Yale. Deb is an amazing advisor, researcher, colleague, and human. Determined to be happy for Deb (and Yale) instead of sad for me (and Wharton).
@deborahasmall
Deborah Small
3 years
Some personal news: Today is my first day on the faculty at Yale!
0
1
28
@jpsimmon
Joe Simmons
2 years
The "Evil Genius" retraction cites "only" Experiment 4. I would bet my house/leg/whatever that the problems go beyond just Experiment 4. But even if it were "only" Experiment 4, this is the right model. ALL papers containing fake data should be retracted.
1
5
28
@jpsimmon
Joe Simmons
7 years
I had forgotten (or never knew) this detail about Festinger & Carlsmith’s (1959) classic cognitive dissonance experiment.
Tweet media one
1
2
24
@jpsimmon
Joe Simmons
7 years
Some failures to replicate are due to differences in design between the original and the replication attempt. But *many* failures to replicate occur simply because the original finding is a false-positive.
0
4
24
@jpsimmon
Joe Simmons
5 years
I'm reflecting on this: Just bc there are no good arguments against commonsense calls for change (e.g., materials/data posting) doesn't mean that change will happen. "Let's keep talking about this" is how those who control our journals prevent change. They are still talking.
1
5
27
@jpsimmon
Joe Simmons
4 years
Impossible question. But 50 years after publication, Kahneman & Tversky's "Belief in the Law of Small Numbers" is still so widely applicable. So many errors & biases arise from the fact that we underestimate sampling error and the power of random chance.
Tweet media one
@westwoodsam1
Sam Westwood
4 years
What’s the best academic paper you’ve ever read and why?
1
3
26
@jpsimmon
Joe Simmons
5 years
My favorite thing about Microsoft Word is how it defaults to assuming that I want my footnotes to be in a different font than the font I am using in the rest of the paper.
1
0
25
@jpsimmon
Joe Simmons
5 years
Another threat to the validity of scientific findings: hidden confounds. Until we require authors to post original materials, to show what they actually did in their studies, we aren't doing science. We are playing dress-up. And rewarding the wrong things.
0
10
23
@jpsimmon
Joe Simmons
4 years
I want this book, but I don't want it used. Should I buy the hardcover or the paperback?
Tweet media one
3
0
25
@jpsimmon
Joe Simmons
5 years
We are launching a Data Colada Seminar Series. It's on Fridays from 12-1 pm Eastern. Starts this week (4/24). Very excited/lucky that our first speaker is Yoel Inbar (@yorl). To sign up to receive the links to the seminars, go here:
0
9
21
@jpsimmon
Joe Simmons
6 years
I always tell my students: If you want your article to have impact, make sure its publication gets announced on the weekend before Christmas. Well done, guys.
0
3
19
@jpsimmon
Joe Simmons
2 years
Good.
0
0
21
@jpsimmon
Joe Simmons
5 years
Nice work. And it is time to start ensuring that all pre-registrations are thoroughly peer reviewed. If a pre-registration is vague/incomplete/not-adhered-to, the author should not be able to claim that the study was pre-registered.
@andre_quentin
Quentin André
5 years
"The Curious Case of the Convenient Outliers: A Twitter Thread". A recent paper in a leading psych. journal reports a pre-registered experiment with significant results: The "Predicted High" condition is significantly different from the two "Predicted Low" conditions.
Tweet media one
2
5
22
@jpsimmon
Joe Simmons
10 years
Powerful new paper showing that disfluent fonts probably don't do anything, including things Leif and I had claimed. http://t.co/mkCT9eguFH
2
13
19
@jpsimmon
Joe Simmons
9 years
Here is my response to @Eli_Finkel and Paul Eastwick's take on prioritizing replicability.
0
11
18
@jpsimmon
Joe Simmons
2 years
@morewedge Some folks are trying to get a process in place. In the meantime, co-authors should feel free to reach out.
0
0
20
@jpsimmon
Joe Simmons
8 years
Worth revisiting - Three Ideas For Civil Criticism: @uri_sohn.
0
10
17
@jpsimmon
Joe Simmons
2 years
is still useful
Tweet media one
0
2
15
@jpsimmon
Joe Simmons
5 years
This paper by @minah_jung, Alice Moon, and Leif Nelson is simply awesome. Every study - and there are *many* - taught me something interesting that I didn't know before. Psychological science at its best.
0
1
17
@jpsimmon
Joe Simmons
7 years
New on Data Colada: Don't Trust Internal Meta-Analysis. Arguably our most important post. I super wish IMA was a good idea, but it is catastrophic, dramatically increasing false-positive rates. {Insert frowny face here}.
1
9
14
@jpsimmon
Joe Simmons
4 years
It is also worth noting that the person (@giladfeldman) who reported these erroneous stats to the journal was careful to point out that he did successfully replicate the results of Study 1 of this paper.
Tweet media one
2
2
16
@jpsimmon
Joe Simmons
4 years
@StuartJRitchie They find that left v right leads to a 211-calorie reduction from a base of 865 calories (24% effect). But a big RCT in a similar setting finds that NO labels vs. calorie labels (i.e., a bigger intervention) = much smaller effect (3%). 24% is too big.
1
2
15
@jpsimmon
Joe Simmons
3 years
An indefensible policy.
0
0
10
@jpsimmon
Joe Simmons
10 years
How to evaluate replication results: A new, thoughtful, and *much* better approach from @uri_sohn. http://t.co/MTqQ4MZqTw
0
8
13
@jpsimmon
Joe Simmons
3 years
New on Data Colada. Meta-analytic means are (very) often uninformative or misleading, as it does not make sense to average invalid results together with valid results, or to average across studies with different manipulations or measures.
0
5
13
@jpsimmon
Joe Simmons
9 years
What if the NY Times reported every HUMAN-caused traffic fatality on its front page?
0
3
9
@jpsimmon
Joe Simmons
11 years
New on the Data Colada - MTurk vs. The Lab: Either Way We Need Big Samples http://t.co/S9h5zb6lzB
1
9
14
@jpsimmon
Joe Simmons
3 years
Yeah, but psychology is 220% complete.
@xkcdComic
XKCD Comic
3 years
The Last Molecule
Tweet media one
0
1
12
@jpsimmon
Joe Simmons
11 years
The ad hominem argument: A good sign someone is on the wrong side of the truth MT @yorl @BrianNosek http://t.co/j42kPjNXjG
Tweet media one
2
15
12
@jpsimmon
Joe Simmons
8 years
We were asked to write a short piece about "False-Positive Psychology". Now in press, w/ Leif Nelson & @uri_sohn
0
8
11
@jpsimmon
Joe Simmons
2 years
Tweet media one
0
0
11
@jpsimmon
Joe Simmons
6 years
New on Data Colada: Descriptions of serious and unexplainable data anomalies in a Psych Science article. In my opinion, an "expression of concern" is not enough in this circumstance; the article should have been retracted.
0
2
11
@jpsimmon
Joe Simmons
5 years
Why are people's confidence intervals too narrow? Don Moore (@donandrewmoore) is going to help us figure it out tomorrow, in the Data Colada seminar series (12 pm Eastern). Very excited. To get the link to this seminar (and others), subscribe here:
0
2
12
@jpsimmon
Joe Simmons
10 years
It is (selfishly) smart, and now super easy, for researchers to pre-register their studies:
0
6
12
@jpsimmon
Joe Simmons
7 years
New on Data Colada: Leif describes why it is usually impossible to compute an average effect size.
0
2
10
@jpsimmon
Joe Simmons
6 years
Agree 100%. Also, I can tell you from experience that when you go through the "hassle" of making your data and materials available, you catch errors before you submit your paper. That's an important positive side effect of this requirement.
@andre_quentin
Quentin André
6 years
@john_slp @jpsimmon 2/2: One afternoon compiling the data and writing a code book for it? We ask for transparency and clarity in theory and writing, why not ask the same when it comes to data?
0
2
10
@jpsimmon
Joe Simmons
8 years
Jerry Nelson, Leif Nelson's dad, recently passed away. He was a great scientist and a great man.
0
3
11
@jpsimmon
Joe Simmons
10 years
New on Data Colada: Don't Analyze ALL P-values http://t.co/H3caDoyOpD
3
14
10
@jpsimmon
Joe Simmons
5 years
We ran another replication of a study that was recently published in the Journal of Consumer Research.
0
1
12
@jpsimmon
Joe Simmons
8 years
1. Looking for methodological solutions that fix *every* problem will be unproductive. Different problems require different solutions.
1
1
7
@jpsimmon
Joe Simmons
4 years
@MetaMethodsPH I can't speak to everything in this thread, but I have looked into this a little, and it seems like the standard deviations reported in the tables must really be standard *errors*.
1
0
10
@jpsimmon
Joe Simmons
3 years
Does it make sense to average the effects of (1) a “dish-of-the-day” label and (2) reminders to go to bed at a decent hour? No. But this is something meta-analysts do all the time. And readers accept these averages as truth.
0
3
10
@jpsimmon
Joe Simmons
7 years
New on Data Colada: Pilot-Dropping Backfires (So Daryl Bem Probably Did Not Do It)
0
2
7
@jpsimmon
Joe Simmons
6 years
"no direct proof that the data were tampered with." Maybe no "direct proof" (no one *saw* it happen), but there is *overwhelming* evidence. This retraction was long overdue. Thanks to Leif, Uri (@uri_sohn), Frank Yu & anon others for getting this done.
1
2
9
@jpsimmon
Joe Simmons
5 years
@I_Evangelidis Agreed: some p-hacking is intentional. But (1) I still believe that most of it is unintentional, even now; I still catch myself doing it sometimes and I've been on the warpath against it for a decade; and (2) even if intentional, you can't prove it or fire someone for it.
1
0
10
@jpsimmon
Joe Simmons
4 years
Excellent episode! And an excuse for me to plug "Pre-registration: Why and How," a recent paper with Leif and @uri_sohn, in which we discuss many of the issues that Yoel and Alexa discuss here, though in undoubtedly a less fun way.
@fourbeerspod
Two Psychologists Four Beers
4 years
Episode 76 of Two Psychologists Four Beers is live! Check out “Preregistration (What is it Good For)” at
0
2
8
@jpsimmon
Joe Simmons
9 years
When Does Making Detailed Predictions Make Predictions Worse? My forthcoming paper with Theresa Kelly.
0
1
9
@jpsimmon
Joe Simmons
12 years
Our p-curve paper and app are now publicly available: http://t.co/8RaMH6tlpv
0
10
8
@jpsimmon
Joe Simmons
8 years
New from @uri_sohn on Data Colada: Interactions in Logit Regressions: Why Positive May Mean Negative.
0
6
9
@jpsimmon
Joe Simmons
4 years
Universities and journal editors act solely in their own self-interest. If you want them to do the right thing you have to make it in their interest. The only way to do that is to go public (or threaten it) but that puts you at risk and it may not work. It’s the Wild West.
@JoeHilgard
Joe Hilgard, data guy
4 years
Eighteen months ago, I tried reporting him to his institution. A year ago, I tried asking editors for retractions. Today, he’s publishing faster than ever.
1
2
9
@jpsimmon
Joe Simmons
10 months
@lakens @donandrewmoore @PNASNews Doesn’t answer the question. What of consequence isn’t preregistered here? If you’re going to impugn these specific pre-regs as totally worthless you should be able to answer that. Where is the p-hacking that another template would’ve caught?
1
0
8