
Joe Simmons (@jpsimmon)
Prof. at Wharton. I research/teach decision making & research methods. Blog: https://t.co/XOI7aJXW1d ; Easy Pre-registration: https://t.co/qQwukhfPZm
Joined November 2012 · 7K Followers · 1K Following · 27 Media · 599 Statuses
In the file that Ariely sent to coauthor Nina Mazar back in 2011 (3 yrs after the study), the effect was in the wrong direction. When Nina said so, Dan told her that he had accidentally flipped the conditions when preparing the file. See Footnote 14 in
Ariely, in a statement, now says: "Getting the data file was the extent of my involvement with the data."
In response to some unpleasant accusations following the @susandominus NYT piece, I've decided to continue to be a rigorous scientist.
Participants were not randomly assigned to condition. n = 12 in the morning condition. n = 20 in the afternoon condition. All significant p-values were between .02 and .05. I'm going to keep running in the morning.
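A quick aside on why a run of significant p-values between .02 and .05 is the tell here: under the null hypothesis p-values are uniform, so significant p-values mostly land just under .05, whereas a real effect piles them up near zero. A minimal simulation sketch (Python with scipy; the sample sizes match the tweet, the effect size and everything else are illustrative):

```python
# Sketch: where significant p-values land under a null vs. a real effect.
# n1 = 12 and n2 = 20 mirror the tweet; the effect size is illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def significant_pvals(effect, sims=10_000, n1=12, n2=20):
    """Return the p-values below .05 from `sims` simulated two-sample t-tests."""
    ps = []
    for _ in range(sims):
        a = rng.normal(0.0, 1.0, n1)      # "morning" runs
        b = rng.normal(effect, 1.0, n2)   # "afternoon" runs
        p = stats.ttest_ind(a, b).pvalue
        if p < .05:
            ps.append(p)
    return np.array(ps)

for effect in (0.0, 0.8):                 # null vs. a large true effect
    ps = significant_pvals(effect)
    share = np.mean((ps > .02) & (ps < .05))
    print(f"effect = {effect}: share of significant p-values in (.02, .05) = {share:.2f}")
# Under the null, roughly 60% of significant p-values fall in (.02, .05);
# under a real effect they concentrate near zero -- the logic behind p-curve.
```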
The insurance company says they didn't fake the data (see Data Colada [98]). They say they ran the study in 2007-2008 (with way fewer observations) and found nothing. Ariely was already publicly saying the study worked in July 2008. See minute 38:20:
SCOOP: For years, Dan Ariely has suggested that the fabricated data in one of his most famous studies was doctored before he got it. This week, the insurance company he collaborated with came back swinging -- with lots of details:
That self-appointed data police vigilante is Steve Haroz (@sharoz). VERY much appreciate his catching this mistake and notifying us.
Some vigilante from the self-appointed data police downloaded the posted data, tried to reproduce our figure, and found an error. The arrow was pointing at the observation in the wrong condition for 'harvard' lower case. We have corrected it. Very sorry.
Finally got around to reading this. It is excellent. Informative, useful, easy to read. This article should be assigned in PhD methods courses. Highly recommend.
Happy and excited to see this article appear online! I hope it will help correct common misconceptions in how outliers should be removed, and ultimately lead to lower false-positive rates in the literature.
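One concrete way outlier handling drives false positives, offered here as a hedged illustration rather than as the paper's own demonstration: if an analyst can report either the raw test or a trimmed test, whichever comes out significant, the effective false-positive rate exceeds the nominal 5%. A minimal sketch in Python:

```python
# Sketch: analytic flexibility in outlier exclusion inflates false positives.
# Both groups come from the SAME null distribution; the "flexible" analyst
# reports whichever of two analyses (raw or 2-SD trimmed) is significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def trim(x):
    """Drop observations more than 2 SDs from their own group's mean."""
    return x[np.abs(x - x.mean()) <= 2 * x.std()]

sims, n = 10_000, 50
fp_raw = fp_flexible = 0
for _ in range(sims):
    a, b = rng.normal(size=n), rng.normal(size=n)   # no true effect
    p_raw = stats.ttest_ind(a, b).pvalue
    p_trim = stats.ttest_ind(trim(a), trim(b)).pvalue
    fp_raw += p_raw < .05
    fp_flexible += (p_raw < .05) or (p_trim < .05)

print(f"false-positive rate, raw test only:      {fp_raw / sims:.3f}")       # ~.05
print(f"false-positive rate, flexible exclusion: {fp_flexible / sims:.3f}")  # > .05
```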
There were three measures of performance. Two were non-significant, and the third was p = .023 (and it looks like there were a few different ways to score that one). The evidence here isn't overwhelming.
Coffee is a mind-enhancing drug that actually works! A randomized, placebo-controlled, between-subject, double-blind study shows the caffeine equivalent of one cup of ☕️ increases problem-solving ability, especially for problems requiring insight.
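To make the three-measures concern concrete: with three outcome measures, a Bonferroni-style correction puts the per-test threshold at .05/3 ≈ .0167, which p = .023 does not clear. A toy check (illustrative only; nothing here comes from the paper's own analysis):

```python
# Bonferroni check for three outcome measures (illustrative, not the
# authors' analysis): p = .023 clears .05 but not the corrected threshold.
alpha, k, p_observed = .05, 3, .023
threshold = alpha / k
print(f"corrected threshold: {threshold:.4f}")                     # 0.0167
print(f"significant after correction? {p_observed < threshold}")   # False
```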
This is such an impressive effort and a great way to audit a literature. I strongly recommend reading this paper. I also recommend watching Leif's Data Colada Seminar presentation, during which he describes this work:
In 2019, Leif Nelson and I co-taught a class on open science and replication. Everyone in class replicated a published finding examining the psychological effects of scarcity. The results were published today in PNAS
Impossible question. But 50 years after publication, Kahneman & Tversky's "Belief in the Law of Small Numbers" is still so widely applicable. So many errors & biases arise from the fact that we underestimate sampling error and the power of random chance.
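The underestimated sampling error is easy to make visible in a few lines. A minimal sketch showing how much sample means swing at small n (the numbers here are generic, not from the paper):

```python
# Sketch: sampling variability of the mean at different sample sizes.
# True mean = 0, true SD = 1; small samples swing far more than intuition expects.
import numpy as np

rng = np.random.default_rng(2)
for n in (10, 100, 1000):
    means = rng.normal(0, 1, size=(10_000, n)).mean(axis=1)
    lo, hi = np.quantile(means, [.025, .975])
    print(f"n = {n:>4}: 95% of sample means fall in [{lo:+.2f}, {hi:+.2f}]")
# The n = 10 interval is ~10x wider than the n = 1000 one (SE = 1/sqrt(n)).
```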
Nice work. And it is time to start ensuring that all pre-registrations are thoroughly peer reviewed. If a pre-registration is vague/incomplete/not-adhered-to, the author should not be able to claim that the study was pre-registered.
"The Curious Case of the Convenient Outliers: A Twitter Thread". A recent paper in a leading psych. journal reports a pre-registered experiment with significant results: The "Predicted High" condition is significantly different from the two "Predicted Low" conditions.
Powerful new paper showing that disfluent fonts probably don't do anything, including things Leif and I had claimed. http://t.co/mkCT9eguFH
@morewedge Some folks are trying to get a process in place. In the meantime, co-authors should feel free to reach out.
This paper by @minah_jung, Alice Moon, and Leif Nelson is simply awesome. Every study - and there are *many* - taught me something interesting that I didn't know before. Psychological science at its best.
It is also worth noting that the person (@giladfeldman) who reported these erroneous stats to the journal was careful to point out that he did successfully replicate the results of Study 1 of this paper.
@StuartJRitchie They find that left v right leads to a 211-calorie reduction from a base of 865 calories (24% effect). But a big RCT in a similar setting finds that NO labels vs. calorie labels (i.e., a bigger intervention) = much smaller effect (3%). 24% is too big.
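The arithmetic behind the comparison, spelled out:

```python
# The effect sizes being compared in the tweet.
reduction, baseline = 211, 865              # calories, per the paper under discussion
print(f"claimed effect: {reduction / baseline:.1%}")   # ~24.4%
# The large RCT's no-labels-vs-labels contrast (a stronger manipulation)
# yields ~3%, which is why a 24% effect from a weaker one looks implausible.
```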
How to evaluate replication results: A new, thoughtful, and *much* better approach from @uri_sohn. http://t.co/MTqQ4MZqTw
New on the Data Colada - MTurk vs. The Lab: Either Way We Need Big Samples http://t.co/S9h5zb6lzB
The ad hominem argument: A good sign someone is on the wrong side of the truth MT @yorl @BrianNosek http://t.co/j42kPjNXjG
Why are people's confidence intervals too narrow? Don Moore (@donandrewmoore) is going to help us figure it out tomorrow, in the Data Colada seminar series (12 pm Eastern). Very excited. To get the link to this seminar (and others), subscribe here:
@MetaMethodsPH I can't speak to everything in this thread, but I have looked into this a little, and it seems like the standard deviations reported in the tables must really be standard *errors*.
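For readers following along, the relationship that makes this diagnosis possible is SE = SD/√n: reported "SDs" that look implausibly small for the response scale are often standard errors. A sketch with made-up numbers (nothing below is taken from the thread in question):

```python
# Hypothetical SD-vs-SE diagnosis (numbers invented for illustration).
# A "SD" of 0.15 on a 1-7 scale with n = 100 is implausible as an SD,
# but sensible as a standard error, since SE = SD / sqrt(n).
import math

n, reported_value = 100, 0.15
implied_sd = reported_value * math.sqrt(n)   # if 0.15 were actually an SE
print(f"implied SD if 0.15 is an SE: {implied_sd:.2f}")   # 1.50, plausible for a 1-7 scale
```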
@I_Evangelidis Agreed: some p-hacking is intentional. But (1) I still believe that most of it is unintentional, even now; I still catch myself doing it sometimes and I've been on the warpath against it for a decade; and (2) even if intentional, you can't prove it or fire someone for it.
Excellent episode! And an excuse for me to plug "Pre-registration: Why and How," a recent paper with Leif and @uri_sohn, in which we discuss many of the issues that Yoel and Alexa discuss here, though undoubtedly in a less fun way.
Episode 76 of Two Psychologists Four Beers is live! Check out “Preregistration (What is it Good For)” at
Universities and journal editors act solely in their own self-interest. If you want them to do the right thing you have to make it in their interest. The only way to do that is to go public (or threaten it) but that puts you at risk and it may not work. It’s the Wild West.
Eighteen months ago, I tried reporting him to his institution. A year ago, I tried asking editors for retractions. Today, he’s publishing faster than ever.
@lakens @donandrewmoore @PNASNews Doesn't answer the question. What of consequence isn't preregistered here? If you're going to impugn these specific pre-regs as totally worthless, you should be able to answer that. Where is the p-hacking that another template would've caught?