Matthew Tyler
@_mdtyler
325 Followers · 777 Following · 7 Media · 85 Statuses
Assistant Professor @RicePoliSci.
Houston, TX
Joined January 2016
A new release of the ANES Cumulative Data File is now available for download. Over 200 variables have been updated to include data from the 2024 Time Series. Visit the study page to download data and documentation: https://t.co/Ax4Y5YvvfL
ANES Data Release! https://t.co/pyraGPqqyM... The 3-wave ANES panel is now available. It merges data from 3 election studies (2016-2020-2024), the first time the ANES has collected interviews of the same respondents across 3 presidential elections.
Sharing an event that may be of interest to this community! Rare traits like support for violence, conspiracy beliefs, and unsafe health behaviors are really hard to measure! On Thursday November 13, learn from experts about the problem and solutions. https://t.co/QnBBTwDEkz
Low-quality respondents inflate the prevalence of rare traits & correlations between them. Learn from experts about the problem & solutions.
The full release of the ANES 2024 Time Series #Data is available for download. Read more here: https://t.co/OmbHCApuIM
No defiers is a required assumption for identification of the LATE in IV analysis. If that assumption is false, the best you can do is calculate bounds for the LATE. But what happens if we limit the maximum proportion of defiers? For instance, what if I don't believe it is > 10%?
Niche nerdy tweet incoming: I’m not at all sure about this instrument. The “no defiers” assumption seems unlikely to hold — is there really no possible couple who would have divorced if the husband’s workplace stayed the same but not if it had hired more women?
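The thread above can be illustrated with a minimal simulation (all numbers here are assumptions for illustration, not from the thread): when defiers exist, the Wald/IV estimand is a weighted contrast of complier and defier effects, not the complier LATE.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical principal strata: 50% compliers, 10% defiers,
# 20% always-takers, 20% never-takers (assumed shares).
types = rng.choice(["complier", "defier", "always", "never"],
                   size=n, p=[0.50, 0.10, 0.20, 0.20])
z = rng.integers(0, 2, size=n)  # randomized binary instrument

# Treatment uptake by stratum: compliers follow z, defiers do the opposite.
d = np.where(types == "always", 1,
    np.where(types == "never", 0,
    np.where(types == "complier", z, 1 - z)))

# Assumed treatment effects: +2.0 for compliers, -1.0 for defiers, 0 otherwise.
effect = np.select([types == "complier", types == "defier"], [2.0, -1.0], 0.0)
y = rng.normal(size=n) + effect * d

# Wald / IV estimate: ITT on Y divided by ITT on D.
itt_y = y[z == 1].mean() - y[z == 0].mean()
itt_d = d[z == 1].mean() - d[z == 0].mean()
wald = itt_y / itt_d
print(wald)  # ≈ (0.5*2.0 - 0.1*(-1.0)) / (0.5 - 0.1) = 2.75, not the complier LATE of 2.0
```

Capping the defier share (say, at 10% as in the tweet) would shrink the identified region for the complier LATE relative to assumption-free bounds; this sketch only shows why the no-defiers violation matters in the first place.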
From our new issue: "Testing the Robustness of the ANES Feeling Thermometer Indicators of Affective Polarization" by Matthew Tyler (@_mdtyler) and Shanto Iyengar. #ASPRNewIssue
https://t.co/cInNM0vPR3
During this year's STaRT@Rice, I’ll be teaching a workshop on missing data. You should join! Register here: https://t.co/w9daLZLRlY
#STaRTatRice2024 @STaRT_at_Rice @RiceSocSci
Our takeaway is that we should be careful using surveys to measure low-prevalence attitudes. When we account for screener error, we find that naive estimates of support for political violence should be seen as, at best, a loose upper bound on the truth.
Using this data, we find that the identified region for the mean “Was the shooter justified?” expands to include <1% even if we only allow a small degree of screener error. (For reasons discussed in the paper, 15% is a decent guess for the false positive rate.)
To make sure respondents were engaged with the issue of political violence, we screened them by asking them to recall which state the shooting took place in (Iowa). This fact was repeated three times in the stimulus, including in the headline and in a quote by the shooter.
If we ignore respondent engagement, then we estimate that about 10% ‼️ of respondents think the shooter is justified. This is on par with surveys featured in recent media coverage.
nytimes.com: "A nationwide poll last month found that 10 percent of those surveyed said the 'use of force is justified to prevent Donald Trump from becoming president.'"
We applied the new method to survey data that we collected in 2021. We asked respondents whether they supported a (fabricated) shooter who attempted to murder a political opponent. Respondents were co-partisan with the shooter, but the victim was the opposite party.
We develop a statistical toolkit to account for screener error. Using a partial identification approach, we produce lower/upper bounds for a population parameter instead of a misleading point estimate. We calculate the bounds using established numerical optimization techniques.
Survey researchers know this and use various screeners (e.g., attention checks) to remove low-effort respondents from the respondent pool. However, these screeners are subject to measurement error themselves. Effort can also vary throughout a survey, confounding screeners.
It’s no secret that survey respondents often satisfice, or provide low-effort survey responses. This isn’t always a big deal, but satisficing usually inflates low-prevalence characteristics (i.e., overestimates fringe beliefs).
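The inflation mechanism is just mixture arithmetic. With illustrative numbers of my own choosing (not from the thread): a 1% trait plus a modest share of random responders yields a several-fold overestimate.

```python
# Illustrative arithmetic (assumed numbers, not from the paper):
# the rare belief has true prevalence 1%, but 15% of respondents
# satisfice and pick 'yes' half the time on a yes/no item.
true_p = 0.01
satisfice_share = 0.15
random_yes_rate = 0.50

observed = (1 - satisfice_share) * true_p + satisfice_share * random_yes_rate
print(round(observed, 4))  # 0.0835 -> an 8.4% estimate for a 1% trait
```

The same arithmetic barely moves a 50% trait, which is why satisficing is mostly a problem for low-prevalence characteristics.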
⏰Alarmed by polls reported in the @nytimes suggesting that millions support political violence? Those estimates are inflated. New working paper: surveys provide imprecise estimates of how many think political violence is justified; high-quality surveys are consistent with <1%.
New paper w/ @joshclinton proposing a method to improve small-area public opinion estimates by calibrating to ground-truth data on auxiliary outcomes. https://t.co/9JpqjmRUts
Important new article showing how bad math is used in election conspiracy claims, and how it can be debunked.