josema
@J_river9
Followers 214 · Following 2K · Media 18 · Statuses 730
(Bayesian) Statistician interested in projects where statistics, machine learning, and psychometrics contribute to sound evidence and decision-making. Only the t
Antwerp, Belgium
Joined April 2011
Hey! My latest #research on intelligibility and entropy scores, done with @svawa and Steven Gillis, is now published by @SpringerNature in #psynomBRM @Psychonomic_Soc. Check it out here: https://t.co/kPBoDA5A7p. But let me share what it's about! #EdubronUA (1/n)
There are individual-group conflicts in all contexts. But academia is the one I personally experience in which people routinely justify doing the wrong thing because the right thing is hard or personally costly. We do the right thing because it is right, not because it is easy!
"This study aimed to demonstrate the efficacy of the Bayesian beta-proportion . . . GLLAMM" Espejo et al. (in press). Everything, altogether, all at once: Addressing data challenges when measuring speech intelligibility through entropy scores https://t.co/vs0kzT77MT
link.springer.com · Behavior Research Methods: “When investigating unobservable, complex traits, data collection and aggregation processes can introduce distinctive features to the data such as boundedness, ...”
The dissertation by Dr Rachel Los includes not only acknowledgements but also... anti-acknowledgements. https://t.co/xGddkc94k3
I genuinely struggle to understand how you can be a tenured psychology professor and think you can “reanalyse” a meta-analysis by literally averaging the included effects. Is this just how bad stats education was pre-2011?
Ultimately, the insights from this study have implications for researchers and data analysts interested in quantitatively measuring complex, unobservable constructs, while accurately predicting empirical phenomena (18/n)
The study provided an illustrative example for investigating research hypotheses within the model’s framework. However, it did not offer a comprehensive evaluation of all factors influencing intelligibility (17/n)
The study assumed that the transcription task in Boonen et al. (2023) was correctly executed and expected the estimated latent variable to reflect the overall speech intelligibility. However, the study did not address the broader epistemological connection between the two (16/n)
As with any research, the authors acknowledge several limitations and suggest avenues for future exploration. Here, we highlight two of the most important concerns (15/n)
Nevertheless, despite the lack of unequivocal support for a single hypothesis, the divided support among models suggested that statistical issues, such as a small, non-representative sample, may have hindered the ability to distinguish among individuals and among competing models (14/n)
Results: multiple models were supported for the observed entropy scores. This indicated that multiple hypotheses regarding speaker-related factors were viable for the data, with some presenting contradictory conclusions about their influence on intelligibility (13/n)
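[Editor's note] To make "divided support among models" concrete, here is a minimal sketch of turning predictive scores for competing models into normalized weights. The model names and ELPD values below are hypothetical, not taken from the paper:

```python
import math

# Hypothetical expected log predictive densities (ELPD) for three candidate
# models, each encoding a different hypothesis about speaker-related factors.
elpd = {"age-only": -120.0, "age+hearing": -118.5, "full": -118.9}

# Pseudo-BMA-style weights: exponentiate (shifted for numerical stability)
# and normalize so the weights sum to 1.
best = max(elpd.values())
z = {m: math.exp(v - best) for m, v in elpd.items()}
total = sum(z.values())
weights = {m: z[m] / total for m in z}
```

When no single weight dominates, as in this toy example, the data do not clearly favor one hypothesis, which is the situation the thread describes.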
Recognizing that research involves developing and comparing hypotheses, RQ3 illustrated how to examine hypotheses within the model’s framework. Specifically, it explored the influence of speaker-related factors on the newly estimated latent intelligibility (12/n)
Results: the Bayesian beta-proportion GLLAMM provided the complete posterior distribution of speakers’ potential intelligibility. This allowed for the calculation of summaries, individual rankings, and comparisons among selected speakers (11/n)
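[Editor's note] A minimal sketch of how a full posterior enables summaries, rankings, and pairwise comparisons. The speaker labels and draws below are hypothetical, not the paper's estimates:

```python
import statistics

# Hypothetical posterior draws of latent intelligibility for three speakers.
draws = {
    "s1": [0.20, 0.30, 0.25, 0.35, 0.28],
    "s2": [0.50, 0.45, 0.55, 0.60, 0.52],
    "s3": [0.40, 0.38, 0.42, 0.47, 0.41],
}

# Posterior mean per speaker, and a ranking from highest to lowest.
means = {s: statistics.mean(d) for s, d in draws.items()}
ranking = sorted(means, key=means.get, reverse=True)

# Posterior probability that speaker s2 exceeds s3, computed draw by draw.
p_s2_gt_s3 = sum(a > b for a, b in zip(draws["s2"], draws["s3"])) / len(draws["s2"])
```

Because every quantity is a function of the same posterior draws, uncertainty propagates automatically into the rankings and comparisons.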
Recognizing that intelligibility is a key indicator of oral communication competence (Kent et al., 1994, https://t.co/t60xorlSFb), RQ2 explored how the proposed model estimates speakers’ latent intelligibility from manifest entropy scores (10/n)
Results: the beta-proportion GLLAMM consistently outperformed the normal LMM in predictions. The findings also highlighted that models neglecting measurement error and boundedness faced underfitting and misspecification issues, even with robust features included (9/n)
Given the importance of accurate predictions for developing useful practical models (Shmueli & Koppius, 2011, https://t.co/q1lg0m0Qxo), RQ1 assessed whether the beta-proportion GLLAMM provided more accurate predictions than the widely used normal LMM (8/n)
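[Editor's note] As a toy illustration of an out-of-sample comparison, plain RMSE already captures the idea. The paper's actual predictive criteria may differ, and the held-out scores and predictions below are invented:

```python
import math

def rmse(y, yhat):
    """Root mean squared error between observed and predicted values."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y))

# Hypothetical held-out entropy scores and predictions from two fitted models.
y_test = [0.10, 0.40, 0.25, 0.80, 0.55]
pred_gllamm = [0.12, 0.38, 0.28, 0.75, 0.50]   # beta-proportion GLLAMM
pred_lmm = [0.20, 0.30, 0.40, 0.60, 0.45]      # normal LMM

gllamm_better = rmse(y_test, pred_gllamm) < rmse(y_test, pred_lmm)
```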
For this purpose, the study reexamined data from transcriptions of spontaneous speech samples initially collected by Boonen et al. (2023): https://t.co/ZuMBQuRYHH. These data were aggregated into entropy scores and analyzed using the Bayesian beta-proportion GLLAMM (7/n)
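[Editor's note] For readers unfamiliar with entropy scores: one common way to aggregate listener transcriptions of an utterance into a single score is normalized Shannon entropy. This is a sketch; the exact aggregation used in the paper may differ:

```python
import math
from collections import Counter

def entropy_score(transcriptions):
    """Normalized Shannon entropy of listener transcriptions of one utterance.
    0 = all listeners agree (fully intelligible),
    1 = every listener heard something different (maximal disagreement).
    Illustrative only; assumes at least two transcriptions."""
    counts = Counter(transcriptions)
    n = len(transcriptions)
    h = -sum((c / n) * math.log(c / n) for c in counts.values())
    return h / math.log(n)  # divide by the maximum possible entropy, log(n)

full_agreement = entropy_score(["dog", "dog", "dog", "dog"])
full_disagreement = entropy_score(["dog", "fog", "log", "bog"])
```

Note that the resulting score lives in the closed interval [0, 1], which is exactly the boundedness issue the thread discusses.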
Our study aimed to showcase how effectively the Bayesian beta-proportion generalized linear latent and mixed model (beta-proportion GLLAMM) handles entropy score features when investigating research hypotheses related to speech intelligibility (6/n)
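[Editor's note] Schematically, a beta-proportion GLLAMM of this kind can be written with the mean-precision beta parameterization; the notation below is illustrative, not the paper's exact specification:

```latex
y_{ij} \mid \mu_{ij}, \phi \sim \mathrm{Beta}\!\left(\mu_{ij}\,\phi,\; (1-\mu_{ij})\,\phi\right),
\qquad
\operatorname{logit}(\mu_{ij}) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + \lambda\,\eta_j,
\qquad
\eta_j \sim \mathcal{N}(0, \psi),
```

where \(y_{ij}\) is the entropy score for utterance \(i\) of speaker \(j\), \(\phi\) is a precision parameter, and \(\eta_j\) is the speaker's latent intelligibility.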
Additionally, overlooking measurement error, clustering, outliers, or heteroscedasticity can result in biased and less precise parameter estimates, ultimately diminishing the statistical power of models (5/n)
Neglecting boundedness can lead to underfitting at best and misspecification at worst. Both issues can hinder the model’s ability to generalize when confronted with new data and result in inconsistent and less precise parameter estimates (4/n)
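[Editor's note] A tiny sketch of the boundedness problem, on synthetic numbers: a normal model fitted to scores confined to [0, 1] can place predictive mass below 0:

```python
import statistics

# Synthetic entropy-like scores, all inside [0, 1] and piled up near 0.
scores = [0.02, 0.05, 0.01, 0.10, 0.30, 0.04, 0.08, 0.02, 0.15, 0.03]

mu = statistics.mean(scores)
sd = statistics.stdev(scores)

# Approximate 95% predictive interval under a fitted normal model.
lo, hi = mu - 2 * sd, mu + 2 * sd
print(f"normal predictive interval: [{lo:.3f}, {hi:.3f}]")
```

The lower endpoint is negative even though no score can be, which is the underfitting/misspecification symptom described above; a beta likelihood keeps predictions inside the unit interval by construction.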