Maria Glymour Profile
Maria Glymour

@MariaGlymour

Followers 8K · Following 3K · Media 169 · Statuses 4K

Epidemiologist formerly at UCSF, now BUSPH. I'm trying alternative social media platforms: @mariaglymour.bsky.social, and @[email protected].

Boston, MA
Joined August 2012
@MariaGlymour
Maria Glymour
2 years
Thanks to @ManlyEpic for a great talk -- very useful paper and an incredibly important data resource @hrsisr. Thanks to @NIHAging for supporting HRS and MELODEM.
Careful criteria for dementia vs. MCI vs. normal. Dementia prevalence among people age 65 was similar to that of Western countries (this seems circular to me). Prevalence showed significant patterning along racial/ethnic/socioeconomic lines.
Applying a -1.5 SD threshold in the healthy sample implies a 7% false positive rate, i.e., specificity of 93%. (Side note: steam coming from Maria's brain as I try to think through whether we learn anything re sensitivity, or how we could.)
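The arithmetic behind that 7% figure: if transformed scores in the healthy sample are roughly standard normal, the share falling below -1.5 SD is just the normal CDF evaluated at -1.5. A minimal sketch using Python's stdlib:

```python
from statistics import NormalDist

# P(Z < -1.5) for a standard normal: the expected false positive
# rate when a -1.5 SD cutoff is applied to an unimpaired sample.
fpr = NormalDist().cdf(-1.5)
specificity = 1 - fpr

print(f"false positive rate: {fpr:.3f}")  # ~0.067, i.e. ~7%
print(f"specificity: {specificity:.3f}")  # ~0.933
```

This only tells us about specificity; as the tweet notes, sensitivity depends on where impaired scores fall, which the cutoff alone does not determine.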
Q re why -1.5? Prior work by Bondi & Jak showing this is the best point to balance sensitivity and specificity. Lots of discussion of this in MELODEM; the key is that the norms are defined on people *without* dementia, then applied to the population.
Regression of Blom-transformed cognitive scores on sociodemographics included main effects + 2-way interactions. Then used this to calculate a predicted value for each member of the full sample. For each person, assessed whether their actual score was more than 1.5 SD below their predicted value.
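A sketch of that norming step on synthetic data (variable names and coefficients are illustrative, not from the hcap22 code): fit OLS on the norms sample with main effects plus a 2-way interaction, predict for anyone, and flag a score more than 1.5 residual SDs below its prediction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic robust-norms sample: transformed cognitive score depends
# on age and education plus noise (coefficients made up for illustration).
n = 500
age = rng.uniform(65, 90, n)
edu = rng.uniform(8, 20, n)
score = 2.0 - 0.03 * age + 0.05 * edu + rng.normal(0, 0.5, n)

# OLS with main effects + the 2-way interaction, as described.
X = np.column_stack([np.ones(n), age, edu, age * edu])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
resid_sd = np.std(score - X @ beta)

def flag_impaired(obs_score, person_age, person_edu):
    """Flag when an observed score sits >1.5 residual SDs below the
    sociodemographic prediction from the robust-norms regression."""
    x = np.array([1.0, person_age, person_edu, person_age * person_edu])
    return obs_score < x @ beta - 1.5 * resid_sd

print(flag_impaired(-2.0, 80, 12))  # far below prediction -> True
```

The real analysis used more covariates (sex, race, ethnicity) and Blom-transformed scores, but the flagging logic is the same.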
f(y): Blom(y) ~ restricted cubic spline(y) w/ 4 knots, i.e., a rank-based normalization to get us to a roughly normal distribution. They wanted to apply a 1.5 SD threshold, so they needed something closer to a normal distribution.
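The Blom version of that rank-based normalization can be written down directly (this is the standard formula, not necessarily the exact spline-smoothed implementation used in the analysis): map each rank r out of n to the standard-normal quantile of (r - 3/8) / (n + 1/4).

```python
from statistics import NormalDist

def blom_transform(scores):
    """Rank-based normalization: map ranks to standard-normal
    quantiles via Blom's formula (r - 3/8) / (n + 1/4)."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0] * n
    for r, i in enumerate(order, start=1):  # 1-based ranks;
        ranks[i] = r                        # ties broken arbitrarily here
    inv = NormalDist().inv_cdf
    return [inv((r - 0.375) / (n + 0.25)) for r in ranks]

# A heavily skewed sample comes out symmetric around 0 in rank order.
z = blom_transform([1, 2, 3, 10, 100])
print([round(v, 2) for v in z])
```

Because the transform depends only on ranks, it makes skewed raw test scores roughly normal, which is what licenses interpreting a 1.5 SD cutoff.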
Standardize & normalize factor score estimates w/r/t sociodemographics using regression equations, and apply the standardization to the full cohort. 1) Blom transformation. 2) Predict expected score in the robust norms sample as a function of age, sex, race, ethnicity, and years of education.
Their robust norms sample excluded people with stroke, Parkinson's dementia, cognitive function impairment, nursing home residence, or death by 2018.
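Those exclusions amount to a simple filter over the cohort; a toy sketch with made-up field names (not the HCAP variable names):

```python
# Toy records; field names are illustrative only.
cohort = [
    {"id": 1, "stroke": False, "parkinsons_dementia": False,
     "cog_impairment": False, "nursing_home": False, "died_by_2018": False},
    {"id": 2, "stroke": True, "parkinsons_dementia": False,
     "cog_impairment": False, "nursing_home": False, "died_by_2018": False},
    {"id": 3, "stroke": False, "parkinsons_dementia": False,
     "cog_impairment": True, "nursing_home": False, "died_by_2018": True},
]

EXCLUSIONS = ("stroke", "parkinsons_dementia", "cog_impairment",
              "nursing_home", "died_by_2018")

# Keep only people with none of the exclusion conditions.
robust_norms_sample = [p for p in cohort
                       if not any(p[k] for k in EXCLUSIONS)]

print([p["id"] for p in robust_norms_sample])  # -> [1]
```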
Prior work from Marty Sliwinski showing that if you normed tests excluding people who developed dementia during follow-up, vs. just those w/ dementia at baseline, you got better sensitivity and specificity. Need care to avoid circularity (they took that care).
Start by defining a robust normative sample. Want norms that identify the presence of impairment w/ max specificity and sensitivity. Critical for diagnostic use of NP tests, but not necessary for descriptive use. Robust norms remove people w/ preclinical dementia.
So using this we can redo estimates of the prevalence of dementia and MCI, published in JAMA Neurology 2022. All code used is in the rnj0nes/hcap22 GitHub repo. Publication: doi:10.1001/jamaneurol.2022.3543.
5 domains: memory, executive function, language, visuospatial (just 1 measure), and orientation (just 1 measure). @rnjma did an analysis of the factor structure, forthcoming in JINS. Preprint:
Components drawn from multiple other studies, including ROSMAP, with valuable guidance from David Bennett re the relevance of various tests. Included MMSE, CERAD, semantic fluency, story recall, number series, Raven's matrices, and Trail Making.
HCAP battery administered in English (95%) or Spanish (5%) to 3,496 people. Tests selected for HCAP to be internationally harmonizable, give good coverage of domains and credibility in the AD community, be feasible in an hour, overlap w/ ADAMS, and be sensitive to change over time.
In ADAMS, the case conference found it very challenging to draw the boundary for classifying impairment. They aimed to simplify this for HCAP to make it a little more efficient, so a larger sample was feasible.
Goals of HCAP: increase the sample, especially for people from minoritized groups; lower expense; & create a network of comparable data in int'l sister studies. Yang et al. in the Framingham study show generational changes in cognitive scores -- need for updates.
[Linked: alz-journals.onlinelibrary.wiley.com, "Generational changes warrant recalibrating normative cognitive measures to detect changes indicative of dementia risk within each generation."]
ADAMS ultimately had very few Black, Hispanic, or Indigenous participants. It was also fairly expensive, w/ high burden on participants due to 3 hours of assessments.
History of cognitive assessments in HRS: in 2001-2003, ADAMS was fielded with 856 HRS participants and then used to estimate the prevalence of dementia in the US. ADAMS was fairly small and over-sampled people w/ impairment.
Starts by noting challenges with most data sources, citing @lennon's recent paper re racial patterns in NACC and work on transportability from the KHANDLE cohort to the California population from @EHayesLarson.
[Linked: alz-journals.onlinelibrary.wiley.com, "Most dementia studies are not population-representative; statistical tools can be applied to samples to obtain critically-needed population-representative estimates..."]