Language Development Research
@LangDevRes
Followers 1K · Following 43 · Media 4 · Statuses 191
Language Development Research: A Platinum Open Science Journal with no fees for readers or authors.
Carnegie Mellon University
Joined September 2019
Thanks to our new followers, but we're not active on here anymore - please join us at the other place - the one where the sky is blue!
Hi everyone - We're not killing this account off just yet but like many people/organizations we're essentially transferring to the other place - see you there!
@LangDevRes @LVKremin @Concordia @Princeton It was a wonderful experience publishing with @LangDevRes! So glad we have this platinum #OpenAccess journal in our field! Two thumbs up 👍👍
Conclusion: "It is not scientifically sound to tell parents that code-switching is 'good' or 'bad'... future experiments...[should] carefully document young bilinguals' everyday experience with code-switching & evaluate how they process instances of typical and atypical switching"
How come? Here, the switch came at an uninformative adjective (e.g., "le bon" duck didn't help children with their task of finding the duck onscreen) so maybe that's why it didn't interfere with processing (and perhaps the switch boosted attention) 4/n
Surprisingly, this study found that children were just as good at processing these code-switched sentences (or, if anything, slightly better!) 3/n
Bilingual children often hear sentences with words from both languages ("code-switching"); e.g., "Can you find le bon [the good] duck?". Some previous research (including by these authors!) has found that this makes understanding more difficult (vs single-language sentences) 2/n
Another NEW platinum-open-access article with important real-world implications: @LVKremin, Amel Jordan, Casey Lew-Williams and @Krista_BH @Concordia @Princeton - "Bilingual children's comprehension of code-switching at an uninformative adjective" https://t.co/yAo4WlVnlt 🧵 1/n
ldr.lps.library.cmu.edu
Bilingual children regularly hear sentences that contain words from both languages, also known as code-switching. Investigating how bilinguals process code-switching is important for understanding...
And of course, their classifier tool is freely available for colleagues too: https://t.co/k3as5pLGrz 7/7
github.com/kachergis/tCDS_nap_classifier_paper
We're honoured that these researchers have chosen to publish such methodologically groundbreaking research with LDR, and thank them for supporting our platinum open access (that's no fees for anyone!) model. 6/x
That's not all - the authors then used their huge automatically-tagged dataset to see if kids with more child- (but not other-) directed speech have larger vocabularies down the line. And yes, they did (though of course, this can't show causation). 5/x
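(Not from the paper - just a toy sketch of this kind of association analysis, with invented numbers and scipy's pearsonr, to show what "association, not causation" looks like in code:)

from scipy.stats import pearsonr

tcds_minutes = [120, 45, 200, 80, 150, 60]     # invented per-child tCDS estimates
vocab_size = [310, 150, 420, 260, 380, 190]    # invented later vocabulary scores

r, p = pearsonr(tcds_minutes, vocab_size)
print(f"r = {r:.2f}, p = {p:.3f}")  # an association only, not evidence of causation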
So that's what the authors did. Their automated procedure was pretty good! It correctly classified 80%/70% of segments as child-/other-directed (human-human agreement is 87%/65%, so the same ballpark!) 4/x
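(For illustration only - not the authors' evaluation code - here's one way to compute per-class agreement between automated and human labels, on invented toy data:)

from collections import Counter

def per_class_agreement(gold, predicted):
    # For each gold label, the fraction of segments the classifier matched
    totals = Counter(gold)
    hits = Counter(g for g, p in zip(gold, predicted) if g == p)
    return {label: round(hits[label] / totals[label], 2) for label in totals}

gold = ["child", "child", "other", "child", "other"]
predicted = ["child", "other", "other", "child", "other"]

print(per_class_agreement(gold, predicted))  # {'child': 0.67, 'other': 1.0}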
There's some evidence for the idea, but it's limited, because hand-coding corpora of speech for whether each utterance is child-directed is laborious work! Wouldn't it be great to automate this process?! 3/x
Some theories of child language acquisition predict that children learn better from child-directed speech than other-directed (or "overheard") speech. But is this true? 2/x
Happy publication day to Janet Bang, George @kachergis, Adriana Weisleder @adriweis, and Virginia @V_Marchman for their groundbreaking methods paper: An automated classifier for periods of sleep and target-child-directed speech from LENA recordings https://t.co/B4NQKmUbdL 🧵 1/x
ldr.lps.library.cmu.edu
Some theories of language development propose that children learn more effectively when exposed to speech that is directed to them (target child directed speech, tCDS) than when exposed to speech...
Forced aligners work better on adults than children. We tried different aligner configs with HTK and MFA, thinking that using child speech in the training data would be important. We were wrong: MFA with models pretrained on lots of adult speech was better than anything else!
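(A minimal sketch of that winning route - MFA 2.x's CLI driven from Python, with a pretrained adult-speech model; corpus paths and model choice are placeholders, not this team's actual setup:)

import subprocess

# Fetch a pretrained adult US-English acoustic model and dictionary
subprocess.run(["mfa", "model", "download", "acoustic", "english_us_arpa"], check=True)
subprocess.run(["mfa", "model", "download", "dictionary", "english_us_arpa"], check=True)

# Align the child-speech corpus using the adult-trained model
subprocess.run(
    ["mfa", "align", "./child_corpus", "english_us_arpa", "english_us_arpa", "./alignments"],
    check=True,
)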
@wavable And thanks @LangDevRes for being a cool Open Science journal! "We don't believe in locking articles behind paywalls, in charging taxpayers and universities to publish research they've already funded..." https://t.co/JW0lXyVYlp
Overall the authors conclude that these tools can be very useful for working with child speech, but aren't yet at the stage where they can completely replace manual coding 6/6
And perhaps surprisingly, having LOTS of training data was more important than having training data similar to the child test data. The "winner" was trained on adult speech (just lots of it) 5/n
So, the authors set out to find out how well these tools work for child speech. They found that (at least for this dataset) they do OK - around 45% overlap with gold-standard manual coding, compared to around 65% for adult speech 4/n
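(A rough, invented illustration of what an overlap score like this measures - an aligner's time-stamped label intervals compared against gold-standard ones; toy data, not the paper's implementation:)

def overlap_fraction(gold, predicted):
    # Each segmentation is a list of (start, end, label) tuples;
    # returns the share of gold time where the two labelings agree.
    agreed = 0.0
    for gs, ge, glab in gold:
        for ps, pe, plab in predicted:
            if plab == glab:
                agreed += max(0.0, min(ge, pe) - max(gs, ps))
    return agreed / sum(e - s for s, e, _ in gold)

gold = [(0.0, 0.4, "d"), (0.4, 0.9, "ah"), (0.9, 1.2, "k")]
pred = [(0.0, 0.3, "d"), (0.3, 1.0, "ah"), (1.0, 1.2, "k")]

print(round(overlap_fraction(gold, pred), 2))  # 0.83 on this toy example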