ChangLabUCSF (@ChangLabUcsf)
Chang Lab at UCSF: human brain, speech, brain-computer interfaces, neurosurgery
San Francisco, CA · Joined August 2020
3K Followers · 12 Following · 44 Media · 144 Statuses
ChangLabUCSF (@ChangLabUcsf) · 4 months
Our latest work on the neural mechanism for stopping speech production is published! See a brief summary below and the original paper linked in the post at the bottom.
Nature Human Behaviour (@NatureHumBehav) · 4 months
In natural conversations, people can stop speaking at any time. How? Using high-density electrocorticography, Zhao et al. find a distinct neural signal in the human premotor cortex that inhibits speech output to achieve abrupt stopping. @ChangLabUcsf
ChangLabUCSF (@ChangLabUcsf) · 1 year
Our models also showed stable performance without retraining for ~2 months, and these results were achieved ~4 years after ECoG implantation. We hope these findings can be scaled to more patients in the near future!
ChangLabUCSF (@ChangLabUcsf) · 1 year
We leveraged this finding to demonstrate transfer learning across languages. Data collected in a first language could significantly expedite training a decoder in a second language, saving time and effort for the user.
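As an illustration of the idea only, and not the lab's actual pipeline, here is a minimal sketch of cross-language transfer learning: pretrain a syllable decoder on plentiful first-language trials, then briefly fine-tune it on far fewer second-language trials. The model, data shapes, and training settings are all hypothetical stand-ins.

# Hypothetical sketch: pretrain on (plentiful) English trials, fine-tune on
# (scarce) Spanish trials. Real inputs would be ECoG features, not noise.
import torch
import torch.nn as nn

N_ELECTRODES, N_TIMEPOINTS, N_SYLLABLES = 128, 50, 51  # illustrative sizes

class SyllableDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.GRU(N_ELECTRODES, 64, batch_first=True)
        self.classifier = nn.Linear(64, N_SYLLABLES)

    def forward(self, x):              # x: (batch, time, electrodes)
        _, h = self.encoder(x)         # h: (1, batch, 64)
        return self.classifier(h[-1])  # logits over syllables

def train(model, X, y, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()

X_en = torch.randn(500, N_TIMEPOINTS, N_ELECTRODES)  # many English trials
y_en = torch.randint(0, N_SYLLABLES, (500,))
X_es = torch.randn(50, N_TIMEPOINTS, N_ELECTRODES)   # few Spanish trials
y_es = torch.randint(0, N_SYLLABLES, (50,))

model = SyllableDecoder()
train(model, X_en, y_en, epochs=20, lr=1e-3)  # pretrain on first language
train(model, X_es, y_es, epochs=5, lr=1e-4)   # short fine-tune on second

The intuition is that the pretrained encoder already captures language-independent articulatory structure, so the second-language stage needs far less data.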
ChangLabUCSF (@ChangLabUcsf) · 1 year
Cortical activity instead represented the intended vocal-tract movements of the participant, irrespective of the language. This allowed us to train a model that generalized across a shared set of English and Spanish syllables.
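A minimal sketch of what that sharing enables, assuming, hypothetically, that trials from both languages can be labeled with one shared syllable inventory: a single classifier trained on pooled data, with no language label needed at training time.

# Hypothetical sketch: one decoder over a shared English/Spanish syllable
# set. Features here are random stand-ins for real ECoG measurements.
import numpy as np
from sklearn.linear_model import LogisticRegression

shared_syllables = ["ba", "da", "ga", "ma", "na", "ta"]  # illustrative set
rng = np.random.default_rng(0)

def fake_trials(n_trials, n_features=256):
    X = rng.normal(size=(n_trials, n_features))
    y = rng.integers(0, len(shared_syllables), size=n_trials)
    return X, y

X_en, y_en = fake_trials(300)  # English trials
X_es, y_es = fake_trials(300)  # Spanish trials of the same syllables

# Pool both languages under the shared labels and fit one model.
X = np.vstack([X_en, X_es])
y = np.concatenate([y_en, y_es])
clf = LogisticRegression(max_iter=1000).fit(X, y)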
ChangLabUCSF (@ChangLabUcsf) · 1 year
Although the participant learned English later in life, we found that cortical activity was largely shared across languages, with no clear differences in magnitude and no language-selective neural populations.
ChangLabUCSF (@ChangLabUcsf) · 1 year
Our system decoded cortical activity, measured with ECoG over the IFG and SMC, into English and Spanish sentences. The intended language was not set by the user; instead, it was decoded freely from cortical activity and language models.
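A toy sketch of how language models can settle the language, under our assumption (not the paper's stated implementation) that the decoder proposes candidate sentences with neural log-likelihoods and each language's model scores its own candidates; the output is simply the highest joint score. The vocabularies and numbers are made up.

# Hypothetical sketch: joint neural + language-model scoring picks both
# the sentence and, implicitly, the language.
import math

def lm_logprob(sentence, unigram_probs):
    # Toy unigram LM; a real system would use a much stronger model.
    return sum(math.log(unigram_probs.get(w, 1e-6)) for w in sentence.split())

english_lm = {"i": 0.1, "am": 0.05, "thirsty": 0.01}
spanish_lm = {"yo": 0.1, "tengo": 0.05, "sed": 0.02}

# (candidate sentence, language, neural log-likelihood from the decoder)
candidates = [
    ("i am thirsty", "en", -4.0),
    ("yo tengo sed", "es", -3.5),
]

def joint_score(text, lang, neural_ll):
    lm = english_lm if lang == "en" else spanish_lm
    return neural_ll + lm_logprob(text, lm)

best = max(candidates, key=lambda c: joint_score(*c))
print(best)  # highest-scoring sentence, in whichever language wins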
ChangLabUCSF (@ChangLabUcsf) · 1 year
Speech decoding has primarily been demonstrated in monolinguals, but half the world is bilingual, with each language contributing to a person’s personality and worldview. There is a need for decoders that let bilingual people communicate in both of their languages.
ChangLabUCSF (@ChangLabUcsf) · 1 year
Work led by @asilvaalex1 with mentors and co-authors @jessierliu, @SeanMetzger5, @bhaya_ilina, @KayloLittlejohn, @AtDavidMoses, Max Dougherty, Margaret Seaton, and Edward Chang.
ChangLabUCSF (@ChangLabUcsf) · 1 year
Excited to share our work on developing a bilingual speech neuroprosthesis that decodes cortical activity into English and Spanish sentences in a person with paralysis. Out today in @natBME!
ChangLabUCSF (@ChangLabUcsf) · 1 year
Reminder to those at #Cosyne2024 that @asilvaalex1 will be presenting his work on a bilingual speech neuroprosthesis! Poster Session 2, #131, at 12:30!
ChangLabUCSF (@ChangLabUcsf) · 1 year
Chang Lab is at #Cosyne2024!
🗣️ Thurs, Posters 1, #083: @ItzikNorman on neural oscillations during speech production.
💬 Fri, Posters 2, #131: @asilvaalex1 on a bilingual speech neuroprosthesis.
🤖 Sat, Talk Session 9: @shaileeejain on modeling single neurons in humans with deep neural nets.
ChangLabUCSF (@ChangLabUcsf) · 1 year
Contrary to prior work, we found no evidence for a music region. Instead, populations selective for music were intermixed with other sound-responsive populations across a broad swath of auditory cortex. [7/7]
ChangLabUCSF (@ChangLabUcsf) · 1 year
Neural populations that selectively responded to music (over speech) were driven by the encoding of expectation, indicating domain specialization for representing the statistical structure of melody. [6/7]
ChangLabUCSF (@ChangLabUcsf) · 1 year
To what extent is this encoding specialized for music versus general auditory processing? To answer this, we also recorded neural activity while the same subjects listened to speech. [5/7]
ChangLabUCSF (@ChangLabUcsf) · 1 year
Within the superior temporal gyrus, distinct dimensions of melody were represented across a spatial map: the pitch, pitch-change, and expectation of notes were encoded in separate neural populations. [4/7]
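For intuition, a hedged sketch of the kind of encoding analysis this implies, not the paper's exact method: regress each electrode's response on the three melodic features and ask which feature dominates at each site. All data below are synthetic stand-ins; real inputs would be high-gamma activity and per-note feature values.

# Hypothetical sketch: per-electrode linear encoding of melodic features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_notes, n_electrodes = 1000, 64

features = np.column_stack([
    rng.normal(size=n_notes),  # pitch of each note
    rng.normal(size=n_notes),  # pitch-change from the previous note
    rng.normal(size=n_notes),  # expectation (e.g. surprisal) of the note
])
neural = rng.normal(size=(n_notes, n_electrodes))  # stand-in responses

weights = np.stack([
    Ridge(alpha=1.0).fit(features, neural[:, e]).coef_
    for e in range(n_electrodes)
])                                  # (electrodes, 3) feature weights
dominant = np.abs(weights).argmax(axis=1)  # 0=pitch, 1=change, 2=expectation
# Plotting `dominant` on the electrode grid would reveal whether distinct
# populations carry distinct melodic dimensions.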
ChangLabUCSF (@ChangLabUcsf) · 1 year
What is the neural code that underlies our perception of melody? To answer this, we recorded high-density intracranial activity from the cortical surface as subjects listened to musical phrases. [3/7]
ChangLabUCSF (@ChangLabUcsf) · 1 year
Melody is a core component of music in which discrete pitch-units (notes) are serially ordered to convey emotion and meaning. Humans perceive not only the pitch of each note, but also the pitch-changes between notes and their statistical expectation given the preceding notes. [2/7]
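A small worked example of those three dimensions, with expectation approximated, purely for illustration, by the surprisal of each interval under a toy corpus of intervals:

# Hypothetical sketch: pitch, pitch-change, and expectation for one phrase.
import math
from collections import Counter

phrase = [60, 62, 64, 62, 60]  # MIDI pitches: C D E D C
intervals = [b - a for a, b in zip(phrase, phrase[1:])]  # [2, 2, -2, -2]

# Toy interval "corpus"; real analyses would use large melody corpora or
# trained sequence models to estimate expectation.
corpus = [2, 2, -2, -2, 2, -2, 5, -5, 0, 2]
counts = Counter(corpus)
total = sum(counts.values())

# Surprisal: rarer intervals are less expected, hence higher surprisal.
surprisal = [-math.log2(counts.get(i, 1) / total) for i in intervals]
print(intervals, [round(s, 2) for s in surprisal])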
ChangLabUCSF (@ChangLabUcsf) · 1 year
Excited to share our work characterizing the encoding of melody using direct recordings from the human auditory cortex! Out today in @ScienceAdvances. Led by @nsankaran72 together with mentors @matt_k_leonard, @FrdricTheuniss1, and Eddie Chang. [1/7]
ChangLabUCSF (@ChangLabUcsf) · 2 years
This is just the beginning… Neuropixels and similar technologies will help us crack the neural code of language. [6/6]
ChangLabUCSF (@ChangLabUcsf) · 2 years
At the same time, each vertical ‘column’ of neurons also contained a surprisingly diverse set of neurons tuned to many different types of speech features. [5/6]