Yichen Yuan
@YichenYuan6

Followers: 11 · Following: 3 · Media: 7 · Statuses: 10

PhD candidate at Utrecht University • Interested in multisensory perception, working memory & attention

Joined June 2022
@YichenYuan6
Yichen Yuan
8 months
Huge thanks to Surya and Nathan for their help😊. Open access preprint: All materials and data: Feel free to reach out to me in case of any questions! 9/9.
@YichenYuan6
Yichen Yuan
8 months
(2) Observers can flexibly prioritize one sense over the other, in anticipation of modality-specific interference, and use only the most informative sensory modality to guide behavior, while nearly ignoring other modalities (even when they convey substantial information). 8/9
@YichenYuan6
Yichen Yuan
8 months
From these four experiments, we concluded that (1) observers use both hearing and vision when localizing static objects, but use only unisensory input when localizing moving objects and predicting motion under occlusion. 7/9
@YichenYuan6
Yichen Yuan
8 months
In Exp. 3, the target did not move, but only briefly appeared as a static stimulus at the exact same endpoints as in Exps. 1 and 2. Here, a substantial multisensory benefit was found when participants localized static audiovisual targets, showing near-optimal (MLE) integration. 6/9
@YichenYuan6
Yichen Yuan
8 months
In Exp. 2, there was no occluder; participants simply reported where the moving target disappeared from the screen. Here, although localization estimates were in line with MLE predictions, no multisensory precision benefits were found either. 5/9
@YichenYuan6
Yichen Yuan
8 months
In these two experiments, we showed that participants do not seem to benefit from audiovisual information when tracking occluded objects, but flexibly prioritize one sense over the other (vision in Exp. 1A, audition in Exp. 1B), in anticipation of modality-specific interference. 4/9
@YichenYuan6
Yichen Yuan
8 months
We asked whether observers optimally weigh the auditory & visual components of audiovisual stimuli. We therefore compared the observed data to the predictions of a maximum likelihood estimation (MLE) model, which weights the unisensory inputs according to their uncertainty (variance). 3/9
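For reference, the standard MLE scheme combines the unisensory estimates by inverse-variance weighting (notation here is illustrative, not taken from the paper):

\hat{s}_{AV} = w_A \hat{s}_A + w_V \hat{s}_V, \qquad w_A = \frac{1/\sigma_A^2}{1/\sigma_A^2 + 1/\sigma_V^2}, \quad w_V = 1 - w_A,

\sigma_{AV}^2 = \frac{\sigma_A^2 \, \sigma_V^2}{\sigma_A^2 + \sigma_V^2} \le \min\left(\sigma_A^2, \sigma_V^2\right),

so the integrated estimate should be at least as precise as the better unisensory estimate; this predicted precision benefit is what the localization data are tested against.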
@YichenYuan6
Yichen Yuan
8 months
In Exp. 1A, moving targets (auditory, visual, or audiovisual) were occluded by an audiovisual occluder, and their final locations had to be inferred from target speed and occlusion duration. Exp. 1B was identical to Exp. 1A, except that a visual-only occluder was used. 2/9
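Concretely, this inference amounts to extrapolating the target's trajectory through the occluder (a sketch in illustrative notation, not the authors'):

\hat{x}_{\text{final}} = x_{\text{occl}} + v \, t_{\text{occ}},

where x_{\text{occl}} is the position at which the target disappeared behind the occluder, v its speed, and t_{\text{occ}} the occlusion duration.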
@YichenYuan6
Yichen Yuan
8 months
New paper accepted in JEP: General (preprint below), with @SuryaGayet & @NathanvdStoep. We show that observers use both hearing & vision for localizing static objects, but rely on a single modality to report & predict the location of moving objects. 1/9
osf.io
Predicting the location of moving objects in noisy environments is essential to everyday behavior, like when participating in traffic. Although many objects provide multisensory information, it...