Nima Zargarnezhad
@Nima_ZN
89 Followers · 565 Following · 9 Media · 53 Statuses
MSc · Research Technician in the @ingridjohnsrude lab at @WesternU. Studying brains and ears
London, Ontario
Joined March 2021
Excited to present my recent work at the International Conference on Auditory Cortex (#ICAC2025) this Tuesday! If you're interested in fMRI naturalistic paradigms, homotopic coupling, intersubject synchrony, or stimulus-driven dynamic connectivity, come find me at Poster 187.
New 📄 co-authored by @Nima_ZN, @EAMacpher, @ingridjohnsrude in @ASA_JASA: ‘Focality of sound source placement by higher (ninth) order ambisonics and perceptual effects of spectral reproduction errors’ @westernuNCA @westernuCCAA 🔒
🎤 Our recent paper is featured in a press release by @AIP_Publishing. Read the story and an interview with me here: https://t.co/BJAsrvPuy3
publishing.aip.org
WASHINGTON, April 15, 2025 – Surround-sound speakers can immerse you in a multimedia experience, but what if there was a speaker that could completely re-create a three-dimensional […]
Surround-sound speakers can immerse you in a multimedia experience, but what if there was a speaker that could completely re-create a three-dimensional soundscape? Read more of the press release! 👇 https://t.co/PD88KdLVE4
To support continued work in this area, we’ve made both the AudioDome Python module and our Head-And-Torso Simulator (HATS) recordings from Experiment 2 publicly available. 🖥️ GitHub – AudioDome Module & Tutorial: https://t.co/PSx2aik8gA 🔓 HATS data on OSF:
This project was part of my MSc thesis at @WesternU, aiming to clarify the strengths and limitations of the AudioDome and laying the foundation for future experiments at @BMI_WesternU and @WesternuWIN.
📢#PublicationAlert Excited to share that this work is now officially published in @ASA_JASA as part of a special issue on "Advances in Soundscape: Emerging Trends and Challenges in Research and Practice"! 📜 Read the paper:
If you are curious about designing experiments with soundscapes, here is a #preprint with Bruno Mesquita, @EAMacpher, and @ingridjohnsrude in which we examine the ability of a 9th-order ambisonic system to reproduce sounds at or below the limits of human spatial resolution:🧵1/10
We (Turing Sour Snakes🐍) will present our findings on the advantage of biological architectures in the computational cost-performance trade-off for reinforcement-learning agents in Monday's second session (5 p.m. UTC). Please join us if you're interested!
👏 We are excited to announce the Impact Scholars Program 2024 Seminar Presentations! Join us for a week of inspiring sessions where our scholars share their research. Check out the full schedule and register here: https://t.co/hl4SPwBgi2
Asking an LLM about promoting another LLM to get some literature review for a project about LLMs 🔁
We welcome questions, comments, and feedback regarding this work! #soundscapes #virtual_auditory_environments #spatial_hearing #sound_localization #auditory_perception #ambisonics
10/10 The 4 kHz fidelity limit should be borne in mind when designing experiments and generating stimuli for our system, and we recommend similar groundwork for other ambisonic systems before they are used to explore human auditory spatial perception.
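In practice, that precaution amounts to band-limiting stimuli before rendering. This is not code from the paper or the AudioDome module — just a minimal scipy-based sketch, with the 4 kHz cutoff taken as the system-specific value from the thread (the function name and defaults are illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def lowpass_for_ambisonics(stimulus, fs, cutoff=4000.0, order=8):
    """Low-pass a stimulus below the system's fidelity limit so that
    ambisonic rendering does not introduce spurious elevation cues.

    stimulus : 1-D numpy array of samples
    fs       : sampling rate in Hz
    cutoff   : system-specific fidelity limit in Hz (4 kHz here)
    """
    # Zero-phase filtering (sosfiltfilt) avoids shifting binaural timing cues.
    sos = butter(order, cutoff, btype="low", fs=fs, output="sos")
    return sosfiltfilt(sos, stimulus)
```

A zero-phase (forward-backward) filter is used here so the filtering itself does not delay the waveform and distort ITDs.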
9/10 We conclude that rendering sound sources with frequency content above 4 kHz (with our system; this threshold can be calculated for other systems) distorts sound-localization information and introduces spurious elevation cues.
8/10 Next, in Experiment 3, we conducted an elevation discrimination task for ambisonic vs. SC sound sources. This confirmed that differences in high-frequency content are interpreted as elevation cues – the elevation percept is not present for low-pass (≤4 kHz) sound sources.
7/10 However, high-frequency ILD cues from ambisonic sources differed from those of SC sources. At locations with no physical loudspeakers, ambisonic presentation followed the SC trend better than VBAP, showing its practical advantage over VBAP.
6/10 ITD cues also matched well (for frequencies below 1.8 kHz) when comparing ambisonic and SC sound sources at loudspeaker locations; the same was true of ILD cues for frequencies below 4 kHz.
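Binaural cues like these can be estimated directly from HATS-style stereo recordings. The sketch below is not the paper's analysis pipeline — just one common approach, assuming the two ear signals are 1-D numpy arrays at a known sampling rate (function name is hypothetical):

```python
import numpy as np

def itd_ild(left, right, fs):
    """Estimate broadband ITD and ILD from a binaural recording.

    ITD: lag of the cross-correlation peak, in seconds; positive
         when the right-ear signal lags (source toward the left).
    ILD: RMS level difference between the ears, in dB.
    """
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    itd = lag / fs
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild = 20.0 * np.log10(rms(left) / rms(right))
    return itd, ild
```

In practice ITD and ILD are usually evaluated per frequency band (e.g., below 1.8 kHz for ITD, as in the thread) rather than broadband; band-pass filtering the inputs first achieves that.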
5/10 In Experiment 2, we used a Head-And-Torso Simulator (HATS) to estimate the spectral and binaural cue content of the sound sources from Experiment 1. Spectral power above 4 kHz was lower for ambisonic presentation than for SC.
4/10 We also estimated the frontal MAA with participants facing locations of high and low loudspeaker density. MAAs were all around 1° regardless of distance from the loudspeakers, so we concluded that the simulation resolution is homogeneous across space.
3/10 In Experiment 1, we estimated Minimum Audible Angles (MAA) for wide-band ambisonic noise bursts on the frontal half of the horizontal plane and showed that we can simulate focal sound sources that are spatially resolved at the threshold of human acuity (~1° off the midline).
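An MAA threshold of this kind is typically read off a psychometric function fitted to left/right judgments. The sketch below is a generic version of that idea, not the paper's actual analysis — it assumes a logistic psychometric function and defines the threshold at the 75% point (names and defaults are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_maa(angles, p_right, threshold=0.75):
    """Fit a logistic psychometric function to the proportion of
    'perceived right of reference' responses at each probe angle,
    and return the angle at `threshold` — one common operational
    definition of the minimum audible angle (MAA)."""
    logistic = lambda x, mu, s: 1.0 / (1.0 + np.exp(-(x - mu) / s))
    (mu, s), _ = curve_fit(logistic, angles, p_right, p0=[0.0, 1.0])
    # Invert the fitted logistic at the threshold proportion.
    return mu + s * np.log(threshold / (1.0 - threshold))
```

With per-angle response proportions in hand, `fit_maa(angles, p_right)` returns the angular offset (in the same units as `angles`) at which listeners reach 75% correct — the "~1°" figure quoted in the thread is a threshold of this kind.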
2/10 In this study, we tested a 95-channel loudspeaker array (AudioDome at @BMI_WesternU) equipped with three 3D spatial sound presentation methods: Single-Channel (SC), Vector-Based Amplitude Panning (VBAP), and 9th-order ambisonics in 3 experiments: