Amber Maimon, PhD

Neuroscience & Human-Computer Interaction (HCI) researcher

Research Associate | Co-Head, NeuroHCI Research Group | Academic Lab Manager



Computational Psychiatry and Neurotechnology Lab | Human-Computer Interaction Lab

Ben-Gurion University | University of Haifa





SoundSpace: What and Where Through Sound


Conference paper


Amber Maimon, Iddo Yehoshua Wald, Rahaf Sobh, Carol Sliman, Yarah Nassar, J. Lanir
Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems, 2026

Cite

APA
Maimon, A., Wald, I. Y., Sobh, R., Sliman, C., Nassar, Y., & Lanir, J. (2026). SoundSpace: What and Where Through Sound. Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems.


Chicago/Turabian
Maimon, Amber, Iddo Yehoshua Wald, Rahaf Sobh, Carol Sliman, Yarah Nassar, and J. Lanir. “SoundSpace: What and Where Through Sound.” Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems (2026).


MLA
Maimon, Amber, et al. “SoundSpace: What and Where Through Sound.” Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems, 2026.


BibTeX

@inproceedings{maimon2026soundspace,
  title     = {SoundSpace: What and Where Through Sound},
  year      = {2026},
  booktitle = {Proceedings of the Extended Abstracts of the 2026 CHI Conference on Human Factors in Computing Systems},
  author    = {Maimon, Amber and Wald, Iddo Yehoshua and Sobh, Rahaf and Sliman, Carol and Nassar, Yarah and Lanir, J.}
}

Abstract

Accessibility technologies for visually impaired users often convey visual scenes through discrete verbal descriptions, which can interrupt natural hearing and increase cognitive load. Other technologies that integrate spatialized audio often focus on orientation, providing spatial cues without conveying what is present. We present SoundSpace, a real-time system that represents object identity and spatial layout through structured auditory cues. SoundSpace builds on prior sensory substitution approaches to combine brief spoken object naming with continuous mappings of distance, vertical position, and horizontal location to loudness, pitch, and stereo panning. To balance spatial awareness against cognitive load, the system separates scene sensing from audio output, providing periodic spatial sweeps and immediate updates when objects move. Open-vocabulary detection and environment profiles allow users to restrict feedback to task-relevant objects, reducing auditory clutter. We describe the design and implementation of SoundSpace and discuss its implications for non-visual spatial perception and active exploration.
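
The continuous mappings described in the abstract lend themselves to a compact illustration. The Python sketch below shows one plausible form of the distance-to-loudness, elevation-to-pitch, and azimuth-to-panning mapping, together with the profile-based filtering of task-relevant objects; all names, ranges, and scaling choices here are illustrative assumptions, not the implementation described in the paper.

# Illustrative sketch only; names, ranges, and scalings are assumptions,
# not the SoundSpace implementation described in the paper.

from dataclasses import dataclass
from typing import Iterable

@dataclass
class Detection:
    label: str            # open-vocabulary detector output, e.g. "chair"
    azimuth_deg: float    # horizontal angle, -90 (left) .. +90 (right)
    elevation_deg: float  # vertical angle, -45 (low) .. +45 (high)
    distance_m: float     # metres from the user

def to_cue(d: Detection, max_distance_m: float = 5.0,
           base_pitch_hz: float = 440.0) -> dict:
    """Map one detection to loudness, pitch, and stereo-pan values."""
    # Distance -> loudness: closer objects are louder, clamped to [0, 1].
    gain = max(0.0, 1.0 - min(d.distance_m, max_distance_m) / max_distance_m)
    # Vertical position -> pitch: up to one octave above or below the base tone.
    pitch_hz = base_pitch_hz * 2.0 ** (d.elevation_deg / 45.0)
    # Horizontal location -> stereo panning in [-1 (left), +1 (right)].
    pan = max(-1.0, min(1.0, d.azimuth_deg / 90.0))
    return {"speak": d.label, "gain": gain, "pitch_hz": pitch_hz, "pan": pan}

def sweep(detections: Iterable[Detection], profile: set[str]) -> list[dict]:
    """One periodic spatial sweep, restricted to task-relevant labels."""
    return [to_cue(d) for d in detections if d.label in profile]

if __name__ == "__main__":
    scene = [Detection("door", -45.0, 0.0, 2.0),
             Detection("plant", 30.0, -20.0, 1.0)]
    # An environment profile that keeps only navigation-relevant objects.
    for cue in sweep(scene, profile={"door", "stairs"}):
        print(cue)  # only the door is sonified: gain=0.6, pan=-0.5, 440 Hz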


