Augmented reality soundwalk

A cartographic sonification system for mobile apps and the off-grid Nada Sfera Soundscape Live Streamer (part of the Nada Bumi Research Project).

Related articles:

[1] The Sound of Being There: Audiovisual Cartography with Immersive Virtual Environments (https://link.springer.com/article/10.1007/s42489-019-00003-5)

[2] Real-Time Binaural Rendering with Virtual Vector Base Amplitude Panning (https://www.aes.org/e-lib/browse.cfm?elib=20414)

[3] CW_binaural~: A binaural synthesis external for Pure Data (https://puredata.info/community/conventions/convention09/doukhan.pdf)

[4] Binaural Simulation of Echolocation in Pure Data (https://tor.halmrast.no/EcholocationBinaural%20Sim%20SB%20AcReport.pdf)

[5] Elevation localization and head-related transfer function analysis at low frequencies (https://asa.scitation.org/doi/10.1121/1.1349185)


GPS Coordinate Data for Sound-Object and HRTF Manipulation in a Binaural Soundfield

Retrieving the real-time GPS data feed from a mobile phone can easily be done with a mobile app built on the MobMuPlat platform, which communicates either with an internal Pure Data (Pd) patch bundled into the app or with an external Pd patch running on another device over the network.
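As an illustration of how an external patch or logger could consume that feed, here is a minimal Python sketch. It assumes, hypothetically, that the Pd patch forwards each GPS fix with [netsend -u] as a plain FUDI message of the form "gps <latitude> <longitude>;" on port 3000; the message name and port are not dictated by MobMuPlat.

```python
# Minimal sketch: receive GPS fixes forwarded from a Pd patch over UDP.
# Assumption: the patch sends FUDI messages like "gps <lat> <lon>;" via [netsend -u].
import socket

PORT = 3000  # hypothetical port; must match the [netsend]/[netreceive] pair in the patch

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))

while True:
    data, _addr = sock.recvfrom(1024)
    # FUDI messages are space-separated atoms terminated by ";"
    atoms = data.decode("ascii", errors="ignore").strip().rstrip(";").split()
    if len(atoms) == 3 and atoms[0] == "gps":
        lat, lon = float(atoms[1]), float(atoms[2])
        print(f"position update: {lat:.6f}, {lon:.6f}")
```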

Using spherical trigonometry, we can find the distance from point A (the listener) to point B (the sound source) with the haversine formula, as below.

Longitude spans 360°, split into two 180° halves: East (positive) and West (negative).

Latitude spans 180°, split into two 90° halves: North (positive) and South (negative).

a = sin²((φB−φA)/2) + cos φA ⋅ cos φB ⋅ sin²((λB−λA)/2)

c = 2 * atan2( √a, √(1−a) )

d = R * c

where φ represents latitude, λ represents longitude, and R is the Earth's radius (6,371,000 m); note that the angles must be converted to radians before they are passed to the trig functions.
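For reference, here is the same haversine calculation as a small Python sketch (the project itself does this inside the Pd/MobMuPlat patch; the function name and example coordinates are only illustrative):

```python
from math import radians, sin, cos, sqrt, atan2

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in metres

def haversine_distance(lat_a, lon_a, lat_b, lon_b):
    """Great-circle distance in metres between listener A and sound source B."""
    phi_a, phi_b = radians(lat_a), radians(lat_b)
    d_phi = radians(lat_b - lat_a)
    d_lambda = radians(lon_b - lon_a)

    a = sin(d_phi / 2) ** 2 + cos(phi_a) * cos(phi_b) * sin(d_lambda / 2) ** 2
    c = 2 * atan2(sqrt(a), sqrt(1 - a))
    return EARTH_RADIUS_M * c

# Example: listener vs. a sound object a short walk away
print(haversine_distance(3.139000, 101.686900, 3.139900, 101.687800))  # ~140 m
```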

With the value d (the A-B distance), we can drive vector-based amplitude panning together with the HRTF to give the soundfield image its depth.
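One possible mapping from d to a depth cue, sketched below, is an inverse-distance gain clamped at a reference distance; the reference and maximum distances are assumed values, not project constants.

```python
def distance_gain(d, ref_distance=1.0, max_distance=200.0):
    """Map listener-source distance (metres) to a linear amplitude factor.

    Inverse-distance (1/d) attenuation, clamped so very near sources do not
    blow up and sources beyond max_distance fade to silence. Both constants
    are illustrative only.
    """
    d = max(d, ref_distance)
    if d >= max_distance:
        return 0.0
    return ref_distance / d

print(distance_gain(140.0))  # ~0.007, i.e. roughly -43 dB at 140 m
```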


From the perspective of one fixed listener at the centre of the binaural soundfield sphere, altitude (elevation) is the vertical angular coordinate and azimuth is the horizontal angular coordinate towards the moving sound object.
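Both angles can be derived from the same GPS data. The sketch below computes the initial bearing (azimuth) between the two coordinates and, assuming an altitude difference is available, the elevation angle; the altitude handling is an assumption, since raw GPS altitude is usually noisy.

```python
# Sketch: azimuth (initial bearing) and elevation towards the sound object,
# derived from GPS coordinates plus an assumed altitude difference.
from math import radians, degrees, sin, cos, atan2

def azimuth_deg(lat_a, lon_a, lat_b, lon_b):
    """Initial bearing from listener A to source B, 0-360°, clockwise from north."""
    phi_a, phi_b = radians(lat_a), radians(lat_b)
    d_lambda = radians(lon_b - lon_a)
    y = sin(d_lambda) * cos(phi_b)
    x = cos(phi_a) * sin(phi_b) - sin(phi_a) * cos(phi_b) * cos(d_lambda)
    return (degrees(atan2(y, x)) + 360.0) % 360.0

def elevation_deg(horizontal_distance_m, altitude_diff_m):
    """Elevation angle of the source above (+) or below (-) the listener's ears."""
    return degrees(atan2(altitude_diff_m, horizontal_distance_m))

print(azimuth_deg(3.139000, 101.686900, 3.139900, 101.687800))  # ~45°, towards the north-east
print(elevation_deg(140.0, 10.0))                               # ~4° above the listener
```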

Binaural soundfield simulation (except for depth) uses the [earplug~] external object library for Pd by Pei Xiang, Hans-Christoph Steiner, et al.

From my observation, the elevation dimension of the earplug~ soundfield simulation does not articulate well. In a real-life situation, I perceived a snap sound beneath my head as slightly bassier and carrying ‘less energy’ compared to the same snap sound coming from above my head. Perhaps this is due to a damping factor from gravity, which reduces the propagation of sound vibration energy through a compressible medium (mass-density) over time before it reaches my ears, in addition to the ‘elevation filtering’ effect at the ear pinna (auricle); monaural spectral features due to pinna and torso diffraction are the primary cues for elevation (V. Ralph Algazi, 2000) [5].

Would a ‘seesaw-like’ (inverse-variation) frequency-filtering mechanism, with 1 kHz as its fulcrum, improve the perception of sound elevation?
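As a rough sketch of that idea (not a tested design), the signal can be split into two bands around the 1 kHz fulcrum and given reciprocal gains driven by elevation, so a source above the head tilts the spectrum towards the highs and a source below tilts it towards the lows. The 6 dB maximum tilt, second-order Butterworth crossover, and 44.1 kHz sample rate below are assumed values.

```python
# Seesaw/tilt filter sketch: reciprocal gains on the bands below and above a
# 1 kHz fulcrum, driven by elevation (-90°..+90°).
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100
FULCRUM_HZ = 1000.0
MAX_TILT_DB = 6.0  # assumed maximum tilt at +/-90° elevation

low_sos = butter(2, FULCRUM_HZ, btype="lowpass", fs=FS, output="sos")
high_sos = butter(2, FULCRUM_HZ, btype="highpass", fs=FS, output="sos")

def seesaw_filter(x, elevation_deg):
    """Tilt the spectrum around 1 kHz with inverse-variation band gains."""
    tilt_db = MAX_TILT_DB * (elevation_deg / 90.0)   # +90° -> highs boosted by 6 dB
    g_high = 10.0 ** (tilt_db / 20.0)
    g_low = 1.0 / g_high                             # inverse variation (seesaw)
    low = sosfilt(low_sos, x)
    high = sosfilt(high_sos, x)
    return g_low * low + g_high * high

# Example: a short noise burst rendered as if 60° above vs. 60° below the listener
burst = np.random.default_rng(0).standard_normal(FS // 10)
above = seesaw_filter(burst, +60.0)
below = seesaw_filter(burst, -60.0)
```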