I investigate spatial aspects of soundfields and how the auditory system extracts meaningful information from them. I aim to understand, develop, and apply multichannel audio processing and machine learning techniques to better model the perception of, and auditory attention to, complex soundfields, and to reproduce them accurately via loudspeakers or headphones. These techniques have implications for a broad range of applications, e.g., sound recording, composition, performance, and interactive music; cognitive sensor arrays; brain-computer interfaces; hearing aids; and extended realities. My work aims to spark innovation in fields such as artificial intelligence, synthetic biology, robotics, and new media technologies.

I hold more than 75 granted patents and have authored over 40 peer-reviewed publications. My academic work has been cited more than 1,000 times (h-index: 20, i10-index: 39, last I checked on Google Scholar).

I am currently a Sr. Staff Researcher/Manager in the Multimedia R&D group for Acoustics and 3D Audio Signal Processing at Qualcomm Advanced Tech R&D. Previously, I was a postdoctoral fellow at the University of California, Berkeley, affiliated with the International Computer Science Institute (ICSI), the Center for New Music and Audio Technologies (CNMAT), and the Parallel Computing Laboratory (ParLab).

Formative Years

I have always worked in an interdisciplinary scientific and creative context, e.g., with composers, ensembles, sound installation artists, neuroscientists, and computer scientists.

I trained in classical guitar and as an Electrical and Audio Engineer (Dipl.-Ing. Elektrotechnik-Toningenieur), with a focus on acoustics, digital signal processing, and sound recording, at Graz University of Technology and the University of Music and Performing Arts Graz, Austria.

I completed my BSc and MSc in Electrical Engineering and Audio Engineering under the thesis supervision of Profs. Alois Sontacchi and Robert Höldrich. This interdisciplinary program is known for its competitive entrance exam and its uniquely comprehensive curriculum, which requires coursework in physics, circuit engineering, musicianship, music theory, and sound design. During this time I worked for Joanneum Research, Opera Graz, and AKG's automotive R&D group.

To more deeply explore questions of spatial perception of music in the context of music technology, for my Ph.D. I studied with Prof. Stephen McAdams, Canada Research Chair in Music Perception and Cognition at the Schulich School of Music, McGill University, and former director of CIRMMT (Centre for Interdisciplinary Research in Music Media and Technology).

My Ph.D. dissertation, Developing sound spatialization tools for musical applications with emphasis on sweet spot and off-center perception, was jointly supervised by Prof. McAdams and by Prof. Jonas Braasch, Associate Professor in the School of Architecture and Director of the Center for Cognition, Communication, and Culture at Rensselaer Polytechnic Institute, Troy, New York.

At McGill University, I also gained course development and teaching experience at graduate and undergraduate levels in music technology and architecture.

For my postdoctoral research, I continued investigating multichannel audio processing techniques and applications in coordination with the late, deeply missed, and never forgotten Prof. David Wessel at CNMAT, ICSI, and the ParLab at the University of California, Berkeley, with funding support from the DAAD (German Academic Exchange Service). I also collaborated with a wonderful group of scientists at the Telluride Neuromorphic Cognition Engineering Workshop.

My postdoctoral research project, Content Analysis of spatial sound scenes: separation, classification, and tracking of sounding objects and acoustical properties in 3D using a 144-channel spherical microphone array, yielded several journal publications and my first granted patent.

At UC Berkeley, I also guest-lectured and organized the CNMAT Spatial Audio Lecture Series, inviting its guest speakers.

Qualcomm Years

Since 2013, I have been researching, developing, and implementing spatial audio techniques for Qualcomm Technologies in San Diego. My corporate research work was enabled through a competitive O-1 visa application with the support of leaders in their fields, among them Profs. Catherine Guastavino (Canada), Jens Blauert (Germany), Stephen McAdams (Canada), Ville Pulkki (Finland), David Wessel (USA), Nelson Morgan (USA), and John Chowning (USA).

Dr. Deep Sen, Senior Director of the 3D Audio and Acoustics R&D Group, hired me to develop novel 3D audio analysis and audio coding algorithms. The major result of my work is a highly efficient audio compression algorithm that was adopted into the ISO/IEC MPEG-H 3D Audio standard. This algorithm put Qualcomm on the international map of spatial audio research and development, a first in the company’s 30-year history. Korean broadcasters have already deployed this technology, and its roll-out to other regions is ongoing.

My work involves frequent travel to international audio technology standards meetings (ITU, MPEG, 3GPP) and to academic and industry talks and workshops. I am Co-Chair of the Technical Committee for Spatial Audio at the Audio Engineering Society (AES) and Rapporteur for Virtual Reality and Other Advanced Immersive Audiovisual Systems for Broadcast at the International Telecommunication Union (ITU).

I also serve as a scientific reviewer for grant proposals, conferences, and journals. As a technical expert, I am an appointed member of Qualcomm’s internal patent application review board.

I continue to collaborate regularly with a wide variety of creative people in the broadcast and cinema industries, including award-winning recording and mixing engineers.

Please see my CV, Projects, Publications, and Teaching for details.

You can contact me here.