New Paper on SpatDIF

The latest issue of the Computer Music Journal (MIT Press) includes an article on the Spatial Sound Description Interchange Format (SpatDIF) by Trond Lossius, Jan Schacher, and myself, entitled “The Spatial Sound Description Interchange Format: Principles, Specification, and Examples”.


SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering, and reproduction of spatial sound. A general overview presents the principles informing the specification, as well as the structure and the terminology of the SpatDIF syntax. Two use cases exemplify SpatDIF’s potential for pre-composed pieces as well as interactive installations, and several prototype implementations that have been developed show its real-life utility.
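To give a flavor of what such a scene description looks like, here is an illustrative sketch (not taken from the article) of composing SpatDIF-style OSC address/value pairs for a minimal scene with one moving source. The exact descriptor names are assumptions loosely based on the SpatDIF core namespace; consult the specification at www.SpatDIF.org for the authoritative syntax.

```python
# Hypothetical sketch: building OSC-style SpatDIF messages as
# (address, values) pairs. Descriptor names are illustrative only.

def spatdif_message(entity, descriptor, *values):
    """Build an OSC-style SpatDIF message as an (address, values) tuple."""
    address = f"/spatdif/{entity}/{descriptor}"
    return address, values

# A minimal scene: declare one source and update its position over time.
scene = [
    spatdif_message("source/violin", "type", "point"),
    spatdif_message("source/violin", "position", 1.0, 2.0, 0.0),
    spatdif_message("source/violin", "position", 1.0, 1.5, 0.0),
]

for address, values in scene:
    print(address, *values)
```

The point of the core/extension split described above is that a renderer only needs to understand the small set of core descriptors (such as a source position) to play back a scene, while extensions can add richer authoring or rendering information without breaking that baseline.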

New Year, New City, New Position

At the beginning of 2013, two important things changed for me: I left Berkeley and academia and moved to San Diego, California, to join Qualcomm R&D.

Here, I will continue doing applied research in acoustics, signal processing, and spatial audio technologies. Besides this, I am hoping to improve my surfing skills and to learn Spanish.

Technology Trends in Audio Engineering – Good for Spatial Audio

I just found the time to go through the Technology Trends in Audio Engineering essay written by the leaders of 17 AES technical committees and released half a year ago.

While reading this 18-page document, I was pleasantly surprised to see how many different technical committees take an interest in spatial-audio-related issues.

The 10 groups that mention spatial audio are:

  • Games
  • Audio Recording and Mastering Systems
  • Automotive Audio
  • Coding of Audio Signals
  • High Resolution Audio
  • Signal Processing for Audio
  • Spatial Audio (obviously)
  • Audio for Telecommunications
  • Transmission and Broadcasting
  • Microphones and Applications

The 7 groups that do not seem to think much about spatial audio are:

  • Semantic Audio Analysis
  • Network Audio Systems
  • Human Factors in Audio Systems
  • Hearing and Hearing Loss Prevention
  • Fiber Optics for Audio
  • Audio Forensics
  • Archiving, Restoration, and Digital Libraries

Best Paper Award at SMC 2012

Today Trond Lossius, Jan C. Schacher, and I received the Best Paper Award for “SpatDIF: Principles, Specification, and Examples” at the 9th Sound and Music Computing Conference.


This paper presents the current state of our long-term effort in creating a community-driven interchange format for spatial audio scenes. More on www.SpatDIF.org.

Here is the reference:

@inproceedings{SpatDIF-SMC12,
 Address = {Copenhagen, DK},
 Author = {Nils Peters and Trond Lossius and Jan C. Schacher},
 Booktitle = {Proc. of the 9th Sound and Music Computing Conference},
 Title = {{SpatDIF}: Principles, Specification, and Examples},
 Year = {2012}}

We have now been invited to submit a revised and expanded version for publication in the Computer Music Journal (MIT Press).


I am similarly excited that the paper “An Automated Testing Suite for Computer Music Environments” I wrote together with Trond Lossius and Timothy Place was also nominated for Best Paper.

New Papers

I am looking forward to SMC2012 in Copenhagen, Denmark, the 133rd AES Convention in San Francisco, USA, and ACM Multimedia 2012 in Nara, Japan:

  • Peters N., Schacher J., Lossius T.: SpatDIF: Principles, Specification, and Examples, to appear in Proc. of the 9th Sound and Music Computing Conference (SMC), Copenhagen, Denmark, 2012.
  • Peters N., Lossius T., Place T.: An Automated Testing Suite for Computer Music Environments, to appear in Proc. of the 9th Sound and Music Computing Conference (SMC), Copenhagen, Denmark, 2012.
  • Peters N., Choi J., Lei H.: Matching artificial reverb settings to unknown room recordings: a recommendation system for reverb plugins, to appear in Proc. of the 133rd AES Convention, San Francisco, 2012.
  • Peters N., Lei H., Friedland G.: Name That Room: Room identification using acoustic features in a recording, to appear in Proc. of ACM Multimedia 2012, Nara, Japan, 2012.

New Book on Sound-Field Reproduction

Between The Ahnert and The Blauert, there is a new book in my library: The Ahrens:


From the description:

This book treats the topic of sound field synthesis with a focus on serving human listeners, though the approach can also be exploited in other areas such as underwater acoustics or ultrasonics. The author derives a fundamental formulation based on standard integral equations, and the single-layer potential approach is identified as a useful tool for deriving a general solution. He also proposes extensions to the single-layer potential approach which allow for a derivation of solutions for non-enclosing distributions of secondary sources, such as circular, planar, and linear ones. Based on this formulation, it is shown that the two established analytic approaches of Wave Field Synthesis and Near-field Compensated Higher Order Ambisonics constitute specific solutions to the general problem, covered by the single-layer potential solution and its extensions. The consequences of spatial discretization are analyzed in detail for all elementary geometries of secondary source distributions, and applications such as the synthesis of the sound field of moving virtual sound sources, focused virtual sound sources, and virtual sound sources with complex radiation properties are discussed.
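For readers unfamiliar with the single-layer potential approach mentioned in the description, the basic idea can be sketched as follows (a standard textbook formulation, not quoted from the book). The synthesized sound field is written as an integral over a distribution of secondary sources on a boundary:

```latex
P(\mathbf{x}, \omega) = \oint_{\partial\Omega} D(\mathbf{x}_0, \omega)\,
    G(\mathbf{x} - \mathbf{x}_0, \omega)\, \mathrm{d}A(\mathbf{x}_0)
```

Here $G$ is the sound field emitted by a secondary source (e.g., a free-field Green's function), $D$ is the driving function to be determined, and $\partial\Omega$ is the secondary source distribution. The synthesis problem is to solve this equation for $D$ given a desired field $P$; Wave Field Synthesis and Near-field Compensated Higher Order Ambisonics can then be understood as specific solutions for particular geometries.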

Another gem is the accompanying website, where Jens provides the Matlab source code for all figures used in this book – bravo, Jens.