Wrap Around Spatial Platform
WasP is an interactive performance environment for multichannel spatial soundscape design and curation. I developed WasP at RMIT University’s SIAL Sound Studios as part of a Design Research Institute funded pilot. The pilot produced a report that highlighted the technical challenges facing curators and content creators in the area of public soundscape performance and presentation, and outlined a number of strategies designed to support these activities. The research project ended in 2009; however, the software programming for WasP has continued as an ongoing personal project to improve and enhance the experience of creating a soundscape work, and the application has evolved considerably over the years. Creating WasP presented many design challenges, and solving them would not have been possible without the concepts and ideas first established in 2008, and the experience and knowledge gained through the design process. Details about the project can be found here.
In 2012, Jordan Lacey, a PhD candidate at RMIT University, used WasP for a public laneways project. In preparation for this project, a standalone WasP Player was developed, the parameter database was translated into libraries, several GUI improvements were made, and automation scripts were added for recalling and playing back multiple projects and project mappings. A compact and stable WasP.v3 was released prior to the laneways project commencing. The project ran, without problems, for close to 3 months. More details about the project can be found here.
The very first WasP prototype (WasP.v1) was created in 2008 using MaxMSP 4.6. From my experience working with MaxMSP prior to the release of Max5, the challenge is finding a balance between efficient programming and effective GUI design. In the case of WasP.v1, efficiency was given priority, which meant foregoing visual feedback in order to save CPU cycles and, by doing so, avoid any inherent latency. A problem with this approach is that without visual feedback the user has no way of knowing how to go about fine-tuning their work.
With the release of Max5 in late 2009, I revisited the programming for WasP, completing WasP.v2 in 2010. Whilst transferring much of the code across to the new platform, I explored ideas that led to improvements in the way the application managed complex data sets, as well as in the application’s look and feel. Part of this process was finding ways of revealing real-time parameter data, enabling users to fine-tune their work. A framework for storing and accessing generated data was implemented using an SQL database. This presented opportunities not only for searching and filtering types of data, but also for creating unique data sets, or groups, based on user-defined tags or known parameters.
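The tag-based grouping described above can be sketched in miniature. WasP’s actual schema is not documented, so the table names, column names, and parameter names below are purely illustrative assumptions; the sketch only shows the general pattern of storing generated parameter data and recalling it by user-defined tag.

```python
import sqlite3

# Illustrative sketch only: WasP's real schema is undocumented, so these
# table/column/parameter names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parameters (
    id INTEGER PRIMARY KEY,
    name TEXT,         -- e.g. a spatial panning parameter
    value REAL,
    captured_at REAL   -- time the value was captured
);
CREATE TABLE tags (
    parameter_id INTEGER REFERENCES parameters(id),
    tag TEXT           -- user-defined label forming a group
);
""")

# Store a captured parameter value and tag it into a group.
cur = conn.execute(
    "INSERT INTO parameters (name, value, captured_at) VALUES (?, ?, ?)",
    ("pan_azimuth", 42.5, 0.0),
)
conn.execute("INSERT INTO tags VALUES (?, ?)", (cur.lastrowid, "scene_a"))

# Filter: recall every parameter belonging to a tagged group.
rows = conn.execute(
    "SELECT p.name, p.value FROM parameters p "
    "JOIN tags t ON t.parameter_id = p.id WHERE t.tag = ?",
    ("scene_a",),
).fetchall()
print(rows)  # [('pan_azimuth', 42.5)]
```

Keeping tags in a separate table means one stored parameter can belong to several user-defined groups at once, which is what makes the searching and filtering flexible.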
The final change to WasP.v2 implements a client/server model. This split presents opportunities for expanding or collapsing the scale of WasP based on the complexity of the listening environment. In complex listening environments, where there may be a large number of speakers installed across more than one type of space, the WasP client can be employed to generate only control and event data, and then broadcast this over the network to the server. The server makes the connection between data and audio, and broadcasts spatial audio over a multichannel system. The client offers users a contained system that addresses the concerns of effectively managing and curating a variety of spatial works in public spaces, and provides a strategy for automating and controlling such works over a multichannel system. The benefits of this application include the opportunity to dynamically control and mix within and between individual works, implement data mapping and automation sequences, define matrixing and spatial panning requirements for playback, as well as integrate these techniques within a small studio or larger performance environment.
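The client/server split can be illustrated with a minimal sketch. WasP’s actual transport and message format are not specified here (Max/MSP environments would typically use OSC), so this JSON-over-UDP framing, the port number, and the field names are all assumptions made only to show the pattern: the client emits control/event data over the network, and the server decodes it and maps it onto the multichannel audio system.

```python
import json
import socket

# Hypothetical sketch of the client/server split. WasP's real protocol is
# not documented; JSON over UDP stands in for whatever the application uses.
SERVER_ADDR = ("127.0.0.1", 9000)  # illustrative port

# "Server": listens for control data and maps it to audio routing.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(SERVER_ADDR)

# "Client": generates a control event, e.g. a panning move for a source.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
event = {"source": 1, "param": "pan_azimuth", "value": 120.0}
client.sendto(json.dumps(event).encode(), SERVER_ADDR)

# Server side: decode the event; a real server would drive the speaker
# matrix here, whereas this sketch just reports the mapping.
data, _ = server.recvfrom(4096)
msg = json.loads(data)
print(f"route source {msg['source']}: {msg['param']} -> {msg['value']}")

client.close()
server.close()
```

Because client and server are only coupled by these small messages, the client can run as a self-contained control surface while the server scales to whatever speaker installation a given space requires.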