by Lev Bratishenko
Acoustic study of the Greater London Authority (GLA) building, designed by Foster + Partners. (Animation: Arup Acoustics)
The technology to hear spaces that don’t exist has been around for decades. The hardware and software that make it possible have developed alongside leaps in processing power. In recent years virtual visuals in the form of 3D rendering have moved from laboratories to studios, mainframes to desktops, and laptops to smartphones. Virtual listening has lagged behind, but it is coming soon to the personal device. What are we going to do with it when it gets there?
It is a strange fact that most architects don’t have to study acoustics, but there are other people involved in buildings who probably shouldn’t have to. Manual methods and early software for predicting how spaces sound present their results in graphs and charts that risk limiting the discussion of acoustics to expert manoeuvres of ego, performed in jargon. Listening is more democratic, even if people listen differently. That’s why musicians take ear training.
Auralisation is the process of making data audible, but before it became practical to do for every project, visual tools were used. The global engineering firm Arup has been in the virtual acoustics business since 1990 and runs an acoustic consultancy with sound labs in all of its major offices. For Norman Foster’s Greater London Authority (GLA) building, completed in 2002, Arup built an in-house plugin for 3D Studio Max to iterate the design.
Sketch of the GLA building. (Image: Foster + Partners)
Arup’s Global Acoustics Lead, Raj Patel, describes the original chamber design as “a big glass box that curves all the way to the top”, and analysis indicated that it would have a spectacular echo. Echo is fine for some music, but not for speech. The simulation was first run in 2D for the sake of speed, and once an approach of offset balconies and sound-absorbing treatment looked like it would work, the design was modelled in 3D overnight. “It took every single computer we had in the office, daisy-chained together”, remembers Patel.
Today the auralisation of performing arts spaces is the bread and butter of Arup’s SoundLab. Algorithms model sound waves in 3D, predicting the intensity and direction of reflections off surfaces with different acoustic characteristics.
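The core idea can be sketched in a few lines of code. The toy example below is not Arup’s software; the geometry and absorption values are made up for illustration. It traces a single first-order reflection: the listener hears a delayed, weakened copy of the source, its direction given by specular reflection and its strength reduced by the wall’s absorption coefficient and by distance.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # metres per second at room temperature


def reflect(direction, normal):
    """Specular reflection of a unit ray direction off a plane with unit normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal


def first_order_reflection(source, listener, wall_point, absorption):
    """Relative energy and arrival delay of one reflection via a point on a wall.

    absorption is the wall's absorption coefficient:
    0 = perfectly reflective, 1 = perfectly absorbing.
    """
    path = (np.linalg.norm(wall_point - source)
            + np.linalg.norm(listener - wall_point))
    energy = (1.0 - absorption) / path ** 2   # spherical spreading loss
    delay = path / SPEED_OF_SOUND             # seconds after the sound is emitted
    return energy, delay


# Made-up geometry: source and listener 8 m apart, a reflective wall 2 m to one side.
source = np.array([0.0, 2.0, 1.5])
listener = np.array([8.0, 2.0, 1.5])
wall_point = np.array([4.0, 0.0, 1.5])     # specular bounce point on the wall
wall_normal = np.array([0.0, 1.0, 0.0])

incoming = (wall_point - source) / np.linalg.norm(wall_point - source)
print(reflect(incoming, wall_normal))                                          # direction of the bounce
print(first_order_reflection(source, listener, wall_point, absorption=0.03))   # glass
print(first_order_reflection(source, listener, wall_point, absorption=0.70))   # absorbent panel
```

Real room-acoustics engines repeat this bookkeeping for thousands of rays and many orders of reflection, with frequency-dependent absorption, and sum the arrivals into the room’s impulse response.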
For the most complex curved spaces, physical models with miniature speakers and receivers at ultrasonic frequencies are still used. The result is what Patel calls an “acoustic signature” that represents the response of the room to a specific signal.
Physically, the lab is a sphere of speakers housed in a sound-isolated room, softly humming with computers. You sit in the middle, exposed on a high chair, and hope nothing horrible happens. What you hear is the live combination of the acoustic signature of a room and a sound, like somebody playing the oboe or talking (recorded without echoes in an anechoic chamber). The power of the system is that it can switch “rooms” in milliseconds, a kind of teleportation by aural prosthesis.
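That live combination is, at its core, a convolution: the dry anechoic recording is filtered through the room’s impulse response, its acoustic signature. A minimal sketch of the offline version, assuming mono audio and using placeholder file names rather than anything from SoundLab:

```python
import numpy as np
import soundfile as sf                      # pip install soundfile
from scipy.signal import fftconvolve

# Placeholder file names; both files assumed to be mono.
dry, sample_rate = sf.read("oboe_anechoic.wav")             # recording with no room in it
impulse_response, _ = sf.read("hall_impulse_response.wav")  # the room's acoustic signature

# Convolution stamps the room's pattern of reflections and reverberation
# onto every sample of the dry recording.
wet = fftconvolve(dry, impulse_response)
wet /= np.max(np.abs(wet))                  # normalise to avoid clipping

sf.write("oboe_in_hall.wav", wet, sample_rate)
```

The lab does the same thing in real time, over dozens of loudspeaker channels, which is what lets it swap one hall’s signature for another in milliseconds.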
City Hall, London by Foster + Partners. (Photo: Dennis Gilbert)
Hybrid room acoustics simulation software RAVEN (Room Acoustics for Virtual ENvironments) under development. (Photo: Sönke Pelzer / Institut für Technische Akustik an der RWTH Aachen)
SoundLab is a prosthetic system more like the internet than an implant, and it lets a group of people hear impossible things. This ability has revolutionised some design practices already. For example, new performing arts spaces have a benchmarking stage where the architect, engineer, and client fly to a few important halls to compare them. Vienna’s Musikverein and Amsterdam’s Concertgebouw are often on the list. But how do you compare two rooms when you’ve travelled for a day between them, or listened to different music in each one, or sat in different seats?
SoundLab toggles between simulations of the same piece performed in dozens of halls in real time. Doing this makes you doubt what anybody means by a “great” room, and forces project teams to find shared vocabulary for acoustic terms like warm and dry, presence and intimacy, while young designers can quickly develop their acoustic intuition. “Instead of having to take them to a hundred buildings, which might take five years, you can just spend a bit of time listening to this room”, explains Patel, “and tell me where it differs from what you expect”.
SoundLab is not limited to concert halls. Arup helped Michael Arad and Handel Architects with the World Trade Center memorial, focusing on two problems: designing the soundscape as a visitor moves from Fulton Street down to the memorial and into the museum, and making a cost-effective proposal for isolating the museum’s auditorium from subway noise. The disappearance of the city as noise is central to the power of the WTC memorial experience; you enter the plaza (after a surreal airport-security check) and feel transported. Arup built a 3D acoustic model to make sure this would happen. For the auditorium, a glance at the graphs might suggest that sonic isolation of both the room and building is required, but a perceptual investigation using auralisation can ask: “When do you know a train is a train?” What is audible is not the same as what is intrusive.
Lev Bratishenko is a writer whose work has appeared in Abitare, Canadian Architect, Cabinet, Gizmodo, Icon, Maclean’s, Mark, Triple Canopy, and other publications. He lives in Montreal, where he covers classical music for the Montreal Gazette. In 2010 he curated the exhibition The Object is not Online at the Canadian Centre for Architecture.
yesyesyes.ca
World Trade Center Memorial by Michael Arad of Handel Architects, who worked with landscape architects Peter Walker and Partners, and Arup. Snøhetta’s Memorial pavilion is behind. (Photo: Snøhetta)
Wave field synthesis is the newest technology in virtual acoustics. The system reproduces accurate 3D sound over a large area, so you can move around in a space and it still sounds real. Soon, you could put yourself in a hamster ball with an Oculus Rift and “walk through” a design, listening to your footsteps change as you toggle between marble and wood. After this is accomplished, the next technical challenge will be a full sensory simulation with lighting that warms your face, and systems to replicate the instant feeling of opening a window.
High-end fantasy, maybe, but you can already run virtual acoustics software on a laptop. A team from the Institute of Technical Acoustics at RWTH Aachen University has produced a package for the SketchUp platform that claims near real-time auralisation capabilities: draw it and hear it. Will there be benefits beyond acoustics once this becomes part of the standard design toolkit? Patel thinks so. “You become better at designing spaces in general, because you are thinking about how people are sensing and perceiving space”.