SADISS stands for socially aggregated, digitally integrated sound system.
It is a web-based application that bundles smartphones into monumental yet intricate sound systems or choirs/ensembles. It is developed at the Institute for Composition, Conducting & Computer Music of the Anton Bruckner University (Linz, Austria). The project's homepage can be found at sadiss.net
Following a client-server model, the SADISS server distributes information about the sounds or text to be synthesized to the currently registered smartphones (clients), which in turn synthesize sound or speech accordingly. Clients connect to the server via the Internet; they do not have to be on the same Wi-Fi network, subnet, etc.
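To make the client-server model concrete, here is a minimal sketch of the kind of message a client might receive and act on. The message shape, field names, and URL-free dispatch shown here are illustrative assumptions, not the actual SADISS protocol:

```typescript
// Hypothetical server-to-client message: either spectral data to synthesize
// as sound, or text to render via text-to-speech. Field names are assumptions.
type ServerMessage =
  | { type: "sound"; partials: { pitch: number; amplitude: number }[] }
  | { type: "speech"; text: string; lang: string };

// Convert a MIDI note number to a frequency in Hz (A4 = MIDI 69 = 440 Hz).
function midiToFrequency(pitch: number): number {
  return 440 * Math.pow(2, (pitch - 69) / 12);
}

// Parse an incoming message and describe what the client would synthesize.
function handleMessage(raw: string): string {
  const msg = JSON.parse(raw) as ServerMessage;
  if (msg.type === "sound") {
    return msg.partials
      .map((p) => `${midiToFrequency(p.pitch).toFixed(1)} Hz`)
      .join(", ");
  }
  return `speak [${msg.lang}]: ${msg.text}`;
}
```

In a real client the "sound" branch would drive an audio engine and the "speech" branch a text-to-speech voice; the sketch only shows the dispatch logic.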
Structure of the SADISS architecture:
There are two basic ways of working with SADISS: one we refer to as sound system mode, the other as choir mode.
Example: 100 people show up in the audience for a concert/performance. Using the SADISS client app on their phones, they scan ONE single QR code displayed at the entrance to the venue to register their phones with the SADISS server, which then uses the 100 smartphones as individual, synchronised synthesizers, creating a massively multichannel, immersive sea of sound.
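Using the phones as individual, synchronised synthesizers implies that the server splits the overall sound among the registered clients. One plausible strategy is to deal out spectral partials round-robin; the following sketch is an assumption for illustration, not SADISS' actual distribution scheme:

```typescript
// Hypothetical: distribute a list of partials across N registered clients
// round-robin, so every phone synthesizes a different slice of the spectrum.
function distributePartials<T>(partials: T[], clientCount: number): T[][] {
  const perClient: T[][] = Array.from({ length: clientCount }, () => []);
  partials.forEach((partial, i) => perClient[i % clientCount].push(partial));
  return perClient;
}
```

With 100 clients and, say, 300 partials, each phone would receive 3 partials, and together the audience renders the full spectrum.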
Using text-to-speech synthesis, SADISS can also synthesize one (potentially multilingual) stream of speech.
Structure of SADISS in sound system mode:
Example: 40 people show up to a performance. Using the SADISS app on their phones, they register via 4 different QR codes. With 10 audience members having scanned each of the 4 QR codes, we have just created four sub-groups (or voices) of 10 members each.
Using headphones, the audience members, now turned (co-)performers, listen to pitches to sing/hum and, via text-to-speech synthesis, to spoken instructions to follow. This potentially enables the group to sing 4-part harmonies and/or perform different actions while doing so.
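The sub-grouping above could work by having each QR code encode which voice it belongs to, with the server grouping clients by that voice on registration. A minimal sketch, assuming a registration URL with a `voice` query parameter (the URL format and both functions are illustrative assumptions, not the real SADISS scheme):

```typescript
// Hypothetical: extract the voice number from a scanned QR code's content,
// assumed here to be a URL such as "https://sadiss.example/register?voice=3".
function voiceFromQr(qrContent: string): number | null {
  try {
    const url = new URL(qrContent);
    const voice = url.searchParams.get("voice");
    return voice === null ? null : Number(voice);
  } catch {
    return null; // QR content was not a valid URL
  }
}

// Hypothetical: group registered clients by the voice they scanned.
function groupByVoice(
  registrations: { clientId: string; voice: number }[]
): Map<number, string[]> {
  const groups = new Map<number, string[]>();
  for (const r of registrations) {
    const members = groups.get(r.voice) ?? [];
    members.push(r.clientId);
    groups.set(r.voice, members);
  }
  return groups;
}
```

In the 40-person example, four QR codes with `voice=1` through `voice=4` would yield four groups of ten clients each, and the server could then send each group its own pitch and instructions.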
In choir mode, an individual QR code is created for each SADISS voice: