Two huge screens provide the performer with his backdrop, one linked up to his laptop, the other to a video camera. The video camera is also linked to the laptop and points at its screen. The laptop’s screen shows the interactive software EyeCon, whose interface relays the pictures captured by the video camera. The
assemblage gives shape to a new video music ‘tool’ that places the
performer, and his audience, within a very different creative and
perceptual context.
In this way, MultiReverse inverts the relationship between sound and image, as it is the moving pictures that generate sound, and not vice versa. Images are no longer a “backdrop” to music. Sound is no longer a “soundtrack” to images. Instead, both combine to form a true, unique audiovisual event that is different every time, as the environmental and emotional context changes. Audiences
normally see performers bent over their keyboards or mixers, without
really knowing what they are up to – the performance might actually be
a recording, or set up in such a way that the live action counts for little or
nothing. By screening the software interface and the live video images,
audiences are given an insight into the kind of real-time creativity
that is going on, augmenting the spectacular nature of the show.
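To give a rough sense of how a live camera feed can drive sound in this way, the hypothetical Python sketch below measures frame-to-frame motion and maps it to an oscillator frequency. It is only an illustration of the general idea, not EyeCon's actual implementation; the camera index, the frequency range and the mapping are all assumptions.

```python
import cv2
import numpy as np

# Hypothetical sketch: frame-differencing motion detection driving a
# sound parameter, in the spirit of a camera-based audiovisual setup.
cap = cv2.VideoCapture(0)                    # live camera (index assumed)
ok, prev = cap.read()
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev)         # pixel-wise change between frames
    amount = float(np.mean(motion)) / 255.0  # overall amount of movement, 0.0 to 1.0
    prev = gray

    # Assumed mapping: more movement on screen -> higher pitch.
    frequency = 110.0 + amount * 880.0
    print(f"motion={amount:.3f} -> oscillator frequency {frequency:.1f} Hz")

    cv2.imshow("motion", motion)             # show the difference image, as on a backdrop screen
    if cv2.waitKey(1) & 0xFF == 27:          # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

In a performance the printed frequency would instead feed a synthesiser or MIDI device, but the principle is the same: the moving picture, not a keyboard, is what shapes the sound.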
The
objects chosen to make the music – broken circuits, iron chains,
flotsam, burnt plastic, old valves – each have their own shape and
symbolic value, creating the theme to be worked on and developed in
each track.