Some Remarks on
Musical Instrument Design at STEIM
At STEIM most research is applied towards specific projects of resident artists and composers. A large portion of this work is for live performance with digital electronics. Hardware and software designers are guided to take idiosyncrasy rather than generality as the prime guiding principle but have managed to create recyclable musical tools. An empirical method is promoted for both artist and technologist in order to recover the physicality of music lost in adapting to the abstractions of generic technology. The emphasis is on instrument design as a response to the questions posed by each artist's heterogeneous collection of ideas and tools. (Appeared originally in Contemporary Music Review vol 6 pt 1; Harwood Academic Publishers.)
Contrary to the beliefs of some there is no crisis of formal thinking in contemporary music. We live in a structural paradise where the formalisms of a hundred different disciplines are waiting only for the novel application. Certainly in computer music the problem is not lack of form; it is the immense mediating distance which confronts each composer when encountering the computer. Despite twenty years of programming for music, the territory gained seems quite small compared with the empire of musical aspiration. Many composers long to regain some sort of musical spontaneity. This was not always so. A great deal of effort in this century has been spent on the invention of distancing techniques for the composition of music. From the Serialists to John Cage to the experimentalists of the post-war generation, the project has been to deny the habitual or the hackneyed by developing techniques to restrain or condition the immediate process of choice. Whether the problem was seen to be the limitations of traditional technique or the excesses of romantic self expression, the solutions were either to adopt a formal method to distance choice or to choose a medium for which there was no existing habit. With computer music you get both: the distance comes for free, but it is a distance which can only be viewed as problematical. The emphasis may in fact be shifting back towards a quest for immediacy in music. This is a realignment for both aesthetic and practical reasons. Aesthetically, the return of the composer/performer demands new ways to 'play' music. Practically, the use of computers in music presents a formidable problem for composers and musicians for which an empirical approach may be a solution.
The composers and artists who have found a natural community in a place like STEIM can be gathered into the broad category of live electronic art. It is not so much stylistic similarities which put them together [grouped as they are under such diverse classifications as new music, electronic music, performance art, jazz, free improvisation, interactive art, multimedia, audio art...] as it is a common interest in retaining the element of performance in their work. Paradoxically, despite the problems associated with technology, much of the attraction to live electronics is an attraction to their utility in a performance situation. When live electronics means computers this is an attraction to real time. The choice of real time can have its cost in the narrowing of possibilities. For some the choice of real time is simply not available within the current state of the art or budget. For other composers any limitation is disagreeable; the attraction of computers is partly a taste for infinite possibility, infinite refinement. But for many composers real time is simply a healthy constraint imposed on a nauseating infinity of possibilities. The lack of musical concreteness in the computer is compounded by the hypnotic prospect of extending computation indefinitely in time in order to achieve an arbitrary degree of complexity. The real time artist may be forced to compromise technically but always has the option to resolve an unsatisfactory computer part simply by playing a little more in the right place.
The impulse of many composers when they first use the computer is to take advantage of its precision and repeatability to develop languages to describe sounds and their aggregates as exactly as possible. However each action has ultimately to be encoded into a form that has no musical analog. The necessity of encoding every detail of one's ideas, and the shortness of human life, quickly stimulate the invention of alternative methods. In part this is accomplished by the evolution of higher level representations which leave more of the details to the computer. But music involves more than mere description; it is in the process of musical invention itself that many composers seek help from the computer.
Conveniently the computer is just as well suited to the invention of a process to generate music as it is to the concept of a singing manuscript. That is, using conventional programming tools, composers can create a model on the computer to give their ideas a more concrete form. This concrete model or simulation is more than just a higher level abstraction: it can be articulated, and these articulations can be translated automatically into the programming needs of the machine. Thus the narrow logical channels of communication with the computer are expanded.
Working without such concrete models can be compared to asking a mathematician to work in pure logic alone without the use of his rich notational systems or his visual imagination. Models can be based on traditional notions of music or they can take inspiration from the structures in other disciplines which seem to have musical potential. They could be linguistic models, implementing the composer's ideas as 'musical grammar', or they could be more physical models in which musical ideas are spatialized and forced to obey the laws of geometry or physics. In their application models can be articulations for control and elaboration, self structuring processes responding to musical sounds or mnemonic devices to guide the performer through a complex landscape of personal musical history. In fact any metaphor which captures the artist's ideas and which can be formalized to produce computer code will do. From Numerology to Pataphysics any model is as potentially useful as another. The trick is to put physical handles on phantom simulations.
An obsession with live electronics has its compensations in the methodology of computer art and music. The need for 'hands on' in performance forces the composer to confront the abstractness of the computer head on. Each link between performer and computer has to be invented before anything can be played. But these 'handles' are just as useful for the development or discovery of the piece as for the performance itself. In fact the physicality of the performance interface helps give definition to the modeling process itself. The physical relation to a model stimulates the imagination and enables the elaboration of the model using spatial and physical metaphors. The image with which the artist works to realize his or her idea is no longer a phantom; it can be touched, navigated and negotiated with. In some cases it may turn out that having physical 'handles' in the modeling process is of even more value than in performance. The narrow logical channels available to converse with the computer are greatly expanded through well designed instrumentation.
If one were to ask for the name of what's left of a trombone when you take away its ability to produce sound, you might suspect you were in for a round of language philosophy, but it is precisely that which is missing from the computer as an instrument. So what is the name for that aspect of an instrument which is not involved in sound production but rather in how it is touched or struck or blown? Interface is just too easy, too jargony a word, obscuring the sweaty, effortful relation of a musician to sound. Since there is no more musical concreteness to the computer than there is in a CD player, it is essential to think hard about the physicality of an instrument, how it should present itself to the performer. Since there is no physical given there is nothing to do but to invent one's own.
The difficulty of learning to play a new instrument, and its very singularity, discourage most musicians from even trying one. We are still heir to the idealism which puts music before the musician and his tools, before all but the idea of music itself. But we can see clearly how music grew and changed with the perfection of the physical means of the instruments and the invention of playing styles. For most musicians this sort of experimentation is seen to be of the historic and golden age sort, with no possibility or need to be resumed. The design of new instruments lies on the fringe: partly inspired, partly crankish eccentricity. So far the art of the interface between physical gesture and abstract function is respected only by aerospace and sports equipment designers.
At STEIM the emphasis is on what could be called 'instrumentation'. In music system design instrumentation extends from the mechanical design of sensors and controllers, through the electronics and software of interfaces and finally to the modeling of the higher level relations between performer and composition. The issues here are many and the need for computer instrumentation varies from the simple need for a controller for the articulation of a well defined musical parameter to analytic interfaces which hear musically and provide data to influence rather than to control.
The simplest instrument to talk about, though not necessarily to make, is the basic controller: a device having a one-to-one relationship between some physical movement and a parameter in the musical model. Though the type of control needed may be simple, the choice of sensor/controller is still critical. A knob, an array of switches, an accelerometer which responds only to rapid movements, a sonar detector to measure the distance from the performer to a fixed point on the stage, each is a potentially good choice in the right circumstances. There are always criteria by which one can narrow the choices of controllers and their configuration but the 'fit' to one's musical intentions or with a performer's style is only decidable through experimentation and probably only through experimentation by the ultimate user.
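Such a one-to-one controller is, in software terms, little more than a scaling of a raw sensor reading into a parameter's range. A minimal sketch in Python (the particular sensor range and target parameter here, sonar distance mapped to a filter cutoff, are hypothetical illustrations, not any specific STEIM design):

```python
def make_mapping(in_lo, in_hi, out_lo, out_hi):
    """Return a one-to-one map from a raw sensor range to a parameter range."""
    def mapping(raw):
        # Clamp to the sensor's usable range, then scale linearly.
        raw = max(in_lo, min(in_hi, raw))
        t = (raw - in_lo) / (in_hi - in_lo)
        return out_lo + t * (out_hi - out_lo)
    return mapping

# Hypothetical example: a sonar distance of 0..400 cm drives
# a filter cutoff of 200..4000 Hz.
cutoff_of = make_mapping(0.0, 400.0, 200.0, 4000.0)
```

Even in so small a sketch the design choices appear: the clamping, the direction and curvature of the scaling are all decisions that, as the text says, are only decidable through experimentation by the ultimate user.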
Most computer instruments in use are those provided by the commercial music industry. Their inadequacy has been obvious from the start, emphasizing rather than narrowing the separation of the musician from the sound. Too often controllers are selected to minimize the physical, selected because they are effortless. Effortlessness in fact is one of the cardinal virtues in the mythology of the computer. It is the spell of 'something for nothing' which brightly colors most people's computer expectations. Despite all experience to the contrary we continue to think of the computer as essentially a labor saving device. Though the principle of effortlessness may guide good word processor design, it may have no comparable utility in the design of a musical instrument. In designing a new instrument it might be just as interesting to make control as difficult as possible. Physical effort is a characteristic of the playing of all musical instruments. Though traditional instruments have been greatly refined over the centuries, the main motivation has been to increase ranges, accuracy and subtlety of sound and not to minimize the physical. Effort is closely related to expression in the playing of traditional instruments. It is the element of energy and desire, of attraction and repulsion in the movement of music. But effort is just as important in the formal construction of music as in its expression: effort maps complex territories onto the simple grid of pitch and harmony. And it is upon such territories that much of modern musical invention is founded.
Physicalizing musical models adds the dimension of effort and unforeseen possibilities for articulation. Most importantly it is through the physical that time is integrated with other musical components. That is: effort binds time to the measure of control.
The issue of effort points in two different directions, what could be called response and responsiveness. The chain, performer - sensors - digitizer - communication - recognition - interpretation - . . . - composition, can be broken and assembled in many ways, and the problems of response can be treated at more than one point. The parsing of this chain, what might be called systems design, is becoming a critical aspect of the making of electronic music compositions.
The responsiveness or physical feedback of the controller is just 'effort' seen from a designer's point of view. Most successes in this area have been made through the careful choice of controllers and materials to give the desired 'feel'. The choice of controller from conceptual considerations alone is inadequate. A simple joystick, while having a Cartesian appeal, usually proves to be inappropriate as a musical controller because it simply has no feel at all. A track ball might suffer from the same limitation when used by hand but when taken as a foot controller becomes ideal. Likewise an immovable joystick which measures force instead of location could recover the usefulness of the joystick configuration. The possibilities for responsiveness have only begun to be explored. Some interesting experimentation has also been done in the artificial generation of physical feedback, allowing for variable responsiveness in an instrument.
The response of an electronic instrument is taken to be the sort of data that an instrument can generate over the range of its use by a performer. I have so far stressed the physicality of an instrument as being a first step towards musical interface design, but the response of the instrument does not end there. The physical response of course has to be converted into data the composer's musical model can deal with. This minimal need to convert the physical data into a convenient form merges indistinguishably into the much more interesting possibilities for designing an artificial response for the instrument. That is, without modifying the physical characteristics of the controller, the data it provides can be reinterpreted to change its behavior and musical 'feel'. The choice of a controller may not be ideal; there may be theatrical or practical reasons for a less than ideal device being employed. But equally as likely the controller feels right but the musical idea is just not as simple as the response of the controller. This is a huge area with a multitude of motivations: from making up for the deficiencies of a sensor [gaps, irregularities, inversion, linearity, smoothing a rough response] to elaborating a simple gesture.
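Two of the simplest artificial responses named here, smoothing a rough sensor and suppressing unintentional jitter, can be sketched in a few lines of Python; the filter coefficient and dead-zone threshold are arbitrary assumptions for the example, not values from any actual instrument:

```python
def shape_response(raw_stream, alpha=0.2, dead_zone=0.02):
    """Reinterpret a controller's raw data without touching its physical
    design: smooth a jittery sensor, and ignore changes below a dead zone."""
    smoothed = None
    last_out = 0.0
    out = []
    for raw in raw_stream:
        # One-pole low-pass filter: smooths a rough response.
        smoothed = raw if smoothed is None else alpha * raw + (1 - alpha) * smoothed
        # Dead zone: suppress movements too small to be intentional.
        if abs(smoothed - last_out) >= dead_zone:
            last_out = smoothed
        out.append(last_out)
    return out
```

The point of such shaping is exactly the one made above: the controller's physical feel is left alone while its musical behavior is redesigned in software.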
Interpretation is not of course limited to any one level in the processing of the data stream. Gestures can be expanded or elaborated at any level of interpretation, from the sensor data, to 'note events', to the samples of sound. Since there is no inherent stylistic concreteness the composer is free to take any gesture and transform it by whatever means seem appropriate: from one kind to another [e.g. a simple touch triggering a complex melodic form, or a point becoming a curve]; from the habits of one instrument to another [finger patterning to the inner movement of synthetic harmonics]; one sense of time transformed to another [stiffness to gracefulness]; one tactile texture transformed to another [smooth to rough]; one level of abstraction to another [shape to symbol]. Sources for the transforms are equally arbitrary, from bop patterns to serialist theorems, and from algorithmic elaborations to transcriptions from a folk tradition. Once the first approximation of the instrument interface has been achieved the play of the compositional possibilities becomes audible.
A single controller can have multiple responses, each version treated for a radically different consumer. This is often the case in interactive media art, where one signal is expected to provide information for more than one process. In LINA (1985), a piece I did in collaboration with the artist Ray Edgar, we decoded the output not of a physical controller but of a cellular automaton. The behavior of the automaton, a purely mathematical construct simulated on a Macintosh computer, was interpreted in real time for control of both a musical process and a graphical one using a video synthesizer [Fairlight CVI]. The first interpretations were made with the idea of playing through a typical midi synthesizer of the time [two Yamaha TX7's], and the midi code sent to the synthesizer was also sent to a second Macintosh for reinterpretation as control for the video. This meant that the video process could be related to both the automaton and the musical process. The automaton produces geometrical relations which can be measured and mapped to musical parameters. Much of the work of the piece went into observation of the behavior of the automaton as its mathematical 'rules' were varied, and experimentation with various possible transformations. The circle was completed when we pointed a camera at the moving image of the automaton, using it as a kind of textured brush which could be pulled through the synthetic video, thus affecting the image directly as well as through the interpretations of its inner relations.
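As a hedged illustration of the principle, not a reconstruction of the actual LINA software, a one-dimensional cellular automaton can be stepped, measured for simple 'geometrical relations', and those measures mapped to musical parameters. The rule number, the chosen measures and the MIDI mapping below are all invented for the example:

```python
def ca_step(cells, rule=110):
    """One generation of a one-dimensional cellular automaton (Wolfram rule),
    with wrap-around neighborhoods."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def measure(cells):
    """Measure simple 'geometrical relations': density and edge count."""
    density = sum(cells) / len(cells)
    edges = sum(cells[i] != cells[i - 1] for i in range(len(cells)))
    return density, edges

def to_midi(cells, base_note=48):
    """Hypothetical mapping: edges pick a pitch, density drives velocity."""
    density, edges = measure(cells)
    return base_note + edges % 24, int(1 + density * 126)
```

The interesting work, as the text notes, lies not in such a mapping itself but in observing the automaton's behavior as its rules vary and experimenting with which measures feel musical.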
Conventional instruments can be extended by interfacing them to computer controlled synthesizers, but perhaps more interestingly the computer facilitates the creation of entirely new ways to play music. Fingerings can be remapped, the sound processed and augmented dynamically. Several instruments made recently at STEIM have been adaptations of the traditional. A contrabass recorder was given a MIDI interface for the performer Michael Barker. We have put MIDI on a bow for the violinist Jon Rose. We have helped Nicolas Collins convert an antique concertina into a sort of digital trompe l'oeil that has kept only the 'look and feel' of the original. Each composer was interested in the expansion of their instrument through the addition of synthetic or sampled voices, but also in using the computer for the elaboration of the control gestures themselves.
Among the visitors to STEIM are a good many composers who are performers but not programmers, who wish to add digital effects or voices or elaborations of control to an existing performance setup, supplementing rather than replacing a well developed set of tools. A good example is the case of Jon Rose, a violinist and performance artist with a lot of experience with acoustical instruments modified mechanically and via amplification. The image he wanted to realize with our help was of a violin player who, lifting the bow from the strings, continued to play with the bow alone. We fitted the bow with an ultrasonic transducer which could measure the distance from the player to a fixed receiver on the stage. This required the help of a small microcomputer worn on the body of the performer which translated the measurements into MIDI for transmission. Taking a basic response from an existing STEIM application, the next step was Rose's choice of a midi controllable sampler and a few months' experimentation to see how the 'bow' could be used to play his music. As he became familiar with the physical constraints of the system he began to ask for transformations in the response of the controller. The measured line stretched between the bow and the fixed receiver could be taken as a knob directly connected to a parameter in the sound, or to a process performing manipulations indirectly. A change in distance could become the parameter instead of the absolute distance. The line could be segmented such that threshold crossings triggered unique events or changed the rate of events being transmitted. Of course he wanted all of the above, and three sensors instead of one, and the work goes on.
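The three reinterpretations of the bow's distance stream described here, the absolute value, its change, and threshold crossings, can be sketched together in one pass over the data; the threshold positions and centimetre units are hypothetical, not taken from Rose's actual setup:

```python
def interpret_distance(stream, thresholds=(50, 100, 150)):
    """Reinterpret one distance stream (hypothetically in cm) three ways:
    absolute value, change since last reading, and threshold crossings."""
    events = []
    prev = None
    for d in stream:
        delta = 0 if prev is None else d - prev
        # A crossing occurs when the segment from prev to d passes a threshold.
        crossed = [t for t in thresholds
                   if prev is not None and (prev < t <= d or d <= t < prev)]
        events.append({"absolute": d, "delta": delta, "crossed": crossed})
        prev = d
    return events
```

Each of the three views can feed a different consumer, a knob-like parameter, a rate of change, or discrete trigger events, which is precisely why one sensor could keep generating new musical demands.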
Many of the instruments developed for artists at STEIM are available in the Instrument Pool. Prototypes and copies of complete instruments are kept so that in time, with the agreement of the inventors, others may utilize these instruments. Perhaps more valuable to the artists visiting STEIM, are the hardware and software modules which went into the building of these instruments. These include sensors, controllers, analog interfaces, our own miniature data acquisition and translation computers and software modules. Often a completely new piece or instrument can be assembled by adapting existing software and hardware.
Interactivity is a term which covers much of what I've been talking about, but here it will be used to distinguish those systems which have been created to 'listen' and respond to musical sound. Typically some sort of pitch conversion hardware is used to code the sound received from an acoustic pickup into pitch and amplitude data. The current most common method is the integrated pitch to midi device, of which there are quite a few commercially available. Usually the issue of physical instrument design is bypassed in order to enable collaboration with virtuoso performers on conventional instruments. The composer/programmer then has the correspondingly virtuoso task of musical pattern recognition. An incremental solution to this problem is viable and relations can be established gradually between the various aspects of the stream of musical sound and the machine 'accompaniment'. As a first approximation recognition can be achieved for a simple set of heard signs: particular pitches in extreme octaves or perhaps a set of short motifs interpreted as signals by the process. Such singular events, easily decoded from the sound stream, can enable coordination of the behavior of the process with the performer. Once a basic communication is established, the actual movement of musical parameters in time can be isolated one by one to provide a richer dynamic view of the player. Compromises are often necessary in the possibilities available to the performer in order to achieve interaction, but this does not necessarily result in any compromise in composition.
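A first-approximation recognizer of such 'heard signs' might look like the following sketch, which flags extreme pitches and one short motif in a stream of MIDI note numbers; the particular motif and the octave limits are assumptions made for the example:

```python
def watch_for_signs(pitch_stream, motif=(60, 62, 64), low=36, high=96):
    """Scan a stream of MIDI note numbers (e.g. from a pitch-to-MIDI
    converter) for two kinds of heard signs: extreme pitches and one motif."""
    signs = []
    recent = []
    for p in pitch_stream:
        # Sign 1: a pitch in an extreme octave.
        if p <= low or p >= high:
            signs.append(("extreme", p))
        # Sign 2: a short motif, matched against the most recent notes.
        recent = (recent + [p])[-len(motif):]
        if tuple(recent) == motif:
            signs.append(("motif", motif))
    return signs
```

Each detected sign is exactly the kind of singular, easily decoded event the text describes: enough to coordinate a process with the performer before any richer analysis is attempted.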
An artificial listener needs to learn to recognize the important and subtle aspects of performed music not just the conventionalized concepts of western musicology and notation. This is necessary first of all because of the radical difference between the sensory mechanisms of an electronic and a human listener but also because many composers are seeking, through computers and electronics, ways to open possibilities for new music rather than only means for reproducing the conventions of music history. Much of great value is being learned from the ongoing research by psychologists into the way we hear music but the methods and even the starting points for composers may not be the same. Musical psychologists study the process of musical cognition in order to provide demonstrations of general theories of mind and to corroborate models of hearing with models of music. They take 'music' to be a given in their work, a choice hardly ever available to the composer. I state this because composers may be forced to take the long way around if they accept the 'authority' of such methods, accept them as the exclusive way to approach music with computers. The choice of sensors and controllers, the invention of models and algorithms is motivated by complex issues of musical forms and musical style, of meaning and theater. The limitations a composer accepts may seem arbitrary or naive to the scientist but arise for practical and aesthetically justifiable reasons. It is as much the problem in collaboration to get technologists to respect the thinking of the artists as it is to educate the artist in the methods of the technology.
[To name a few of the composers at STEIM who have worked from this interactive listening premise: Martin Bartlett, George Lewis, Clarence Barlow, and Chris Brown].
The other side of response shaping is pattern recognition. The difference is often more a matter of point of view than it is of technique. In general whether treating a stream of data for control or recognition, some sort of simplification or selective amplification of the desired features in the signal is made. There is an element of pattern recognition in any controller design where there is a need to isolate aspects of the signal that we wish to link to the musical model. When this process of isolation becomes more a matter of symbols, it is seen as 'pattern recognition'. The recognition of gestures in control allows the expansion of the possibilities of communication with the musical model. A single one dimensional control stream can be analyzed for shapes which in turn can be decoded into higher level moves in a compositional process.
Gesture in a control stream can be seen as isolated signs or as the dynamic measure of quantity. It is perhaps important to consider the difference between these two ways of looking at musical information. On the one hand we look for signs which fit models of music composition as symbol manipulation. Pitches, durations, and color can be manipulated as tokens in a musical language such as traditional harmony. The serialist composers and the generalizations of modern linguistics have both shown us how to see music as an algebra of musical signs. This particular way of looking at music is also one which is eminently adaptable to computer models. Less well formalized but equally important in music is the idea of measure. The nuance of performed and improvised music is not well comprehended through symbolic models. This nuance, trajectories in pitch and rhythm and in timbral parameters, is felt directly as tension or motion or in general as sense of measure. Just as we directly feel the quantity of motion in an automobile and adapt our driving to it, as musical performers and listeners we can feel the measure in music. In music we appreciate quantity in sensations not as some half digested information but as part of music itself. Perhaps this is one of the distinctions between the aesthetic and more abstract modes of cognition. This is in no way meant to be a criticism of more symbolic models of composition, but rather a reminder that computer instruments and artificial musical intelligences should also be able to comprehend and express measure.
One of the most ambitious of all interactive compositions I have seen is the ongoing Voyager project of George Lewis. The amount of information which Lewis can extract from the simple stream of interval and amplitude data from a pitch-to-midi converter is unrivaled. Stretching over more than five years and three different hardware systems, it has evolved to the point where a rich polyphonic composition process is capable of recognizing and adapting itself to the interval set, long term dynamics, density, articulation and, most impressive of all, the tempo and time sense of a live performer. Without trying to impose a single composition on the 'guest soloist', Lewis' interactive composer constantly surprises both performer and audience with its varieties of style and its strategies for accompaniment. The advantages of spending 25 years as an improvising musician show in the richness of this rare gem of musical artificial intelligence.
While George Lewis was at STEIM working on the Empty Chair [a theater piece for several live musicians, interactive composing system and interactive video], I attempted real time timbre recognition using a simple lattice filter chip designed for speech. The desire was to enable an interactive 'player' to respond to textural elements in the sound of a live performer. This would be especially useful information at those times when a complex sound made simple pitch data meaningless. This required two microprocessors: one using the filter chip to analyze the acoustical signal into 'frames' of data representing a measure of the timbre of the sound, the other comparing these frames with previously measured ones which were arranged according to our scheme of timbre classes. This scheme was arrived at by simply 'teaching' the program examples of our ad hoc classification. The final result is just a token sent to the player program estimating the best guess as to timbre at that time.
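The frame-comparison step can be read as a simple nearest-neighbour classification. This sketch, with invented coefficient frames and class names, illustrates the principle rather than the actual microprocessor code:

```python
def classify_frame(frame, examples):
    """Compare one frame of filter coefficients against taught examples and
    return the label of the nearest class (a nearest-neighbour best guess)."""
    def distance(a, b):
        # Squared Euclidean distance between two coefficient frames.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    best_label, _ = min(((label, distance(frame, ex)) for label, ex in examples),
                        key=lambda pair: pair[1])
    return best_label

# 'Teaching' here is just storing labelled example frames
# (class names and coefficients are invented for the example).
examples = [("breathy", [0.9, 0.1, 0.0]), ("bright", [0.1, 0.8, 0.6])]
```

The output, a single class label per frame, matches the text's description: just a token passed to the player program as its best guess at the timbre of the moment.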
We are all in a learning situation with respect to computers in music and looking for methods to accelerate our learning. The advantage of interactivity extends into the design of software as well, in what programmers call 'interactive software development environments'. What this means is languages which allow the making of incremental changes in the software, each change taking only as much time as it takes the programmer to type it. Such methods allow direct manipulation of the composer's model in a loop with the composer's ears in the middle. The time it takes to cycle this loop is a critical part of the discovery process. If a particular path seems difficult, rapid feedback on one's hypothesis can make the difference between attempting the path or not. If for instance the ratio of interesting to uninteresting discoveries is 1%, a hundred cycles may have to be traversed to find the good bit. If the time to make each change is five minutes, it could take over eight hours to home in on one answer. If we could reduce the average cycle time to, say, ten seconds, it would take under twenty minutes to make one hundred experiments, a much more likely time to consider devoting to hunches or even essential refinements in a composition. Because of their well developed interactivity, Lisp and Forth have both been favorites in computer music. Among real time composers, Forth has been popular because it offers complete interactivity, real time structures such as multitasking and schedulers, and unobstructed access to the hardware. The language is low level and not to everyone's taste, but for many the advantages of interactivity outweigh the grammatical disadvantages. (There has been much work to produce an interactive environment and one of these has finally gone public in the form of SuperCollider for the Macintosh PowerPC.)
Learning the language itself is often the highest barrier for artists first confronting technology, and the interactive environment has proven to be the best for first-time programmers. Those who are writing higher level music languages must continue to put more effort into providing interactive musical programming environments.
The elaborateness of response shaping is limited only by the real time computing power of the processor being employed. Typically at STEIM we rely on multiple processor systems, distributing the tasks at various levels in the chain. Such distributed processing, while adding to the complexity at the systems level, often is a way of simplifying the solution to a problem. We already have a great deal of experience in sensor and controller interfacing using "embedded" industrial microprocessors. These preprocessors, linked via MIDI to a larger personal computer such as a Macintosh or Atari, pass an abstracted version of the input data, leaving the higher level recognition and composition tasks to the more powerful computers. This partition of tasks was originally necessitated by the 'closed' design of the PC's which we could afford. Making our own small microcomputers using MIDI for communication was the easiest way to make interfaces. Now this necessary division of labor is seen as an advantage. With little effort different or more powerful computers can be linked to a well tested interface processor. The partitioning allows the solutions of the problems of one composer to be more easily reused and adapted to the work of the next. Typically as one aspect of a problem becomes 'well understood' it can be factored out into its own module.
The complexity of networks of controllers, sound processors and general purpose computers has made the composers and programmers who work at STEIM think a lot about what is usually called systems design. The choices of controllers and their interfaces, of personal computers [each with different hardware limitations and varieties of programming tools], synthetic voices [including custom and commercial synthesizers and effects as well as their unique programming needs], communications protocols [e.g. adaptations of MIDI], network design [i.e. the design of the geometry and accessibility of the interconnections between devices] all have to be coordinated and played off one against another till a workable system has been achieved. These networking concerns are shared by computer designers and music instrument makers, but each with different goals which may or may not work in harmony. MIDI was the solution for setups of keyboard players, Ethernet for the needs of institutional computer users: neither is sufficient for the real time composer, but with clever adaptation a useful compromise is possible.
The Hands offer a good model for the evolution of a musical performance system based on digital electronics. The composer who invented them, Michel Waisvisz, has a history of creating mechanical, electronic and robot devices which, taking little inspiration from conventional instruments, look rather to the medium itself. Before The Hands came the Krackle Boxes, a series of electronic sound synthesizers in which Waisvisz exposed the innards of the synthesizer to direct contact with the hands of a performer. Skin contact changed the capacitance of the electronics and thus the sounds they produced. This ultimate in directness in electronic music led to the concept behind The Hands, but the medium had changed from analog electronics to MIDI-controlled digital synthesizers. The MIDI instruments, of course, had been designed, or some would say warped, to the mold of traditional keyboard instruments. The Hands ignored this convention in order to maximize the diversity of the modes of control that could be applied to these synthesizers.
The Hands as an instrument system had a complex evolution over several years. Experiments were made with sensors and controllers of various kinds. Configurations were molded by the composer to fit a particular set of human hands. A small microcomputer was made to translate the various controller gestures into MIDI. There was a strong interaction between the development of the design of sounds [synthesizer voicing] and the design of the controllers and the response of the MIDI interface. The new possibilities of response led to the discovery of ways of playing which were not conceivable from the perspective of the MIDI keyboard. A secondary level of indirect playing via a more powerful computer [The Lick Machine and an Atari ST] was added only after a great deal of experimentation and direct playing [The Touch Monkeys] had established the territory of the instrument.
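The kind of gesture-to-MIDI translation such a microcomputer performs can be suggested with a small sketch. This is not the actual code of The Hands; the sensor names and scalings are hypothetical, chosen only to show how several simultaneous gestures collapse into one standard message.

```python
def key_to_note_on(key_index, octave_shift, pressure,
                   base_note=48, channel=0):
    """Hypothetical gesture translation: a key press, an octave-shift
    switch and a continuous pressure reading (0-255) are combined into
    a single MIDI note-on message."""
    note = base_note + key_index + 12 * octave_shift
    velocity = max(1, min(127, pressure // 2))  # scale pressure to 7 bits
    return bytes([0x90 | channel, note, velocity])
```

The interest lies in the freedom of the mapping: nothing forces the pitch to come from a key or the velocity from striking force, as a keyboard design would assume.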
Overall the approach was not the definition of a general purpose instrument. The ergonomics of the design grew out of the needs of a specific performer and a specific musical conception. Through this evolution the importance of the systems aspect of design became clear. Not only was a controller invented, but also communication hardware and network protocols; libraries of sounds and voices adapted to the possibilities of the instrument; and software to support the elaboration and scoring of the basic control gestures. Each of these threads was pursued by means of a different method, but all were combined into both a playable instrument and a compositional whole.
A multi-channel sound spatializer was written at STEIM [IMF, Ryan and Boer 1988] for use in live performance. The motivation was the development of my piece The Number Readers, in which both the music and the theater of the piece were greatly enhanced by moving sound sources. The arrival of an inexpensive automated mixer, the Yamaha DMP7, made the project seem especially practical, in that the resulting work would not be tied, as in many such projects, to a unique piece of hardware. The interesting thing about this approach is that it modeled spatialization as a system of patched function generators, reminiscent of a patchable modular analog synthesizer. The control of the space was not seen as a description of the location of the apparent source [position control], but as modulation of the motion of the source [velocity control]. Arbitrary description of position was possible, but for this piece the idea of interactively playing the motion was very attractive. Patching allowed MIDI input to control the parameters of a function generator; the output of the generators could be patched to modulate other generators or sent out as MIDI messages. This gave a simple means to experiment with the control of various kinds of motion. The result, programmed in Forth by Rolf Boer, was available in less than two months and became an essential part of The Number Readers. The program was interesting in itself as a dynamic continuous controller, not confined to mixing applications. The same data could be used to control the continuous movement of parameters in any MIDI device.
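The patched-generator idea and the distinction between position and velocity control can be sketched in a few lines. The original program was written in Forth; this is a Python paraphrase under my own assumptions, not the STEIM code: one generator modulates the frequency of another, whose output is treated as a velocity and integrated into a position that is finally quantized to a 7-bit MIDI value.

```python
import math

class FunctionGenerator:
    """One node in a hypothetical patch: a sine generator whose
    frequency can be modulated by another generator's output."""
    def __init__(self, freq, depth=1.0):
        self.freq, self.depth, self.phase = freq, depth, 0.0
        self.value = 0.0
        self.freq_mod = None          # optional patched modulation source

    def step(self, dt):
        f = self.freq + (self.freq_mod.value if self.freq_mod else 0.0)
        self.phase += 2 * math.pi * f * dt
        self.value = self.depth * math.sin(self.phase)
        return self.value

def run_patch(steps=100, dt=0.01):
    """Velocity control: the generator output is the *velocity* of the
    apparent source, integrated into a position, then quantized."""
    lfo = FunctionGenerator(freq=0.5, depth=2.0)   # slow modulator
    motion = FunctionGenerator(freq=1.0)           # velocity source
    motion.freq_mod = lfo                          # patch: lfo -> motion
    position, trace = 0.5, []
    for _ in range(steps):
        lfo.step(dt)
        v = motion.step(dt)
        position = min(1.0, max(0.0, position + v * dt))
        trace.append(int(position * 127))          # 7-bit MIDI value
    return trace
```

The musical point is that the performer never specifies where the sound is, only how it moves; the path emerges from the interaction of the patched generators.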
A great deal of effort at STEIM is put into exploring the possibilities of new technology. The currently most fertile area for music is the new generation of Digital Signal Processing chips. These promise to revive programmability at the level of sound itself in a way that hasn't been seen since the introduction of modular analog synthesizers. They have begun to appear in the new generation of 'high end' commercial digital samplers and recorders, but perhaps more interestingly they can also be added to ordinary personal computers like the Macintosh and IBM, enabling the composer to explore the regions which the commercial designers will necessarily ignore. The kind of musical processing which was once available only to large institutional computer users is now being decentralized and made available to the independent composer. Real time composers moving up to ever more powerful personal computers are meeting the large time-sharing computer users moving 'down' towards the same personal 'music workstations', both paradoxically gaining in computational power.
My own compositions have depended heavily on real time digital signal processing for more than ten years. Until now, however, the economically feasible hardware has been extremely limited. The first DSP project is the transfer of the old fixed-algorithm, discrete-component processing onto these new programmable chips. This has given me a benchmark by which to estimate the number of such processors required to make significant improvements over what I can do now. For the sort of constructions that I have made, it is certain that a knowledge of both the mathematics of digital signal processing and of the idiosyncrasies of digital hardware is essential, yet the methodologies of electrical engineering are not necessarily the only ones to adopt in making a musical instrument. Signal processing as it is taught to engineers is guided by such goals as optimum linearity and low distortion and noise: more issues of 'high fidelity' than of music. One can and must use these methods if only to get a foothold on the digital terrain, but there still remains the problem of finding the sorts of constructions which will turn shiny digital monoliths into interesting musical objects. Digital signal processing is less than one generation old; the path one's research must take may not be found in its still quite new textbooks. I have found, for example, more interesting suggestions for sound technique in the journals of visual image processing than in the journals of audio engineering. A more uninhibited, even doubting, experimental attitude has to be taken towards the formalisms of engineering to make a breakthrough in musical applications.
A daily problem at STEIM is that of collaboration. This is luckily not an institutional problem but just the familiar one of getting artists and technologists to talk to each other. Each artist and each project entails a learning process for both artist and engineer. This is well known, but the burden still falls unequally on the artists, who must expose their ignorance to close scrutiny and translate their ideas into a form the technologist can understand. The presumption is that the language and the methodologies of the technologist, because they are well articulated and successful by their own standards, should be taken on completely by the artist. Often, not only for aesthetic reasons but even for practical ones, this is not the path to the solution of the problems of the artist. The imposition of one method on any group of artists is arrogant or naive or both. The temptation of programmers is to concentrate immediately on the issues of translating ideas into machine logic rather than on comprehending the idea or system of the artist. Switching the emphasis towards 'instrumentation' helps correct this narrowing of vision.
There is no doubt that there is much that can be done in simply translating existing musical concepts directly to the computer. Even the most skeptical should recognize the emergence of a genuinely useful role for computers in traditional musical practice. Methods for these applications have already been discovered, and for many the main improvements sought are those of utility: speed and user friendliness. The same confidence cannot so freely be extended to the methods devised so far for innovative use of the computer in music.
For some composers there was perhaps a feeling of liberation, as if a physical barrier had been removed between an idea and its expression, but it was probably more practical matters that led the first users of the computer in music to contain the whole of the music within the confines of the 'idea' of computing. Little connection with the outside world was possible and the methods of programming were poorly developed. One was trapped by the poverty of the languages available for expression. One adapted to the methods which were easiest from the programmer's point of view. If one's ideas were too concrete they probably couldn't be expressed at all. The criticism so often heard these days of purely 'algorithmic composition' should not be directed so much at the musical limitations of particular forms, or even at formalism itself, but at the reliance on a single calculation for an entire composition.
The quest to 'contain' the problem of music under one substance, as it were, is partly a product of the difficulty of using something as blank and logically 'simple' as a computer to make music, and partly of the attractions of remote control, of 'look ma, no hands', which seem to be inherent in the social mythology of technology. A machine which makes music apparently without human intervention is a magic object which casts its spell over naive and sophisticated alike. It is kin to the belief that you should get something for nothing when using the computer, that it makes the difficult easy. The most sophisticated of these myths is that of the general purpose solution.
Much of the blame can be placed on the search for 'general purpose solutions'. The appeal of such solutions is more than merely that of practical economy. The computer is of course the most general of all human inventions, after only logic and mathematics itself. Its very existence is due to a society of consciousness and not the creation of individual genius. To approach it is to be attracted to the general. But the extension of this ideal of generality into the design of music languages and instruments is not wholly appropriate. In engineering it is a guiding principle for good design, like efficiency or linearity, but the process of making music is not well comprehended by such concepts. Music itself draws us to the view that all form is reducible to another in ever ascending hierarchies. But in music there is equally a concern with the particular: with particular forms, with the phenomenal, with personal intuition. While there are always tools and forms which become universal, the search for the universal is not sufficient as a method for making music.
For myself, the real promise of the computer can be discovered only through a systematic empiricism which opens as many channels between the world and the ideas of the composer as possible. A method which allows for at least a few direct, non-symbolic relations to the process of composition has a tremendous utility. This is not intended to raise philosophical issues but to emphasize the continuing importance of the physicality of playing in music and in the development of music technology.