Readings:
Response:
In a short response, contrast Weizenbaum’s design for an artificial interlocutor with Lewis’s. How can ideological differences be reflected in the construction of the software systems that they propose? What are the fundamental differences between designing a system for intelligent verbal conversation and for musical improvisation?
Weizenbaum’s ELIZA employs a very linear approach to interaction between the user and the intelligent system. The system takes in a single typed statement from the user and generates a typed response based on keywords within the input statement. In contrast, Lewis designed Voyager not only to respond to the performer but also to act as an instrumentalist in its own right, with an emphasis on musicality. Weizenbaum bases his program on a set of “decomposition rules,” whereas Lewis believes that reducing musical improvisation to a specific set of rules is not possible. Lewis also notes that improvisation draws heavily on the musician’s cultural background, musical practices, and musical personality, and therefore his program must draw on the performer’s playing as a whole instead of looking for specific features from which a response can be generated.
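To make the keyword-and-decomposition-rule mechanism concrete, here is a minimal Python sketch of that style of exchange; the patterns, reply templates, and the respond() helper are invented for illustration and are not taken from Weizenbaum’s actual script.

```python
import random
import re

# Minimal sketch of an ELIZA-style exchange: a keyword pattern acts as a
# decomposition rule (it splits the input and captures a fragment), and a
# reassembly rule builds the reply around that fragment. The rules and
# phrasings below are invented for illustration, not Weizenbaum's script.
RULES = [
    (re.compile(r".*\bI am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you tell me you are {0}?"]),
    (re.compile(r".*\bmother\b.*", re.IGNORECASE),
     ["Tell me more about your family.", "Who else in your family comes to mind?"]),
]
DEFAULT = ["Please go on.", "What does that suggest to you?"]

def respond(statement: str) -> str:
    for pattern, reassemblies in RULES:
        match = pattern.match(statement)
        if match:
            reply = random.choice(reassemblies)
            # Insert the captured fragment, if this rule captured one.
            fragment = match.group(1).rstrip(" .") if pattern.groups else ""
            return reply.format(fragment)
    # No keyword matched: fall back to a content-free stock reply.
    return random.choice(DEFAULT)

print(respond("I am very unhappy these days"))
print(respond("Everybody hates me"))
```

Every reply here is traceable to a single matched rule, which is exactly the linearity described above.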
Lewis distinguishes between a European perspective on improvisation, in which the improviser occupies a social role with a specific function in society, and the AACM model, which places importance on the role of the creative artist. He also views improvisation as informed by its cultural contexts, and partly as a response to traditional Western music, since improvising in the moment opens up possibilities for alternative realities, a notion that shifts the emphasis from the individual to a global future. The technology of improvisation follows this ethos: the computer consists of randomly selected “players” that analyze pitch, speed, and other features and generate complex responses. Random numbers determine other important aspects, constrained by external parameters. Lewis wants the interaction to be a negotiation rather than a hierarchy. In contrast, in Weizenbaum’s article describing conversation between human and computer via natural language processing, keywords in the human input lead to a focus on a specific context, which then determines the transformation rule that creates the computer’s response. It is essentially a series of instructions expressed as “if/then” statements. There is no element of randomization simulating a real-time performer in conversation, which centralizes the role of the “agent” in a hierarchy of conversation/improvisation.
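As a rough illustration of the ensemble-of-players idea described above, here is a small Python sketch in which a randomly chosen subset of independent “players” reads a few features of the performer’s input and sets its own behaviour within bounded random ranges; the feature names, parameter ranges, and the player_response() helper are assumptions made for this sketch, not Voyager’s actual design.

```python
import random
from dataclasses import dataclass

# Loose sketch of the ensemble idea: a changing subset of independent
# "players" each reads the performer's input and decides its own behaviour,
# with random choices bounded by external parameters. Feature names and
# ranges here are invented for illustration only.
@dataclass
class PerformerInput:
    average_pitch: float   # e.g. a MIDI note number
    note_density: float    # notes per second

def player_response(name: str, heard: PerformerInput) -> dict:
    # Each player decides, partly at random, whether to follow or oppose
    # what it hears; the bounds keep the choice within a workable range.
    follow = random.random() < 0.5
    pitch_offset = random.uniform(-3, 3) if follow else random.uniform(-12, 12)
    return {
        "player": name,
        "pitch_center": heard.average_pitch + pitch_offset,
        "density": max(0.5, heard.note_density * random.uniform(0.5, 2.0)),
    }

heard = PerformerInput(average_pitch=62.0, note_density=3.5)
ensemble = random.sample([f"player_{i}" for i in range(16)], k=random.randint(4, 8))
for name in ensemble:
    print(player_response(name, heard))
```

The point of the randomness and the shifting ensemble is that no single rule determines the output, which is the negotiation-over-hierarchy contrast drawn above.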
There is something disturbing (and faintly misogynistic) about the utilitarianism of ELIZA when juxtaposed with the human distress of the test subject with whom it communicates. The source of the disturbance becomes clearer when Weizenbaum elaborates on the mechanisms behind ELIZA’s responses. It is a fairly rudimentary exercise, essentially figuring out how to make a machine reproduce basic realities of English grammar, yet it is often understood to be more than that. Weizenbaum himself does attempt to move away from the mysticism with which this machine could be understood (as a replacement for a psychotherapist, maybe, or as a distillation of the processes therapists, teachers, and friends use in conversation) and to understand it as a machine, but he also proposes future experiments in which ELIZA would be given a presumed human identity, so we cannot say he has been entirely disillusioned.
Meanwhile, Lewis’s Voyager system does not have to fit any explicit demands, and its output is not restricted to a certain syntax. Lewis traces various conceptual threads in arriving at a final niche for Voyager, but ultimately, as far as I can tell, he finds in it an extension of his own musicality (further discussion is probably necessary, and if it comes down to it, I could always ask him…). In the end, it is up to him what sounds he chooses and how they interact with his (or anyone’s) playing. As Lewis puts it, the subject of his piece is “not technology or computers at all, but musicality itself.” I interpret that to mean that interacting with Voyager means immersing oneself in a musical environment, where one discerns musical expression instead of attempting to discern mechanisms.
In that regard, there might actually be a parallel between the two, because ELIZA is really about language. But it is about the mechanical reality of language, and the mechanical reality of conversation. In contrast, Lewis states that his Voyager is about the expressive, not mechanical or syntactic, reality of music. This is not a fundamental difference between the media themselves: I think there are conversation-generation systems that deal with the expressive realities of language, albeit relatively crudely (the guidelines in place are much more complex, after all). Cleverbot is an example.
George Lewis’ design is far more intuitive and less scientifically rigid than Weizenbaum’s, because a system for intelligent conversation aims to convince the person providing input that it is human. Accordingly, Weizenbaum’s description is much more focused on elements of syntax and keywords, which tell the computer what to do to create a response. George Lewis, on the other hand, describes how he organizes his system around ideas and philosophies that inform the music and give it emotion or human qualities. In addition, Weizenbaum focuses on one specific type of communication, rather than the variety of combinations of musicians that George Lewis discusses. With language, there are also certain types of responses that can quickly reveal that a machine is responding, much more easily than with music.
Weizenbaum’s design is much more formulaic and strict than Lewis’. It seems that Lewis wanted to free Voyager of any musical constraints or traditions created by various cultures. Lewis implemented randomness in his design, while Weizenbaum made a machine with a specific algorithm for each possible input case it encountered. Lewis’ ability even to make this design decision stems from the differences between understanding music and understanding language. Both music and language have sets of rules that change across cultural contexts; however, it is much less difficult to interpret foreign musical information than foreign linguistic information. The interpretation of music is more subjective than that of language, so the range in which music can be considered “correct” is larger than that of language, which is bounded by constraints from grammar, alphabets, and shared understanding between the people conversing.