An artificial intelligence music platform developed by a College of Charleston research team led by computer science professor Bill Manaris made its debut on Monday, November 28, 2011. The debut performance was composed by music professor Yiorgos Vassilandonakis and featured music students Chee-hang See and Amy Tan as the pianists, paired with two computers (Monterey Mirrors).

“The composition, ‘2×2,’ focuses on interactivity patterns among the human and virtual performers, who follow a carefully designed formal plan that layers musical material generated by freely combining pre-composed rhythmic and pitch cells,” explains composer Yiorgos Vassilandonakis. “See and Tan are musicians already comfortable with each other on stage, and open to the challenges of a new kind of interaction, as well as a new kind of interface with their instruments, augmented by Monterey Mirror systems.”

Monterey Mirror is an artificial-intelligence music generator that mirrors a human performer and can participate as an equal in a live performance. Like all mirrors, it reflects back aspects of the performer, enabling the performer to hear objectively what others hear. Unlike a recording, it does not repeat musical material verbatim; instead, it captures deeper patterns in a musician’s style and makes them apparent. Computer science professor Bill Manaris developed Monterey Mirror with graduate and undergraduate students, and the project was funded by the National Science Foundation.

The Monterey Mirror project is a practical example of computing and the arts working together to inform each other and grow in tandem. The cutting-edge computing technology is readily available, and portable, which makes collaborations like this both practical and exciting.

“Being at a liberal arts and sciences institution, a place that encourages interdisciplinary exploration, provided the necessary support environment for Monterey Mirror to be created,” says Bill Manaris. “One Saturday afternoon in July 2010, the Monterey Mirror system came to life and, on my guitar, I explored various musical ideas with it for the first time. It was an amazing moment. Since then the system has evolved tremendously.”

“It has been really interesting to recognize how aspects of the composition and performance have guided the development of the software,” says Dana Hughes, a graduate student who worked on the project. “Research such as this has taught me that the role of a computer scientist should not consist of simply writing software to solve problems and automate tasks. Rather, the role should include exploring what machines are capable of doing, and determining ways to integrate computation in fields such as music and the arts.”

Monterey Mirror is based on Markov models, genetic algorithms, and power-law metrics for music information retrieval, techniques at the forefront of computer science research. Monterey Mirror trains a Markov model on a human musician’s material; it then uses a genetic algorithm, guided by power-law metrics, to discover musical “responses” that are aesthetically similar to the musician’s style. Since Monterey Mirror can work with recorded material, it can even generate material in the style of long-gone musicians such as Miles Davis and Johann Sebastian Bach.
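The general approach described above can be illustrated with a minimal sketch, not the actual Monterey Mirror implementation: a Markov model is trained on a performer’s note sequence, a population of candidate responses is generated from it, and a simple power-law (Zipf-like) score selects the candidate whose pitch distribution best resembles the rank-frequency curves found in much human music. All function names, the toy melody, and the fitness formula here are illustrative assumptions.

```python
import random
from collections import Counter, defaultdict

def train_markov(notes, order=1):
    """Build a first-order Markov model: map each context to its observed successors."""
    model = defaultdict(list)
    for i in range(len(notes) - order):
        model[tuple(notes[i:i + order])].append(notes[i + order])
    return model

def generate(model, seed, length, rng):
    """Random-walk the Markov model to produce one candidate response."""
    out = list(seed)
    while len(out) < length:
        choices = model.get(tuple(out[-len(seed):]))
        if not choices:  # unseen context: fall back to any observed note
            choices = [n for nexts in model.values() for n in nexts]
        out.append(rng.choice(choices))
    return out

def zipf_fitness(notes):
    """Toy power-law score: how closely note frequencies follow an ideal 1/rank curve."""
    counts = sorted(Counter(notes).values(), reverse=True)
    if len(counts) < 2:
        return 0.0
    ideal = [counts[0] / (r + 1) for r in range(len(counts))]
    error = sum(abs(c - i) for c, i in zip(counts, ideal)) / sum(counts)
    return 1.0 - error

# "Performer" input: a toy melody as MIDI pitch numbers.
melody = [60, 62, 64, 62, 60, 62, 64, 65, 64, 62, 60, 59, 60]
rng = random.Random(42)
model = train_markov(melody, order=1)

# Genetic-algorithm flavor: produce a population of candidates and keep the
# one whose pitch distribution best matches the power-law fitness.
population = [generate(model, melody[:1], 12, rng) for _ in range(50)]
best = max(population, key=zipf_fitness)
print(best)
```

A full evolutionary version would also crossover and mutate candidates over many generations; here one round of generate-and-select conveys the idea of steering Markov output with a power-law fitness measure.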

A dedicated educator, Dr. Vassilandonakis taught composition and music theory at the University of California, Berkeley and the University of Virginia, as well as electronic music at the Centre de Création Musicale Iannis Xenakis in Paris, before joining the faculty at the College of Charleston in 2010. In addition to his chamber, vocal, and orchestral compositions and performances, he has worked in the Hollywood independent film scene. His credits include composer, conductor, and producer of scores for theater, independent films, television documentaries, and commercials, as well as a theme park ride at Universal Studios Hollywood.

Dr. Bill Manaris, computer science professor, conducts research in artificial intelligence, human-computer interaction, and computing in music and art. With his students, Manaris created Armonique and Armonique Lite, music retrieval systems based on computational aesthetics (similar to Pandora). Earlier research includes SUITEKeys, a continuous-speech understanding interface for motor-impaired users. For the last decade, he has been exploring fractals, power laws, and the golden ratio and their relationship to music, art, and human aesthetics. His work has been funded by several NSF grants.

For more information, contact Bill Manaris at or Yiorgos Vassilandonakis at