
Metacreation

Harmonic Progression

Realtime generation of harmonic progressions using controlled Markov selection.

Demonstrates a method of generating harmonic progressions through case-based analysis of existing material, employing a variable-order Markov model. Using a unique method for quantifying harmonic complexity and the tension between chord transitions, together with a desired bass line, the user specifies a three-dimensional vector that the realtime generative algorithm attempts to match during chord sequence generation. The proposed system thus offers a balance between user-requested material and coherence with the database.
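
As a rough illustration of the selection principle (the transition table, feature values, and scoring function below are invented for the example, not taken from the authors' system), candidate next chords from a Markov model can be re-weighted by their closeness to a user-supplied target vector:

    import random

    # Hypothetical transition table: chord -> [(next chord, probability), ...].
    # In the real system these would come from a variable-order model of a corpus.
    TRANSITIONS = {
        "C":  [("F", 0.4), ("G", 0.4), ("Am", 0.2)],
        "F":  [("G", 0.5), ("C", 0.3), ("Dm", 0.2)],
        "G":  [("C", 0.6), ("Am", 0.4)],
        "Am": [("F", 0.5), ("Dm", 0.5)],
        "Dm": [("G", 1.0)],
    }

    # Hypothetical per-chord features: (complexity, tension, bass pitch class / 12).
    FEATURES = {"C": (0.1, 0.1, 0.00), "F": (0.2, 0.3, 0.42), "G": (0.2, 0.5, 0.58),
                "Am": (0.4, 0.4, 0.75), "Dm": (0.5, 0.6, 0.17)}

    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def next_chord(current, target, bias=4.0):
        # Re-weight the Markov candidates by closeness to the 3-D target vector.
        candidates = TRANSITIONS[current]
        weights = [p / (1.0 + bias * distance(FEATURES[c], target))
                   for c, p in candidates]
        return random.choices([c for c, _ in candidates], weights=weights)[0]

    chord, progression = "C", ["C"]
    for _ in range(7):
        chord = next_chord(chord, target=(0.3, 0.5, 0.58))
        progression.append(chord)
    print(progression)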

Members: Arne Eigenfeldt, Nicolas Gonzalez Thomas.

Papers & Posters:

  • Eigenfeldt, A. & Pasquier, P. (2010). "Realtime Generation of Harmonic Progressions Using Controlled Markov Selection" Proceedings of the First International Conference on Computational Creativity (ICCCX), ACM Press, Lisbon, Portugal, 16-25. [View PDF]
  • Website: metacreation.net/gonzalezthomas/HPG/index.php

Composition by Negotiation

Coming Together: Beauty and Truth is a work for autonomous agents that demonstrates self-organizing behaviours in a musical environment. Given no explicit musical data, agents explore their environment while creating melodic phrases, building beliefs through interactions with other agents via messaging and listening (to audio and/or MIDI data), generating goals, and executing plans. The artistic focus of Coming Together is the process of convergence itself, heard during performances (each usually lasting about ten minutes): the movement from random individualism to converged ensemble interaction. If convergence is successful, four additional agents are instantiated that exploit the emergent harmony and rhythm through brief but beautiful melodic gestures. Once these agents have completed their work, or if the original "explorer" agents fail to converge, the system resets itself and the process begins again.

Coming Together: Freesound is an autonomous soundscape composition created by four autonomous artificial agents. Agents choose sounds from a large pre-analyzed database of soundscape recordings (from freesound.org), based upon their spectral content and metadata tags. Each agent analyzes, in realtime, the other agents' audio and attempts to avoid their dominant spectral areas, selecting sounds that do not mask one another. Furthermore, selections from the database are constrained by the metadata tags describing the sounds: water sounds may trigger other water sounds, or agents can choose to oppose contextual references. As the composition progresses, convergence is further facilitated by lowering the bandwidth of the agents' resonant filters, projecting onto the recordings an artificial harmonic field derived from their own spectral content. Finally, each agent adds granulated instrumental tones at the resonant frequencies, completing the "coming together".
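
The masking-avoidance idea can be sketched as follows (the band energies and the dot-product overlap measure are illustrative assumptions, not the system's actual spectral analysis):

    import numpy as np

    # Hypothetical pre-analysed database: per-sound energy in four coarse bands.
    DATABASE = {
        "rain":   np.array([0.8, 0.5, 0.2, 0.1]),
        "stream": np.array([0.7, 0.6, 0.3, 0.1]),
        "birds":  np.array([0.1, 0.2, 0.7, 0.8]),
        "wind":   np.array([0.4, 0.4, 0.4, 0.3]),
    }

    def least_masking_choice(others, candidates):
        # Pick the candidate whose band energies overlap least with the combined
        # spectrum of the already-playing agents (dot product as overlap).
        context = sum(others)
        return min(candidates, key=lambda name: DATABASE[name] @ context)

    playing = [DATABASE["rain"], DATABASE["stream"]]
    print(least_masking_choice(playing, ["birds", "wind"]))  # -> birds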

Members: Arne Eigenfeldt.

Papers & Posters:

  • Eigenfeldt, A. (2010). "Coming Together - Composition by Negotiation" ACM Multimedia, Firenze, Italy. [View PDF]

Kinetic Engine

A software drum ensemble incorporating aspects of artificial intelligence.

Kinetic Engine is a realtime generative music system that has been in development since 2005. It has been used as an extended instrument within an improvising ensemble, as a networked performance ensemble, as an interactive installation, and as an independent performance system under the composer's control. The first two versions were solely concerned with polyphonic rhythmic organisation using multi-agents. Version 3 introduced an evolutionary algorithm that evolves a population of rhythms, in realtime, based upon analysis of provided music. Version 4 explored melodic organisation, again using multi-agents, while the most recent version adds a third-order Markov model for harmonic generation.

Members: Arne Eigenfeldt, Philippe Pasquier.

Papers & Posters:

  • Eigenfeldt, A. & Pasquier, P. (2009). "A Realtime Generative Music System using Autonomous Melody, Harmony, and Rhythm Agents." Proceedings of the 12th Generative Art Conference, Milan. [View PDF]
  • Eigenfeldt, A. & Pasquier, P. (2010). "Realtime Generation of Harmonic Progressions Using Controlled Markov Selection." Proceedings of the First International Conference on Computational Creativity, Portugal, 16-25. [View PDF]
  • Eigenfeldt, A. (2009). "Emergent Rhythms through Multi-agency in Max/MSP." Computer Music Modeling and Retrieval: Sense of Sounds, Lecture Notes in Computer Science. [View PDF]
  • Website: www.sfu.ca/~eigenfel/research.html

Map and Timbre Based Sound Selection

Self-Organizing Timbre.

A comparison is made between two systems of realtime sample selection using timbral proximity, both of which have relevance for live performance. Samples in large sample libraries are analysed for audio features (RMS amplitude, spectral centroid, spectral flatness, and spectral energy using a Bark auditory modeler), and this data is statistically analysed and stored. Two methods of organisation are described: the first uses fuzzy logic to rate sample similarity; the second uses a self-organising map. The benefits and drawbacks of each method are described.
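
A minimal sketch of the feature-based half of this comparison (the feature formulas below are simplified stand-ins, and the Bark-band analysis is omitted):

    import numpy as np

    def features(signal, sr=44100):
        # Crude stand-ins for the features named above: RMS amplitude, spectral
        # centroid, and spectral flatness.
        spectrum = np.abs(np.fft.rfft(signal)) + 1e-12
        freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
        rms = np.sqrt(np.mean(signal ** 2))
        centroid = np.sum(freqs * spectrum) / np.sum(spectrum)
        flatness = np.exp(np.mean(np.log(spectrum))) / np.mean(spectrum)
        return np.array([rms, centroid / (sr / 2), flatness])

    def most_similar(target, library):
        # Nearest neighbour in feature space: one simple similarity rating.
        t = features(target)
        return min(library, key=lambda s: np.linalg.norm(features(s) - t))

    rng = np.random.default_rng(0)
    library = [rng.standard_normal(2048) for _ in range(5)]
    query = library[2] + 0.01 * rng.standard_normal(2048)
    print(most_similar(query, library) is library[2])  # -> True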

Members: Arne Eigenfeldt, Philippe Pasquier.

Papers & Posters:

  • Eigenfeldt, A. & Pasquier, P. (2010). "Realtime Timbral Organisation: Selecting Samples Based Upon Similarity" Organised Sound, 15(2). [Journal Link] [View PDF]
  • Eigenfeldt, A. & Pasquier, P. (2009). "Realtime Selection of Percussion Samples Through Timbral Similarity in Max/MSP" International Computer Music Conference (ICMC 2009), Montreal, Canada. [View PDF]

Audio Metaphor

Audio Metaphor is a research project aimed at the design of new methodologies and tools for sound design and composition practices in the areas of film sound, game sound, and sound art. We continue to identify the processes involved in working with audio recordings in creative environments, and address these in our research by implementing computational systems to assist human operations. We have successfully developed Audio Metaphor to retrieve audio file recommendations from natural-language texts, and have even used phrases automatically generated from Twitter to sonify the current state of Web 2.0. Another success has been the segmentation and classification of environmental audio with composition-specific categories, which was then used in a generative system that allows users to produce sound designs simply by entering text. As we point Audio Metaphor toward perception and cognition, we will continue to contribute to the music information retrieval field through environmental audio classification and segmentation, and, moreover, to be instrumental in the design and implementation of new tools for sound designers and artists.
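
The retrieval step can be illustrated with a toy tag-overlap ranking (the files, tags, and scoring are invented for the example and are far simpler than the Audio Metaphor pipeline):

    # Invented files and tags; real metadata comes from freesound.org entries.
    FILES = {
        "ocean_waves.wav":  {"water", "waves", "beach", "wind"},
        "city_rain.wav":    {"rain", "water", "traffic", "city"},
        "forest_birds.wav": {"birds", "forest", "wind", "morning"},
    }

    def recommend(phrase, k=2):
        # Rank files by how many query words appear among their tags.
        words = set(phrase.lower().split())
        ranked = sorted(FILES, key=lambda f: len(FILES[f] & words), reverse=True)
        return ranked[:k]

    print(recommend("morning rain in the city"))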

Members: Miles Thorogood, Philippe Pasquier, Arne Eigenfeldt.

Papers & Posters:

  • Thorogood, M., Pasquier, P. & Eigenfeldt, A. (2012). "Audio Metaphor: Audio Information Retrieval for Soundscape Composition" Sound and Music Computing (SMC), Copenhagen, Denmark.
    [View PDF] [BibTeX]
  • Thorogood, M. & Pasquier, P. (2013). "Computationally Generated Soundscapes with Audio Metaphor" Proceedings of the 4th International Conference on Computational Creativity (ICCC), Sydney, Australia.
    [View PDF] [BibTeX]
  • Thorogood, M. & Pasquier, P. (2013). "Impress: A Machine Learning Approach to Soundscape Affect Classification for a Music Performance Environment" Proceedings of the 13th International Conference on New Interfaces for Musical Expression (NIME), Daejeon and Seoul, Republic of Korea.
    [View PDF] [BibTeX]

MusicDB

A Music Database Query System for Recombinance-based Composition in Max/MSP

We propose a design and implementation for a music information database and query system, the MusicDB, which can be used for data-driven algorithmic composition. Inspired by David Cope's ideas surrounding composition by “music recombinance”, the MusicDB is implemented as a Java package and is loaded into Max/MSP using the mxj external. The MusicDB contains a music analysis module, capable of extracting musical information from standard MIDI files, and a search engine. The search engine accepts queries in the form of a simple six-part syntax, and can return a variety of different types of musical information, drawing on the encoded knowledge of musical form stored in the database.

Members: James Maxwell, Arne Eigenfeldt.

Papers & Posters:

  • Maxwell, J. & Eigenfeldt, A. (2008). "The MusicDB: A Music Database Query System for Recombinance-based Composition in Max/MSP" International Computer Music Conference (ICMC), Belfast, UK: August 24-29. [View PDF]

HSMM

Hierarchical Sequential Memory for Music: A Cognitively-Inspired Approach to Generative Music

We propose a new machine-learning framework called the Hierarchical Sequential Memory for Music, or HSMM. The HSMM is an adaptation of the Hierarchical Temporal Memory (HTM) framework, designed to make it better suited to musical applications. The HSMM is an online learner, capable of recognition, generation, continuation, and completion of musical structures.

Members: James Maxwell, Arne Eigenfeldt, Philippe Pasquier.

Papers & Posters:

  • Maxwell, J., Pasquier, P. & Eigenfeldt, A. (2009). "Hierarchical Sequential Memory for Music: A Cognitive Model" 10th International Society for Music Information Retrieval Conference (ISMIR), Kobe, Japan. [View PDF]
  • Maxwell, J. & Pasquier, P. (2009). "Hierarchical Sequential Memory for Music: A Cognitive Model" Poster, 10th International Society for Music Information Retrieval Conference (ISMIR), Kobe, Japan. [View PDF]
  • Maxwell, J., Eigenfeldt, A. & Pasquier, P. (2012). "ManuScore: Music Notation-Based Computer-Assisted Composition" International Computer Music Conference (ICMC 2012), Ljubljana, Slovenia. [View PDF]
  • Maxwell, J., Pasquier, P. & Eigenfeldt, A. (2011). "The Closure Based Cueing Model: Cognitively-inspired Learning and Generation of Musical Sequences" Proceedings of the 8th Sound and Music Computing Conference (SMC2011), Padova, Italy. [View PDF]
  • Maxwell, J., Eigenfeldt, A., Pasquier, P. & Gonzalez, N. (2012). "MusiCOG: A Cognitive Architecture for Music Learning and Generation" Proceedings of the 9th Sound and Music Computing Conference (SMC 2012), Copenhagen, Denmark. [View PDF]

Memory Association Machine

The "Memory Association Machine" is a site-specific responsive installation inspired by cognitive processes in the series of "Context Machines" (Bogart, 2011, 2013). The artist provides a mechanism that allows the structure of the artwork to change in response to continuous stimulus from its context. Context is defined as those parameters of the environment that are perceivable by the system and make its place in space and time unique. "Memory Association Machine" relates itself to its context using three primary processes: perception, the integration of sensor data into a field of experience, and the free-association through that field. "Memory Association Machine" perceives through a video camera, integrates using a Kohonen Self-Organizing Map, and free-associates through an implementation of Liane M. Gabora’s model of memory and creativity.

Members: Ben Bogart, Thecla Schiphorst, Philippe Pasquier.

Papers & Posters:

  • Bogart, B. & Schiphorst, T. (2009). "Memory Association Machine: Growing Form from Context" Pure-Data Convention Proceedings, São Paulo, Brazil. [View PDF]
  • Bogart, B. (2008). "Memory Association Machine: An Account of the Realization and Interpretation of an Autonomous Responsive Site-Specific Artwork" Master of Science Thesis, Simon Fraser University, Vancouver, Canada. [View PDF]
  • Bogart, B. (2009). "Memory Association Machine" Handbook of Research on Computational Arts and Creative Informatics, Information Science Reference, Chapter XIII: 213-232. [View PDF]
  • Bogart, B. D. R. & Pasquier, P. (2011). "Context Machines: A Series of Autonomous Self-Organizing Site-Specific Artworks" Proceedings of the 17th International Symposium on Electronic Art (ISEA 2011), Sabanci University, Istanbul, Turkey. Retrieved from http://isea2011.sabanciuniv.edu/paper/context-machines-series-autonomous-self-organizing-site-specific-artworks
  • Bogart, B. D. R. & Pasquier, P. (2013). "Context Machines: A Series of Situated and Self-Organizing Artworks" Leonardo, 46(2), 114-122.
  • Website: www.ekran.org/ben/wp/2007/self-other-organizing-structure-1-2007/index.php

Self-Organized Landscapes

“Self-Organized Landscapes” is a series of collages, each composed of thousands of close-up images drawn from an urban landscape. The arrangement of the images is determined by a “Self-Organizing Map”, a machine-learning approach that organizes data according to its essential structure. This is the same approach used in the “Memory Association Machine” and “Dreaming Machine” installations.

Members: Ben Bogart.

EMVIZ

{Efforts+Movement+Motion+Metaphor+Visualization}

The Poetics of Movement Quality Visualization

EMVIZ is an interactive artistic visualization engine that produces dynamic visual representations of the Laban Basic Efforts, which are derived from the rigorous framework of Laban Movement Analysis. Movement data is obtained from a real-time machine-learning system that applies Laban Movement Analysis to extract movement qualities from a moving body. EMVIZ maps the Basic Efforts to design rules, drawing parameters, and color palettes, creating visual representations that amplify the audience's ability to appreciate and differentiate between movement qualities.

Members: Pat Subyen, Diego Maranan, Thecla Schiphorst, Philippe Pasquier, Lyn Bartram.

Papers & Posters:

  • Subyen, P., Maranan, D. S., Schiphorst, T., Pasquier, P. & Bartram, L. (2011). "EMVIZ: The Poetics of Movement Quality Visualization" Computational Aesthetics 2011: Eurographics Workshop on Computational Aesthetics in Graphics, Visualization and Imaging, Vancouver, Canada. [View PDF]
  • Subyen, P., Maranan, D. S., Carlson, K., Schiphorst, T. & Pasquier, P. (2011). "Flow: Expressing Movement Quality" CHI 2011 Workshop: The User in Flux: Bringing HCI and Digital Arts Together to Interrogate Shifting Roles in Interactive Media, Vancouver, Canada.
  • Subyen, P., Maranan, D. S., Schiphorst, T. & Pasquier, P. (2010). "Mapping, Meaning and Motion: Designing Abstract Visualization of Movement Qualities" Digital Resources for the Humanities & Arts 2010: Sensual Technologies: Collaborative Practices of Interdisciplinarity, Brunel University, London, UK.
  • Subyen, P., Maranan, D. S., Schiphorst, T. & Pasquier, P. (2010). "Paint With Your Efforts: Interactive Installation" E-mixer'10, Surrey Art Gallery, Surrey, Canada.
  • Subyen, P., Maranan, D. S., Schiphorst, T., Pasquier, P. & Bartram, L. (2011). "The Poetics of Movement Quality Visualization" 6th Annual IRMACS Day, Simon Fraser University, Burnaby, Canada.
  • Website: www.emviz.info

Cell Automaton

Improvising Automata.

Initially developed for the Machines12 event, this improvising automaton is an autonomous artificial agent able to improvise sound-music with one or more human musicians. The agent is capable of hearing and analysing an incoming audio signal (in both the temporal and spectral domains). The musical behavior of this proactive agent is controlled by a cellular automaton inspired by von Neumann's work on abstract machine reproduction. Each cell contains a sound object that is diffused according to the activation level of the cell and updated according to what the agent is hearing.
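
A toy sketch of such a sound-driven cellular automaton (the update rule and parameters are invented for illustration):

    import random

    SIZE = 8
    activation = [[random.random() * 0.1 for _ in range(SIZE)] for _ in range(SIZE)]

    def step(grid, heard=0.5):
        # Each cell's activation follows its neighbours plus the analysed input
        # level; a cell whose activation is high enough would diffuse its sound.
        new = [[0.0] * SIZE for _ in range(SIZE)]
        for r in range(SIZE):
            for c in range(SIZE):
                neighbours = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                              if (dr, dc) != (0, 0)]
                level = 0.6 * sum(neighbours) / 8 + 0.4 * heard
                new[r][c] = max(0.0, min(1.0, level))
        return new

    for _ in range(10):
        activation = step(activation)
    print(activation[0][:4])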

Members: Philippe Pasquier.

Synesketch

Synesketch is a free, open-source engine for textual emotion recognition and artistic visualization.

In a nutshell, the software takes sentences as input (for example, the sentences of a chat conversation) and analyses their emotional content in terms of emotional types (happiness, sadness, anger, fear, disgust, surprise), weights (how intense the emotion is), and valence (whether it is positive or negative). The recognition technique is grounded in a refined keyword-spotting method that employs: (a) a WordNet-based word lexicon, (b) a lexicon of emoticons, common abbreviations and colloquialisms, and (c) a set of heuristic rules.
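
A minimal sketch of weighted keyword spotting in this spirit (the lexicon, intensifier rule, and weights below are toy stand-ins for Synesketch's much richer resources):

    # Toy lexicon: token -> (emotion type, weight, valence sign).
    LEXICON = {
        "happy":  ("happiness", 0.8, +1),
        "great":  ("happiness", 0.6, +1),
        "sad":    ("sadness", 0.8, -1),
        "afraid": ("fear", 0.7, -1),
        ":)":     ("happiness", 0.5, +1),
    }
    INTENSIFIERS = {"very": 1.5, "so": 1.3}

    def analyse(sentence):
        scores, valence, boost = {}, 0.0, 1.0
        for token in sentence.lower().split():
            if token in INTENSIFIERS:
                boost = INTENSIFIERS[token]   # boost the next emotional word
                continue
            if token in LEXICON:
                emotion, weight, sign = LEXICON[token]
                scores[emotion] = scores.get(emotion, 0.0) + weight * boost
                valence += sign * weight * boost
            boost = 1.0
        return scores, valence

    print(analyse("i am so happy :)"))  # -> ({'happiness': 1.54}, 1.54)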

Once the recognition process is finished, Synesketch visualizes the recognized emotions in the form of real-time generative abstract animated art. Several visualization systems are currently employed, from the simplistic Hooloovoo (a grid of colored squares) to the complex Synemania (a drawing system of various particles, partially based on the algorithms of Jared Tarbell). However, since Synesketch is open source, any developer or designer can create their own visualizations.

Synesketch has already been used by several developers in various contexts. It was reviewed by Creative Review magazine and selected for the Alternative Party art festival in Finland. The recognition algorithm behind Synesketch won the Belgrade Chamber of Commerce Award for the best graduation thesis at the University of Belgrade.

Members: Uros Krcadinac.

Website: www.synesketch.krcadinac.com

Beatback

Real-time Interactive Percussion for Rhythmic Exploration.

Traditional drum machines and digital drum kits offer users the ability to practice or perform with a supporting ensemble, such as bass, guitar, and piano, but rarely offer support in the form of an accompanying percussion part. Beatback is a system that addresses this missing interaction by offering a MIDI-enabled drum system which learns and plays in the user's style. In the context of rhythmic practice and exploration, Beatback draws on call-and-response and accompaniment models of interaction to enable new possibilities for rhythmic creativity.
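
One simple way to sketch "learning the user's style" (a plain first-order transition count over drum events, an assumption made for this example rather than Beatback's actual model):

    import random
    from collections import defaultdict

    model = defaultdict(lambda: defaultdict(int))

    def learn(pattern):
        # pattern: one drum (or None for a rest) per sixteenth-note step.
        for prev, nxt in zip(pattern, pattern[1:]):
            model[prev][nxt] += 1

    def respond(length=16, start="kick"):
        # Sample a response in the learned style for call-and-response play.
        out, current = [start], start
        for _ in range(length - 1):
            options = model[current]
            if options:
                current = random.choices(list(options),
                                         weights=list(options.values()))[0]
            else:
                current = start
            out.append(current)
        return out

    learn(["kick", None, "hat", None, "snare", None, "hat", "hat"] * 2)
    print(respond())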

Members: Andrew Hawryshkewich, Philippe Pasquier, Arne Eigenfeldt.

Papers & Posters:

  • Hawryshkewich, A., Pasquier, P. & Eigenfeldt, A. (2010). "Beatback: A Real-time Interactive Percussion System for Self Directed Practise" New Interfaces of Musical Expression (NIME), Sydney, Australia. [View PDF]

Freepad

A Custom Paper-based MIDI Interface

Using computer vision and collision-detection techniques, Freepad further explores the development of mixed-reality interfaces for music. The result is an accessible, user-definable MIDI interface for anyone with a webcam, a pen, and paper, which outputs MIDI notes whose velocity is based on the speed of the strike.
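
The strike-detection idea can be sketched with frame differencing (a generic technique; the region handling, threshold, and velocity scaling below are illustrative assumptions, not Freepad's implementation):

    import numpy as np

    def strike_velocity(prev_frame, frame, pad_region, threshold=10.0):
        # Motion energy inside the pad region, from simple frame differencing;
        # a faster strike changes more pixels per frame, hence a louder note.
        diff = np.abs(frame[pad_region].astype(int)
                      - prev_frame[pad_region].astype(int))
        energy = diff.mean()
        if energy < threshold:
            return None                       # no strike detected
        return int(min(127, energy * 4))      # map energy to MIDI velocity

    # Synthetic frames standing in for real webcam capture.
    prev = np.zeros((240, 320), dtype=np.uint8)
    cur = prev.copy()
    cur[100:140, 100:160] = 200               # hand enters the pad area quickly
    print(strike_velocity(prev, cur, (slice(90, 150), slice(90, 170))))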

Members: Sungkuk Chun, Matthieu Macret, Andrew Hawryshkewich, Keechul Jung, Philippe Pasquier.

Papers & Posters:

  • Chun, S., Hawryshkewich, A., Jung, K. & Pasquier, P. (2010). "Freepad: A Custom Paper-Based MIDI Interface" New Interfaces of Musical Expression (NIME), Sydney, Australia. [View PDF]
  • Website: www.metacreation.net/freepad/index.php

Eavesdropping

Audience Interaction in Networked Audio Performance

A networked audio performance environment that mixes the moods of audience members via an artificial-agent conductor to form a diverse acoustic ecology based on the auditory display of their mood data. Audience members input their moods to start the performance, and a reinforcement-learning engine offers participants the opportunity to improve the validity of the audio-to-mood mapping.

Members: Jack Stockholm, Philippe Pasquier.

Papers & Posters:

  • Stockholm, J. & Pasquier, P. (2008). "Eavesdropping: Audience Interaction in Networked Audio Performance" ACM International Conference on Multimedia (ACM MM 2008), Vancouver, Canada, pages 559-568. [View PDF]
  • Stockholm, J. (2008). "Eavesdropping: Network Mediated Performance in Social Space." Leonardo Music Journal, 18 (2008): 55-58. [View PDF]
  • Stockholm, J. & Pasquier, P. (2009). "Reinforcement Learning of Listener Response for Mood Classification of Audio" Proceedings of the First International Workshop on Social Behavior in Music (in conjunction with the IEEE Conference on Social Computing). [View PDF]

BeatBender

Subsumption Architecture for Autonomous Rhythm Generation

BeatBender is a computer music project that explores a new method for generating emergent rhythmic drum patterns using the subsumption architecture. Rather than explicitly coding symbolic intelligence into the system using procedural algorithms, BeatBender uses a behavior-based model to elicit emergent rhythmic output from six autonomous agents. From an artistic perspective, the rules used to define the agent behavior provide a simple but original composition language. This language allows the composer to express simple and meaningful constraints that direct the behavior of the agent-percussionists. From these simple rules emerge unexpected behavioral interactions that direct the formation of complex rhythmic output. Strikingly, the resulting rhythmic patterns, whose complexity would be impractical to specify by hand, are both musically interesting and aesthetically pleasing.
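
A toy subsumption-style percussion agent might look like this (the layers and rules are invented for illustration; BeatBender's actual behaviours differ):

    # Layered behaviours: a higher-priority layer suppresses the ones below it.
    def rest_if_crowded(state):
        if state["others_playing"] >= 4:
            return "rest"

    def accent_downbeat(state):
        if state["step"] % 8 == 0:
            return "accent"

    def default_pulse(state):
        return "hit" if state["step"] % 2 == 0 else "rest"

    LAYERS = [rest_if_crowded, accent_downbeat, default_pulse]  # high -> low

    def act(state):
        for layer in LAYERS:
            action = layer(state)
            if action is not None:   # higher layers subsume lower ones
                return action

    pattern = [act({"step": s, "others_playing": 2}) for s in range(16)]
    print(pattern)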

Members: Aaron Levisohn, Philippe Pasquier.

Papers & Posters:

  • Levisohn, A. & Pasquier, P. (2008). "BeatBender: Subsumption Architecture for Rhythm Generation" ACM International Conference on Advances in Computer Entertainment (ACE 2008), Yokohama, Japan, pages 51-58. [View PDF]
  • Website: www.aaronlevisohn.com/beatbender.html

Automatic Video Game Level Generation

Automatic Game Design through Genetic Algorithms

We are developing a generative system that automatically designs levels for skill-based video games such as Super Mario Bros. and The Legend of Zelda. Using evolutionary algorithms, levels are selected and bred together based on how fun they are, as determined by a computational model of player enjoyment. The hope is that this method will help small development teams compete with big-budget studios. Since video game creation typically requires a team of professional artists and designers to craft each piece of game content by hand, this research could considerably reduce the time and expense needed to produce games. Furthermore, procedural generative techniques not only promise to produce virtually limitless amounts of content, but also make it possible to tailor content to the individual player.
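
A compact sketch of the evolutionary loop (the "fun" measure below is a fabricated stand-in for the project's challenge-based model of player enjoyment):

    import random

    def random_level(length=20):
        # A level as a sequence of per-segment challenge values in [0, 1].
        return [random.uniform(0, 1) for _ in range(length)]

    def fun(level):
        # Toy fitness: reward rising-and-falling difficulty, punish overload.
        swings = sum(abs(b - a) for a, b in zip(level, level[1:]))
        too_hard = sum(c for c in level if c > 0.9)
        return swings - 2.0 * too_hard

    def breed(a, b):
        # One-point crossover plus occasional mutation.
        cut = random.randrange(1, len(a))
        child = a[:cut] + b[cut:]
        if random.random() < 0.3:
            child[random.randrange(len(child))] = random.uniform(0, 1)
        return child

    population = [random_level() for _ in range(50)]
    for _ in range(100):
        population.sort(key=fun, reverse=True)
        parents = population[:10]
        population = parents + [breed(*random.sample(parents, 2))
                                for _ in range(40)]
    print(max(population, key=fun))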

Members: Nathan Sorenson, Philippe Pasquier.

Papers & Posters:

  • Sorenson, N. & Pasquier, P. (2010). "The Evolution of Fun: Automatic Level Design through Challenge Modeling" Proceedings of the First International Conference on Computational Creativity (ICCCX), ACM Press, Lisbon, Portugal, 258–267. [View PDF]
  • Sorenson, N. & Pasquier, P. (2010). "Towards a Generic Framework for Automated Video Game Level Creation" International Conference on Evolutionary Computation in Games (EvoGame), Istanbul, Springer, 2010. [View PDF]

Naos

Biometric Architecture

Naos is a research group dedicated to the investigation of automated biometric classification and the exploration of biometric architecture. The project's objective is to design an ergonomic brain-computer-architectural interface, compelling biometric visualizations, and psychological test examples, in order to explore the legitimacy of these types of systems at the nexus of the biological, the computational, and the architectural.

Its first project is the Naos Platform™, a biometrics and psychological testing system. The Naos Platform™ includes a Biometrics Capsule, the Biometric Tendency Recognition and Classification System (BTRCS)™, and the Naos Adherence Index™. The current implementation utilizes a Neurosky™ EEG brain-scanning headset, a Thought Technologies™ galvanic skin response sensor, biometrics-recording software, a user-profile and biometrics database, an architectural capsule system, and an example psychological test: the Naos Loyalty Test. This test classifies individuals based on their reactions to people of other races in order to determine their level of prejudice. It serves as a very basic example, one of many, that could be undertaken using the Naos Platform™.

Members: Carlos Castellanos, Philippe Pasquier.

Papers & Posters:

  • Castellanos, C., Pasquier, P., Thie, L. & Che, K. (2008). "Biometric Tendency Recognition and Classification System: An Artistic Approach" Proceedings of the 3rd International Conference on Digital Interactive Media in Entertainment and Arts, Athens, Greece, 166-173. [View PDF]
  • Website: www.projectnaos.com

Genetic Programming for Synthesizer Evolution

Synthesizers are hardware or software instruments designed to generate sounds. Given a sound, the question is: what is a (or the best) synthesizer capable of producing it? This research explores a method for the automated design of synthesizers that reproduce a given target sound. The synthesizer's architecture and its parameters are grown using Genetic Programming (GP), a population-based evolutionary algorithm. The resulting synthesizers are presented as interactive Pure Data patches.
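
The representation can be sketched as an expression tree over simple unit generators, grown at random as a GP initialization step would do (a toy formulation, not the authors' operator set):

    import math, random

    OPS = ["+", "*"]

    def grow(depth=3):
        # Randomly grow a patch: leaves are sine oscillators, inner nodes mix
        # ("+") or ring-modulate ("*") their children.
        if depth == 0 or random.random() < 0.3:
            return ("osc", random.uniform(50, 2000))
        return (random.choice(OPS), grow(depth - 1), grow(depth - 1))

    def render(node, t):
        if node[0] == "osc":
            return math.sin(2 * math.pi * node[1] * t)
        _, a, b = node
        x, y = render(a, t), render(b, t)
        return x + y if node[0] == "+" else x * y

    patch = grow()
    samples = [render(patch, n / 44100) for n in range(512)]
    print(patch)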

Members: Noemie Perona, Matthieu Macret, Denis Lebel, Philippe Pasquier.

Papers & Posters:

  • Macret, M. & Pasquier, P. (2013). "Automatic Tuning of the OP-1 Synthesizer Using a Multi-objective Genetic Algorithm" Proceedings of the Sound and Music Computing Conference (SMC). [View PDF]

Typological Analysis of Gesture Interaction

One of the important aesthetic-compositional features of acousmatic music is the complex gestural interaction between various sound units. We have undertaken a typological analysis of gesture interaction in canonical works of acousmatic music in order to advance understanding of the factors contributing to the creation of "successful" gesture interaction within the genre. Using a perceptual, listener-based analysis methodology, we have developed typological databases at three levels: micro, meso, and macro. While the results of our analysis may be useful in compositional, pedagogical, and musicological endeavours, we envision their use within generative and interactive computational systems.

Members: Adam Basanta, Arne Eigenfeldt.

Papers & Posters:

  • Basanta, A. & Eigenfeldt, A. (2010). "Perceptual Analysis of Gesture Interaction in Acousmatic Music" Electroacoustic Music Studies Conference (EMS), Shanghai. [View PDF]

Automatic Calibration of Audio Synthesis

Automatic Calibration of Modified FM Synthesis to Harmonic Sounds using Genetic Algorithms

We propose a system that automatically calibrates Modified FM (ModFM) synthesizers. Reproducing the sounds of musical instruments has been successfully achieved by many audio synthesis techniques. However, this task can be difficult and time-consuming, especially when there is no intuitive correspondence between a parameter value and the change in the produced sound. Searching the parameter space of a given synthesis technique is therefore a task more naturally suited to an automatic optimization scheme. Using a genetic algorithm and a fitness function based on harmonic analysis, our system is able to automate the calibration of a ModFM synthesis model for the reconstruction of harmonic instrument tones.
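
A sketch of a harmonics-based fitness in the spirit described above (the paper's exact measure may differ; the comparison here is a simple distance over the first few harmonic amplitudes):

    import numpy as np

    def harmonic_amplitudes(signal, f0, sr=44100, n_harmonics=8):
        # Amplitude of each harmonic k*f0, read from the nearest FFT bin.
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), 1.0 / sr)
        return np.array([spectrum[np.argmin(np.abs(freqs - k * f0))]
                         for k in range(1, n_harmonics + 1)])

    def fitness(candidate, target, f0):
        # Higher is better for the GA: negate the harmonic-profile distance.
        a = harmonic_amplitudes(candidate, f0)
        b = harmonic_amplitudes(target, f0)
        return -np.linalg.norm(a - b)

    t = np.arange(44100) / 44100
    target = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    candidate = np.sin(2 * np.pi * 220 * t)
    print(fitness(candidate, target, f0=220.0))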

Members: Matthieu Macret, Philippe Pasquier.

Papers & Posters:

  • Macret, M., Pasquier, P. & Smyth, T. (2012). "Automatic Calibration of Modified FM Synthesis to Harmonic Sounds Using Genetic Algorithms" Proceedings of the 9th Sound and Music Computing Conference, Copenhagen, Denmark, 8 pages. [View PDF]

GEDMAS

The Generative Electronic Dance Music Algorithmic System

The Generative Electronic Dance Music Algorithmic System (GEDMAS) stems from the Generative Electronica Research Project (GERP) of Simon Fraser University's Metacreation, Agent and Multi-Agent Systems lab (MAMAS). GEDMAS generates full breakbeat-style songs, or tracks, by probabilistically modelling a corpus of 24 fully produced breakbeat tracks. The generated tracks contain top-level song-form structures created using a first-order Markov model, and bottom-level instrumental sequences based on probabilistic models of the analyzed corpus.
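
The form level can be illustrated with a small first-order Markov chain over section labels (the labels and transition probabilities below are invented for the example, not derived from the 24-track corpus):

    import random

    FORM_MODEL = {
        "intro":     {"buildup": 0.8, "verse": 0.2},
        "buildup":   {"drop": 1.0},
        "drop":      {"breakdown": 0.6, "verse": 0.4},
        "verse":     {"buildup": 0.7, "breakdown": 0.3},
        "breakdown": {"buildup": 0.5, "outro": 0.5},
    }

    def generate_form(start="intro", max_len=12):
        # Walk the chain until the form resolves to an outro (or gets too long).
        form = [start]
        while form[-1] != "outro" and len(form) < max_len:
            options = FORM_MODEL[form[-1]]
            form.append(random.choices(list(options),
                                       weights=list(options.values()))[0])
        return form

    print(generate_form())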

Members: Chris Anderson, Arne Eigenfeldt.

Dreaming Machine

“Dreaming Machine #3” is a site-specific generative installation in the series of "Context Machines" (Bogart, 2011, 2013). It perceives the visual context of the installation through a video camera and generates images that are presented to the viewer; these images represent the phenomenological experience of the images we hold in our minds. The authors have developed an Integrative Theory (Bogart, 2013) that links perception, mental imagery, mind-wandering, dreaming, and spontaneous creativity into a single conception of simulation supported by neurological evidence. The Dreaming Machine is an artwork and computational model that realizes the Integrative Theory. The system uses artificial intelligence methods to make sense of incoming visual material and learns objects and patterns that are used to simulate external reality; the result of the simulation is presented to the viewer as a moving image. The simulation operates in three distinct modes. In the perceptual mode, the viewer sees a reconstruction of the camera's image built up from already learned components. In the mind-wandering mode, a relatively static camera image causes a hallucinatory response in which imagined images, constructed from learned components, are superimposed on the perceptual image. In the dreaming mode, the absence of external information (due to darkness) leads to totally fabricated images that simulate external reality. These dreamed images diverge from physical reality because they are no longer anchored in external sensory information and because the simulator is unable to perfectly reproduce reality.

Agent and Multi-Agent

SC-Sim

Social Coherence in Multi-Agent Organizations

This project presents a social-coherence-based model and simulation framework for studying the dynamics of multi-agent organizations. This simple operational model introduces the notion of social coherence as the main social organizing principle in multi-agent systems (MAS). Moreover, the model rests on the notion of social commitment to represent all of the agents' explicit inter-dependencies, including roles and organizational structures. Sanction policies provide social-control mechanisms to regulate the enforcement of social commitments; in this model, social control is integrated directly into the coherence calculus. Local coherence is the driving force that organizes agents' behaviour and from which social coherence emerges. The SC-Sim simulator has been implemented as a Java applet.

Members: Erick Martinez, Philippe Pasquier.

Papers & Posters:

  • Martínez, E., Kwiatkowski, I. & Pasquier, P. (2010). "Towards a Model of Social Coherence in Multi-Agent Organizations" Proceedings of the 9th International Workshop on Coordination, Organization, Institutions and Norms in Multi-Agent Systems (AAMAS 2010), Toronto, Canada. [View PDF]

Area Coverage

Fault-tolerant multi-robot area coverage

In this research, we propose a new, efficient approach to the problem of multi-agent area coverage in which the structure of the map is known to the robots and the robots have a limited visibility range, meaning that they cannot detect objects located outside their visibility area. We mainly apply computational-geometry concepts and algorithms, such as the Constrained Delaunay Triangulation and the Visibility Graph, to build a roadmap that the robots can use to cover the entire area.

Multi-agent area coverage using a single-query roadmap

In this project, we propose a mechanism for visually covering an area with a group of homogeneous reactive agents using a single-query roadmap called the Weighted Multi-Agent RRT (WMA-RRT), a variation of the Rapidly-exploring Random Tree (RRT) roadmap. The agents have no prior knowledge of the environment, and the roadmap is only locally available to them. In accordance with swarm-intelligence principles, the agents are simple autonomous entities capable of interacting with the environment by obeying explicit rules and performing the corresponding actions. Interaction between the agents is carried out through an indirect communication mechanism and leads to the emergence of complex behaviours such as multi-agent cooperation and coordination, path planning, and environment exploration.
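
For orientation, here is a minimal sketch of plain RRT growth in the unit square (the weighting and multi-agent coordination that distinguish WMA-RRT are omitted):

    import math, random

    def rrt(start, n_nodes=200, step=0.05):
        # Grow a tree: sample a random point, find the nearest tree node, and
        # extend a fixed step from that node toward the sample.
        nodes, parent = [start], {0: None}
        for i in range(1, n_nodes):
            sample = (random.random(), random.random())
            nearest = min(range(len(nodes)),
                          key=lambda j: math.dist(nodes[j], sample))
            nx, ny = nodes[nearest]
            d = math.dist(nodes[nearest], sample) or 1e-9
            nodes.append((nx + step * (sample[0] - nx) / d,
                          ny + step * (sample[1] - ny) / d))
            parent[i] = nearest
        return nodes, parent

    nodes, parent = rrt((0.5, 0.5))
    print(len(nodes), "roadmap nodes")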

Members: Alireza Davoodi, Pooyan Fazli, Ali Nasri Nazif, Alan Mackworth, Philippe Pasquier.

Papers & Posters:

  • Davoodi, A., Fazli, P., Pasquier, P. & Mackworth, A. K. (2010). "Multi-Robot Area Coverage with Limited Visibility" The Ninth International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), Toronto, Canada. [View PDF]
  • Davoodi, A., Nazif, N. & Pasquier, P. (2009). "BDI Agents in Environment Coverage Using a Single Query Roadmap: A Swarm Intelligence Approach" International Workshop on Agent-based Collaboration, Coordination, and Decision Support (ACCDS), in conjunction with the 12th International Conference on Principles of Practice in Multi-Agent Systems (PRIMA), Springer, 16 pages. [View PDF]
  • Fazli, P., Davoodi, A., Pasquier, P. & Mackworth, A. K. (2010). "Fault-Tolerant Multi-Robot Area Coverage with Limited Visibility" Proceedings of the International Workshop on Search and Pursuit/Evasion in the Physical World: Efficiency, Scalability, and Guarantees, International Conference on Robotics and Automation (ICRA 2010), 6 pages. [View PDF]

Aesthetic Agents

Swarm-based Non-photorealistic Rendering using Multiple Images

The creation of expressive styles for digital art is one of the primary goals in non-photorealistic rendering. We introduce a swarm-based multi-agent system capable of producing expressive imagery through the use of multiple digital images. At birth, each agent in our system is assigned a digital image that represents its "aesthetic ideal". As agents move throughout a digital canvas, they try to "realize" their ideal by modifying the pixels in the canvas to be closer to the pixels in their aesthetic ideal. When groups of agents with different aesthetic ideals occupy the same canvas, a new image is created through the convergence of their conflicting aesthetic goals. We use our system to explore concepts and techniques from a number of Modern Art movements. The simple implementation and effective results produced by our system make a compelling argument for more research into swarm-based multi-agent systems for non-photorealistic rendering.
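
The core loop can be sketched in a few lines (canvas size, step size, and the random-walk movement rule are invented parameters for this example):

    import numpy as np

    rng = np.random.default_rng(1)
    H, W = 64, 64
    canvas = rng.random((H, W, 3))
    ideals = [rng.random((H, W, 3)), rng.random((H, W, 3))]   # conflicting ideals
    agents = [{"pos": rng.integers(0, H, 2), "ideal": ideals[i % 2]}
              for i in range(40)]

    for _ in range(2000):
        for a in agents:
            r, c = a["pos"]
            # "Realize" the ideal: nudge the canvas pixel toward the ideal pixel.
            canvas[r, c] += 0.2 * (a["ideal"][r, c] - canvas[r, c])
            # Random walk to a neighbouring cell (toroidal canvas).
            a["pos"] = (a["pos"] + rng.integers(-1, 2, 2)) % (H, W)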

Members: Justin Love, Philippe Pasquier.

Images:

Papers & Posters:

  • Love, J., Pasquier, P., Wyvill, B., Gibson, S. & Tzanetakis, G. (2011) "Aesthetic Agents: Swarm-based Non-photorealistic Rendering using Multiple Images" Computational Aesthetics in Graphics, Visualization, and Imaging (CA), Vancouver, Canada, 2011. [View PDF]
 

For more information, please contact the members who created the project you are interested in.