Mova: Using Aesthetic Interaction to Interpret and View Movement Data

Mova is an interactive movement tool that uses aesthetic interaction to interpret and view movement data. This publicly accessible, open-source, web-based viewer links to our movement database, allowing interpretation, evaluation, and analysis of multi-modal movement data. Mova integrates an extensible library of feature-extraction methods with a visualization engine: within the web-based environment, users can load movement data, select the body parts of interest, and choose the features to be extracted and visualized. Mova can be used to observe the movements of performers, to validate and compare feature-extraction methods, or to develop new feature-extraction methods and visualizations.
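As a rough illustration of the kind of feature extraction such a pipeline performs (this is a hypothetical sketch, not Mova's actual API or code), one simple movement feature is the frame-to-frame speed of a selected body part computed from its 3D positions:

```python
# Hypothetical sketch of a feature-extraction step in the spirit of Mova
# (not the actual Mova implementation): given per-frame 3D positions for
# one selected body part, compute its speed as an extracted feature.

def joint_speed(positions, fps=120):
    """positions: list of (x, y, z) tuples sampled at `fps` frames/second.
    Returns a list of speeds (units/second), one per frame transition."""
    speeds = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        speeds.append(dist * fps)
    return speeds

# Example: a hand moving 0.01 units per frame along x, captured at 120 fps
hand = [(0.01 * i, 0.0, 0.0) for i in range(5)]
print(joint_speed(hand))  # each speed is approximately 1.2 units/s
```

A time series like this can then be handed to the visualization engine and plotted alongside the raw motion, which is the validate-and-compare workflow described above.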

This is a collaborative project with the movingstories Lab.

Members

Omid Alemi, Philippe Pasquier

Link to the online system

www.sfu.ca/~oalemi/mova/

Link to the demo

www.sfu.ca/~oalemi/movan/

Link to the source code

github.com/omimo/Mova

Research paper

Omid Alemi, Philippe Pasquier, and Chris Shaw. Mova: Interactive Movement Analytics Platform. In Proceedings of the 2014 International Workshop on Movement and Computing (MOCO’14), June 2014.

Download PDF
