
The Body and Mask in Ancient Theatre Space - Research Project

Integrating motion capture data - by Martin Blazeby

Unlike chromakey video technology, whose results can only be observed from a single pre-selected viewpoint, motion capture technology permits its results to be observed from any viewpoint in post-production. The disadvantage of this technology is that, left in its raw state, the captured data can only be displayed as a series of 'optical dots' tracking the motion of a performer. This in itself merits significant research analysis, but without the costume, mask and space its value is diminished. However, with the skilled expertise of a character animator it is possible to clothe and skin the optical motion data, which allows far greater flexibility for later research analysis, as the data can be interrogated from any viewpoint required.
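
To make the idea of the raw data concrete, the short Python sketch below (illustrative only, and no part of the project's own toolchain) treats a single captured frame as a set of 3D marker positions and re-expresses it in the coordinate frame of two freely chosen virtual viewpoints; the marker names and positions are hypothetical.

# Minimal sketch: raw optical motion capture treated as per-frame 3D
# marker positions, re-expressed from arbitrary viewpoints after recording.
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a world-to-camera rotation and translation for a chosen viewpoint."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    rotation = np.stack([right, true_up, -forward])   # rows = camera basis vectors
    return rotation, -rotation @ eye                  # R, t

# One frame of 'optical dots': hypothetical marker positions in metres.
markers = np.array([[0.0, 1.7, 0.0],    # head
                    [0.2, 1.4, 0.0],    # right shoulder
                    [-0.2, 1.4, 0.0]])  # left shoulder

# The same captured frame viewed from two different positions in the theatre.
for eye in (np.array([0.0, 1.6, 5.0]), np.array([8.0, 3.0, 0.0])):
    R, t = look_at(eye, target=np.array([0.0, 1.4, 0.0]))
    in_camera_space = markers @ R.T + t
    print(eye, "->", np.round(in_camera_space, 2))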

In terms of realism versus chromakey output, motion capture relies heavily on the quality of the modelled virtual actor. In essence the motion data is based on real movement, but if, for example, the modelled clothing does not synchronise with the body's motion, the sense of realism is seriously compromised. One advantage that motion capture does have over chromakey is that, once the actor is clothed and skinned, shadows can be generated as the virtual actor moves through space, whereas shadows are lost under the constraints of chromakey video recording. Chromakey video output still offers a far greater sense of realism than motion capture, but motion capture provides an otherwise impossible opportunity: to view an actor within a virtual ancient theatre space from any location, as never experienced before.
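
The point about shadows can be shown with a small, purely illustrative Python sketch: because the skinned motion-capture actor is genuine 3D geometry, its outline can be projected onto a flat stage floor along a light direction, something a flat chromakey video plate cannot provide. The vertex values and light direction below are hypothetical.

# Illustrative sketch: derive a shadow by projecting the actor's vertices
# onto the stage floor (y = 0) along a directional light.
import numpy as np

def project_to_floor(vertices, light_dir):
    """Project 3D points onto the plane y = 0 along a directional light."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    scale = vertices[:, 1] / light_dir[1]        # distance along the light to y = 0
    return vertices - scale[:, None] * light_dir

# Hypothetical vertices of the actor at one frame of motion (metres).
actor = np.array([[0.0, 1.7, 0.0], [0.3, 1.0, 0.1], [-0.3, 0.0, 0.0]])
shadow = project_to_floor(actor, light_dir=np.array([0.4, -1.0, 0.2]))
print(np.round(shadow, 2))   # all y values are 0: the shadow outline on the floor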

The process of taking the motion data recorded by Coventry University's optical tracking system and transforming it into a virtual clothed actor is a complicated technological procedure, made possible with the assistance of Dr. Woong Choi of the Kinugasa Research Organization, Ritsumeikan University, Kusatsu, Japan, a specialist in motion capture interfacing. The most important part of this process is the conversion of optical data into skeletal data. This is achieved by first creating a biped skeletal template in Autodesk 3D Studio Max, constructed to the exact proportions of the actor; it is hugely advantageous at this stage to have the biped rigged with clothing and human skin. The biped is then exported into Autodesk MotionBuilder, where the 'c3d' motion data is imported, attached and characterised to the biped skeleton. If the biped was scaled correctly in the first instance, the motion data should synchronise exactly. The biped, along with its motion, is then imported back into 3D Studio Max, replacing the original biped template. Finally, the virtual mask generated by the scanning phase of the project is imported, scaled and assigned to the biped along with the clothing.
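
The sketch below is a deliberately simplified illustration of the optical-to-skeletal conversion described above, not the procedure performed in 3D Studio Max and MotionBuilder: each skeletal joint is estimated per frame from the optical markers placed around it, here simply as a midpoint. The marker names, joint names and values are hypothetical.

# Simplified sketch of turning optical marker data into skeletal joint data.
import numpy as np

# Hypothetical mapping: joint name -> the two markers bracketing that joint.
JOINT_MARKERS = {
    "right_elbow": ("RELB_outer", "RELB_inner"),
    "right_wrist": ("RWRA", "RWRB"),
}

def optical_to_skeletal(frames):
    """frames: list of {marker_name: (x, y, z)} -> list of {joint_name: xyz}."""
    skeleton_frames = []
    for markers in frames:
        joints = {}
        for joint, (m1, m2) in JOINT_MARKERS.items():
            joints[joint] = (np.asarray(markers[m1]) + np.asarray(markers[m2])) / 2.0
        skeleton_frames.append(joints)
    return skeleton_frames

# Two captured frames of raw optical data (illustrative values, metres).
raw = [
    {"RELB_outer": (0.45, 1.10, 0.0), "RELB_inner": (0.39, 1.08, 0.0),
     "RWRA": (0.60, 0.95, 0.1), "RWRB": (0.56, 0.93, 0.1)},
    {"RELB_outer": (0.46, 1.12, 0.0), "RELB_inner": (0.40, 1.10, 0.0),
     "RWRA": (0.62, 0.98, 0.1), "RWRB": (0.58, 0.96, 0.1)},
]
for frame in optical_to_skeletal(raw):
    print({j: tuple(np.round(p, 3)) for j, p in frame.items()})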

Having successfully created the virtual actor with its motion, the next step is to merge the actor into the 3D ancient theatre setting associated with the performance. Virtual cameras are then positioned at various locations within the theatre; they can be animated or left in fixed positions, designed to focus on relevant aspects of the masked performance.
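
As a rough illustration of the camera placement stage (the theatre dimensions, counts and names below are hypothetical, not taken from the project's models), the following Python sketch positions a row of fixed virtual cameras on a semicircle around the orchestra and aims each one at the actor's mask.

# Sketch: fixed virtual cameras spaced around the cavea, aimed at the mask.
import numpy as np

def camera_positions(radius, height, count):
    """Evenly spaced camera positions on a semicircle facing the orchestra."""
    angles = np.linspace(0.0, np.pi, count)
    return [np.array([radius * np.cos(a), height, radius * np.sin(a)])
            for a in angles]

actor_mask = np.array([0.0, 1.6, 0.0])        # point of interest: the mask
for i, eye in enumerate(camera_positions(radius=15.0, height=4.0, count=5)):
    aim = actor_mask - eye
    aim /= np.linalg.norm(aim)
    print(f"camera {i}: position {np.round(eye, 1)}, aim {np.round(aim, 2)}")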


[Images: biped created in 3D Studio Max; biped rigged with costume and skin; biped with optical motion data; virtual actor and virtual theatre]
