One of my contributions as a researcher on the team led by Dr. Julia F. Christensen at the Max Planck Institute for Empirical Aesthetics was to integrate motion capture (MoCap) data into a generic Python 3 setup, in which multimodal data (video, audio and MoCap) and Deep Learning frameworks can be unified at will for more practical research and dissemination.
Given its open-source nature, active community, extensive documentation and the fact that it is based on Python, we went for Blender as the basis for this integrated multimedia platform. Blender already has video and audio I/O built in, so the main remaining task was to add MoCap to the mix. This is where the MVNX Blender add-on comes into play. Another related task was to integrate matplotlib interactively; I'll leave you a link to how I did it in case you want to check it out.
The MoCap system we used, made by XSENS, is presented in this paper. The system also comes with a very comprehensive manual. More details about the system internals can be found in this other paper, and further XSENS publications can be found here.
From the user's perspective, the system has two main components. One is the suit with the sensors, depicted in the first linked paper:
And the other is the proprietary XSENS software, which allows you to record, visualize, analyze and export the MoCap data (see the image below and e.g. chapter 13 of the manual):
The system has its quirks (e.g. the MoCap of a single body is consistent, but MoCap positions across several bodies are not), but once you find your way around it, it proves to be an effective and reliable way to record MoCap data.
The main bottleneck of the system is the proprietary software and its licensing. First, the system comes with a USB dongle to authorize usage: this is inconvenient because only one person can work with it at a time, and permission must be physically handed over. But the real limitation is that the system, being proprietary, cannot be freely integrated into custom applications. Together, these two limitations hinder distributed research and dissemination.
The solution, as already introduced, is to export the data in the best way possible and integrate it elsewhere. Luckily, section 14 of the manual covers the serialization formats in good detail. The software exports to several standards, but we went for MVNX, the XML version of their custom MVN format, since it is the most informative one. Page 88 of the manual summarizes its structure:
Check section 14.4 of the manual for more details if you are interested.
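To give an idea of what working with MVNX looks like outside the proprietary software, here is a minimal, hypothetical sketch (not the add-on's actual parser) that loads an MVNX file with Python's standard library and lists its segments and frames. The element and attribute names follow the structure summarized in the manual, but double-check them against your own export:

import xml.etree.ElementTree as ET

def load_mvnx(path):
    """Return (segment labels, frame elements) from an MVNX file."""
    root = ET.parse(path).getroot()
    # MVNX is namespaced XML; strip the namespaces to keep lookups simple
    for elem in root.iter():
        elem.tag = elem.tag.split("}")[-1]
    subject = root.find("subject")
    segments = [seg.get("label") for seg in subject.find("segments")]
    frames = subject.find("frames").findall("frame")
    return segments, frames

segments, frames = load_mvnx("recording.mvnx")  # path is a placeholder
print(len(segments), "segments,", len(frames), "frames")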
Blender is an amazing open-source project for 3D rendering, with a modern user interface, a very powerful and well-structured Python-based backend, and several competitive rendering engines with GPU support across multiple platforms.
The Blender manual and the corresponding Python API documentation are a great pleasure to read, and you usually find what you need quickly. The main tasks needed here were: installing the required Python dependencies into Blender's environment, creating and animating armatures, and extending Blender with an add-on.
Integration doesn't seem trivial at first glance: Blender comes with its own Python interpreter and environment, and Python dependencies can quickly become mixed up if care is not taken. But as it turns out, it ends up being quite straightforward. Assuming you have downloaded Blender to your favorite operating system and location, you have to locate the Python interpreter that comes with it. In our case (Ubuntu 18), it could be found in 2.80/python/bin/python3.7m (let's call it <BPYTHON>). You can run it, and you will see that (for most things) it is a regular Python interpreter. So getting started installing pip packages is actually as easy as:
<BPYTHON> -m ensurepip
<BPYTHON> -m pip install --upgrade pip
<BPYTHON> -m pip install <YOUR_PACKAGE> # no --user needed
<BPYTHON> -m pip install <PATH/TO/WHEEL>.whl # also works with wheels
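To confirm that a package actually landed in Blender's environment rather than in your system Python, a quick sanity check run with <BPYTHON> could look like this (numpy is just a placeholder for whichever package you installed):

import sys
print(sys.version)                        # should match the bundled interpreter, e.g. 3.7.x
import numpy                              # placeholder: the package you just installed
print(numpy.__version__, numpy.__file__)  # __file__ should resolve inside Blender's directory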
Creation and animation of armatures is well covered in the Animation & Rigging section of the documentation. Extending Blender with add-ons is covered in the Advanced section.
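To give a taste of what that looks like in practice, here is a minimal, hypothetical sketch (not the add-on's actual code, and all names are placeholders) that builds a one-bone armature and keyframes it, which is roughly the pattern the importer follows on a larger scale:

import bpy
from mathutils import Vector

# Create an armature datablock, wrap it in an object and link it to the scene
arm_data = bpy.data.armatures.new("mocap_armature")
arm_obj = bpy.data.objects.new("mocap_armature_obj", arm_data)
bpy.context.scene.collection.objects.link(arm_obj)
bpy.context.view_layer.objects.active = arm_obj

# Bones can only be created in edit mode, as EditBones
bpy.ops.object.mode_set(mode="EDIT")
eb = arm_data.edit_bones.new("Hips")
eb.head = Vector((0.0, 0.0, 0.0))
eb.tail = Vector((0.0, 0.0, 0.1))

# Animation happens in pose mode, on PoseBones, via keyframes
bpy.ops.object.mode_set(mode="POSE")
pb = arm_obj.pose.bones["Hips"]
for frame_idx, z in enumerate((0.0, 0.5, 1.0)):
    pb.location = (0.0, 0.0, z)
    pb.keyframe_insert(data_path="location", frame=frame_idx)
bpy.ops.object.mode_set(mode="OBJECT")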
All that remained now was to write a plugin that would parse the MVNX files, build the corresponding armature, and animate it with the recorded frames.
In the process, the main pitfalls I encountered (as a Blender first-timer) were the interplay between EditBones and PoseBones, the fact that certain tasks can only be done in edit mode or pose mode, and keeping track of the bone pointers once the mode has changed (references to EditBones become invalid after a mode switch, so it is safer to remember bone names and re-fetch the corresponding PoseBones afterwards). Nothing that banging my head against the console, documentation and forums wouldn't fix. This was eventually all solved, and you can see the result in this repository. Integrating it as an add-on and adding the GUI was an easy part, since I borrowed most of the boilerplate code from the preexisting BVH import/export plugin. The result looks as follows:
The human armature is then automatically built; see the next section for some videos.
One useful application for this is converting the recorded MoCap data into Point Light Displays (PLDs). To do that, one can basically create a set of icospheric meshes and attach them to the corresponding bones. The following video shows the 3D viewport in action, and right after it is a small sketch of how such an attachment can be done:
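This is a hypothetical snippet rather than the add-on's code; the object name, bone name and the 7.5 cm radius (for a 15 cm sphere) are just placeholders:

import bpy

arm_obj = bpy.data.objects["mocap_armature_obj"]  # the imported armature object

# Add an icosphere and parent it to one bone, so it follows that bone's motion
bpy.ops.mesh.primitive_ico_sphere_add(radius=0.075, location=(0.0, 0.0, 0.0))
sphere = bpy.context.active_object
sphere.parent = arm_obj
sphere.parent_type = "BONE"
sphere.parent_bone = "Hips"  # repeat for every segment you want as a point light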
And with a black background and white, 15 cm PLDs, the corresponding front and lateral camera video renderings look like this:
As can be seen, the great advantage of this setup is its flexibility: the exact same recording can be viewed from different perspectives, points can be arbitrarily removed or reshaped, rendering can be fully automated and batch-computed (see the sketch below), and everything can now be fully integrated with other modalities by importing video and audio assets into Blender. A win-win situation!
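As an illustration of the automation side, here is a hypothetical sketch that renders a scene's animation headlessly; the output path, resolution and format are placeholders, and it assumes the .blend file already contains a camera plus the keyframed armature with its spheres:

# Save as render_pld.py and run without the GUI:
#   blender --background my_scene.blend --python render_pld.py
import bpy

scene = bpy.context.scene
scene.render.resolution_x = 1280
scene.render.resolution_y = 720
scene.render.image_settings.file_format = "FFMPEG"
scene.render.ffmpeg.format = "MPEG4"
scene.render.filepath = "/tmp/pld_front_"
bpy.ops.render.render(animation=True)  # renders frame_start..frame_end into a video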
Original media in this post is licensed under CC BY-NC-ND 4.0. Software licenses are provided separately.