Because innovation is a growth lever, several companies have taken up the challenge of developing in-house tools to optimize their production. GameFusion developed Cyber Marionnette, a system for automatically generating characters, ideal for crowd scenes in feature films or series. Alkymia is offering both a new facial motion capture helmet and an iPad app for saving and cataloguing a production's media. Game Audio Factory and SolidAnim have combined their sound and mocap talents to develop a Kinect-based device for simultaneous facial motion capture and voice recording. Finally, Mercenaries Engineering has unveiled a new path-tracing render engine called Guerilla Render.
Research & Development Director
Creator of the visual concept for Renaissance, winner of the Cristal for best feature in 2006, Marc Miance now runs Alkymia, an R&D company for digital cinema created in 2010. With his production company, Let'So Ya!, he is producing the animation features Why I Did (Not) Eat my Father by Jamel Debbouze and Cyclions by Isabelle Tonnel.
Game Audio Factory
Jean-François Szlapka has been the co-CEO of SolidAnim since its inception in 2007. The company specialises in motion capture and new technologies related to 3D animation and special effects previsualisation for cinema. He is also responsible for coordinating SolidAnim's R&D department.
Since 1995, Andreas Carlén has been creating custom software solutions for the development of games and 2D/3D animation destined for both television and feature film productions. In 2008, he founded GameFusion, a company specialised in cross-media technologies for digital entertainment. He has also created a new set of collaborative software technologies combining traditional animation techniques and interactive game applications.
Cofounder, Lead R&D Engineer
Benjamin Legros co-founded Mercenaries Engineering with Cyril Corvazier in 2005 to develop innovative high-end production software designed specifically for the animation and visual effects industries.
Head of Technical Industries and Innovation
Keywords: GameFusion, Carlén, Cyber Marionnette, Alkymia, Miance, Horyzon, Jamel Debbouze, iPad, Game Audio Factory, SolidAnim, Percevault, Szlapka, facial mocap, post-synchro, Kinect, Gleam Session, Mercenaries Engineering, Guerilla Render, Legros.
GameFusion is a company specialized in services and technologies revolving around animation solutions. "The aim of all our research is to improve pipeline productivity," Andreas Carlén states. In any production, creating new characters is often complex and costly. It is for cost-reduction purposes that Cyber Marionnette was developed for Cyber Group Studios. "This is a character-generating tool, available stand-alone but also as a plug-in for Maya. Compatible with Linux, Mac OS X and Windows, it can also be used on iPad (as an iOS app) and on Android. Although it was tested for use on feature films, it is particularly well suited to series production."
The aim of its interface, developed in OpenGL and C++, is to simplify the authoring of characters. The founder of GameFusion explains: "You start by choosing the gender and desired 'morphotype' (morphology type). The software provides full modification controls for the face and the body, the accessories and the materials. The principle is founded on a series of presets. Thanks to its correlation with Maya, it's quite easy to get into the skeleton if you want to modify it along the way."
To change an item of clothing, a click on the area needing modification opens a window to alter its color or texture. Other windows give access to what is known as global viewing (body, build, muscles), while in the facial section it's possible to tweak a number of facial elements, all fully configurable: chin, skull, ears, cheekbones, cheeks, jaw, etc. Final generation and export are done with a click, and the finished result is a Maya file that slips seamlessly into a production pipeline.
Cyrille Martin, head animator, next describes the composition of the database that drives this "automated" generation. "The database contains two generic models, male and female, neutral enough to be declined into a wide array of variants. For deformations we use a system of cages, combined with blendshapes, enabling zone-based work – making it easier to obtain the changes." A more detailed cage – matching the modifiable parameters – corresponds to the face. The body cage offers structure-deforming functionalities and, in this particular case, any deformations are applied to the bones. The second level of deformation then occurs through shapes. Andreas Carlén emphasizes that the contents of the database must be decided upstream of the production if real optimization is to be obtained. "With each new preset, we can define a new series of lower-level presets."
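The blendshape half of this scheme can be sketched in a few lines. The following is a minimal, generic illustration of linear blendshape combination, not GameFusion's actual code; all names and array shapes are hypothetical:

```python
import numpy as np

def apply_blendshapes(base, targets, weights):
    """Linear blendshape combination: the neutral mesh plus weighted deltas.

    base:    (V, 3) array of vertex positions for the neutral mesh
    targets: dict name -> (V, 3) array of sculpted target positions
    weights: dict name -> float weight, typically in [0, 1]

    A target whose delta is non-zero only in one region (chin, jaw, ...)
    naturally gives the zone-based control described above.
    """
    result = base.astype(float).copy()
    for name, target in targets.items():
        w = weights.get(name, 0.0)
        result += w * (target - base)  # delta relative to the neutral mesh
    return result
```

Because each character keeps the same target set, tweaking one weight (a blink, for example) can be replayed identically on every generated character, which is the crowd-scene advantage Martin describes.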
With Cyber Marionnette, one of the important parts of the development phase was "to preserve the entire set of blendshapes, which is the result obtained through these deformation cages; we're able to save a good percentage of them." The only downside: "since the skin is not directly on the mesh, we have to pre-compute once again to obtain the transformation matrices." The advantage: any change to one character (blinking, for example) leads to the same change on all the other characters. "This is notably ideal for secondary characters or crowds," Martin points out.
Andreas Carlén explains that when a library is loaded in, the corresponding rigs are loaded with it, so the two mesh together automatically – provided this was specified upstream, of course.
Could we get data on productivity gains?
It's hard to say, since it's very much case-by-case. What must inevitably be modeled are the characters, but that is done once and for all.
What is the business model?
We worked on a per-studio license basis, at least in the Cyber Group case, since they really helped to perfect this tool.
Game Audio Factory and SolidAnim are two French companies located in Angoulême (SolidAnim also has premises in Ivry-sur-Seine, near Paris), the former specialized in sound, the latter in R&D and service provision in motion capture. Developed as a joint project, Gleam Session is a marker-less facial mocap system with integrated audio recording. The two companies had noted that motion capture was moving within reach, both technically and price-wise, and together they began discussing a system that could be based on Microsoft's Kinect. "We opted for 3D tracking," Jean-François Szlapka explains, "because we wanted to preserve the depth data while freeing ourselves from markers. Here we stay within the visual domain, meaning there's no lip sync."
In concrete terms, the Kinect captures a signal – full of noise – from which depth information can nonetheless be extracted. The co-founder of SolidAnim admits that the signal is not fully precise, but he states that the algorithm developed in-house makes it easy to fine-tune the results. "Then we place the video on top of the depth information. The system recognizes the face in the image and automatically generates a deformation, with low-resolution textures. Next, all that is needed is to start the animation."
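SolidAnim's in-house algorithm is proprietary, but a standard way to stabilize a noisy depth stream like the Kinect's is temporal exponential smoothing. The sketch below is a generic illustration under that assumption (the class name and `alpha` parameter are hypothetical, not SolidAnim's API):

```python
import numpy as np

class DepthSmoother:
    """Temporal exponential smoothing for a noisy depth stream.

    Each incoming frame is blended with the running state, trading a
    little latency for much steadier depth values.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha   # weight given to the newest frame
        self.state = None    # running smoothed depth map

    def update(self, depth_frame):
        frame = np.asarray(depth_frame, dtype=float)
        if self.state is None:
            self.state = frame.copy()
        else:
            self.state = self.alpha * frame + (1.0 - self.alpha) * self.state
        return self.state
```

A lower `alpha` damps sensor noise more aggressively at the cost of responsiveness, which is the usual tuning trade-off for real-time capture.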
The advantages of the system are: reasonable cost, no prep time, real-time previsualization and results that are equivalent to a first animation pass, which is then reworked by the animators… "It must be remembered that mocap is just a first step and that it's necessary to do the animation afterwards."
Continuing the presentation, Vincent Percevault describes the audio work. "In this technical jigsaw puzzle that is capture, one piece was missing to synchronize the body and facial capture with the voice recording." The most widely used solution is ADR (automated dialogue replacement), where the actors' dialogue is re-recorded in a sound booth to replace the voices recorded on set. Both speakers agree it's a waste of time.
Gleam Session allows all data – sound and facial mocap – to be recorded simultaneously. Even if the raw result is a bit rough, the SolidAnim algorithm, as with the motion, delivers quality real-time results from the PrimeSense sensor, while the sound is handled by standard software such as Pro Tools, already present in most studios. Vincent Percevault confirms: "We save an amazing amount of time on the post-synchro phase." The only apparent hitch is the loss of correlation with the body motion capture, "but this can also be seen as a plus, since the actors can focus either on their physical performance, or on their emotion when doing dialogue."
Gleam Session has already been used for the car racing game The Crew, published by Ubisoft and shown at E3 in June 2013. "We did all the facial animation of the game – 1,500 lines of dialogue – with near photo-realistic rendering," the Game Audio Factory founder recalls. The TV listings magazine Télé Loisirs also used Gleam Session for a two-and-a-half-minute video feature on its website, Le Cinéma de Zlatan, in which a football-player puppet presents his favorite movies of the week. "Taping a show takes a day: in the morning, body capture of the stand-in Zlatan Ibrahimović; in the afternoon, facial capture and voice recording via Gleam Session. All that remains to be done is the animation post-synchronization," Jean-François Szlapka notes.
Game Audio Factory and SolidAnim already plan to improve the system with real-time rendering, multilingual options and perhaps even, in a 2.0 version, real-time broadcasting with an embedded device for further optimization.
After founding Attitude Studio, Marc Miance developed two companies: Let'So Ya, focused on executive production, and Alkymia, focused on R&D. Here he presents two innovations, one from each activity, used in producing the feature film The Evolution Man (Pourquoi j'ai (pas) mangé mon père), directed by actor Jamel Debbouze.
This feature-length movie, adapted from Roy Lewis' novel, is being made in motion capture "with many well-known actors, including Jamel himself, which in itself raises the stakes for this type of production," Marc Miance notes by way of introduction. "We did not want to de-correlate the facial and body animation, because otherwise it would give the impression that the character is out of sync. This staggered effect is mainly due to the fact that the eye balance is no longer in step with the body balance," he explains. Miance had previously done the entire production of Renaissance, also in motion capture.
To fit his needs, Marc Miance therefore shopped around on the facial-capture head-mount market. "Sticking markers on the actors is a long, tedious and costly process, especially since positioning them slows down the shoot. What's more, the consumables are prohibitively expensive. As a reminder, on the Avatar movie the marker budget was about one million euros! Marker-free mocap has great possibilities, with tracking that can reach up to 200,000 points per frame on a face. There is also the data-processing solution, based on both metrics and expression. Companies like Cubic Motion offer it either as a service or as software."
The data capture itself is not so easy: according to Marc Miance, most of the cameras are of poor quality. "But we did not want to take on developing our own proprietary tools, since I know how hazardous that can be." Over the long run, however, that was the chosen option, with a Headcam designed by Laurent Martin and Jean-Paul Dasilva. Light, precise and sturdy, it weighs 350 g and is equipped with fastenings with no pressure points "to prevent headaches and adapt to different skull shapes. To perfect this, we scanned lots of head shapes with CAD software, produced a generic resin model, and finished with a series of three fiber head mounts equipped with laces for the fastening system. The headpieces are slightly heavier on the left than on the right, to balance out the excess weight of the camera and its arm. 'Excess' is a big word, because this camera – a Horus, made by the specialized embedded-camera company – weighs only 35 g. To obtain this featherweight, we removed all the electronics and placed them in an outside box," Marc Miance reveals.
Voice recording is done wirelessly: "We tested digital wireless (HF) but it kept cutting out. So we miniaturized the video server (weighing 300 g), placed it on the actor, and controlled everything over Wi-Fi. That way we can trigger and stop the capture as needed. All that's required is a stable Wi-Fi field. The other advantage: should the field cut out, the recorder keeps working."
The Headcam has already proven its mettle, having been used for nine weeks of shooting at a cost of "less than €1,000 per minute of facial mocap".
Like the facial mocap head mount, Horyzon was developed to fill an occasional need in production before being permanently installed in the production pipeline. "Horyzon is an iOS application for iPad which works like an e-mail inbox except that, instead of e-mails, the incoming items are the different media of a film production," Miance continues. The idea sparked a few years back and has been in development for three years. "Working with a director such as Jamel Debbouze, who must travel often, exchanges on the movie's progress could have become tedious. It's simple to read a single e-mail, but much less simple to read hundreds and, when a shot, an animatic, etc. needs to be green-lit, things can become very complex indeed."
The idea is to start from communities corresponding to a production's various stations and steps, applying filter parameters to grant or refuse modification rights depending on the profiles. A given medium is thus sent to a determined list of people, and on the iPad the QuickTime file can be displayed and possible modifications annotated, down to a single frame. The iPad's camera also makes it possible to record precise commentary, since it's oral and no time is lost. Each comment is stacked in, so the history of contributions can be reviewed.
Meanwhile, given the amount of stored data, Horyzon also has a gateway opening onto a timeline for the entire film, semantically organized for better housekeeping. As production moves forward, the timeline fills in and the movie takes structure, step by step. It's possible to access sequences still in the animatic phase, others in animation, or others more advanced still. An automated versioning system also gives a useful view of the different people working on a shot, as well as of previous versions.
To avoid too many intermediaries, the network's architecture is based on a community of managers, workers and followers. As such, it can be considered a dynamic database, but "not a production-tracking tool," Marc Miance affirms. "It is more a dynamic window looking in on the production at a given point in time. We actually use Shotgun as a production-tracking solution, and it can be plugged into Horyzon."
Do you plan to market these two products?
The head mount and the arm-camera will indeed be launched on the market in September 2013, and Horyzon in early 2014, as an application.
Mercenaries Engineering is a Paris-based company founded in 2005 by Cyril Corvazier and Benjamin Legros. It develops production tools for animation and VFX, notably its new lighting and rendering engine, Guerilla Render. "Our problem was to take into account both artistic and production imperatives," Benjamin Legros explains. "In the first case, the aim was to obtain the targeted image and to combine simplicity and flexibility; in the second, to obtain an operational workflow that would master the complexity while ensuring robustness and performance."
It is a path-tracing render engine, like Arnold, used for global illumination and to manage IBL (image-based lighting), shaders and physical lights. Flexible and programmable, it builds on industry standards (UDIM textures, RSL, DSO) with baking functions. Guerilla Render covers the following functions: fast ray tracing/casting, rapid ray-traced subsurface scattering, and shaders optimized and compiled with LLVM (Low Level Virtual Machine). "Our strong point is progressive rendering. Within a few seconds we get a result that may seem a little rough, but it already provides a good image and a correct view of the path to follow."
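The progressive behavior Legros describes follows from how a path tracer accumulates Monte Carlo samples: the running average of many noisy samples converges toward the final pixel value, so early iterations already give a rough but faithful image. A minimal sketch of that accumulation, illustrative only and not Guerilla's code (`sample_fn` stands in for one noisy radiance sample per call):

```python
import random

def progressive_estimate(sample_fn, iterations, callback=None):
    """Progressive Monte Carlo accumulation.

    Keeps a running mean of noisy samples; the estimate is usable after
    a handful of iterations and keeps refining as samples accumulate.
    """
    mean = 0.0
    for n in range(1, iterations + 1):
        mean += (sample_fn() - mean) / n  # incremental running average
        if callback is not None:
            callback(n, mean)             # e.g. refresh a preview image here
    return mean
```

Per pixel, the noise shrinks roughly as one over the square root of the sample count, which is why the image cleans up quickly at first and then refines more slowly.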
The second aspect of the tool is its potential for developing shaders and surfacing (positioning textures and materials), working on scene assembly, pre-lighting and lighting the scenes and, last of all, dispatching the renders to the farm. "We've designed it as a Maya plug-in, with baked geometry. We begin by assigning shaders to an asset. Then the animation has to be done; once it's finished, we export it all into Guerilla."
The central tool of these two innovation paths is called Render Graph, a node- and hierarchy-based graph – decoupled from the scene itself – used to assign textures, materials and other parameters. Its main advantage is the ability to easily replicate adjustments from one scene to the next, resulting in obvious productivity gains.
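As a rough illustration of the rule-based idea behind such a graph (hypothetical names, not Guerilla's actual Render Graph API), material assignments can be expressed as ordered pattern rules that are reusable from one scene to the next:

```python
from fnmatch import fnmatch

def assign_materials(object_names, rules):
    """Rule-based material assignment, decoupled from any one scene.

    rules is an ordered list of (glob_pattern, material) pairs; later
    rules override earlier ones, so broad defaults come first and
    specific overrides last. The same rule list can be replayed on the
    next scene, which is where the productivity gain comes from.
    """
    assignments = {}
    for name in object_names:
        for pattern, material in rules:
            if fnmatch(name, pattern):
                assignments[name] = material  # last matching rule wins
    return assignments
```

Because the rules refer to patterns rather than to specific objects, re-running them on a new shot's object list reproduces the same look without per-object rework.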
Guerilla Render will be presented at SIGGRAPH 2013 and has already been used in the production of five VFX features and one animation feature; it will soon be used on Ballerina, the animation feature produced by the French company Quad.
Drafted by Stéphane Malagnac, Prop'Ose, France
Translated by Sheila Adrian
The Annecy 2013 Conferences Summaries are produced with the support of:
under the editorial direction of René Broca and Christian Jacquemart