Keynotes
Kenny Mitchell (Disney Research)
Kenny Mitchell is an Imagineer and research head for the Walt Disney Company Ltd, with his lab located at Edinburgh University's business campus (an outpost of Disney Research Zurich). Over the past 16 years he has shipped games using high-end graphics technologies including voxels, volumetric light scattering, motion blur and curved surfaces. His PhD introduced the use of real-time 3D for information visualisation on consumer hardware, including a novel recursive perspective projection technique.
In between contributing to the technically acclaimed racing game Split Second, Spielberg's Boom Blox (a BAFTA award winner), Disney Infinity and the Harry Potter franchise games, he is involved in developing new intellectual properties. His work on video games and mixed reality technologies includes collaboration with all Disney business units and many successful funded university partnerships. He is a member of the EPSRC strategic advisory network and of a number of computing school advisory boards. He is the most senior Disney Research representative in the UK.
Andrew Willmott
Andrew Willmott is a veteran engineering and research lead in the video game industry. He spent the past twelve years working on a variety of simulation games for Maxis, including The Sims, SimCity, and Spore, culminating in a position as one of EA's most senior engineers. Earlier this year he co-founded the gaming startup Jellygrade, with the aim of bringing a fresh take on simulation games to mobile devices.
His core area of interest is computer graphics, particularly visual effects, and much of his focus has been on developing novel real-time solutions in this area. Some examples include lighting player-created levels, generating and rendering 3D planets, procedural biorama synthesis, simplifying meshes in real time, and using directional occlusion volumes for ambient occlusion.
Andrew holds a PhD from Carnegie Mellon University, where he worked with Paul Heckbert on finite-element global illumination, specialising in the processing of massive polygonal meshes. He was the engineering lead of the team that won a technical achievement BAFTA for the game Spore in 2009, and is a current BAFTA member. He lives in London with his wife Alma.
Christian Theobalt
Capturing and Editing Reality - Reconstruction and Modification of Models of the Real World in Motion
Even though many challenges remain unsolved, computer graphics algorithms for rendering photo-realistic imagery have seen tremendous progress in recent years. An important prerequisite for high-quality renderings is the availability of good models of the scenes to be rendered, namely models of shape, motion and appearance. Unfortunately, the technology to create such models has not kept pace with the technology to render the imagery. In fact, we observe a content creation bottleneck, as it often takes man-months of tedious manual work by animation artists to craft models of moving virtual scenes.
To overcome this limitation, the research community has been developing techniques to capture models of dynamic scenes from real-world examples, for instance methods that rely on footage recorded with cameras or other sensors. One example is performance capture methods that measure detailed dynamic surface models, for example of actors or an actor's face, from multi-view video and without markers in the scene. Even though such 4D capture methods have made big strides, they are still at an early stage. Their application is limited to scenes of moderate complexity in controlled environments, reconstructed detail is limited, and captured content cannot be easily modified, to name only a few restrictions.

In this talk, I will elaborate on some ideas on how to go beyond this limited scope of 4D reconstruction, and show some results from our recent work. For instance, I will show how we can capture more complex scenes with many objects or subjects in close interaction, and how we can capture higher shape detail as well as material parameters of scenes. The talk will also cover how we can capitalize on more sophisticated light transport models to enable high-quality reconstruction in much more uncontrolled scenes, eventually also outdoors, and with fewer cameras. If time allows, I will also demonstrate how to exploit such reconstruction methods to conveniently modify captured scenes, how to synthesize new content, and how to perform advanced edits on video footage by exploiting reconstructions of what happens in the scenes.
Christian Theobalt is a Professor of Computer Science and the head of the research group "Graphics, Vision, & Video" at the Max-Planck-Institut für Informatik, Saarbrücken, Germany. From 2007 until 2009 he was a Visiting Assistant Professor in the Department of Computer Science at Stanford University. He received his MSc degree in Artificial Intelligence from the University of Edinburgh, Scotland, and his Diplom (MS) degree in Computer Science from Saarland University, in 2000 and 2001 respectively. From 2001 to 2005 he was a researcher and PhD candidate in Hans-Peter Seidel's Computer Graphics Group at MPI Informatik. In 2005, he received his PhD (Dr.-Ing.) from Saarland University and MPI.
Most of his research deals with algorithmic problems that lie on the boundary between the fields of Computer Vision and Computer Graphics, such as dynamic 3D scene reconstruction and marker-less motion capture, computer animation, appearance and reflectance modeling, machine learning for graphics and vision, new sensors for 3D acquisition, advanced video processing, as well as image- and physically-based rendering.
For his work, he has received several awards, including the Otto Hahn Medal of the Max Planck Society in 2007, the EUROGRAPHICS Young Researcher Award in 2009, and the German Pattern Recognition Award in 2012. He is also a Principal Investigator and a member of the Steering Committee of the Intel Visual Computing Institute in Saarbrücken.