This last weekend was quite enjoyable. On Friday the Digital Arts Division of the Wits School of the Arts hosted a workshop with Friedrich Kirschner exploring the theory and tools used in making machinima (see this previous post for an explanation of machinima). After a presentation explaining the history of machinima and the different features of a modern computer game, Friedrich showed us how we are able to modify these different features to change the user experience. We had a short lunch break and returned for the practical session.
There were about 16 of us in the lab and we all started up the game Unreal Tournament 2004. Friedrich hosted a network game on one of the PCs, which we then joined. Chaos ensued for the next 5 minutes as we proceeded to run around blasting each other into little pieces, which was lots of fun. He then showed us how we could still have fun within the computer game world without blowing each other up, by playing games like hide-and-go-seek. After a few rounds of this he moved us on to coordinated dancing. He directed our game characters to a flat piece of ground, where we lined up in single file. Friedrich played some music through his laptop and then instructed every alternate character to jump on a certain beat, with the others crouching at the same time. It sounds quite simple, but it took us a good few minutes to get it right. The point of all this was to get us used to the game, and to discipline ourselves not to fire the rocket launchers in our hands (currently pointed at the next person in line) while following instructions from the ‘director’.
Then the cool stuff began. We made a music video – inside the game! Friedrich assigned different roles to various people in the workshop: director, executive producer, set scout, make-up and costume designer, choreographer, cameraman and actors/dancers. We each got 45 minutes to work out what we needed to do to make the music video. I joined the other dancers in the studio room, where we physically acted out all the ‘dance’ choreography (the game characters aren’t afforded any really complicated movement – mostly running, crouching and jumping). Then we all logged into the game and ‘filmed’ the music video. The cameraman was actually a player in the game (technically a ‘spectator’) who moved around the scenes and recorded what he was seeing using software called Fraps. In between dancers blasting each other with plasma rifles we managed to shoot a few scenes. My robot character starred in a short cutaway where, moments after crouching in front of a female dancer, I was blown into little pieces. My 4.2 seconds of fame. It was lots of fun.
The next day we met at the Goethe Institute of Johannesburg for the ink scanning event. The point of this is to generate a 3D model from a real-life object (in this case, a person). One can then use this 3D model in a game, which is pretty cool. The process is still very rough around the edges but works beautifully as a proof of concept. Friedrich had set up a large blow-up pool filled with water (and a rubber duck). A cheap webcam was placed overhead and connected to his laptop. He first ran some tests to decide what colour ink he would be using. The important thing is the contrast between the ink and the person’s skin colour. He tried black ink against the darkest-skinned person, and white ink against the lightest-skinned person. Whichever worked best would be used, as every other person’s skin would perform better with that ink choice. We settled on black ink. The principle is that with an object initially above the water you can see its entire profile, but as you lower the object into the opaque liquid the overhead camera sees less and less of it, depending on the object’s height, until finally, when it is fully submerged, the camera sees nothing but ink. While the ink makes the water opaque, there is so little of it in the pool that it does not stain your skin or your clothes at all.
The ink scanning works like this: the person being scanned jumps into the small pool and gets accustomed to the water. A black wooden rig in the shape of a cross is placed in the pool, which the scannee then lies on. There is one wide plank for the length of the body and a thinner plank perpendicular to it for resting one’s arms on. Friedrich then begins recording with the webcam above, while two other people help to slowly lower the rig into the ink. As soon as the person is submerged the recording is stopped and he/she is raised out of the water. It feels quite weird to be lowered into and raised out of the water – quite unnatural, as you have no control over the speed of the operation.
The software then takes apart each frame of the recording and divides all the colours it sees into just two – dark (ink) and light (body). Because every successive frame shows less and less of the body, each frame yields one layer of the 3D model – the software stacks these layers up against one another to produce the model. All the dark portions are ignored and what’s left is a model made up of all the light layers. It’s pretty hard to picture, but check out this video from Friedrich’s site for a visual explanation.
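The threshold-and-stack idea can be sketched in a few lines of Python. This is a hypothetical reconstruction to illustrate the principle, not moviesandbox’s actual code – the function name, the threshold value, and the synthetic frames are all my own:

```python
import numpy as np

def frames_to_voxels(frames, threshold=128):
    """Build a voxel model from a scan recording.

    frames    -- sequence of 2D grayscale arrays (uint8), one per
                 video frame, ordered from first (body fully visible)
                 to last (body fully submerged in ink).
    threshold -- brightness cut-off separating light pixels (body)
                 from dark pixels (ink).

    Frame i becomes horizontal slice i of the model: a voxel is
    solid wherever the body was still visible above the ink.
    """
    return np.stack([np.asarray(f) > threshold for f in frames])

# Synthetic "scan": a bright square that shrinks as the body sinks,
# leaving only ink (dark) in the final frame.
frames = []
for size in (8, 6, 4, 2, 0):
    frame = np.zeros((16, 16), dtype=np.uint8)       # ink = 0
    if size:
        frame[8 - size // 2:8 + size // 2,
              8 - size // 2:8 + size // 2] = 255     # body = 255
    frames.append(frame)

model = frames_to_voxels(frames)
print(model.shape)                    # 5 slices of 16x16 voxels
print([int(s.sum()) for s in model])  # solid area shrinks per slice
```

Each boolean slice is one “light layer” from the post above; stacking them along a new axis is what turns a flat video into a volume.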
All in all it was a most enjoyable and educational week! I’m sorry to have missed the second workshop day on Sunday (I had plans with my family), but I heard it was also really interesting. Friedrich showed the attendees more on how to edit the game characters and environment for shooting movies.
The software (moviesandbox) which Friedrich made and uses is available for download from his website. He encourages people to try it out and modify it for their own use.