Microsoft has been working on the next big thing after the Xbox – unsurprisingly trying to steer attention away from the Wii… and the result is Project Natal. On their website they present it like this: “a revolutionary new way to play: no controller required. See a ball? Kick it, hit it, trap it or catch it. If you know how to move your hands, shake your hips or speak you and your friends can jump into the fun — the only experience needed is life experience”.
So there we are again, once more we are confronted with the big fantasy of playing and conversing with the computer (television/console) as if it were human… our “dream” of “natural interaction” is becoming closer and closer, it seems… But let’s have a look at what Project Natal really does:
Have a look at the promotional video on Microsoft’s Xbox website: a happy family plays football using natural body movements (but funnily enough without a ball!), a teenage boy scans his skateboard and sees himself on screen, a teenage girl shops online, visualizing how a dress will fit her (the examples are just a tiny bit gender-restricted!)… and finally a happy family plays quizzes together (using voice and movement recognition)… The novelty is NOT the activity but the fact that there is no physical interface (no joystick, no remote) and that the image on the screen reacts to the players’ body and voice interaction by speaking back and animating characters as if they were speaking to the user. Natal actually works using a camera to track the user’s movements via full skeletal mapping. It also recognizes voices and vocal commands.
This is definitely quite impressive… it seems to come straight from a sci-fi movie… to the point that one questions how much of the demo is faked (or at least cleverly rehearsed and programmed). But even if Natal is a few years away, the question still remains for me… how are we going to deal with physical and oral interaction when it comes to narrative?
In early June 2009, at the E3 conference, Project Natal (or Xbox LIVE) was presented to the world, showing games but also presenting possible narrative structures – where the user/player directly speaks with a fictional character. In its Lionhead demo, a fictional boy has an intelligent and sensible chat with the user/player… now what is the potential of such technology when applied to fictional stories? And to me, more importantly, could this be applied to interactive documentary? When simulation becomes very accurate, and when fiction verges towards edutainment… do we get a type of interactive documentary? If games such as America’s Army can be stretched to be considered documentaries – in the sense that they simulate real army life and that they are populated by existing characters – why could a Natal character not become a virtual representation of a real person?
Without entering into the discussion of whether true dialogue can happen between a human and a machine (therefore disregarding the Turing test)… my question is: even if satisfactory dialogue were possible with our television screen… what would we do with it? Simulation used to be a way to grasp and learn things that we could not do in our daily life (e.g. flying a plane, or killing aliens in a video game), but what purpose does it serve when it simulates our ordinary life?
When I kick a virtual football (one that only exists on a TV screen) with a real movement of my body, and I see a virtual me (an avatar) performing my own movement on a screen… what am I doing? Do I simulate being myself? Do I visualize what the feeling of being me in movement used to be?
I cannot help but struggle with this concept… I find it surreal. And yet… if I simulate myself in the context of a Natal narrative… am I documenting myself?
This entry was posted on Monday, June 15th, 2009