DataViz_2

April 17, 2013 · Posted in Augmented Reality, Data Visualization, Kinect · Comment 

I’ve arrived in Newcastle and set up in my studio at ISIS Arts. Last night I gave a talk/demo about the project. I’m using Processing as a development environment; it allows for sketches, which function as small, self-contained bits of code. So far I’ve got the Kinect camera working, and it triggers animations based on my hand movement. I’ve also got an XML feed showing Newcastle weather, a metro map textured onto a cylinder, and a stream of traffic warnings.

The demo was well attended, and I was able to touch base with my friends from Pixel Palace. Also in attendance were several new media curators from CRUMB and CultureLab, a hack-space collective, and several new media artists. It was a wonderful audience, and I was quite honored by their presence.
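For readers curious what consuming an XML weather feed looks like, here is a minimal sketch in Python (the residency code itself is written in Processing, and the real feed's structure isn't documented here, so both the feed layout and the field names below are hypothetical):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML standing in for a live Newcastle weather feed;
# the actual feed used in the project may look quite different.
SAMPLE_FEED = """
<weather city="Newcastle upon Tyne">
    <temperature unit="C">9</temperature>
    <condition>light rain</condition>
    <wind unit="mph">14</wind>
</weather>
"""

def parse_weather(xml_text):
    """Pull a few readings out of the feed into a plain dict."""
    root = ET.fromstring(xml_text)
    return {
        "city": root.get("city"),
        "temperature_c": float(root.find("temperature").text),
        "condition": root.find("condition").text,
        "wind_mph": float(root.find("wind").text),
    }

readings = parse_weather(SAMPLE_FEED)
print(readings["city"], readings["temperature_c"], readings["condition"])
```

In a Processing sketch the same idea applies: fetch the feed once per frame (or on a timer), parse it, and feed the values into whatever is being drawn.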
Today I met with NGI (NewcastleGateshead Initiative), who are co-sponsoring my residency. They are an organization whose purpose is to attract businesses to the region, and part of my residency is to use NGI’s data in my visualization. My initial proposal was to create a natural interface similar to the image-sorting sequence from the movie Minority Report: rather than 2D images, I proposed a 3D space. This is in keeping with the 3D point-cloud information that can be gained from the Kinect camera. As I get further into the R&D, I realize that the print/page/publication metaphor is inappropriate for the information sphere. We don’t need to translate into screenic 2D space. We can use 3D mesh space and navigate information.
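The point cloud mentioned above comes from back-projecting each Kinect depth pixel through a pinhole camera model. A minimal sketch of that step in Python (the intrinsic values are rough figures often quoted for the original Kinect, not a calibrated set):

```python
def depth_to_point(u, v, depth_m, fx=594.2, fy=591.0, cx=320.0, cy=240.0):
    """Back-project a depth pixel (u, v) with depth in metres into a
    3D point (x, y, z) in camera space, using the pinhole model.
    fx/fy are focal lengths in pixels, (cx, cy) the principal point;
    these defaults are approximate Kinect values, not calibrated ones."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the image centre maps straight down the optical axis:
print(depth_to_point(320, 240, 2.0))
```

Run over every pixel of a 640x480 depth frame, this yields the cloud of 3D points that the proposed mesh-space interface would be built on.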
I’m getting pretty excited about this research. When I first started using the internet in the nineties to make art, the HTML code was so primitive that it set up the conditions for anarchic experimentation. As publications began to migrate to the web, they looked for ways to duplicate their printed page layouts. This has never been a satisfactory way to present digital information; the screen-as-page is rather boring and clumsy. Using a 3D information space is very much part of my research into Augmented Reality and the potential for creating an information layer that is part of our real-life experience. Obviously, reading and texting are disruptive to our physical movement and perceptions. I’m looking at a variety of ways to create seamless information interfaces that are more naturalistic and less disruptive. 3D space and 3D icons or virtual objects floating in our field of vision are much easier to deal with: they can be focused on or ignored depending on our choice. Sound is also a good environmental data set. It has already been incorporated into our environment as text-to-speech driving directions; most people have this on their smartphones.
My notion is to strip away information that is necessary in a print metaphor but not in a digital world. For example, you don’t need an address; you only need a geo-locator that feeds into your cellphone, Google Maps, Apple Maps, or perhaps a topographical interface. Or a synth-voice reading arrivals and departures of flights in an airport or train station, or perhaps the number of parking spaces and the location of the garage, etc…
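To make the geo-locator idea concrete: a bare pair of coordinates already supports the kind of question an address only answers indirectly, such as “how far away is it?” A small sketch using the standard haversine formula (the coordinates below are approximate, for illustration only):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points,
    treating the Earth as a sphere of radius 6371 km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))

# Roughly Newcastle city centre to Gateshead (approximate coordinates):
print(haversine_km(54.978, -1.617, 54.953, -1.603))
```

An AR layer could render that number directly in your field of vision, no street address required.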
I believe that cellphones and tablets are only temporary interfaces that will be replaced by data-glasses. This will create an AR 3D space integrated into our normal real-life space. Since this is happening, the question for an artist is how to create experiences and sensations that deal with this new space. What is increasingly apparent to me is that the old metaphors and structures (renaissance perspective, the proscenium arch, 2D photo/film, static print/language) are only conventions. They should be rethought for the 3D information space of the 21st century.