Vizcera: making of

Vizcera is a project I made in collaboration with Digital Art Live, exhibited in the DAL interactive space at the Edge centre (formerly the Aotea Centre) in Auckland, New Zealand. In this post I briefly talk about the evolution of the project.

The video above was captured during development and does not reflect the final performance: I later fixed a number of the technical issues visible in it, such as the obvious lag, and the sound is missing from the recording entirely. Unfortunately, I missed out on capturing my own footage of the installation while it was still up.

Vizcera started as an assignment project for a paper during the first year of my master's degree at Colab, AUT. At the beginning it was nothing more than my attempt at learning to code and make interactive things in Unity.

Once the original assignment requirements were no longer in question, another assignment took over, pushing me to re-purpose the work to optimize time and workload. The software kept growing, and I kept learning. Eventually this led me to explore motion control and the Microsoft Kinect.

 

The piece was first exhibited at a Digital Art Live showcase night (image bottom left). The composition placed the participant at the centre of the colour circle; reaching out toward it would trigger sounds and fire colours off the segments.

Later in the year, I put the work up on the big DAL screen, which consisted of 12 monitors in a 4×3 grid (image bottom right). The circular composition no longer suited the setting – the screen sat above the participant and was significantly wider than it was tall – so I rearranged the piece to suit the kind of interaction the new space enabled.

Setting up and debugging a work designed for a specific space is nothing like simply making sure it runs on your laptop: it has to work within the space, on the hardware installed there. For example, I had to set up a custom camera to work with the DAL screen, which is actually driven as a 4×4 grid but has no bottom row of monitors, so only the top 3/4 of the image is ever displayed. Add to that things like determining the Kinect's effective tracking area and the optimal standing point for the participant.
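In Unity, one way to deal with a display that only shows the top 3/4 of the output is to confine the camera to that region via its normalized viewport rect. The sketch below is illustrative only – the class name and approach are my assumptions, not the original code:

```csharp
using UnityEngine;

// Illustrative sketch: render only into the top 3/4 of the output,
// since the bottom row of the 4x4 screen grid does not exist.
[RequireComponent(typeof(Camera))]
public class TopThreeQuartersViewport : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        // Camera.rect is normalized (0..1) with the origin at the
        // bottom-left: start a quarter of the way up and use 3/4 of
        // the height, keeping the full width of the monitor wall.
        cam.rect = new Rect(0f, 0.25f, 1f, 0.75f);
    }
}
```

The aspect ratio of the visible area changes along with the rect, which is exactly why a composition tuned on a laptop screen needs re-checking on the installed hardware.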

I started this project not knowing how to structure software, how to work with the Kinect, or, really, how to code at all. Piecing together frameworks and workarounds, I made it all up as I went along.

What started as a simple challenge to myself – a way to do more advanced things for an assignment than I was capable of – spiraled into an exhibit I never thought I'd be making. It forced me to think about the software as well as the people interacting with it in a real physical space – two vastly differently configured spaces, in fact.
It was a huge learning curve, and I'm thrilled I got to go through the process. I'm thankful to Digital Art Live for their support, for giving me the opportunity, and for pushing me to reach new heights.