Posts Tagged: gamedev

Global Gamejam

[image: frogshark_ggj15]

This past weekend, my team and I participated in the Global Game Jam, the game-development equivalent of the 48 Hour Film Festival.

We came up with a small two-player game. Read more about our jam, and play Whalebus!

Swordy PAX AUS Badges

[image: badgeinhand]

I made a post on the Swordy devlog about the wooden badges I made as part of our PAX AUS exhibit and promotion effort. Click on the image for more photos, a detailed description of the manufacturing process, and some post-mortem reflection.

Experiment in photogrammetry

I stumbled upon the music video Chorus by Holly Herndon.
The video features 3D scans of spaces made with photogrammetry software such as PhotoScan or 123D Catch.

I wanted to familiarize myself with this technology.
For convenience, I used one of the abundant plasticine abominations we have around Colab as a subject.
The tech itself is nothing complicated in terms of usability: take lots of static shots from various angles, load them up, and processing gives you a point cloud (of course, I imagine the science and maths behind the process are quite complex).
The potential is there to scan real-life 3D objects into virtual space for further digital manipulation, using a regular camera rather than expensive scanning equipment.

Definitely an awesome technique to have in the quiver of experimental stuff you can do with computers.

I like the digital processing artifacts and the loss of data happening here. A sort of glitch aesthetic, which I think is more interesting than a perfect 3D scan of a real-world thing.

Back to work

I started working on some sound effects for Swordy over semester break.

Recording effects is a fun process, as it’s more tactile and immediate than trying to synthesize everything in software. I can use my voice or whatever objects I can find as foley.

The toughest part of recording is its realtime nature. If I have 10 minutes of audio, I have to spend 10 minutes listening to it, and the time multiplies as I clean up, tweak EQ, extract pieces, and cut and export individual effects.
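None of this tooling appears in the post, but as a sketch of the kind of batch chore involved: the snippet below would chop one long take into separate clips wherever the signal stays quiet for a while, using the NAudio library. The file names and threshold values are hypothetical and would need tuning per recording.

```csharp
using System;
using System.Collections.Generic;
using NAudio.Wave; // NuGet package: NAudio

// Hypothetical sketch: split a long recording into individual effects,
// cutting wherever the signal stays below a threshold for ~0.3 seconds.
class TakeSplitter
{
    static void Main(string[] args)
    {
        using (var reader = new AudioFileReader(args[0])) // decodes to 32-bit float samples
        {
            int channels = reader.WaveFormat.Channels;
            int gap = (int)(reader.WaveFormat.SampleRate * 0.3) * channels; // ~0.3 s of quiet ends a clip
            const float threshold = 0.02f; // arbitrary; tune per recording

            var buffer = new float[reader.WaveFormat.SampleRate * channels];
            var clip = new List<float>();
            int silentRun = 0, clipIndex = 0, read;
            bool hasContent = false;

            while ((read = reader.Read(buffer, 0, buffer.Length)) > 0)
            {
                for (int i = 0; i < read; i++)
                {
                    if (Math.Abs(buffer[i]) < threshold) silentRun++;
                    else { silentRun = 0; hasContent = true; }
                    clip.Add(buffer[i]);

                    // A long quiet stretch closes the current clip; pure silence is discarded.
                    if (silentRun >= gap)
                    {
                        if (hasContent) WriteClip(clip, reader.WaveFormat, clipIndex++);
                        clip.Clear();
                        hasContent = false;
                        silentRun = 0;
                    }
                }
            }
            if (hasContent) WriteClip(clip, reader.WaveFormat, clipIndex);
        }
    }

    static void WriteClip(List<float> samples, WaveFormat format, int index)
    {
        using (var writer = new WaveFileWriter("effect_" + index.ToString("D3") + ".wav", format))
            writer.WriteSamples(samples.ToArray(), 0, samples.Count);
    }
}
```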

Audacity is the best tool there is for super-fast edits; I always come back to it for splicing and crossfades.

Notes: Katamari

“If you are going to play a game that resembles a movie, you should just watch a movie, and if you are going to play a game that shows realistic cars, wouldn’t it be more fun to drive a real car?” – Keita Takahashi, designer of Katamari Damacy

“roll stuff up and make it bigger.” – the baseline idea/design principle/core mechanic behind Katamari, as Takahashi describes it. See “toy”, designing an awesome videogame.

Yu Miyake’s explanation of how the theme song came to be is worth a read, just for the smile that paragraph will put on your face.
The song in question: https://www.youtube.com/watch?v=95jD5tMFjhs

Projection play

Semester 1 is over, meaning no more sidetrack assignments. I now have a year to produce a thesis paper.

In review, semester 1 was a very helpful exercise and a lifestyle shift in the way I produce work and talk about things. Academia has a different mode of operating, and going through that space changed a lot of things for me.

The best tangible thing that came out of this is, of course, the interactive piece I produced for Pigsty. I was able to take it a bit further for another assignment by introducing motion control. Using a Kinect, I interrogated the play space of the application from the points of view of a participant, an observer, and the software itself, and the misalignment between their literal and metaphorical vantage points. I’m hoping to put a few more features into the app, like screenshot saving (sketched below) and some user options, and to make Kinect control available for public download, but perhaps after some of the exhibitions I’m pitching this work for.

[image: projected]
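The post doesn’t describe the implementation, but if the app is Unity-based (an assumption on my part), screenshot saving is only a few lines; the key binding and file naming here are hypothetical.

```csharp
using UnityEngine;

// Hypothetical sketch: press F12 during play to save a numbered screenshot.
public class ScreenshotSaver : MonoBehaviour
{
    int count;

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.F12))
        {
            // Unity 5-era API; newer versions use ScreenCapture.CaptureScreenshot.
            Application.CaptureScreenshot("capture_" + count++ + ".png");
        }
    }
}
```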

Post submission

Download [vizcera] 0.1 (WIN) (OSX)

Last night, after little sleep and some obligatory writing, I went to Phil’s house, eager to show off my creation and watch him play the [previous version of the] app. I hadn’t yet seen anyone interact with it, and I knew the aesthetic aligned with a lot of Phil’s own work.

Half an hour into play and discussion about the sensations, feelings, and relationships my work was establishing within that play session, he suggested removing buffer clearing from the camera to reinforce the painting aspect of the experience. The change took a few seconds, and another hour of play ensued.
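The post doesn’t say what the app is built with, but in a Unity-style renderer (an assumption here) “removing buffer clearing” boils down to telling the camera not to wipe the previous frame, so each new frame paints on top of the last:

```csharp
using UnityEngine;

// Sketch, assuming a Unity camera: stop clearing the color buffer each frame
// so motion leaves accumulating trails, like paint strokes.
public class PaintingCamera : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();
        cam.clearFlags = CameraClearFlags.Nothing; // keep last frame's pixels on screen
    }
}
```

Clearing then becomes a deliberate player action (like the LMB binding in the build below) rather than something that happens automatically every frame.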

This move shifted the work from a domain dominated by audio, in both focus and output, towards the visual. I knew the previous iteration wasn’t final; it’s just what I had when the time ran out. Playing that one now, it feels like something is missing.
The experience in this new build is drastically different. The aquatic feeling is far weaker, replaced by a more temporal aspect of particle behavior. Layers upon layers of footprints create an evolving landscape of motion. Sound becomes less important, a background accompaniment to your very presence in the space.

I cleaned up the last-minute code I wrote the night before submission, and here’s the first version of what I would call a “piece of software I made”.

I will be using this as a platform for further experimentation and learning, especially since I have another university paper to fulfill, which will act as the effector shaping the process and outcome of this experiment in play.

Pigsty Play Submission

Final submission (WIN) (OSX)

WASD, LMB to clear visuals. Gamepad compatible.
Best played with headphones in the dark.

My final submission for the Serious Play paper at Colab, AUT.

notes: Unity 5 audio keynote

Unity is drastically changing its audio tools in the 5th release.

New compression format, lower memory footprint;
resource handling improvements;
streaming;
audio metadata access;
performance profiling.

Audio mixer – the “first offering” (more big features to go? interactive audio tools next):

mixing & mastering.
DSP effects anywhere in the chain
ADSR etc in editor
runtime editing (mix while the game is playing)
modeled on existing DAWs (eg. Ableton, Logic etc)
ability to create & chain together multiple mixers
routing of sound groups as hierarchies (groups mixed down together)
3D positioning, doppler etc (spacial sound concepts) applied to source before the mixer.
effects anywhere
effects are stacked sequentially (like in a daw)
native plugin effects – custom DSP, custom UI
Snapshots – save states of your edits (awesome!)
transition between states at runtime (even awesomer!)
expose individual effect parameters – can be script drivensends / returns
ducking (side chain compression) – any audio group
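To make the snapshot and exposed-parameter points concrete, here’s a minimal sketch against the Unity 5 AudioMixer API; the mixer asset, snapshot name, and parameter name are made up.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Minimal sketch of the snapshot workflow: crossfade the whole mix
// into a "Cave" state at runtime, and drive one exposed parameter.
public class MixerStates : MonoBehaviour
{
    public AudioMixer mixer; // assign the mixer asset in the inspector

    public void EnterCave()
    {
        // Snapshots are saved states of the mixer's edits.
        AudioMixerSnapshot cave = mixer.FindSnapshot("Cave"); // hypothetical name
        cave.TransitionTo(2f); // blend all mixer settings over two seconds

        // An effect parameter exposed in the mixer UI can be script-driven.
        mixer.SetFloat("MusicVolume", -12f); // hypothetical exposed parameter
    }
}
```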

A very exciting update for audio in Unity. The presentation itself was quite slack, but as an overview it makes me wonder whether I should just hold out till 5.0, or whether we will end up using both Unity’s native audio and FMOD at the same time.

Bought a Kinect

Picked up a Kinect on Trade Me for NZ$30, because why not. It would be cool to set up a motion-controlled sound experiment.

I was looking through the MS Kinect SDK toolkit and its demos to see how it works. You can’t be too close to or too far from the camera; luckily, my room has just enough space to capture a whole skeleton.
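For reference, this is roughly the shape of what those SDK demos do under the hood — a hedged sketch against the Kinect for Windows SDK v1, not code from the post:

```csharp
using System;
using System.Linq;
using Microsoft.Kinect; // Kinect for Windows SDK v1.x

// Sketch: open the first connected sensor and print the tracked head position.
class SkeletonPeek
{
    static void Main()
    {
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        if (sensor == null) return;

        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += (sender, e) =>
        {
            using (SkeletonFrame frame = e.OpenSkeletonFrame())
            {
                if (frame == null) return;
                var skeletons = new Skeleton[frame.SkeletonArrayLength];
                frame.CopySkeletonDataTo(skeletons);

                foreach (var s in skeletons.Where(k => k.TrackingState == SkeletonTrackingState.Tracked))
                {
                    SkeletonPoint head = s.Joints[JointType.Head].Position;
                    // Z is the distance from the camera in metres; the sensor only
                    // tracks reliably within roughly 0.8-4 m, hence the room-size issue.
                    Console.WriteLine($"Head: {head.X:F2}, {head.Y:F2}, {head.Z:F2}");
                }
            }
        };

        sensor.Start();
        Console.ReadLine(); // run until Enter is pressed
        sensor.Stop();
    }
}
```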

I haven’t been able to get it to work with Unity yet, but there are plenty of resources online. For tonight, I’m happy sending the green stickman back to infinity.