I realize I’ve been lying to myself for a long time. I’ve been telling myself that “in the time that it would take me to get any good at writing code, I could get proportionately better at something else”.
Now I find myself in a place where I neither learned to code, nor learned that “other” thing I supposedly would’ve gotten better at in the time it would take to learn code.
As hard as it is, I need to take steps toward code proficiency beyond simple scripts. Implementing my own audio and setting up the relationships that drive my sound design is a perfect opportunity to stop giving myself excuses and get it done.
I want to purposefully avoid tools like PlayMaker and other node-based programming aids. Even though they would let me build things faster, and they’re fine for prototyping, they’re not good for production or for my learning.
Thankfully, my team @frogshark is here to help 😀
time to get over it and do some code.
Drawing on the mistakes of the first attempt at audio blending, I’ve simplified the setup. Now all samples get scrubbed simultaneously, and based on the parameter knob, only one plays at any given time. There are none of the timing issues the first test presented, because the playhead doesn’t skip but keeps looping smoothly. The experience is much more seamless: there are no fades, and the samples switch over instantly at the right time.
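The logic is roughly this (a minimal Python sketch of the idea, not the actual FMOD/Unity implementation; all names here are hypothetical): every loop shares one playhead that advances in lockstep, and the knob only decides which loop is audible, so switching never moves the playhead.

```python
class SyncedLoopSwitcher:
    """All loops share one playhead; a knob value in [0, 1] picks which is audible."""

    def __init__(self, num_loops, loop_length):
        self.num_loops = num_loops
        self.loop_length = loop_length  # seconds
        self.playhead = 0.0

    def advance(self, dt):
        # Every sample is "scrubbed" together: one shared, smoothly looping playhead.
        self.playhead = (self.playhead + dt) % self.loop_length

    def active_loop(self, knob):
        # Map the knob to a loop index; only this loop is heard, the rest stay muted.
        index = int(knob * self.num_loops)
        return min(index, self.num_loops - 1)
```

Because `advance` wraps rather than resets, turning the knob mid-loop swaps samples at the exact same musical position, which is what makes the switch feel instant and seamless.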
I also realized the samples didn’t actually loop. It felt like they did because they were long enough, but I needed to add a loop region for the playhead to actually jump back to the start.
That’s also something to consider for optimization’s sake. There’s no point loading a 14-second loop with 14 repeats into memory when I can just load one. The catch is that the instrument I used to design the loop applies subtle randomization to the drum kit, letting it sound more natural, which is something I can’t do as easily in FMOD unless I assemble the drum kit there from individual samples.
Next: figure out how to avoid hard-coding event names into the FMOD handler script and recognize the available parameters automatically. That way I can make connections in the inspector and prefab the pieces, instead of creating a script for every event or knob I want to modify.
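A rough sketch of that goal: the handler treats the event path and its parameters as data instead of code, so one generic script can drive any event. This is a hypothetical Python illustration of the shape of it; the real version would be a C# MonoBehaviour exposing these fields in the Unity inspector.

```python
class GenericEventHandler:
    """Hypothetical generic handler: the event path and parameter names are data,
    not code, mirroring a prefab + inspector setup rather than one bespoke
    script per event."""

    def __init__(self, event_path, parameter_names):
        self.event_path = event_path
        # Parameters declared (or discovered) up front, initialised to 0.0.
        self.parameters = {name: 0.0 for name in parameter_names}

    def set_parameter(self, name, value):
        # Reject names the event doesn't expose instead of silently creating them.
        if name not in self.parameters:
            raise KeyError(f"{self.event_path} has no parameter '{name}'")
        self.parameters[name] = value
```

The point of the design is that adding a new event or knob means adding data, not writing another script.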
Also, I should make something with more sounds and more interactivity.
Revised excerpt from Reflective statement 2:
Electroplankton is an electronic music game. It’s not really a standard game in the sense you’re used to: there are no winning conditions or reward/punishment mechanics of any kind. I played it through an emulator, and I’m lucky to be equipped with a touch-screen-enabled laptop for this. It let me have an experience similar to the one intended, as well as a few unintended ones.
Eventually I became more interested in the main menu, because of the musical aspects of its design. The menu has bubble sounds in the background, and each minigame selection triggers one of ten sounds. I started going up and down the menu just to make the notes play.

I was doing it with a mouse at first, which felt more tactile than using the screen at the time. The hit and click provided a kind of haptic feedback a touch screen didn’t. That in itself generated a noise of plastic hitting plastic, which resonated on top of the notes generated by the game, plus the sound of the mouse gliding over the table between notes when I went in the other direction pressing left or right in the UI, a utility noise akin to fingers gliding on guitar strings. Now I had a multi-voice instrument I was playing.

Later I realized I could use the keyboard, which made switching level selection much faster, and another type of melody emerged. When reaching the last level item going 1 to 10, the selection would wrap around back to 1, so a melody could run from a high octave down, or back up, using this feature. Then I started using the touch screen. At this point I was pushing the space of that game menu to the limit of its functional ability. I immersed myself in play and extracted the kind of interaction it wasn’t designed to facilitate.
Thinking about this kind of interaction and its performative aspects, a certain relational aesthetic arises. The elements of Electroplankton quite literally served as a musical instrument. The Electroplankton main menu playspace wasn’t designed; it manifested, and I exploited it when I decided it was more interesting than the minigames. I like the subversive nature of such interaction: pushing systems to their spontaneous and unintended dimensions.
Over the break, I had the privilege of participating in The Project expo at AUT. People came to talk, listen and discuss various topics of digital disruption.
I was there to showcase Warp, an Oculus VR demo we created at Frogshark (before it was officially formed). We showcased the demo at the DigitalnatioNZ expo in 2013, and it hasn’t changed since, as we went on to work on Swordy. The event wanted a VR demo on display, though, so it was an opportunity for me to see how a different kind of audience reacts to VR tech.
The audience was mostly mature. The highlight of the event for me was seeing people who had never played video games, never experienced anything like VR, some of whom had never even held a controller in their hands, try something like this for the first time.
I couldn’t hook the sound up to the TV, so it was muted; however, a good number of people made their own “pew pew pew” sounds to compensate.
The majority of people who tried it at first stared straight ahead, as they’re used to doing with a normal screen. I had to prompt people to actively look around. No one noticed they had no legs until I pointed it out.
It was evident again how important it was for the controls to be minimal. Only the left stick and the trigger were required. Most had never used an Xbox controller and would get lost finding the left analogue stick, but would eventually get everything within seconds.
One particular lady (pictured above), who claimed never to have held a controller before (she was also one of those who made their own sounds, and ducked and leaned with their body in response to VR), engaged totally with the game. She instantly got the head-look aim mechanic, understood the ship controls the moment she touched the analogue sticks, shot all the enemies within seconds as they appeared, and trialed the rest of the buttons on the controller to discover the barrel rolls, which she also accompanied with body movement and vocal sound effects. That lady was awesome.
There was also only one person who insisted that inverted Y is the right way, and he was a young gamer (I would agree if I were playing with an actual joystick in a free-flight mode, not an on-rails 2.5D Star Fox VR hybrid).
Other than that, it was great to see the responses from an older audience to something this unfamiliar.
In fact, the engagement didn’t begin only when they put the headset on; it began with the whole ritual way before that: the inquisitive gaze from a distance, the careful approach, the childlike expression of curiosity, and the internal battle of excitement vs. the unfamiliar.
Something very primal is expressed in that ritual, as if they were cavemen gazing upon fire for the first time: the careful danger dance, the curious, courageous touch, and the satisfied retreat to tell the others they have done it.
I couldn’t figure out how to keep the loops stacked in synchronous playback while blending between them based on the value. Instead, I set up a system where the loop regions for every intensity level are driven by the main parameter knob. Every loop region has a range of that parameter within which the loop repeats. If I turn the knob, the playhead jumps to the loop for that range of values and plays it from the beginning.
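The jump-on-knob-change behaviour amounts to mapping a parameter value to a loop region, roughly like this (a minimal Python sketch of my own setup, not FMOD’s API; the region layout here is invented for illustration):

```python
def region_for_value(regions, value):
    """Each region is (start_time_seconds, lo, hi): the loop repeats while the
    parameter sits inside [lo, hi). Returns the start time to jump the playhead to."""
    for start, lo, hi in regions:
        if lo <= value < hi:
            return start
    # Clamp out-of-range values (including value == 1.0) to the last region.
    return regions[-1][0]
```

Because the function returns the region’s start rather than a position inside it, the playhead always lands at the beginning of the new loop, which is exactly where the abrupt pop described below comes from.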
It was too abrupt at first, because the playhead doesn’t jump to the same time in the bar but to the start of the loop region. Since there are no bars, these are freeform events within events, with audio files inside them. To alleviate the harsh pop, I added an ADSR envelope to every loop. That fades them off and blends them more smoothly, although there’s some doubling up going on, or they go quiet with quick changes from value to value, trying to catch up. Everything still feels desynchronized.
I need to test a system with loops that simply blend and stack well with each other, keeping the playhead in place while swapping out samples.
I’ve made a small test scene to try FMOD blending based on the distance between two boxes.
For the FMOD integration, I struggled through the code a little bit, as it has only just been released and there is no documentation for the C# Unity API yet (the fmod.org forum staff said they’re working on it). There are a few code examples and a few example scenes that I based mine on.
Once I got all the FMOD code to compile, I couldn’t get sound to play back. I could hear the FMOD banks play in the inspector, so the FMOD Studio system initialized properly, yet in-game I couldn’t hear anything. It turned out my listener was simply too far away from the event emitter to hear it. A silly mistake, but I had assumed that 3D sound functionality wasn’t there by default.
When I designed the intensity-variation loops, the first four were based on each other, but the last two have a slightly different beat. They blend well back to back with the others, but not so much if you chop in between.
Finally I got to do some experimentation and play.
Here I have 6 loops of varying density and intensity, low to high, for a simple tribal drum beat. The idea is first to test these 6 separate stages blending with one another based on some parameter in Unity.
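One simple way to blend ordered stages like this is a linear crossfade in which at most two adjacent loops are audible at a time. This is a hypothetical Python sketch of the math only, not the eventual FMOD setup:

```python
def blend_weights(num_loops, t):
    """Linear crossfade across num_loops ordered by intensity; t in [0, 1].
    Returns a gain per loop; at most two adjacent loops are non-zero."""
    pos = t * (num_loops - 1)            # continuous position along the loop chain
    lower = min(int(pos), num_loops - 2)  # index of the quieter neighbour
    frac = pos - lower                    # how far we've moved toward the next loop
    weights = [0.0] * num_loops
    weights[lower] = 1.0 - frac
    weights[lower + 1] = frac
    return weights
```

For 6 loops, t = 0.5 lands exactly between stages 3 and 4, giving each half gain while the other four stay silent. An equal-power curve (using sin/cos of the fraction) would avoid the slight mid-fade volume dip a linear blend has.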
FMOD is industry-standard middleware for video game audio engineering, and it already drives Unity’s native sound engine, though Unity obfuscates all direct access to FMOD.
Luckily, however, FMOD.org just released FMOD Studio Pro license-free for indie developers, and offers an integration package for Unity that provides a wrapper to interface with FMOD directly. It gives full access to the effects, filters and custom event structures in a project.
These loops have so far been composed in Ableton Live, using the Kontakt 5 “West Africa” instrument set from Native Instruments. I used the built-in pattern maker to create the rhythms.
I found this process a massive exercise in play itself. The pattern, set to loop, would constantly play back as I edited instrument states (rhombus, square, x). Often I would get carried away by the organic nature of what I was doing and completely destroy the loop, making it incompatible with the previous intensity settings and running way off the rhythm, often having to start again using the previous stage as a starting point. It was a productive kind of fun.
Having time over the Easter holiday, I organized my workspace, pulling everything I need out of storage after moving back in with my parents.
Moving houses is the greatest way to learn how much useless crap you own without realizing it. Among the computer gear, tools and utility items creep in stacks of needless things you only end up trying to find space for: that box of paper clips you bought for the one piece of wire you used for something, a poker set you never played, a pencil holder full of rubber bands and pens collected over time, and that particular thing there’s one too many of, which you think might be of use one day. Many of these things I can’t even trace back to how I came to possess them.
After a few days of rubbish-bin purging, Trade Me listing and shelf stacking, I built my little space. The cable management, the spatial sacrifice, the removal, and almost forcing myself to be surrounded by only the tools I need.
It isn’t perfect, but it is mine.
We went in with the mentality that getting a mention is good and any exposure is good exposure, and indeed, we were able to show both Warp and Swordy, which is great.
We knew however that this wasn’t about us. This was about VR, and portraying New Zealand as a place for innovative new technology, which is also a worthy pitch that I was more than willing to participate in.
It is interesting, however, to see the final result. We spent around an hour with David’s team, filming and doing interviews. It’s fascinating to see how condensed the final edit is, what was used and what wasn’t, and how everything was conducted behind the scenes.
We weren’t aware, for example, of what stream of material we would appear among in the edit. We only knew about our direct involvement and the roller-coaster playtest with an elderly gentleman, which happened in the same room as us. That, I think, could have felt less distilled and set up if the approach to getting an emotional response out of him had been more genuine, but it still worked out fine and everyone had fun.
I think overall, this was a great opportunity and it worked out really well. Swordy is on the national TV, score!
Two weeks ago, we (the Frogshark team) visited Thought-Wired, an Auckland-based company doing R&D on thought-controlled interfaces and neurotech. Dmitri Selitskiy showed us where things currently are in that tech space, and it was interesting to see how they’re trying to build a platform for developers like us to interface with all the different proprietary BCI hardware.
I got to try the Emotiv BCI. It’s really quite impressive to experience the early tech in this area. It’s not as plug-and-play as one would expect from a consumer product: you have to lubricate the electrodes, and you have to spend hours ‘training’ the system to adjust to your specific brainwave activity. It’s a really steep ramp to get it to “just work”.
I think it’s still at a very early stage, and it will probably be a couple of years before we see real home consumer applications, but the potential is already here for therapy and medical research. This kind of technology is a blessing for people with disabilities.
The gaming applications for BCIs, however, are incredible, and I’m not just talking about playing Star Wars or letting disabled people play traditional games, but also about new opportunities for gameplay.
What if I’m using the controller to perform standard mechanical actions that I don’t have to “think” about and just perform from kinesthetic memory? How does that coexist with brainwave control schemes?
Is there going to be some kind of hyper-threading where the BCI doesn’t just pick up on particular thoughts, but on particular “ways of thinking”?
All of this would certainly raise some eyebrows among privacy advocates; however, ethics aside, the future of this technology is very bright.