You may have discovered this effect in the previous builds: if you just touch a base zone and leave it without passing all the way through, it triggers, leaving you in the “zero zone”, where the last played bass goes through a low-pass filter. I really love the floating feeling of the velocity-based low-pass filtering, so I made a scene that isolates that effect, using the most appropriate track out of the old 4.
There is something about this that just feels good. It’s meditative and hypnotizing. I wonder how that quality of carrying out an action for sheer pleasure could be used in Swordy, as a game mechanic or in actions tied to game states.
Made the wall collision impact sound more pleasant, but the main thing was a spontaneous piece of code I added and the subsequent discovery. I was running the app in a dual game/editor view while trying to convert my friend Jenna to Unity, and duplicated the cube. Both were controllable, and both triggered the collision sounds and background track graphics. So why stop at 2?
Press A (Xbox) / X (PS) (or left-click with the mouse) to spawn more cubes!
There’s no limit, and the clone function runs every frame while you keep the button pressed (not advisable). GO! 😀
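For reference, the clone behaviour described above can be sketched in a few lines. The script and input names here are assumptions, not the project’s actual code; using `Input.GetButton` (true for every frame the button is held) rather than `GetButtonDown` is exactly what makes a held button spawn a cube per frame:

```csharp
using UnityEngine;

// Hypothetical sketch of the spawn-on-press behaviour; names are assumed.
public class CubeSpawner : MonoBehaviour
{
    public GameObject cubePrefab; // the controllable cube, assigned in the inspector

    void Update()
    {
        // "Fire1" maps to joystick button 0 (A on Xbox) and left mouse by default.
        // GetButton is true on every frame the button is held, so holding it
        // clones a cube per frame, hence "not advisable".
        if (Input.GetButton("Fire1"))
        {
            Instantiate(cubePrefab, transform.position, Quaternion.identity);
        }
    }
}
```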
At its core, this was an exercise in Unity coding and an FMOD integration test: one-shots and event parameter manipulation. It’s also an exercise in the visual language of the sonic environment I’ve set up, and a further exploration of spatial relationships and sound.
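To illustrate the one-shot side, here is a minimal sketch of a collision-triggered one-shot like the wall impact sound. The event path and class name are assumptions, and it uses the `PlayOneShot` helper from current versions of the FMOD Unity integration:

```csharp
using UnityEngine;

// Hypothetical sketch; the event path and class name are assumptions.
public class WallImpactSound : MonoBehaviour
{
    public string impactEvent = "event:/impact"; // hypothetical event path

    void OnCollisionEnter(Collision collision)
    {
        // Fires the event once, 3D-positioned at the first contact point.
        FMODUnity.RuntimeManager.PlayOneShot(impactEvent, collision.contacts[0].point);
    }
}
```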
I spent longer figuring out how to do the RGB-split post effect on boundary collision than anything else. I feel like every sound event must be accompanied by visual feedback. That way both sight and hearing get stimulated at the same time, and there’s reinforcement of what triggers the sound event or the visual effect, otherwise known as Juice.
recommended: run in windowed mode at max settings, with a controller (or keyboard WASD)
Drawing on the mistakes of the first attempt at audio blending, I’ve simplified the setup. Now all samples get scrubbed simultaneously, and based on the parameter knob, only one is audible at any given time. There are none of the timing issues the first test presented, because the playhead doesn’t skip but keeps looping smoothly. The experience is much more seamless: there are no fades, and the samples switch over instantly at the right time.
I also realized the samples didn’t actually loop. It felt like they did because they were long enough, but I needed to add a loop region for the playhead to actually jump back to the start.
That’s also something to consider for optimization’s sake. There’s no point loading a 14-second loop with 14 repeats into memory when I can just use one. Except that the instrument I used to design the loop applies subtle randomization to the drum kit, making it sound more natural, something I can’t do as easily in FMOD unless I assemble the drum kit there from individual samples.
Next: figure out how to avoid hardcoding event names into the FMOD handler script and recognize the available parameters automatically. That way I can make connections in the inspector and prefab the pieces instead of creating a script for every event or knob I want to modify.
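One possible direction, sketched here under assumptions (current FMOD versions expose parameter metadata on the event description; the event path is hypothetical): query the event at runtime and enumerate its parameters instead of hardcoding them.

```csharp
using UnityEngine;

// Hypothetical sketch of discovering an event's parameters at runtime.
public class FmodParameterLister : MonoBehaviour
{
    public string eventPath = "event:/music"; // hypothetical event path

    void Start()
    {
        FMOD.Studio.EventDescription desc =
            FMODUnity.RuntimeManager.GetEventDescription(eventPath);

        // Enumerate every parameter the event exposes, with its range.
        desc.getParameterDescriptionCount(out int count);
        for (int i = 0; i < count; i++)
        {
            desc.getParameterDescriptionByIndex(i, out FMOD.Studio.PARAMETER_DESCRIPTION p);
            Debug.Log($"{(string)p.name}: {p.minimum}..{p.maximum} (default {p.defaultvalue})");
        }
    }
}
```

With the parameter list available at runtime, a generic handler could populate inspector dropdowns and be prefabbed, as described above.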
Also, I should make something with more sounds and more interactivity.
I couldn’t figure out how to stack the loops with synchronous playback while blending between them based on the value. Instead, I set up a system where every intensity level has its own loop region, gated by the main parameter knob: each loop region has a range of that parameter within which it repeats. If I turn the knob, the playhead jumps to the loop for the new range of values and plays it from the beginning.
It was too abrupt at first, because the playhead isn’t jumping to the same position in the bar, but to the start of the loop region. Since there are no bars, these are freeform events nested within events, with audio files inside them. To alleviate the harsh pop, I added ADSR to every loop. That fades them off and blends them more smoothly, although there’s some doubling up going on, or they go quiet during quick value-to-value changes while trying to catch up. Everything still feels desynchronized.
I need to test a system with loops that just blend and stack with each other well, to keep the playhead in place while swapping out samples.
I’ve made a small test scene to try FMOD blending based on the distance between two boxes.
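The Unity side of that test can be sketched like this (the parameter name, event path, and script name are assumptions; `setParameterByName` is the current API, older integrations used `setParameterValue`): the distance between the two boxes is fed into an FMOD parameter every frame, and the blending itself happens in FMOD Studio.

```csharp
using UnityEngine;

// Hypothetical sketch of the distance-driven blend test.
public class DistanceBlend : MonoBehaviour
{
    public Transform boxA;
    public Transform boxB;

    FMOD.Studio.EventInstance blendEvent;

    void Start()
    {
        blendEvent = FMODUnity.RuntimeManager.CreateInstance("event:/blend"); // hypothetical path
        blendEvent.start();
    }

    void Update()
    {
        // Feed the box-to-box distance into the event's blend parameter.
        blendEvent.setParameterByName("Distance", Vector3.Distance(boxA.position, boxB.position));
    }

    void OnDestroy()
    {
        blendEvent.stop(FMOD.Studio.STOP_MODE.ALLOWFADEOUT);
        blendEvent.release();
    }
}
```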
For the FMOD integration, I’ve struggled through the code a little, as it has just been released and there is no documentation for the C# Unity API yet (the fmod.org forum staff said they’re working on it). There are a few code examples and a few example scenes that I based mine on.
Once I got all the FMOD code to compile, I couldn’t get sound to play back. I could hear the FMOD banks play back in the inspector, so the studio system had initialized properly, but in the game I heard nothing. It turned out my listener was simply too far away from the event emitter to hear it. A silly mistake, but I had assumed 3D sound attenuation wasn’t there by default.
When I designed the intensity variation loops, the first 4 were based on each other, but the last 2 have a slightly different beat. They blend well back to back with the others, but not so well if you chop in between them.