Picked up a Kinect on Trade Me for NZ$30 because why not. Would be cool to set up a motion-controlled sound experiment.
Was looking through the MS Kinect SDK toolkit and its demos to see how it works. You can’t be too close to or too far from the camera. Luckily my room has just enough space to capture a whole skeleton.
Haven’t been able to get it to work with Unity yet, but there are plenty of resources online. For tonight I’m happy sending the green stickman back to infinity.
Drawing on the mistakes of the first attempt at audio blending, I’ve simplified the setup. Now all samples get scrubbed simultaneously, and based on the parameter knob only one is audible at any given time. The timing issues the first test presented are gone because the playhead doesn’t skip; it just keeps looping smoothly. The experience is much more seamless: there are no fades, and the samples switch over instantly at the right time.
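The idea is easier to see in code. Here’s a minimal Python sketch of the shared-playhead scheme (not the actual FMOD setup; the sample names and stand-in data are made up): every sample loops in lockstep against one playhead, and the knob only decides which one is audible, so switching never resets or skips.

```python
LOOP_LENGTH = 8  # frames per loop; all samples share this length

# Stand-in audio data with distinct values so we can see where the playhead is.
samples = {
    "calm":  list(range(0, 8)),    # values 0..7
    "tense": list(range(10, 18)),  # values 10..17
}

def render(knob_positions):
    """knob_positions: one sample name per frame (the knob state over time).
    Returns the audible value per frame. The playhead advances continuously
    no matter which sample is selected, so switches are instant and in time."""
    out = []
    playhead = 0
    for name in knob_positions:
        out.append(samples[name][playhead])
        playhead = (playhead + 1) % LOOP_LENGTH  # loop smoothly, never skip
    return out
```

Switching from "calm" to "tense" mid-loop picks up the new sample at the current playhead position rather than from its start, which is exactly why no fades are needed.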
I also realized the samples didn’t actually loop. It felt like they did because the file was long enough, but I needed to add a loop region for playback to actually jump back to the start.
That’s also something to consider for optimization’s sake. There’s no point loading a 14-second loop with 14 repeats into memory when I can just load one. The catch is that the instrument I used to design the loop applies subtle randomization to the drum kit, which makes it sound more natural, and that’s something I can’t do as easily in FMOD unless I assemble the drum kit there from individual samples.
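A quick back-of-the-envelope on the memory cost, assuming 44.1 kHz, 16-bit, stereo PCM (my actual export settings may differ):

```python
# Rough memory cost of baking 14 repeats into the file vs. looping one copy.
# Assumes 44.1 kHz, 16-bit (2 bytes) per sample, 2 channels.
SAMPLE_RATE = 44100
BYTES_PER_FRAME = 2 * 2  # 16-bit * stereo

def pcm_bytes(seconds):
    return seconds * SAMPLE_RATE * BYTES_PER_FRAME

one_loop = pcm_bytes(14)       # ~2.5 MB
baked = pcm_bytes(14 * 14)     # ~34.6 MB for the pre-repeated version
print(one_loop / 1e6, baked / 1e6)
```

So the single loop is roughly 2.5 MB uncompressed versus about 35 MB for the baked repeats, a 14x saving in exchange for losing the per-hit randomization.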
Next: figure out how to avoid hard-coding event names into the FMOD handler script and recognize the available parameters automatically. That way I can make connections in the inspector and prefab the pieces, instead of writing a script for every event or knob I want to modify.
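The shape I have in mind, sketched in Python rather than the actual Unity/FMOD C# API (the class and method names here are invented for illustration): the event name is plain data handed to one generic handler, and the parameter list is discovered from the event’s description at init time instead of being hard-coded per script.

```python
class FakeEventDescription:
    """Stand-in for querying an event's metadata (in FMOD the event
    description can enumerate its parameters; this fakes that)."""
    def __init__(self, parameters):
        self._parameters = parameters

    def parameter_names(self):
        return list(self._parameters)

class GenericEventHandler:
    """One handler for any event: configured with data (as I'd do in the
    inspector) instead of a new script per event or knob."""
    def __init__(self, event_name, description):
        self.event_name = event_name
        # Discover the parameters instead of hard-coding them.
        self.values = {name: 0.0 for name in description.parameter_names()}

    def set_parameter(self, name, value):
        if name not in self.values:
            raise KeyError(f"{self.event_name} has no parameter {name!r}")
        self.values[name] = value
```

With this shape, wiring a new knob is just constructing a handler with a different event name, no new code required.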
Also, I should make something with more sounds and more interactivity.