|01 January 2013||#1|
Join Date: Mar 2012
Timing Thinking Particles
I have a question about Thinking Particles. For my current project I have to create the first few raindrops of a massive storm, and each of these drops is mickey-moused with a piano note.
Is it possible to control the emission precisely enough to time the drops to the music? Or should I work in post instead, generate a few single drops, and comp them to the music? Which is easier?
Thanks for your help,
P.S. Now that I think about it further, compositing is of course easier, so let me rephrase the question: is it possible at all to time the emission to the music?
|01 January 2013||#2|
Björn Dirk Marl
Maxon Computer GmbH
Join Date: Sep 2002
Yes, you can control the emission with sound files, either by using the Sound effector and sample node or by using the sound node directly. In both cases things are easier if you have a track that contains only the beats you want to use for control.
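To illustrate the idea outside of C4D: once you have the beat times from a beats-only track, the mapping to emission is just "round each beat time to a frame and emit a drop there". A minimal sketch in plain Python (the function name, frame rate, and beat times are made up for illustration, not C4D API calls):

```python
# Sketch: map a list of beat times (seconds) to per-frame emission counts.
# Assumes the beat timestamps were already extracted from a beats-only track.

FPS = 25  # assumed project frame rate

def emissions_per_frame(beat_times, fps=FPS):
    """Return {frame: particle_count} with one drop per beat."""
    counts = {}
    for t in beat_times:
        frame = int(t * fps + 0.5)  # round to nearest frame
        counts[frame] = counts.get(frame, 0) + 1
    return counts

# Example: piano notes at 0.5 s, 1.2 s, 1.24 s and 2.0 s
print(emissions_per_frame([0.5, 1.2, 1.24, 2.0]))
# -> {13: 1, 30: 1, 31: 1, 50: 1}
```

The same per-frame counts could then feed an emitter's birth rate, whether via keyframes or a Python node.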
- www.bonkers.de -
The views expressed on this post are my personal opinions and do not represent the views of my employer.
|01 January 2013||#3|
Join Date: Aug 2002
Personally, I don't think either the sound node or the Sound effector is up to that. You might get somewhat usable results by filtering the samples/frequency bands, but the whole approach will be a mess.
If you really want to do this programmatically, because you would otherwise have to animate thousands of keyframes, I would suggest using a sound-processing library such as Csound. There is a Python wrapper for Csound, so in theory all the work/code could be done from a Python node, although I think an actual plugin would be easier. But be aware that even with the help of an external library, sound/music processing and analysis is a pretty complicated task.
You could also go for a semi-automated solution: drive the particle count with the sum of all frequency bands for the current frame, and map the actual notes by hand. On top of that you could add some simple filtering, so that the emitted notes are blue when the lower frequency bands dominate and red when the higher ones dominate.
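The band-sum-to-count and low-vs-high-to-colour mapping described above is simple to sketch. This is only the mapping logic in plain Python; the band energies here are invented placeholder values, where in C4D they would come from sampling the sound:

```python
# Sketch of the semi-automated idea: per frame, sum the band energies to
# get a particle count, and compare low vs high bands to pick a colour.

def map_frame(bands, count_scale=10.0):
    """bands: list of band energies, lowest frequencies first.
    Returns (particle_count, colour) for this frame."""
    total = sum(bands)
    count = int(total * count_scale)
    half = len(bands) // 2
    low, high = sum(bands[:half]), sum(bands[half:])
    colour = "blue" if low >= high else "red"
    return count, colour

# A bass-heavy frame (values chosen to be exact in binary floating point)
print(map_frame([0.5, 0.25, 0.125, 0.125]))
# -> (10, 'blue')
```

Mapping the individual notes would still be done by hand, as suggested; this only automates the count and the rough colouring.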
Personally, I would do it all by hand for 'the first few raindrops'.
Edit: since you said 'piano' notes, you could also try to get hold of a MIDI file or some kind of text-file music notation of the song and use that data directly in Python or C++.
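If a text-file notation of the song can be had, turning it into frame numbers is straightforward. A sketch assuming a hypothetical one-note-per-line format ("time_in_seconds pitch"), not any real standard:

```python
# Sketch: parse a made-up plain-text score and convert note onsets to
# frame numbers, ready for keyframing or for use in a Python node.

FPS = 25  # assumed project frame rate

def parse_notes(text, fps=FPS):
    """Return a list of (frame, pitch) tuples sorted by frame."""
    events = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip blanks and comments
            continue
        time_str, pitch = line.split()
        events.append((int(float(time_str) * fps + 0.5), pitch))
    return sorted(events)

score = """
# time  pitch
0.0   C4
0.48  E4
1.0   G4
"""
print(parse_notes(score))
# -> [(0, 'C4'), (12, 'E4'), (25, 'G4')]
```

A MIDI file would need a proper parser for its note-on events, but the idea is the same: onset time times frame rate gives you the emission frame.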
Last edited by littledevil : 01 January 2013 at 12:06 PM.
|Thread Closed|