Posted: Wed Feb 29, 2012 10:31 pm
Thanks for the extra details and kind reply.
I am familiar with those solutions and have done extensive research on the subject. (By the way, while I look for a "cheap" solution, as I try to keep the budget reasonable, I am willing to invest if the solution is unbelievably awesome... but I will leave that issue for another time.)
Let us put the real-time issue aside. Is there a way to use the simple switch layer sync tool, but instead of switching single-frame images (as in mouth movement), switch between very short animated poses and movements? The switch layer would still sync with the sound (even if not so accurately), but each layer within the switch layer would contain a little animation pose or movement built from more than one frame.
Also, is there a difference whether or not I un-check "Interpolate sub-layers" while using PNG images only (no vector illustration)?
If I can achieve the above, I can illustrate some "basic" movements of the specific character and sync them using the switch layer method. I can always add some additional actions and clean up the sync and movement afterwards, but most of the basic action and movement would be created automatically, saving me a lot of time.
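For what it's worth, the behaviour being asked about here — holding a short clip per switch key instead of a single image — can be sketched as a simple lookup: for each output frame, find the most recent sync event and play the corresponding mini-clip from its start, holding its last frame once it has played through. A hypothetical Python sketch (nothing AS-specific; the event times, clip names and lengths are invented):

```python
def frame_for(sync_events, clip_lengths, frame):
    """sync_events: sorted list of (start_frame, clip_name).
    clip_lengths: clip_name -> number of frames in that mini-clip.
    Returns (clip_name, local_frame): which mini-clip to show at a
    global frame, holding the clip's last frame once it has finished."""
    current = sync_events[0]
    for start, clip in sync_events:
        if start > frame:
            break
        current = (start, clip)
    start, clip = current
    local = min(frame - start, clip_lengths[clip] - 1)
    return clip, local

# Hypothetical timing: a "wave" pose triggered at frame 0, a "nod" at frame 12.
events = [(0, "wave"), (12, "nod")]
lengths = {"wave": 8, "nod": 6}
```

For example, `frame_for(events, lengths, 10)` holds the last frame of "wave", and `frame_for(events, lengths, 13)` shows the second frame of "nod".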
Thank you, again, for everything.
Posted: Wed Feb 29, 2012 10:55 pm
AS only has layers to be switched, not animations (think of it as vertical switching in the layer tab instead of horizontal switching in the timeline).
Have you ever researched game engines? They are made especially for this kind of problem: having pieces of animation and combining them following some script. I've done stuff like this in Flash and even in Director, programming all that logic myself.
You need some kind of game logic here so you can script your animation - that's one part. And you have to create those animated bits as well - that's another part, which can be done in AS as well as in any other animation software - you could even do the same with video clips of live action.
But game engines are completely out of scope for this forum. I recommend searching for a dedicated place for that and asking there.
Posted: Thu Mar 01, 2012 1:02 am
Thank you for the further thought.
I have already begun such research these days. You are 100% right.
I am trying to stay away from scripts as much as I can, so I am looking for simple, drag-and-drop based solutions.
I already asked about that issue on the forum and got some wonderful ideas.
I hope that I will find something suitable.
Thanks for everything.
Ahhh, the good ol' times.
Posted: Thu Mar 01, 2012 10:53 am
Is there something similar to the automatic lip-sync option, but for the whole body movement?
There are lots of software packages available which claim to be able to do that, but the cheap ones don't work, AFAIK. Search for "real-time motion capture". In that setup you not only need software which is able to extract motion data from video input, but also animation software which understands the data format in order to create animation from the motion data.
AS doesn't support this.
...when these things were expensive. You might have heard of the Kinect, which has changed this picture a bit. There are actually various options for skeleton body tracking, free and commercial. It depends on your requirements (mostly platform and the amount of lag you can accept for your project; e.g. you can combine the relatively slow Kinect with a relatively fast Sony EyeToy cam, but this is a bit more advanced).
The general spirit here is to try things out, instead of a "never done this, doesn't work" attitude.
Here is how I would do it.
1. Get a Kinect and e.g. the PrimeSense / OpenNI / NiTE setup.
2. Use Processing or openFrameworks with an OSC library to get the tracking data, filter/smooth it there, and send the skeleton parameter data to...
3. ...a Lua script importing an OSC library as well.
4. Check if the Lua interpreter that runs in Anime Studio allows importing external libraries and works with a Lua script that receives networked data via OSC. This is the tricky part, so I would recommend trying it first.
The nice thing is that this is a networked setup which you can run on one machine or split across several if needed. OSC, Processing and openFrameworks are widely used by media artists, so you will find knowledgeable people as well.
I'd estimate 0.5 days for step 4, 1-2 days for a running proof of concept, or 3-4 days if you haven't heard of the tools mentioned and have to set everything up and google a bit.
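As a rough illustration of the data plumbing in steps 2-3: the OSC wire format itself is simple enough to encode and decode by hand. Below is a minimal, hypothetical Python sketch handling only float32 arguments (the address and joint values are invented; a real setup would use a ready-made OSC library on the Processing/openFrameworks side and on the Lua side):

```python
import struct

def _osc_string(s: str) -> bytes:
    """OSC strings are null-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def build_osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC message with float32 arguments (e.g. joint coordinates)."""
    typetags = "," + "f" * len(floats)
    payload = _osc_string(address) + _osc_string(typetags)
    for f in floats:
        payload += struct.pack(">f", f)  # OSC numbers are big-endian
    return payload

def parse_osc_message(data: bytes):
    """Decode the address and float32 arguments from an OSC packet."""
    def read_string(buf, i):
        end = buf.index(b"\x00", i)
        s = buf[i:end].decode("ascii")
        i = end + 1
        i += -i % 4  # skip padding to the next 4-byte boundary
        return s, i
    address, i = read_string(data, 0)
    typetags, i = read_string(data, i)
    args = []
    for tag in typetags[1:]:
        if tag == "f":
            args.append(struct.unpack(">f", data[i:i + 4])[0])
            i += 4
    return address, args
```

Sending the packet is then just a plain UDP `socket.sendto(packet, (host, port))`, and the receiver parses it the same way.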
Posted: Thu Mar 01, 2012 2:59 pm
I recently made a mouse-controlled 2D virtual hand puppet in Processing:
I know there are Kinect libraries available; not sure how usable they are:
http://www.processing.org/reference/lib ... ter_vision
You could also possibly do hand-puppet-level lip sync by reading the amplitude of the WAV file as it plays and constantly updating the mouth openness in proportion to it.
EDIT: I tried that last idea. It's not the best lip sync you've ever seen, but it's an interesting experiment and only a 40-line program:
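For reference, the amplitude idea can be sketched in a few lines. This is a hypothetical Python version using the stdlib `wave` module, not the original 40-line Processing program; 16-bit mono PCM and the `gain` value are assumptions:

```python
import math
import struct
import wave

def mouth_openness(wav_path: str, fps: int = 24, gain: float = 4.0):
    """Return one mouth-openness value in 0..1 per animation frame,
    proportional to the RMS amplitude of the audio within that frame."""
    with wave.open(wav_path, "rb") as w:
        nch, rate = w.getnchannels(), w.getframerate()
        raw = w.readframes(w.getnframes())
    samples = struct.unpack("<%dh" % (len(raw) // 2), raw)  # 16-bit PCM assumed
    step = max(1, rate * nch // fps)  # audio samples per animation frame
    openness = []
    for i in range(0, len(samples), step):
        chunk = samples[i:i + step]
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk)) / 32768.0
        openness.append(min(1.0, rms * gain))  # clamp loud passages to fully open
    return openness
```

Each returned value could then drive the mouth drawing (its scale, or which switch image is shown) for that frame.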
Posted: Thu Mar 01, 2012 5:13 pm
Amazing solutions. Thank you!
I will check each and every idea.
Posted: Thu Mar 01, 2012 5:21 pm
(If you don't understand this, Google: animate on twos.)
Ones and twos are terms from frame-by-frame animation: a new drawing every frame, or every second frame.
I'm suggesting that every four frames he should add a keyframe to match the video reference. I'm not saying it outright, but "on twos" is my way of trying to get a newbie to look at other things, not just AS or this forum.