Animated/digital puppetry and automatic body movement sync.
Posted: Tue Feb 28, 2012 6:29 pm
I would truly appreciate your thoughts on the following:
I am working with Anime Studio Pro 8, mostly in a "cut-out" animation style, and I am trying to create real-time animated characters (full body), so an actor/voice-over artist can animate the character like a puppet, in real time.
A long time ago, I came across a web-camera application that let someone take an already-built character and talk to the web cam (or mic), and thus make the animated character move (just like an avatar).
I am willing to create and illustrate my own characters and then somehow use software of some kind to make them move like puppets, so I can talk into a mic and make a character move (automatically, preferably).
If other options, such as moving legs, facial gestures, etc., are available, that would be even better.
I searched for some programs, but most of them deal with 3D and are very expensive; I would prefer to stick with Anime Studio Pro 8 if possible.
Do you know of any reasonable alternatives?
In case I do not use real-time animation: is there something similar to the automatic lip-sync option, but for whole-body movement? (I tried to use switch layers for some body movements, but it was very time-consuming and the movements were jumpy.) Any other ideas?
Thank you in advance for any advice.
Posted: Tue Feb 28, 2012 6:59 pm
Is there something similar to the automatic lip-sync option, but for the whole body movement?
There are lots of programs available which claim to be able to do that, but the cheap ones don't work, AFAIK. Search for "real time motion capture". In that setup you not only need software that can extract motion data from video input, but also animation software that understands the data format, in order to create animation from the motion data.
AS doesn't support this.
Posted: Tue Feb 28, 2012 9:29 pm
Personally, I would get a camera and record the motions that you want, then use the recording as a reference video in AS. That will give you the timing, although to my taste, 2D cutout animation reads anywhere from slightly to much faster than normal.
I would animate on fours and use twos if necessary. (If you don't understand this, Google: animating on twos.)
Posted: Tue Feb 28, 2012 11:05 pm
Thank you for the quick and helpful replies.
I can leave the real time part aside.
I am looking for something similar to the automatic lip-sync, but for movement.
It does not need to be that accurate; it just has to make the character move in sync with the sound.
I might give the switch-layer solution another try, but I still do not know how to do that for the whole character (each limb would be synced the same way as a regular switch layer).
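To make the ask concrete: the amplitude-driven idea behind switch-layer lip-sync can be sketched in a few lines of Python. This is only an illustration of the approach, not anything Anime Studio exposes; the frame rate and the thresholds are made-up assumptions:

```python
import math

def frame_rms(samples, sample_rate=44100, fps=24):
    """Split mono audio samples into animation frames and return
    the RMS loudness of each frame (0.0 .. 1.0 for normalized input)."""
    spf = sample_rate // fps  # samples per animation frame (assumed 24 fps)
    frames = []
    for i in range(0, len(samples) - spf + 1, spf):
        chunk = samples[i:i + spf]
        frames.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return frames

def pick_pose(rms, thresholds=(0.1, 0.4, 0.7)):
    """Map a loudness value to a switch-layer index:
    0 = rest pose, higher index = more energetic pose.
    The threshold values are arbitrary and would need tuning."""
    pose = 0
    for t in thresholds:
        if rms >= t:
            pose += 1
    return pose
```

Each resulting index could then be keyed onto a switch layer holding body poses, the same way the lip-sync feature picks mouth shapes by loudness.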
Any ideas will be gratefully welcomed.
Posted: Wed Feb 29, 2012 5:52 am
dueyftw wrote:(if you don't understand this, Google: animate on two's)
Ones and twos: that is for frame-by-frame animation.
Posted: Wed Feb 29, 2012 6:57 am
Thank you for the reply
I am familiar with those terms.
As mentioned, the challenge is to create some poses for a character (body, the different limbs, facial expressions and mouth) and then use an automatic sound sync.
On this specific project, only the mouth sync needs to look accurate (or close to accurate); the other parts just need to look animated enough, according to the sound.
The reason for all of this is to be able to take a very long speech (sound only, or video of the speaker) and produce an animated character that presents it, completing such a project as fast as possible.
Animating such a long speech manually would take a huge amount of time.
I am trying to think of different solutions for producing (as fast as possible) an animated character that acts just like an "avatar" and can present such a long speech while moving around creatively (body movements, facial expressions, limb movements, eye and pupil movements, etc.).
For example: I have a half-hour speech by someone (I can use sound, a video, or a real-time video capture of the speaker), and I need an animated bird that presents the speech entertainingly.
For the mouth, I can create different mouth poses and use switch-layer lip sync; but for the wings, head movement, eyes, bone manipulation, etc., I am still trying to find a reasonable solution. (Sometimes a single "gesture" needs more time/frames, like a jump, so I am still trying to work out how the automatic "sync" could use more than one frame at a time, similar to what the switch-layer tool does with single frames.)
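The multi-frame-gesture constraint could be handled by holding each triggered pose for a minimum number of frames. A rough Python sketch of the idea (the threshold and hold length here are arbitrary assumptions):

```python
def gesture_track(loudness, threshold=0.4, min_hold=12):
    """Turn a per-frame loudness list (0..1) into a two-pose gesture track.
    When loudness crosses the threshold, the 'active' gesture (1) is held
    for at least `min_hold` frames, so a movement that needs several frames
    (e.g. a jump) can play out fully; otherwise the rest pose (0) is used."""
    track, hold = [], 0
    for v in loudness:
        if hold > 0:          # a gesture is still playing out
            track.append(1)
            hold -= 1
        elif v >= threshold:  # loud frame: start a new gesture
            track.append(1)
            hold = min_hold - 1
        else:                 # quiet frame, nothing playing: rest
            track.append(0)
    return track
```

A jump that needs 12 frames would then always get its 12 frames, instead of being cut off by the next loudness sample.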
As mentioned, I usually import PNG parts with alpha that I illustrate in other programs, so the movements are not vector-based (no automatic tweening is used). This needs to be taken into consideration.
Final note: I searched the web for facial/body motion-capture solutions, but still have not been able to find anything at a reasonable price.
I believe there are options to use a video, or a real-time video/web-camera capture, to achieve the above without too much overhead (joint-point indicators, special mocap equipment, etc.), but I have not yet found such a solution.
I also looked at other creative solutions, such as using a video and, with masks (in AE), taking the original eye and mouth movements and putting them on an animated character; but this technique does not solve the automatically synced body movement.
So, I am still searching...
Again, any ideas will be very helpful and gratefully welcomed.
Posted: Wed Feb 29, 2012 10:25 am
Take a look at this:
I don't know what the current state of development is. It seems to have stalled in recent months...
Posted: Wed Feb 29, 2012 10:57 am
Let me get this straight:
You don't want to animate anything yourself, you want a completely automatic solution, you want that solution to be very cheap, and you want to put a half-hour speech into an uninterrupted piece of "animation"?
Sorry, mate, you're in the wrong forum. Ask your mother to sew you a sock puppet and grab a video camera, that will perfectly suit your needs.
Posted: Wed Feb 29, 2012 3:22 pm
slowtiger wrote:Sorry, mate, you're in the wrong forum. Ask your mother to sew you a sock puppet and grab a video camera, that will perfectly suit your needs.
lol - true
Man I *love* you slowtiger. please don't ever change.
Posted: Wed Feb 29, 2012 4:30 pm
Posted: Wed Feb 29, 2012 4:40 pm
I did this using Anime Studio Pro. The motion of the band was all captured on video and tracked in AS Pro.
iPi does a great job and can generate BVH for 3D apps using a Kinect; it would be great to be able to do something similar in AS Pro, though it would take a lot of scripting or development of the program.
Real-time performance capture would be awesome.
Posted: Wed Feb 29, 2012 6:32 pm
I must admit Slowtiger, that is very funny.
That gave me a good laugh. I can't help it.
Posted: Wed Feb 29, 2012 6:41 pm
Thank you all for the replies.
First, let me say that I have read many of your helpful replies in the past and truly appreciate your contribution to everyone on the forum and to the art of animation. Personally, your ideas have helped me a lot throughout my time working as an animator, so thank you.
Yes, after reading the post myself, it made me laugh too
As my nickname suggests ("love for animation"), I like to animate. This specific project comes with those limitations up front, as I will receive a lot of speaking time (sound files) and will need to produce, very quickly, different characters to present them. (As mentioned, most of the movement is just there to "flavour" the animation; the mouth, of course, needs to be more accurate.)
In any case, I will need to illustrate the characters, as well as several poses (for each part and for the whole body), and to use some technique to achieve the required goal, so I do not have much choice here.
Generally speaking, animation is evolving all the time, and thinking up and developing new, more efficient methods only helps the craft keep moving forward (unless we all want to stay in classic animation, doing everything frame by frame), so I believe it is legitimate to ask such questions; maybe others will also be able to learn and use similar techniques to make things more efficient.
Nevertheless, I appreciate and am very thankful for the ideas and advice.
Posted: Wed Feb 29, 2012 6:45 pm
Actually, you could use the lost wiggle tool and puppet your character with mouse movements. There is automated lip-sync.
I can't guarantee that you will get great results, though, and I would not expect to keep anybody entertained through a half-hour speech unless you are very, VERY funny.
Maybe ST's idea is the best; it worked for the Muppets.
Posted: Wed Feb 29, 2012 10:10 pm
Yes, after reading the post myself, it made me laugh too
Glad you see it this way.
Your problem needs to be broken down in order to find a solution. So far, I can identify these main points:
- needs to analyse video input
- needs to create output in realtime
- needs to be as automatic as possible
- needs to be cheap.
You're asking for a lot here, believe me. I've seen solutions for this on IBC and other conferences, and they all were expensive.
To cut it short: AS is not made to create real time output.
You should do a search for "digital puppetry", which is, I believe, the most common term for this. It is already in use in TV shows, where an actor behind the stage provides motion that makes a CGI character move, rendered in realtime and composited into the live TV signal.
The last bit, real time rendering, needs a bigger investment in hardware. So you have to forget the "cheap" part of your requirement.
If you don't insist on realtime, life is so much easier. In that case it boils down to connecting some input system with some animation software, which is a problem of defined APIs and data formats. The search term would be "motion capture data format", and I believe there are some not-so-expensive solutions available.
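As a concrete illustration of what such a format looks like: BVH, the plain-text format many mocap tools can export, stores a skeleton hierarchy followed by a MOTION block of per-frame channel values. A deliberately simplified Python reader for just the MOTION block (it ignores the HIERARCHY section that maps channels to joints) might look like this:

```python
def read_bvh_motion(lines):
    """Minimal reader for the MOTION section of a BVH file.
    Returns (frame_time, frames), where each frame is a list of
    channel values (one float per joint channel). Simplified:
    the HIERARCHY section defining channel order is skipped."""
    it = iter(lines)
    for line in it:               # skip ahead to the MOTION keyword
        if line.strip() == "MOTION":
            break
    n_frames = int(next(it).split(":")[1])      # "Frames: N"
    frame_time = float(next(it).split(":")[1])  # "Frame Time: T"
    frames = [[float(v) for v in next(it).split()]
              for _ in range(n_frames)]
    return frame_time, frames
```

An importer would then map each channel value to a bone keyframe in the animation software, which is exactly the part AS has no API for.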
"Automatic" is another problem. Motion capture data from video needs to be cleaned up, usually by data filters and human input. To avoid this, there's an alternative which doesn't use video. Instead, the puppeteer works a mechanical device which moves like a muppet, but delivers reliable data from all joints. Since it reads values directly from the joints, there's no need for elaborate processing of data: they can just be written as file or even used for "realtime animation", that is, fed into a CGI system which steers a character according to input.
I think this gives you enough input so you can search elsewhere for a solution. AS is, as I've said, not made for this kind of thing: it lacks the API, it lacks realtime output.