Advice on 2 characters talking in same scene

General Moho topics.

Moderators: Víctor Paredes, Belgarath, slowtiger

DVTVFilm
Posts: 141
Joined: Thu Sep 07, 2006 5:15 pm
Location: USA

Advice on 2 characters talking in same scene

Post by DVTVFilm »

What would be the best approach to get 2 characters talking to each other in the same scene?

Since ASP5 can handle only one audio track per timeline (unless I'm wrong about that...), I can pre-mix the back-and-forth dialogue into a single AIFF file.

But how do I get the 2 characters to talk only when they should? Using the audio volume method to make the mouths move in a SWITCH FOLDER makes both characters say everything.

What's the best method to do this? Is there a TUT for this somewhere?

regards
myles
Posts: 821
Joined: Sat Aug 21, 2004 3:32 am
Location: Australia, Victoria, Morwell
Contact:

Post by myles »

Hi DVTVFilm,

Have 3 files.

One with Character A speaking, and silent gaps where Character B is speaking.

One with Character B speaking, and silent gaps where Character A is speaking.

(If you play them together in mixing software, you should hear the full conversation)

Use the first file to create the keyframes for Character A, and the second to create the keyframes for Character B (the script just reads the sound file once and creates the appropriate keyframes).

Mix them together into a third file and load that as the soundtrack for Anime Studio.

You can do it all with the third, mixed file and get better-looking synch if you use Papagayo to create the keyframes, but it is somewhat more time-consuming.
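The mixing step above can even be scripted. This is a rough, stdlib-only Python sketch, not anything built into AS: the file names are hypothetical, and it assumes both clips are mono 16-bit WAVs at the same sample rate (convert AIFFs first).

```python
import array
import wave

def mix_wavs(path_a, path_b, out_path):
    """Mix two mono 16-bit WAV files into one soundtrack.

    Sketch only: assumes both clips share sample rate and bit depth,
    as in the two-file workflow (silence where the other character
    speaks). File names are hypothetical.
    """
    with wave.open(path_a, "rb") as a, wave.open(path_b, "rb") as b:
        assert a.getparams()[:3] == b.getparams()[:3], "formats must match"
        sa = array.array("h", a.readframes(a.getnframes()))
        sb = array.array("h", b.readframes(b.getnframes()))
        # Pad the shorter clip with silence so the lengths match.
        if len(sa) < len(sb):
            sa.extend([0] * (len(sb) - len(sa)))
        else:
            sb.extend([0] * (len(sa) - len(sb)))
        # Sum the samples, clamping to the 16-bit range.
        mixed = array.array("h", (max(-32768, min(32767, x + y))
                                  for x, y in zip(sa, sb)))
        with wave.open(out_path, "wb") as out:
            out.setparams(a.getparams())
            out.writeframes(mixed.tobytes())
```

Since each source file is silent while the other character talks, a plain sample-by-sample sum reproduces the full conversation.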

Regards, Myles.
"Quote me as saying I was mis-quoted."
-- Groucho Marx
heyvern
Posts: 7035
Joined: Fri Sep 02, 2005 4:49 am

Post by heyvern »

Or you could use Papagayo... a bit more work lip synching though... but you only need the one sound file.

-vern
wizaerd
Posts: 415
Joined: Fri Aug 25, 2006 7:08 pm
Location: Gilbert, AZ

Post by wizaerd »

Using Papagayo is a whole lot more work, a whole lot... especially if it's a long dialog. Not to mention that I find it a very cumbersome application to use. While the switch layer support within AS isn't as detailed for lip-synching, it would have suited my needs well enough if, of course, working with multiple sound files were possible.
DarkCryst
Posts: 24
Joined: Mon Jul 24, 2006 9:36 pm

Post by DarkCryst »

It's not a lot of work really, and it produces so much better results!

I'm taking this route with a short film I'm animating, and it's really the simplest solution with the best-quality output.

Granted I'm using my own modified version of Papagayo (which is much faster scrubbing audio, and whose UI I customised) but that shouldn't make too much difference.

What problems do you have with Papagayo?
wizaerd
Posts: 415
Joined: Fri Aug 25, 2006 7:08 pm
Location: Gilbert, AZ

Post by wizaerd »

Anytime you hit play, it always starts from the beginning, and when you hit stop it always resets to the beginning. There is no pause. If it's a sound you didn't specifically record, this makes it a bit unwieldy to transcribe it while listening.

Deciding where the pauses go (where the mouth doesn't move) is more guesswork than not. If you shorten a word (in the transcribed text word breakdown), it leaves a gap and you have to manually move every word after it. It's a bunch of manual tweaking of the words to get a desirable result.

These don't mean there's anything wrong with the app; it just takes more time than I wish to spend on it. I'm not doing this professionally, so the level of perfectionism is a great deal lower...

I'm happy with the switch layer output AS creates; it just sucks majorly that you can only do 1 soundtrack, and the keyframes it creates are always placed at frame 1.

Perhaps I'm better off making subtitled movies... :roll:
Guest
Posts: 13
Joined: Tue Jul 11, 2006 2:52 pm

Post by Guest »

Hey DVTVFilm

You need to create what's called a 'Sound Canvas': an entire scene mixed together from the spoken dialogue.

Similar to what Myles said, what I do is have one set of files for character A (a file per spoken line of dialogue for that character) and another set for character B. Each set of files should be in a separate directory. If you're using different actors for the dialogue, you may need to direct them or let them hear the other character's prior line of dialogue. This way you have more control over the flow of dialogue, and you can record the same line many times with differing intonations if you like as well. More work, but better results.

Next, you need a sound editor. Add each line of dialogue in sequence from each character alternately till you have the entire scene (this is your 'Sound Canvas') and save it. It should be no more than 3 mins, because you're going to use Papagayo and anything longer will be long, slow and tedious.

Then open Papagayo and, using the tutorial included with it, script each line of dialogue for each character.

Any probs or points I haven't explained, lemme know.
DVTVFilm
Posts: 141
Joined: Thu Sep 07, 2006 5:15 pm
Location: USA

Post by DVTVFilm »

Hmm... some good advice here. I've been an FCP guy since it was created years ago, so there's no problem producing a premixed audio track.

And I tried Papagayo-- it's pretty slick, but the tracks do take some tedious adjusting on long sections of dialogue.

I'm thinking now, after reading everyone's comments, that a fairly fast compromise might be to let ASP5 do the keyframes based on audio peak levels (which can be applied to both/all characters' switch files) and then simply block-delete the keyframes from the timeline when either character is supposed to keep their mouth closed.

This would be considerably faster than applying separate switch-data files from Papagayo to individual characters, since it's all coming from the single premixed audio file anyway...

I'll try this and see if it's a possible workable method as well...
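For what it's worth, the "which keyframe blocks to delete" part of that compromise can be estimated from the per-character audio files themselves. Here is a hypothetical Python helper (not an AS feature); the fps and threshold values are assumptions you would tune per project, and it expects a mono 16-bit WAV.

```python
import array
import wave

def speaking_frames(wav_path, fps=24, threshold=500):
    """List the animation frames where a mono 16-bit WAV is loud.

    Rough sketch: a frame counts as "speaking" if its peak sample
    exceeds the threshold. Use the result to decide which blocks of
    auto-generated mouth keyframes to keep for a character.
    """
    with wave.open(wav_path, "rb") as w:
        rate = w.getframerate()
        samples = array.array("h", w.readframes(w.getnframes()))
    per_frame = max(1, rate // fps)  # audio samples per animation frame
    frames = []
    for i in range(0, len(samples), per_frame):
        chunk = samples[i:i + per_frame]
        if chunk and max(abs(s) for s in chunk) > threshold:
            frames.append(i // per_frame)
    return frames
```

Run it once per character file (the one with silent gaps); any frame not in the list is one where that character's mouth keyframes can be block-deleted.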
heyvern
Posts: 7035
Joined: Fri Sep 02, 2005 4:49 am

Post by heyvern »

I did 8 solid minutes of dialog in a few days with Papagayo... less than a week anyway.

Even with all the faults it has (I agree totally) it was still pretty dang fast... once you get on a roll... and the results are great.

The hardest part... THE hardest part is creating all the mouth shapes for every character.... that part sucks. Towards the end... I was cheating and only doing the mouth shapes that were actually used in the dialog for each character.

By the time I finished... I had memorized the ten mouth shape layers needed. I will never have to look that info up ever again.

I got on a roll with Papagayo... after a while... it went faster each time.

Follow the advice they give in the tutorials... start from the top down. Do the big sentences... fine tune the words... then fine tune the actual phonemes. It works like a charm.

Break the dialog text into smaller groups. Don't try typing the whole thing in one long line. Break it up according to the sound and pacing of the dialog. This really, really helps tremendously. You can look at the waveform and see where to drag each "sentence", then fine tune from there.

I ended up with about 50 separate Papagayo and DAT files. Each file varied in length from a few seconds to 30 seconds or more.

KEEP THE CLIPS AS SHORT AS POSSIBLE. This will make things so much easier.

I did have the advantage of having a written script for the exact dialog. I was able to use a text editor and paste the text and the link to the wav file into a bunch of Papagayo "templates" in a text editor before starting the lip synch.

I had anticipated this would take me 10 times as long as it did.

I had originally planned to use the built in lip synch with audio files... but it just looked... icky to me.

Just my 2 cents... until they stop making pennies... then it will have to be a nickel.

-vern
7feet
Posts: 840
Joined: Wed Aug 04, 2004 5:45 am
Location: L.I., New Yawk.
Contact:

Post by 7feet »

One option to keep in mind relates to the simplicity of the DAT file you get from Papagayo. It's plain text, and just lists the frames where there are keyframes and each frame's phoneme to be shown. So it's not hard to open the switch data file in a text editor, delete the lines relating to another character speaking, and save it under another name to import for that particular character. If you have a list of the frames where the changes between characters happen, it should hardly take any time at all. The most work would be inserting a "Rest" phoneme at the end of each speech if needed. And often that'll just be there.
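Since the format is that simple, the delete-and-resave step can even be scripted. A rough Python sketch, assuming the switch data is a one-line header followed by "frame phoneme" lines; the keep_ranges argument and the lowercase "rest" phoneme name are my assumptions, so check them against your own exported files.

```python
def split_switch_data(dat_path, keep_ranges, out_path):
    """Write a per-character copy of a Papagayo-style switch DAT file.

    keep_ranges is a list of (first_frame, last_frame) tuples marking
    where this character speaks (a hypothetical input you read off
    your own timing notes). Keyframes outside those ranges are dropped,
    and a closed-mouth key is added after each speaking span.
    """
    with open(dat_path) as f:
        header, *keys = f.read().splitlines()
    kept = {}
    for line in keys:
        frame, phoneme = line.split(None, 1)
        if any(lo <= int(frame) <= hi for lo, hi in keep_ranges):
            kept[int(frame)] = phoneme
    # Close the mouth one frame after each speech span,
    # unless a key already sits there.
    for lo, hi in keep_ranges:
        kept.setdefault(hi + 1, "rest")
    with open(out_path, "w") as f:
        f.write(header + "\n")
        for frame in sorted(kept):
            f.write(f"{frame} {kept[frame]}\n")
```

Run it once per character against the same mixed-dialogue DAT, with each character's own frame ranges.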
wizaerd
Posts: 415
Joined: Fri Aug 25, 2006 7:08 pm
Location: Gilbert, AZ

Post by wizaerd »

heyvern wrote:I did 8 solid minutes of dialog in a few days with Papagayo... less than a week anyway.

[...]

50 data files for lip-synching, with each one being a single sentence of a long dialog? How did you get those 50 separate files into AS? I'm assuming it makes the keyframes the same way as when importing a sound file into a Switch Layer: all at frame 1. I'd be really interested in learning and understanding more of your workflow...
heyvern
Posts: 7035
Joined: Fri Sep 02, 2005 4:49 am

Post by heyvern »

I actually have... 25 "scenes" and about 50 separate Moho files, 2 for each scene.

The project involves an announcer with a video screen behind him.

He has dialog talking about the characters in the story. Then the screen zooms in, and several characters would speak or act out the scene that was described.

So each "scene" has two separate "clips"... the announcer and then the inset video.

I render the inset video first for each scene. I load that movie file into the announcer scene then render the complete scene.

I tried putting everything in one Moho project, but it was very confusing and actually quite pointless. Using the rendered movie files was a much better solution. I was able to easily mask the movie for the inset video in the corresponding announcer file.

I would also use a single frame of the pre-rendered scene as a hold, or still, until the video started to play in the inset window.

When finished I will have approximately 25 rendered clips, which I will assemble in an external editor. I need to produce a DVD and possibly a low-resolution web video.

Breaking this into many pieces makes it much easier to deal with. It seems like a lot... but I have each scene in its own folder that is numbered and titled so I know what it is.

I am methodically plodding through each folder/scene until I get the whole thing done. I have about 10 scenes left out of the 50 total... but all of the lip synching was done ages ago.

I was able to do all of the lip synching in Papagayo before any of the characters were even close to being finished!

Now when I get to a scene with a specific character I load the DAT file from the audio folder in the folder for that scene and load the audio file as my sound track... works great.

I set up the folder structure recently so that everything is "local" to the scene folders so I can easily move this to another computer if needed.

Previously I had the sound and DAT files all in one big folder outside of the structure... when moving these files to another computer the links got broken. Moho/AS doesn't seem to maintain relative links when going up the directory tree.

So I changed my folder structure keeping everything relative to the Moho file. I did a search and replace on the whole directory with a text editor changing all references to external files to reference files in the local directory structure.

Amazingly they all opened fine and the links worked perfectly.
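That search-and-replace pass can be automated too. A generic sketch of the relinking step, where the project-file extensions and the assumption that the files store paths as plain text are mine, so back up the folder before running anything like this.

```python
import os

def relink(project_dir, old_prefix, new_prefix, exts=(".anme", ".moho")):
    """Rewrite file references in every project file under a directory.

    Sketch only: does a literal text replace of old_prefix with
    new_prefix in any file whose extension matches, mirroring the
    manual text-editor search-and-replace described above.
    """
    for root, _, names in os.walk(project_dir):
        for name in names:
            if not name.endswith(exts):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="replace") as f:
                text = f.read()
            if old_prefix in text:
                # Only rewrite files that actually reference the old path.
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text.replace(old_prefix, new_prefix))
```

For example, `relink("scenes", "D:/old/audio/", "audio/")` would make every sound reference local to each scene folder, matching the relative structure described above.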

EDIT:
I had never really done anything like this before so... there were a lot of "false steps" and trial and error before I got my work flow working properly.

-vern
Rasheed
Posts: 2008
Joined: Tue May 17, 2005 8:30 am
Location: The Netherlands

Post by Rasheed »

I always wondered why, in the old Popeye animations, everyone was always mumbling, and in the newer series there was none of that enjoyable chattering. It seems that the Fleischer brothers did the animation first and the voices later, while in later periods the voices were done first and the animation was modeled on the voices (as is usually the case nowadays). Of course, in both cases there was a script with words, but with the original Popeye the animators could have a little fun with the script, and the voice actors even more fun with the jazzy voice acting between the spoken lines of text, so it must have been a happy bunch of creative people at the Fleischer Studios.

It seems to make a big difference whether you do the voices before or after animation, but I think both have their charms. Anyway, if the animation is produced in many languages, the voices are always done in post (at least for the non-English languages). I suppose this is a very enjoyable job for the foreign voice actors, especially if they are allowed to do their own thing.

My point is that if you plan too precisely, you lock yourself in translation-wise.
Bones3D
Posts: 217
Joined: Tue Jun 13, 2006 3:19 pm
Contact:

Post by Bones3D »

How about making a "dope" sheet before even starting the animation process? A dope sheet is a chart that tracks the timing of speech (and other sounds) by listing the timecodes of each change in the sound against the frames in your animation. You can then use this information to determine things like when and where certain mouth shapes are needed.

It does, however, require some skill to get it right and can be a time-consuming process. But it does get the job done ahead of time, so you don't need to interrupt your workflow when you finally do start animating.

In the meantime, you might consider checking out a program called Magpie Pro, which can automate this process a bit, but keep in mind that it is pricey.
8==8 Bones 8==8