Motion Matching tips: a Twitter thread by Dan Lowe (19 tweets)
Let's talk Motion Matching tips for people who are new to it. I’ll keep things general so it’s not specific to any one company or pipeline...
The Ubisoft dance cards shown at GDC work well, but you don’t HAVE to follow them. To make your own dance cards, just take the coverage you would have normally shot and combine it into a single path. Be sure to leave some extra motion between each move.
The most important thing is that the data that you’re capturing matches your motion model. If your character moves in curves, capture curves. If your character turns at sharp angles, capture sharp angles.
Long takes are not a requirement so much as an efficient way to capture your data quickly. You can still capture shorter moves, one at a time; just make sure to add extra buffer before and after each move.
If you’re going with long takes, bear in mind your mocap actors will get tired quickly, especially at run speeds (it’s like asking them to do CrossFit all day). If you’re shooting a generic style, it’s worth hiring multiple actors and swapping them out after each shot.
Going with shorter clips can make things easier on your actors and easier for your animators to work in a format they’re used to. It can also give you more control at the “hard condition” level, if you want to avoid meticulous tagging.
On the other hand, tagging up longer clips is more future-proof: if you’re thinking about doing neural-network-based animation further down the line, having a big library of tagged data will be very, very helpful.
When you start building loco systems, take it one step at a time. First, turn off the character visibility and just focus on making sure that your motion model is generating the future path correctly, before you start worrying about which animation the MM system is selecting.
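To make that first step concrete, here is a minimal sketch of the kind of future-path prediction a motion model produces. The exponential velocity blend and the specific sample times are assumptions for illustration, not any particular engine's model:

```python
import math

def predict_trajectory(pos, vel, desired_vel, halflife=0.25, samples=(0.2, 0.4, 0.6)):
    """Predict future root positions by exponentially blending the current
    velocity toward the desired velocity (one simple motion model).
    Returns one (x, y) point per future sample time in `samples`."""
    # decay rate derived from the half-life of the velocity blend
    decay = math.log(2.0) / halflife
    points = []
    for t in samples:
        # closed-form integral of v(t) = desired + (vel - desired) * exp(-decay * t)
        blend = (1.0 - math.exp(-decay * t)) / decay
        x = pos[0] + desired_vel[0] * t + (vel[0] - desired_vel[0]) * blend
        y = pos[1] + desired_vel[1] * t + (vel[1] - desired_vel[1]) * blend
        points.append((x, y))
    return points
```

Drawing these points as a debug overlay (with the character mesh hidden, as suggested above) lets you verify the predicted path curves and accelerates sensibly before any animation selection enters the picture.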
Motion matching will always pick the single most appropriate piece of data; it doesn’t hold a blend between two pieces of data like a blend space does. With that in mind, if you want to build systems that would normally blend between anims, you’re going to need procedural adjustments.
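The "single best piece of data" selection boils down to a nearest-neighbour search over feature vectors. A minimal sketch, with a hypothetical two-entry database and made-up feature layout (velocity plus facing):

```python
def best_match(query, database):
    """Pick the single closest feature vector from the database.
    Motion matching selects one clip+frame; it never blends two
    database entries together.
    `database` is a list of (clip_name, frame, feature_vector) tuples."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(database, key=lambda entry: sq_dist(query, entry[2]))

# hypothetical database entries: (clip, frame, [vel_x, vel_y, facing])
db = [
    ("run_fwd", 12, [4.0, 0.0, 0.0]),
    ("walk_turn_left", 30, [1.5, 0.5, 1.2]),
]
print(best_match([3.8, 0.1, 0.0], db)[0])  # -> run_fwd
```

Real systems add per-feature weights and acceleration structures, but the key property is visible here: the output is one discrete entry, which is why continuous parameters (like exact speed) need procedural adjustment on top.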
If you’re going with analog speeds, it’s worth splitting out your desired speed value into “actual desired speed” and “clamped desired speed”. Animate and do anim selection using the clamped speed and then procedurally adjust to the actual speed.
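A sketch of that split, assuming playback-rate scaling as the procedural adjustment (one common approach; the authored speed values are hypothetical):

```python
def select_and_adjust(actual_desired_speed, anim_speeds):
    """Split the desired speed in two: clamp to the nearest authored
    animation speed for matching/selection, then compute a procedural
    playback-rate scale that closes the gap to the actual desired speed."""
    clamped = min(anim_speeds, key=lambda s: abs(s - actual_desired_speed))
    playback_scale = actual_desired_speed / clamped if clamped else 1.0
    return clamped, playback_scale

# hypothetical authored walk/jog/run speeds in m/s
clamped, scale = select_and_adjust(3.2, [1.5, 3.0, 5.0])
print(clamped, round(scale, 3))  # -> 3.0 1.067
```

Selecting against the clamped value keeps the matching cost function honest (the data only exists at authored speeds), while the scale factor handles the analog remainder procedurally.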
Debugging motion matching systems is difficult. If you’re planning on building a motion matching system, schedule time to build lots of debug info and tools. Also take the time to train your animators on exactly how these systems work under the hood.
AI is especially difficult to do with motion matching since AI nav/pathfinding systems rarely match captured motion. I’d advise doing less frequent anim selection and letting the animation stay in control longer (similar to motion planning).
Locomotion seems to be the default go-to when people first start doing motion matching, but it works really well for other moves too. Matching against events (e.g. melee strikes), or matching against points in the environment, are good alternate examples.
Motion matching takes some time to get used to, but once you do, it’s a very powerful tool to have in your tool box. If you’re working on a AAA game it’s absolutely worth your time to invest in this tech.
*Bonus Content* MM myths DEBUNKED: MM comes with a high perf cost: Depends on the system, but this hasn’t been my experience. MM may be a brute force approach, but it often replaces a lot of complex logic and stacked controllers/blends so the perf cost is usually comparable.
MM requires you to use long clips of mocap: No, not at all. You can put clips of any length in an MM database. As mentioned, teams shoot dance cards because they’re a quick and efficient method for capturing coverage, but you can put whatever data you want into the DB.
MM systems give sluggish character movement: No. At least, not inherently, but this is sometimes a choice that teams make.
This feeling comes when teams put raw mocap into the system with no editing and then tune their motion model to match the mocap. If you tune your character controller to move how you want it to move, and then edit the mocap data to match that, it can be as responsive as you like.
MM systems replace traditional approaches and we don’t want to take that risk: Not true. Most MM systems that I’ve seen run MM as its own node/controller, which can then be embedded into a regular state machine. You can transition back and forth as you need.