Is there an updated roadmap for the Sequencer, especially in terms of support for character animation?
For example, Animation Graph has a few features that look useful (filtering which joints are affected, look-at, blending), but the documentation says it only works in "Play" mode, so you cannot preview it in the Sequencer.
Here is my feature list, based on what appears to be missing for my own use case:
- Support static poses and animation clips. This may already be there, since a static pose can be represented as a one-frame clip.
- Blending between adjacent clips in the Sequencer: overlap them slightly and blend from one animation into the next. This is especially useful with static poses. E.g. you might have a few hand poses, hold one position, blend to the next hand position, hold that for a bit, blend to the next, and so on. You get very precise timing control this way.
- Filtering of animation clips (using Animation Graph terminology): allow nominating a track as higher priority, with filtering to control which bones are affected (e.g. upper body only), and let me define a set of masks (head only, left/right arm only, left/right hand only, upper body, etc.).
- Blend in and out of clips in override tracks, so you can place a higher-priority track above existing tracks to override a small part of an animation clip (e.g. the character is walking, then turns to look at another target for a bit).
- Procedural animation, like the look-at support in Animation Graph. This could be extended to other things, like "IK-lock the hand to this position on the table".
- Lip-sync tracks (Audio2Face), so I can have an audio track for a character that feeds into facial animation control.
- Root motion support with animation clip looping: preserve the character's position at the end of each loop, so a looping walk animation keeps moving forward instead of jumping back to the start point.
- A way of implementing custom tracks, so third parties can add new functionality. E.g. I used one to do captions, and I was thinking about one where I type in text and it calls out to text-to-speech synthesis to generate audio clips (which I then want to feed into Audio2Face and/or Audio2Gesture). I don't want to feed in individual audio clips - I want to feed in an audio track (which may have multiple audio clips placed at different points in the timeline).
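To illustrate the root-motion looping item above, here is a minimal sketch (hypothetical names, not any existing Sequencer API) of how forward motion could be preserved across loop iterations, assuming the clip's per-loop root displacement is known:

```python
import math

def looped_root_position(t, clip_duration, root_delta_per_loop, sample_root):
    """Root position at global time t for a looping clip.

    sample_root(local_t) returns the clip's root position at local_t in [0, clip_duration).
    root_delta_per_loop is the root displacement accumulated over one full loop.
    """
    loops_completed = math.floor(t / clip_duration)
    local_t = t - loops_completed * clip_duration
    # Offset by the displacement of all completed loops, so the character
    # keeps moving forward instead of snapping back to the start point.
    return loops_completed * root_delta_per_loop + sample_root(local_t)

# Example: a walk clip whose root moves 1.5 units forward over a 2-second loop.
walk_root = lambda local_t: 1.5 * (local_t / 2.0)
print(looped_root_position(5.0, 2.0, 1.5, walk_root))  # 2 full loops + half a loop -> 3.75
```

Scrubbing stays trivial with this approach, because the position at any time t is computed directly rather than accumulated frame-by-frame.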
But it all has to work with Sequencer timeline scrubbing, which Animation Graph doesn't support, as far as I can tell from the docs.
Note: the above may sound like a lot, but it's actually only a few features that get reused a lot. Clips need blend-in and blend-out. Multiple layers get blended together based on priority. Add filtering for layers. I think most of the work would be the Sequencer UI.
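To show how little core logic that is, here is a sketch (hypothetical, not any existing Sequencer API) of masked, priority-ordered layer blending: evaluate layers lowest priority first, and lerp each bone toward the higher-priority pose by mask influence times the layer's blend-in/out weight. Scalar values stand in for bone transforms:

```python
def lerp(a, b, w):
    return a + (b - a) * w

def blend_layers(base_pose, layers):
    """Blend override layers over a base pose.

    base_pose: {bone_name: value} (scalars stand in for transforms).
    layers: list of (pose, mask, weight) tuples, lowest priority first.
    mask: {bone_name: influence in [0, 1]}; bones absent from the mask are untouched.
    weight: the layer's blend-in/out weight in [0, 1], animated over time for crossfades.
    """
    result = dict(base_pose)
    for pose, mask, weight in layers:
        for bone, value in pose.items():
            influence = mask.get(bone, 0.0) * weight
            if influence > 0.0:
                result[bone] = lerp(result[bone], value, influence)
    return result

# Example: an upper-body-only "look at" layer blended in at 50% over a walk pose.
walk = {"spine": 0.0, "head": 0.0, "leg": 0.0}
look = {"spine": 10.0, "head": 30.0, "leg": 99.0}
upper_body = {"spine": 1.0, "head": 1.0}  # leg is masked out entirely
print(blend_layers(walk, [(look, upper_body, 0.5)]))
# spine -> 5.0, head -> 15.0, leg unchanged at 0.0
```

The same `weight` parameter covers both the crossfade between adjacent clips and the blend-in/out of override tracks; the mask covers the per-bone filtering.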
Is there any likelihood of the above coming to the Sequencer? If not, would it be against the licensing rules to "borrow" the existing Sequencer code as a starting point for building something? It already has all the code to move, trim, and stretch animation clips, and that functionality is all great.