The visage|SDK™ LipSync package provides powerful functions for automatic real-time or off-line lip sync in the form of a well-documented C++ Software Development Kit (for the same functionality in an end-user application, see the visage|interactive™ tool, delivered as part of visage|SDK).
Input can be either an audio file (WAV file) or voice coming from a microphone. In either case, the virtual character’s lips are animated to the voice on the fly. If you are using a microphone, you see the character move its lips as you speak.
The lip-sync algorithm generates visemes (mouth shapes) from the standard set of 15 visemes defined in the MPEG-4 FBA specification. Standard MPEG-4 file output is included. Furthermore, a callback mechanism allows developers to capture the visemes as they are generated and use them in their own functions.
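For reference, the 15 visemes of the MPEG-4 FBA viseme_select table can be written out as an enum. The enum names below are illustrative only, not identifiers from visage|SDK; the index values and phoneme groups are those of the MPEG-4 FBA specification:

```cpp
#include <cassert>

// The 15 standard visemes of the MPEG-4 FBA viseme_select table.
// Enum names are illustrative; values and phoneme groups follow the spec.
enum Mpeg4Viseme {
    VISEME_NONE = 0,   // neutral face, no viseme
    VISEME_PBM  = 1,   // p, b, m   ("put", "bed", "mill")
    VISEME_FV   = 2,   // f, v      ("far", "voice")
    VISEME_TH   = 3,   // T, D      ("think", "that")
    VISEME_TD   = 4,   // t, d      ("tip", "doll")
    VISEME_KG   = 5,   // k, g      ("call", "gas")
    VISEME_CH   = 6,   // tS, dZ, S ("chair", "join", "she")
    VISEME_SZ   = 7,   // s, z      ("sir", "zeal")
    VISEME_NL   = 8,   // n, l      ("lot", "not")
    VISEME_R    = 9,   // r         ("red")
    VISEME_A    = 10,  // A:        ("car")
    VISEME_E    = 11,  // e         ("bed")
    VISEME_I    = 12,  // I         ("tip")
    VISEME_Q    = 13,  // Q         ("top")
    VISEME_U    = 14,  // U         ("book")
    VISEME_COUNT = 15  // total number of standard visemes
};
```

A lip-sync result is then simply a timed sequence of these indices, which is also how a viseme stream maps naturally onto MPEG-4 FBA file output.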
visage|SDK™ contains a sample project with full source code demonstrating the use of the LipSync package to animate a virtual character by voice from a microphone or an audio file.
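The callback mechanism mentioned above could look, in spirit, like the following minimal sketch. All class and method names here (VisemeEvent, VisemeObserver, onViseme, the stub engine) are hypothetical, invented for illustration; consult the SDK documentation for the actual interface:

```cpp
#include <cassert>
#include <vector>

// Hypothetical event delivered for each generated viseme:
// an MPEG-4 viseme index (0..14) and a timestamp in milliseconds.
struct VisemeEvent {
    int visemeId;
    double timeMs;
};

// Hypothetical observer interface a developer would implement
// to capture visemes as the lip-sync engine produces them.
class VisemeObserver {
public:
    virtual ~VisemeObserver() = default;
    virtual void onViseme(const VisemeEvent& e) = 0;
};

// Toy stand-in for the lip-sync engine: it only forwards
// viseme events to the registered observer.
class LipSyncEngineStub {
public:
    void setObserver(VisemeObserver* obs) { obs_ = obs; }
    void emit(int id, double timeMs) {
        if (obs_) obs_->onViseme(VisemeEvent{id, timeMs});
    }
private:
    VisemeObserver* obs_ = nullptr;
};

// Example observer that records the viseme stream, e.g. to drive
// a custom animation system instead of the built-in one.
class VisemeRecorder : public VisemeObserver {
public:
    std::vector<VisemeEvent> events;
    void onViseme(const VisemeEvent& e) override { events.push_back(e); }
};
```

Usage follows the usual observer pattern: register a VisemeRecorder with the engine before starting playback or microphone capture, then read back (or react to) the timed viseme sequence.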