
uLipSync

uLipSync is an asset for lip-syncing in Unity. It has the following features:

  • Uses the Job System and Burst Compiler to run fast on any OS without native plugins.
  • Can be calibrated to create a per-character profile.
  • Both runtime analysis and pre-bake processing are available.
  • Pre-bake processing can be integrated with Timeline.
  • Pre-bake data can be converted to AnimationClip.

Features

LipSync

Profile

Real-time Analysis

Mic Input

Pre-Bake

Timeline

AnimationClip

Texture Change

VRM Support

WebGL Support

Install

  • Unity Package
    • Download the latest .unitypackage from the Releases page.
    • Import Unity.Burst and Unity.Mathematics from Package Manager.
  • Git URL (UPM)
    • Add https://github.com/hecomi/uLipSync.git#upm to Package Manager.
  • Scoped Registry (UPM)
    • Add a scoped registry to your project.
      • URL: https://registry.npmjs.com
      • Scope: com.hecomi
    • Install uLipSync in Package Manager.

How to Use

Mechanism

When a sound is played by an AudioSource, the audio buffer is passed to the OnAudioFilterRead() method of any component attached to the same GameObject. This buffer can be modified to apply sound effects such as reverb, but since it also tells us exactly which waveform is being played, we can analyze it to calculate Mel-Frequency Cepstral Coefficients (MFCC), which represent the characteristics of the human vocal tract. In other words, if the calculation is done well, we obtain parameters that characterize an "a" sound while an "a" waveform is playing, and parameters that characterize an "e" sound while an "e" waveform is playing (consonants such as "s" can be analyzed in addition to vowels).

By comparing these parameters with pre-registered parameters for each of the A, I, U, E, O phonemes, we can calculate the similarity between the current sound and each phoneme, and use this information to drive the blendshapes of a SkinnedMeshRenderer for accurate lip-syncing. If you feed microphone input into the AudioSource, you can also lip-sync to your own voice in real time.
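
To make the flow concrete, here is a minimal sketch (not uLipSync's actual implementation) of a component observing the buffer of an AudioSource on the same GameObject; uLipSync builds its MFCC analysis on this same hook:

using UnityEngine;

public class AudioBufferTap : MonoBehaviour
{
    public float rms; // latest RMS volume, readable from the main thread

    // Called on the audio thread while the attached AudioSource is playing.
    // 'data' holds interleaved samples for 'channels' channels.
    void OnAudioFilterRead(float[] data, int channels)
    {
        float sum = 0f;
        for (int i = 0; i < data.Length; i += channels)
        {
            sum += data[i] * data[i]; // first channel only
        }
        rms = Mathf.Sqrt(sum / (data.Length / channels));
        // A real analyzer would copy the buffer out here and compute MFCCs.
    }
}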

The component that performs this analysis is uLipSync, the data containing the phoneme parameters is a Profile, and the component that moves the blendshapes is uLipSyncBlendShape. There is also a uLipSyncMicrophone component that plays back audio from the microphone. Here's an illustration of how they fit together.

Setup

Let's set it up using Unity-chan. The sample scene is Samples / 01. Play AudioClip / 01-1. Play Audio Clip. If you installed uLipSync via UPM, please also import Samples / 00. Common (which contains the Unity-chan assets).

After placing Unity-chan, add an AudioSource component to any GameObject where the sound should be played, and set an AudioClip of Unity-chan's voice on it.

First, add a uLipSync component to the same GameObject. For now, select uLipSync-Profile-UnityChan from the list and assign it to the Profile slot of the component (if you assign something different, such as Male, it will not lip sync properly).

Next, set up the blendshapes that receive the analysis results and move the mouth. Add uLipSyncBlendShape to the root of Unity-chan's SkinnedMeshRenderer. Select the target, MTH_DEF, then go to Blend Shapes > Phoneme - BlendShape Table and add 7 items, A, I, U, E, O, N, and -, by pressing the + button ("-" is for noise). Then select the blendshape corresponding to each phoneme, as shown in the following image.

Finally, to connect the two: in the uLipSync component, go to Parameters > On Lip Sync Updated (LipSyncInfo) and press + to add an event, then drag and drop the GameObject (or component) with the uLipSyncBlendShape component onto the slot that says None (Object). Then find uLipSyncBlendShape in the pull-down list and select OnLipSyncUpdate.

Now when you run the game, Unity-chan will move her mouth as she speaks.

Adjust lipsync

The range of recognized volume and the response speed of the mouth can be set under Parameters in the uLipSyncBlendShape component.

  • Volume Min/Max (Log10)
    • Set the minimum and maximum recognized volume (mouth closed / fully open). The values are log10, so 0.1 is -1 and 0.01 is -2.
  • Smoothness
    • The response speed of the mouth.

As for the volume, the Runtime Information section of the uLipSync component shows the current, maximum, and minimum volume, so use it as a reference when setting these values.
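
For reference, the mapping implied by these settings looks roughly like this (a sketch of the idea, an assumption rather than uLipSync's exact code; Mathf is from UnityEngine):

float NormalizedVolume(float rawVolume, float minLog10, float maxLog10)
{
    float log = Mathf.Log10(Mathf.Max(rawVolume, 1e-8f)); // e.g. 0.1 -> -1, 0.01 -> -2
    return Mathf.Clamp01((log - minLog10) / (maxLog10 - minLog10)); // 0 = closed, 1 = fully open
}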

AudioSource Position

In some cases, you may want to attach the AudioSource to the mouth position and uLipSync to another GameObject. In this case, add a component called uLipSyncAudioSource to the same GameObject as the AudioSource, and set it in uLipSync's Parameters > Audio Source Proxy. Samples / 03. AudioSource Proxy is a sample scene for this setup.

Microphone

If you want to use a microphone as an input, add uLipSyncMicrophone to the same GameObject as uLipSync. This component will generate an AudioSource with the microphone input as a clip. The sample scene is Samples / 02-1. Mic Input.

Select the input device from Device; if Is Auto Start is checked, input starts automatically. To start or stop microphone input at runtime, press the Start Mic / Stop Mic button shown in the UI below.

If you want to control it from a script, use uLipSync.MicUtil.GetDeviceList() to identify the microphone to use and assign its MicDevice.index to this component's index. Then call StartRecord() to start or StopRecord() to stop.
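
For example (a sketch assuming GetDeviceList() returns a list of MicDevice entries):

using UnityEngine;
using uLipSync;

public class MicSelector : MonoBehaviour
{
    public uLipSyncMicrophone mic;

    void Start()
    {
        var devices = MicUtil.GetDeviceList();
        if (devices.Count == 0) return;

        mic.index = devices[0].index; // MicDevice.index identifies the device
        mic.StartRecord();
    }

    void OnDisable()
    {
        mic.StopRecord();
    }
}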

Note that microphone input is played back in Unity slightly later than your actual speech. If you want to use a voice captured by other software (e.g. for broadcasting), set Parameters > Output Sound Gain to 0 in the uLipSync component instead of muting the AudioSource: if the AudioSource volume is set to 0, the data passed to OnAudioFilterRead() is silent and cannot be analyzed.

In the uLipSync component, go to Profile > Profile, select a profile from the list (Male for male voices, Female for female voices, etc.), and run the scene. However, since these profiles are not personalized, the accuracy of a default profile may not be good. The next section shows how to create calibration data that matches your own voice.

Calibration

So far we have used the sample Profile data; in this section, let's see how to create data adjusted to other voices (a voice actor's data, or your own).

Create Profile

Clicking the Profile > Profile > Create button in the uLipSync component creates the data in the root of the Assets directory and assigns it to the component. You can also create one from the Project window via right-click > Create > uLipSync > Profile.

Next, register the phonemes you want to recognize under Profile > MFCC > MFCCs. Basically AIUEO is fine, but it is recommended to also add a phoneme for breath ("-" or another appropriate character) to prevent breathing from triggering the mouth. You can use any alphabet, hiragana, katakana, etc., as long as the characters match those registered in uLipSyncBlendShape.

Next, we will calibrate each of the phonemes we have created.

Calibration using Mic Input

The first method uses a microphone; uLipSyncMicrophone should be added to the object. Calibration is done at runtime, so start the game to analyze the input. Press and hold the Calib button to the right of each phoneme while speaking its sound into the microphone, such as "aaaaa" for A and "iiiii" for I. For the noise phoneme, stay silent or blow into the microphone.

If you set up uLipSyncBlendShape beforehand, it is interesting to watch the mouth shapes gradually start to match.

If your way of speaking varies, for example between your natural voice and falsetto, you can register multiple entries of the same phoneme name in the Profile and calibrate each of them accordingly.

Calibration using AudioClip

Next is calibration using audio data. If you have a recording that says "aaaaa" or "iiiii", play it in a loop and press the Calib button in the same way. In most cases, however, no such audio exists, so we instead trim the "a"-like or "i"-like part of existing audio and loop it. A useful component for this is uLipSyncCalibrationAudioPlayer, which loops the selected part of the waveform with a slight cross-fade.

Select the part that seems to say "aaaaa" by dragging the boundaries, then press the Calib button for each phoneme to register its MFCC to the Profile.

Calibration Tips

When calibrating, you should pay attention to the following points.

  • Perform microphone calibration in an environment with as little noise as possible.
  • Make sure the registered MFCCs are as constant as possible.
  • After calibration, check the result several times and re-calibrate phonemes that don't work, or register additional ones.
    • You can register multiple phonemes with the same name, so if a phoneme stops matching when your voice tone changes, try registering more entries for it.
    • If a phoneme doesn't match, check whether you registered the wrong phoneme.
    • If an entry with the same name has a completely different color pattern in the MFCC display, it may be wrong (the same phoneme should have a similar pattern).
  • Collapse the Runtime Information foldout when checking after calibration.
    • The editor is redrawn every frame, so the frame rate may drop below 60.

Pre-Bake

So far we have looked at runtime processing. Now let's look at producing the data by pre-computation.

Mechanism

If you have the audio data in advance, the analysis results for each frame can be calculated ahead of time, so we bake them into a ScriptableObject called BakedData. At runtime, instead of analyzing with uLipSync, a component named uLipSyncBakedDataPlayer plays back this data. It notifies the analysis results through an event just like uLipSync, so you can register uLipSyncBlendShape to drive the lip-sync. This flow is illustrated in the following figure.

Setup

The sample scene is Samples / 05. Bake. You can create a BakedData from the Project window by going to Create > uLipSync > BakedData.

Here, specify the calibrated Profile and an AudioClip, then click the Bake button to analyze the clip and generate the data.

If it works well, the data will look like the following.

Set this data to the uLipSyncBakedDataPlayer.

Now you are ready to play. To check it in the editor, press the Play button; to play it from another script, just call Play() as shown below.
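
For example, a minimal sketch of triggering playback from a script:

using UnityEngine;
using uLipSync;

public class PlayBakedDataOnStart : MonoBehaviour
{
    public uLipSyncBakedDataPlayer player;

    void Start()
    {
        player.Play(); // plays the assigned BakedData and fires the lip-sync events
    }
}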

Parameters

By adjusting the Time Offset slider, you can shift the timing of the lip-sync. With runtime analysis it is not possible to open the mouth before the voice, but with pre-baked data the mouth can open slightly earlier, which looks more natural.

Batch conversion (1)

In some cases you may want to convert all of a character's voice AudioClips to BakedData at once. For this, use Window > uLipSync > Baked Data Generator.

Select the Profile to use for the batch conversion, then select the target AudioClips. If Input Type is List, register the AudioClips directly (dragging and dropping a multiple selection from the Project window is easy). If Input Type is Directory, a file dialog opens where you can specify a directory, and the AudioClips under that directory are listed automatically.

Click the Generate button to start the conversion.

Batch conversion (2)

When you have already created data, you may want to redo the calibration and change the profile. For this case there is a Reconvert button in the Baked Data tab of each Profile, which reconverts all the data using that Profile.

Timeline

You can add dedicated tracks and clips for uLipSync in the Timeline. You then need to bind which objects are driven by the data from the Timeline. For this, a component named uLipSyncTimelineEvent receives the playback information and notifies uLipSyncBlendShape. The flow is illustrated below.

Setup

Right-click in the track area of the Timeline and add a dedicated track via uLipSync.Timeline > uLipSync Track. Then right-click in the clip area and add a clip via Add From Baked Data. You can also drag and drop BakedData directly onto this area.

When you select a clip, you will see the following UI in the Inspector, where you can replace the BakedData.

Next, add a uLipSyncTimelineEvent to some GameObject and create the binding so that the lip-sync can play: register the uLipSyncBlendShape in On Lip Sync Update (LipSyncInfo).

Then select the GameObject that has the PlayableDirector, and drag and drop the GameObject with uLipSyncTimelineEvent onto the binding slot of the uLipSync Track in the Timeline window.

Now the lip-sync information is sent to uLipSyncTimelineEvent, and the connection to uLipSyncBlendShape is established. Playback also works while editing, so you can fine-tune it against the animation and sound.

Timeline Setup Helper

Window > uLipSync > Timeline Setup Helper

This tool automatically creates the BakedData corresponding to the clips registered in an Audio Track and registers them in a uLipSync Track.

Animation Bake

You can also convert BakedData, the pre-calculated lip-sync data, into an AnimationClip. Saving it as an animation makes it easy to combine with other animations, integrate into an existing workflow, and adjust later by editing the keys. The sample scene is Samples / 07. Animation Bake.

Setup

Select Window > uLipSync > Animation Clip Generator to open the uLipSync Animation Clip Generator window.

To run the animation bake, open the scene in which a uLipSyncBlendShape component is set up, then assign the components in the scene to the fields of this window.

  • Animator
    • Select an Animator component in the scene.
    • An AnimationClip will be created in a hierarchical structure starting from this Animator.
  • Blend Shape
    • Select a uLipSyncBlendShape component that exists in the scene.
  • Baked Data List
    • Select the BakedData assets that you want to convert into AnimationClips.
  • Sample Frame Rate
    • Specify the sampling rate (fps) at which you want to add the keys.
  • Threshold
    • Keys are added only when the weight changes by at least this value.
    • The maximum weight value is 100, so 10 means a change of 10%.
  • Output Directory
    • Specify the directory to output the baked animation clip.
    • If this field is empty, the clips are created under Assets (root).

The following image is an example setup.

Varying Threshold across 0, 10, and 20 gives the following results.

Texture

uLipSyncTexture allows you to change textures and UVs according to the recognized phonemes. Samples / 08. Texture is a sample scene.

  • Renderer
    • Specify the Renderer of the material you want to update.
  • Parameters
    • Min Volume
      • The minimum volume value (log10) to update.
    • Min Duration
      • This is the minimum time to keep the mouth in the same texture / uv.
  • Textures
    • Here you can register the textures you want to assign.
    • Phoneme
      • Enter the phoneme registered in the Profile (e.g. "A", "I").
      • An empty string ("") will be treated as if there is no audio input.
    • Texture
      • Specify the texture to be changed.
      • If not specified, the initial texture set in the material will be used.
    • UV Scale
      • UV Scale. For tiled textures, specify this value.
    • UV Offset
      • UV offset. For tiled textures, specify this value.

Animator

uLipSyncAnimator can be used to lip-sync through an AnimatorController. Create a layer with an Avatar Mask applied only to the mouth, as shown below, and set up a Blend Tree so that each mouth shape is driven by a parameter.

Then set the phonemes and their corresponding AnimatorController parameters in uLipSyncAnimator as follows.

The sample scene is Samples / 09. Animator.

VRM Support

VRM is a platform-independent file format designed for use with 3D characters and avatars. In VRM 0.X, blendshapes are controlled through VRMBlendShapeProxy, while in VRM 1.0 they are abstracted into Expressions and controlled via VRM10ObjectExpression.

VRM 0.X

While uLipSyncBlendShape controls the blendshapes of a SkinnedMeshRenderer directly, a modified component named uLipSyncBlendShapeVRM controls VRMBlendShapeProxy instead.

VRM 1.0

By using uLipSyncExpressionVRM, you can control VRM10ObjectExpression.

Sample

For more details, please refer to Samples / VRM. In this sample, uLipSyncExpressionVRM is used for the setup of VRM 1.0.

Scripting Define Symbols

When installing the VRM package from a .unitypackage, you need to add Scripting Define Symbols manually: USE_VRM0X for VRM 0.X, and USE_VRM10 for VRM 1.0. If you add the package via the Package Manager, these symbols are added automatically.

Runtime Setup

If you generate a model dynamically, you need to set up and connect uLipSync and uLipSyncBlendShape yourself. A sample for doing this is included as 10. Runtime Setup. Dynamically attach these components to the target object and set them up as follows:

using System.Collections.Generic;
using UnityEngine;

// Example setup component (the class name here is illustrative).
public class RuntimeSetupExample : MonoBehaviour
{
    [System.Serializable]
    public class PhonemeBlendShapeInfo
    {
        public string phoneme;
        public string blendShape;
    }

    public GameObject target;
    public uLipSync.Profile profile;
    public string skinnedMeshRendererName = "MTH_DEF";
    public List<PhonemeBlendShapeInfo> phonemeBlendShapeTable = new List<PhonemeBlendShapeInfo>();

    uLipSync.uLipSync _lipsync;
    uLipSync.uLipSyncBlendShape _blendShape;

    void Start()
    {
        // Setting up uLipSyncBlendShape: find the renderer and register the table
        var targetTform = uLipSync.Util.FindChildRecursively(target.transform, skinnedMeshRendererName);
        var smr = targetTform.GetComponent<SkinnedMeshRenderer>();

        _blendShape = target.AddComponent<uLipSync.uLipSyncBlendShape>();
        _blendShape.skinnedMeshRenderer = smr;

        foreach (var info in phonemeBlendShapeTable)
        {
            _blendShape.AddBlendShape(info.phoneme, info.blendShape);
        }

        // Setting up uLipSync and connecting it with uLipSyncBlendShape
        _lipsync = target.AddComponent<uLipSync.uLipSync>();
        _lipsync.profile = profile;
        _lipsync.onLipSyncUpdate.AddListener(_blendShape.OnLipSyncUpdate);
    }
}

Then attach this component to some GameObject, fill in the necessary information in advance, and make it a Prefab or similar. The sample includes a setup for a regular SkinnedMeshRenderer and one for VRM 1.0.

UI

When you want to create, load, or save a Profile at runtime, or add phonemes and calibrate them, you will need a UI. A simple example is included as 11. UI; you can modify it to build your own custom UI.

Tips

Custom Event

uLipSyncBlendShape is for 3D models and uLipSyncTexture is for 2D textures, but if you want to do something different, you can write your own component. Prepare a component with a method that receives uLipSync.LipSyncInfo, and register it to the OnLipSyncUpdate(LipSyncInfo) event of uLipSync or uLipSyncBakedDataPlayer.

For example, here is a simple script that outputs the recognition result with Debug.Log():

using UnityEngine;
using uLipSync;

public class DebugPrintLipSyncInfo : MonoBehaviour
{
    public void OnLipSyncUpdate(LipSyncInfo info)
    {
        if (!isActiveAndEnabled) return;

        if (info.volume < Mathf.Epsilon) return;

        Debug.Log($"PHONEME: {info.phoneme}, VOL: {info.volume}");
    }
}

LipSyncInfo is a struct with the following members:

public struct LipSyncInfo
{
    public string phoneme; // Main phoneme
    public float volume; // Normalized volume (0 ~ 1)
    public float rawVolume; // Raw volume
    public Dictionary<string, float> phonemeRatios; // Table of each phoneme and its ratio
}
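
For example, a component registered to the event can use phonemeRatios to inspect every recognized phoneme rather than just the main one (a minimal sketch):

using UnityEngine;
using uLipSync;

public class PrintPhonemeRatios : MonoBehaviour
{
    public void OnLipSyncUpdate(LipSyncInfo info)
    {
        foreach (var kv in info.phonemeRatios)
        {
            // kv.Key is the phoneme name, kv.Value its ratio (0 ~ 1)
            if (kv.Value > 0.5f) Debug.Log($"{kv.Key}: {kv.Value:F2}");
        }
    }
}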

Import / Export JSON

There is a function to save and load a profile to/from JSON. In the editor, specify the JSON file in the Import / Export JSON tab and click the Import or Export button.

If you want to do it from code, it looks like this:

var lipSync = GetComponent<uLipSync>();
var profile = lipSync.profile;

// Export
profile.Export(path);

// Import
profile.Import(path);

Calibration at Runtime

If you want to perform calibration at runtime, request it from uLipSync with uLipSync.RequestCalibration(int index), as shown below. The MFCC calculated from the currently playing sound is then set to the specified phoneme.

var lipSync = GetComponent<uLipSync>();

for (int i = 0; i < lipSync.profile.mfccs.Count; ++i)
{
    var key = (KeyCode)((int)(KeyCode.Alpha1) + i);
    if (Input.GetKey(key)) lipSync.RequestCalibration(i);
}

Please refer to CalibrationByKeyboardInput.cs to see how this works in practice. Also, in a built app it is better to save and restore the profile as JSON, because changes to a ScriptableObject cannot be saved after building; see the sketch below.
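
Putting the two together, a sketch of persisting runtime calibration (the file name is illustrative):

using UnityEngine;

public class ProfilePersistence : MonoBehaviour
{
    uLipSync.uLipSync _lipSync;
    string _path;

    void Awake()
    {
        _lipSync = GetComponent<uLipSync.uLipSync>();
        _path = System.IO.Path.Combine(Application.persistentDataPath, "profile.json");
        if (System.IO.File.Exists(_path)) _lipSync.profile.Import(_path);
    }

    void OnApplicationQuit()
    {
        _lipSync.profile.Export(_path); // keep calibration across app restarts
    }
}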

Update Method

Update Method can be used to adjust the timing at which uLipSyncBlendShape updates the blendshapes. The options are as follows.

  • LateUpdate
    • Blendshapes are updated in LateUpdate (default).
  • Update
    • Blendshapes are updated in Update.
  • FixedUpdate
    • Blendshapes are updated in FixedUpdate.
  • LipSyncUpdateEvent
    • Blendshapes are updated immediately after the lip-sync update event is received.
  • External
    • Blendshapes are updated from an external script (call ApplyBlendShapes() yourself; see the sketch below).
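
With External selected, you call ApplyBlendShapes() yourself, for example (a sketch):

using UnityEngine;

public class ExternalLipSyncDriver : MonoBehaviour
{
    public uLipSync.uLipSyncBlendShape blendShape;

    void LateUpdate()
    {
        // ...your own ordering / timing logic here...
        blendShape.ApplyBlendShapes();
    }
}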

WebGL

When WebGL is set as the target platform, a small addition appears in the UI.

In WebGL, due to the autoplay policy, audio will not play until the user interacts with the page content (such as a click). However, since Unity still plays the sound internally, audio that was supposed to start playing immediately becomes desynchronized. Enabling Auto Audio Sync On Web GL corrects this discrepancy when the user interaction occurs.

Audio Sync Offset Time can be used to adjust the timing of lip-sync delays. Internally, since OnAudioFilterRead() is not available in WebGL, AudioClip.GetData() is used instead; this approach makes it possible to slightly shift the position of the buffer being read.
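
The idea looks roughly like this (a sketch of the approach, not uLipSync's exact code):

using UnityEngine;

public class WebGLSampleReader : MonoBehaviour
{
    public AudioSource source;
    public float offsetTime; // corresponds to Audio Sync Offset Time

    void Update()
    {
        var clip = source.clip;
        if (clip == null || !source.isPlaying) return;

        var buf = new float[1024 * clip.channels];
        int offset = source.timeSamples + (int)(offsetTime * clip.frequency);
        offset = Mathf.Clamp(offset, 0, clip.samples - 1024);
        clip.GetData(buf, offset); // read samples near the current playback position
        // ...analyze 'buf' instead of relying on OnAudioFilterRead()...
    }
}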

Currently, Unity does not support the Job System and Burst on WebGL, so performance is not optimal. For cases where real-time analysis is not critical, pre-baking the data is recommended.

Mac Build

When building on a Mac, you may encounter the following error.

Building Library/Bee/artifacts/MacStandalonePlayerBuildProgram/Features/uLipSync.Runtime-FeaturesChecked.txt failed with output: Failed because this command failed to write the following output files: Library/Bee/artifacts/MacStandalonePlayerBuildProgram/Features/uLipSync.Runtime-FeaturesChecked.txt

This may be related to the microphone access code; it can be fixed by entering something in Project Settings > Player > Other Settings > Mac Configuration > Microphone Usage Description.

Transition from v2 to v3

From v3.0.0, the MFCC values have been corrected to be more accurate. As a result, when transitioning from v2 to v3 you will need to recalibrate and create a new Profile.

3rd-Party License

Unity-chan

The examples include Unity-chan assets.

© Unity Technologies Japan/UCL


Issues

Followed setup instructions, got DebugAudioPlayerEditor.cs error.

Thank you for developing this fantastic library!

I faced the error below on import.

Library\PackageCache\com.hecomi.ulipsync@2136a02\Editor\Debug\DebugAudioPlayerEditor.cs(42,28): error CS0117: 'Path' does not contain a definition for 'GetRelativePath'

Environment

Unity : 2020.3.34f1

I tried two package patterns and reproduced the error with both.

    "com.vrmc.gltf": "https://github.com/vrm-c/UniVRM.git?path=/Assets/UniGLTF#v0.97.0",
    "com.vrmc.univrm": "https://github.com/vrm-c/UniVRM.git?path=/Assets/VRM#v0.97.0",
    "com.vrmc.vrm": "https://github.com/vrm-c/UniVRM.git?path=/Assets/VRM10#v0.97.0",
    "com.vrmc.vrmshaders": "https://github.com/vrm-c/UniVRM.git?path=/Assets/VRMShaders#v0.97.0",
    "com.hecomi.ulipsync": "https://github.com/hecomi/uLipSync.git#upm"

    "com.vrmc.gltf": "https://github.com/vrm-c/UniVRM.git?path=/Assets/UniGLTF#v0.108.0",
    "com.vrmc.univrm": "https://github.com/vrm-c/UniVRM.git?path=/Assets/VRM#v0.108.0",
    "com.vrmc.vrm": "https://github.com/vrm-c/UniVRM.git?path=/Assets/VRM10#v0.108.0",
    "com.vrmc.vrmshaders": "https://github.com/vrm-c/UniVRM.git?path=/Assets/VRMShaders#v0.108.0",
    "com.hecomi.ulipsync": "https://github.com/hecomi/uLipSync.git#upm"

Changes to AudioListener.volume affect mouth shape

Currently, when reducing AudioListener.volume the lips move less, and lip movement is exaggerated when increasing AudioListener.volume (for lip-sync blendshapes). It appears the data in OnAudioFilterRead is affected by the volume received by the AudioListener.

Would it be possible for the lip-sync volume multiplier to be based solely on the AudioSource volume, separate from the AudioListener? I am currently trying to implement a global volume slider, but lip sync is breaking due to this issue.

WebGL build error

Unity version 2020.3.25f1

Here are the log files:

Failed running command "C:/Program Files/Unity/Hub/Editor/2020.3.25f1/Editor/Data/PlaybackEngines/WebGLSupport\BuildTools\Emscripten_Win\python\2.7.5.3_64bit\python.exe" -E "C:/Program Files/Unity/Hub/Editor/2020.3.25f1/Editor/Data/PlaybackEngines/WebGLSupport\BuildTools\Emscripten\emcc" @"C:\Users\User\Documents\TemasekRoleplayAuthoringPlatform\Assets..\Temp\emcc_arguments.resp" (process exit code: 1)
UnityEditor.BuildPlayerWindow:BuildPlayerAndRun ()

Failed process stderr log:
warning: unexpected number of arguments 1 in call to '__cxa_pure_virtual', should be 0
warning: unexpected number of arguments 2 in call to '_ZN6il2cpp6icalls8mscorlib6System6String22RedirectToCreateStringEv', should be 0
warning: unexpected number of arguments 4 in call to '_ZN6il2cpp6icalls8mscorlib6System6String22RedirectToCreateStringEv', should be 0
warning: unexpected number of arguments 5 in call to '_ZN6il2cpp6icalls8mscorlib6System6String22RedirectToCreateStringEv', should be 0
warning: unexpected number of arguments 4 in call to '_ZN6il2cpp6icalls8mscorlib6System6String22RedirectToCreateStringEv', should be 0
warning: unexpected number of arguments 2 in call to '_ZN6il2cpp6icalls8mscorlib6System6String22RedirectToCreateStringEv', should be 0
warning: unexpected number of arguments 3 in call to '_ZN6il2cpp6icalls8mscorlib6System6String22RedirectToCreateStringEv', should be 0

JS optimizer error:
Unexpected token: keyword (const) (line: 2565, col: 19, pos: 122049)

var uLipSync={unityCsharpCallback:null,resumeEventNames:["keydown","mousedown","touchstart"],userEventCallback:function () {
Module.dynCall_v(uLipSync.unityCsharpCallback);
for (const ev of uLipSync.resumeEventNames) {
^

C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:282
throw new JS_Parse_Error(message, line, col, pos);
^
Error
at new JS_Parse_Error (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:260:22)
at js_error (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:282:15)
at croak (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:752:17)
at token_error (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:760:17)
at unexpected (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:766:17)
at C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:1131:17
at maybe_unary (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:1220:27)
at expr_ops (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:1247:32)
at maybe_conditional (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:1251:28)
at maybe_assign (C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:1275:28)
at C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\eliminator\node_modules\uglify-js\lib\parse-js.js:1289:28
ERROR:root:'C:/Program Files/Unity/Hub/Editor/2020.3.25f1/Editor/Data\Tools\nodejs\node.exe --stack_size=8192 --max-old-space-size=4096 C:\Program Files\Unity\Hub\Editor\2020.3.25f1\Editor\Data\PlaybackEngines\WebGLSupport\BuildTools\Emscripten\tools\js-optimizer.js C:\Users\User\AppData\Local\Temp\tmpmfsbam\build.bc.o.js.pp.js.mem.js noPrintMetadata AJSDCE' failed
UnityEditor.BuildPlayerWindow:BuildPlayerAndRun ()

/Users/bokken/buildslave/unity/build/PlatformDependent/WebGL/Extensions/Unity.WebGL.extensions/BuildPostprocessor.cs:423)
UnityEditor.WebGL.WebGlBuildPostprocessor.LinkBuild (UnityEditor.Modules.BuildPostProcessArgs args) (at /Users/bokken/buildslave/unity/build/PlatformDependent/WebGL/Extensions/Unity.WebGL.extensions/BuildPostprocessor.cs:464)
UnityEditor.WebGL.WebGlBuildPostprocessor.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args) (at /Users/bokken/buildslave/unity/build/PlatformDependent/WebGL/Extensions/Unity.WebGL.extensions/BuildPostprocessor.cs:914)
UnityEditor.Modules.DefaultBuildPostprocessor.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args, UnityEditor.BuildProperties& outProperties) (at <7ac35247888b44f4a7e290f1f6bb33f3>:0)
UnityEditor.PostprocessBuildPlayer.Postprocess (UnityEditor.BuildTargetGroup targetGroup, UnityEditor.BuildTarget target, System.String installPath, System.String companyName, System.String productName, System.Int32 width, System.Int32 height, UnityEditor.BuildOptions options, UnityEditor.RuntimeClassRegistry usedClassRegistry, UnityEditor.Build.Reporting.BuildReport report) (at <7ac35247888b44f4a7e290f1f6bb33f3>:0)
UnityEditor.BuildPlayerWindow:BuildPlayerAndRun()

Possible fix to include normalized BlendShape weights

Hi @hecomi!

Thank you for this useful and well-made tool. When I first tested it for my application, the result on my avatar was, let's say, "overshot"! Looking at your uLipSyncBlendShape code, specifically the OnApplyBlendShapes() method, I noticed that you compute the final weight value as follows:

weight += bs.weight * bs.maxWeight * volume * 100;

Since maxWeight, weight, and volume should be normalized values between 0 and 1, the output weight you are providing is in the form of a percentage (from 0 to 100). At least for the avatar I'm currently using, taken from the ReadyPlayerMe tool, the blendshape weights of the SkinnedMeshRenderer seem to be meant as normalized values between 0 and 1. That is why my first test of the tool resulted in a deformed and "broken" mesh.

To fix this, I simply added another serialized bool to your script, called useNormalizedBlendShapeWeights, and added a simple line of code in OnApplyBlendShapes(), like this:

weight += bs.weight * bs.maxWeight * volume * 100;
if (useNormalizedBlendShapeWeights) weight /= 100f;  // Keep weight normalized between 0 and 1

Please let me know if this fix actually makes sense to you! I hope that this could help improve compatibility with more types of avatars.

Thanks again!

WebGL support

Any plan to add WebGL platform support? Thank you :)

drawing MFCC's Error: InvalidOperationException: The UNKNOWN_OBJECT_TYPE CreateMfccTextureJob has been deallocated. All containers must be valid when scheduling a job

Unity 2020.3.22f1. After setting up, adding uLipSync to an object with an AudioSource, and adding a Profile, it endlessly spams the following errors. It appears to be an issue with drawing the MFCCs, as it only occurs when viewing that component in the editor.

InvalidOperationException: The UNKNOWN_OBJECT_TYPE CreateMfccTextureJob.texColors has been deallocated. All containers must be valid when scheduling a job.
Unity.Jobs.LowLevel.Unsafe.JobsUtility.Schedule (Unity.Jobs.LowLevel.Unsafe.JobsUtility+JobScheduleParameters& parameters) (at <07c89f7520694139991332d3cf930d48>:0)
Unity.Jobs.IJobExtensions.Schedule[T] (T jobData, Unity.Jobs.JobHandle dependsOn) (at <07c89f7520694139991332d3cf930d48>:0)
uLipSync.TextureCreator.CreateMfccTexture (UnityEngine.Texture2D tex, uLipSync.MfccData mfcc, System.Single min, System.Single max) (at Library/PackageCache/com.hecomi.ulipsync@383958db11/Runtime/Core/TextureCreator.cs:199)
uLipSync.ProfileEditor.DrawMFCC (UnityEngine.Rect position, System.Int32 index, System.Boolean showCalibration) (at Library/PackageCache/com.hecomi.ulipsync@383958db11/Editor/ProfileEditor.cs:183)

&

A Native Collection has not been disposed, resulting in a memory leak. Enable Full StackTraces to get more details.

Searching for a fix, I only found a resource about enabling full stack traces.

As a temporary fix, commenting out lines 181-200 in TextureCreator.cs creates a grey texture, and the scene runs fine without any further issues.

Can you add a .cff file so I can cite it in a paper?

Hello.
I made an avatar lip-sync using your project (uLipSync).
I need a citation entry because the work must be referenced in a paper (currently being written).
I found out that we can proceed as follows (see this page):
https://docs.github.com/en/repositories/managing-your-repositorys-settings-and-features/customizing-your-repository/about-citation-files
I hope you have a good day.

I closed the question because I found another way.
Have a nice day.

Feature Request: Sparse BakedData

Currently BakedData stores the weight of every phoneme for every baked frame. However, most phonemes in a frame have weight 0, and for large numbers of phonemes this can lead to a lot of extra data (with BakedData rivaling uncompressed WAV audio in file size). BakedData could instead store only the phonemes with non-zero weight each frame, preserving the full list/order of phonemes as a separate top-level field, significantly reducing file size.

Using uLipSync in Editor mode

I was wondering if there could be support for using and previewing lip-sync in Edit mode as well.

I tried the following setup, unable to make the mouth move:
-create timeline
-add audio track
-setup uLipSync on a scene game object
-drag the audio source from the object to the timeline Audio track slot
-drag an audio clip onto the timeline
-hit play

The audio plays, but uLipSync doesn't trigger.

Am I doing something wrong, or is Edit mode currently not supported? If the latter, this feature would be really useful!

iOS build fails on 2021.2.0a16

I was trying to build a project with Unity 2021.2.0a16 on macOS for iOS, but the build fails. It works on Android.

This is the message I get; there's a reference to uLipSync in the error:

Exception: Unity.IL2CPP.Building.BuilderFailedException: Build failed with 0 successful nodes and 1 failed ones
Annotation: IL2CPP_CodeGen /Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/il2cppOutput/Data
Cmdline: "/Applications/Unity/Hub/Editor/2021.2.0a16/Unity.app/Contents/il2cpp/build/deploy/net5.0/il2cpp" --convert-to-cpp --profiler-report --profiler-output-file="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/buildstate/artifacts/il2cpp_conv_553f.traceevents" --directory="/Users/xxx/Koodaus/xxx/Temp/StagingArea/Data/Managed" --data-folder="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/il2cppOutput/Data" --generatedcppdir="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/il2cppOutput" --symbols-folder="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/il2cppOutput/Symbols" --additional-cpp="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/additionalCppFiles/UnityClassRegistration.cpp" --additional-cpp="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/additionalCppFiles/UnityICallRegistration.cpp" --emit-null-checks --enable-array-bounds-check --code-generation-option=EnableInlining --stats-output-dir="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS/il2cppOutput" --dotnetprofile=unityaot --cachedirectory="/Users/xxx/Koodaus/xxx/Library/Il2cppBuildCache/iOS"
ExitCode: 1
Stdout: 
Error: IL2CPP error for type 'uLipSync.Algorithm/uLipSync.FFT$PostfixBurstDelegate' in assembly '/Users/xxx/Koodaus/xxx/Temp/StagingArea/Data/Managed/uLipSync.dll'
Unity.IL2CPP.HashCodeCollisionException: Hash code collision on value `92CD236A7F26BEF40FEBEADB2CC1C70A3EDAC857`
Existing Item was : `uLipSync.Algorithm/uLipSync.FFT$PostfixBurstDelegate`
Colliding Item was : `uLipSync.Algorithm/uLipSync.FFT$PostfixBurstDelegate`

   at Unity.IL2CPP.HashCodeCache`1.GetUniqueHash(T value) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/HashCodeCache.cs:line 46
   at Unity.IL2CPP.Naming.NamingComponent.ForTypeNameInternal(TypeReference typeReference) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Naming/NamingComponent.cs:line 361
   at Unity.IL2CPP.Naming.NamingComponent.ForTypeNameOnly(TypeReference type) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Naming/NamingComponent.cs:line 91
   at Unity.IL2CPP.Attributes.AttributeNamingExtensions.ForCustomAttributesCacheGenerator(INamingService naming, TypeDefinition typeDefinition) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Attributes/AttributeNamingExtensions.cs:line 12
   at Unity.IL2CPP.Attributes.AttributeSupportCollector.Add(TypeDefinition typeDefinition) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Attributes/AttributeSupportCollector.cs:line 59
   at Unity.IL2CPP.Attributes.AttributeSupportCollector.Collect(TypeDefinition type) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Attributes/AttributeSupportCollector.cs:line 38
   at Unity.IL2CPP.Attributes.AttributeSupportCollector.Collect(AssemblyDefinition assembly) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Attributes/AttributeSupportCollector.cs:line 33
   at Unity.IL2CPP.Attributes.AttributeSupportCollector.Collect(MinimalContext context, AssemblyDefinition assembly) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Attributes/AttributeSupportCollector.cs:line 25
   at Unity.IL2CPP.AssemblyConversion.PrimaryCollection.Steps.PerAssembly.AttributeSupportCollection.ProcessItem(GlobalPrimaryCollectionContext context, AssemblyDefinition item) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/AssemblyConversion/PrimaryCollection/Steps.PerAssembly/AttributeSupportCollection.cs:line 16
   at Unity.IL2CPP.AssemblyConversion.Steps.Base.ScheduledItemsStepFunc`5.WorkerWrapper(WorkItemData`3 workerData) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/AssemblyConversion/Steps.Base/ScheduledItemsStepFunc.cs:line 43
   at Unity.IL2CPP.Contexts.Scheduling.PhaseWorkScheduler`1.ContinueWithResultsWorkItem`4.InvokeWorker(Object context, Int32 uniqueId) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Contexts.Scheduling/PhaseWorkScheduler.cs:line 526
   at Unity.IL2CPP.Contexts.Scheduling.PhaseWorkScheduler`1.BaseContinueWorkItem`2.Invoke(Object context, Int32 uniqueId) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Contexts.Scheduling/PhaseWorkScheduler.cs:line 438
   at Unity.IL2CPP.Contexts.Scheduling.PhaseWorkScheduler`1.WorkerLoop(Object data) in /Users/bokken/build/output/unity/il2cpp/Unity.IL2CPP/Contexts.Scheduling/PhaseWorkScheduler.cs:line 283

   at il2cpp.Program.DoRun(String[] args, RuntimePlatform platform, Il2CppCommandLineArguments il2CppCommandLineArguments, BuildingOptions buildingOptions, Boolean throwExceptions) in /Users/bokken/build/output/unity/il2cpp/il2cpp/Program.cs:line 212
UnityEditorInternal.Runner.RunProgram (UnityEditor.Utils.Program p, System.String exe, System.String args, System.String workingDirectory, UnityEditor.Scripting.Compilers.CompilerOutputParserBase parser) (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/BuildUtils.cs:129)
UnityEditorInternal.Runner.RunNetCoreProgram (System.String exe, System.String args, System.String workingDirectory, UnityEditor.Scripting.Compilers.CompilerOutputParserBase parser, System.Action`1[T] setupStartInfo) (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/BuildUtils.cs:91)
UnityEditorInternal.IL2CPPBuilder.RunIl2CppWithArguments (System.Collections.Generic.List`1[T] arguments, System.Action`1[T] setupStartInfo) (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/Il2Cpp/IL2CPPUtils.cs:792)
UnityEditorInternal.IL2CPPBuilder.ConvertPlayerDlltoCpp (UnityEditor.Il2Cpp.Il2CppBuildPipelineData data) (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/Il2Cpp/IL2CPPUtils.cs:776)
UnityEditorInternal.IL2CPPBuilder.Run () (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/Il2Cpp/IL2CPPUtils.cs:621)
UnityEditorInternal.IL2CPPUtils.RunIl2Cpp (System.String tempFolder, System.String stagingAreaData, UnityEditorInternal.IIl2CppPlatformProvider platformProvider, System.Action`1[T] modifyOutputBeforeCompile, UnityEditor.RuntimeClassRegistry runtimeClassRegistry) (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/Il2Cpp/IL2CPPUtils.cs:272)
UnityEditor.iOS.PostProcessiPhonePlayer.CrossCompileManagedDlls (UnityEditor.iOS.PostProcessiPhonePlayer+BuildSettings bs, UnityEditor.iOS.PostProcessiPhonePlayer+ProjectPaths paths, UnityEditor.AssemblyReferenceChecker checker, UnityEditor.RuntimeClassRegistry usedClassRegistry, UnityEditor.Build.Reporting.BuildReport buildReport) (at /Users/bokken/buildslave/unity/build/PlatformDependent/iPhonePlayer/Extensions/Common/BuildPostProcessor.cs:930)
UnityEditor.iOS.PostProcessiPhonePlayer.PostProcess (UnityEditor.iOS.PostProcessiPhonePlayer+BuildSettings bs, UnityEditor.iOS.PostProcessiPhonePlayer+ProjectPaths paths, UnityEditor.RuntimeClassRegistry usedClassRegistry, UnityEditor.Build.Reporting.BuildReport buildReport) (at /Users/bokken/buildslave/unity/build/PlatformDependent/iPhonePlayer/Extensions/Common/BuildPostProcessor.cs:748)
UnityEditor.iOS.PostProcessiPhonePlayer.PostProcess (UnityEditor.iOS.PostProcessorSettings postProcessorSettings, UnityEditor.Modules.BuildPostProcessArgs args) (at /Users/bokken/buildslave/unity/build/PlatformDependent/iPhonePlayer/Extensions/Common/BuildPostProcessor.cs:693)
UnityEditor.iOS.iOSBuildPostprocessor.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args) (at /Users/bokken/buildslave/unity/build/PlatformDependent/iPhonePlayer/Extensions/Common/ExtensionModule.cs:45)
Rethrow as BuildFailedException: Exception of type 'UnityEditor.Build.BuildFailedException' was thrown.
UnityEditor.iOS.iOSBuildPostprocessor.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args) (at /Users/bokken/buildslave/unity/build/PlatformDependent/iPhonePlayer/Extensions/Common/ExtensionModule.cs:49)
UnityEditor.Modules.DefaultBuildPostprocessor.PostProcess (UnityEditor.Modules.BuildPostProcessArgs args, UnityEditor.BuildProperties& outProperties) (at /Users/bokken/buildslave/unity/build/Editor/Mono/Modules/DefaultBuildPostprocessor.cs:28)
UnityEditor.PostprocessBuildPlayer.Postprocess (UnityEditor.BuildTargetGroup targetGroup, UnityEditor.BuildTarget target, System.String installPath, System.String companyName, System.String productName, System.Int32 width, System.Int32 height, UnityEditor.BuildOptions options, UnityEditor.RuntimeClassRegistry usedClassRegistry, UnityEditor.Build.Reporting.BuildReport report) (at /Users/bokken/buildslave/unity/build/Editor/Mono/BuildPipeline/PostprocessBuildPlayer.cs:350)
UnityEngine.GUIUtility:ProcessEvent(Int32, IntPtr, Boolean&) (at /Users/bokken/buildslave/unity/build/Modules/IMGUI/GUIUtility.cs:189)

Run Python chat bot with the demo

Hi, I'm very new to Unity and just came across this repo.

I have built an audio chatbot in Python that takes input from a mic and responds with audio. I want to add a speaking face to the bot.

Is there a way I can run my Python script from Unity and integrate this demo with my chatbot?

Thank you

Problem with Textures

Good morning, I'd like to discuss an issue I encountered in my project. I'm working on a game where I want a character to deliver instructions using lip-sync in 2D at the beginning. However, I've run into a problem:

ArgumentOutOfRangeException: Index was out of range. Must be non-negative and less than the size of the collection.
Parameter name: index
System.Collections.Generic.List`1[T].RemoveAt (System.Int32 index) (at :0)
uLipSync.uLipSyncTexture.UpdateVowels () (at Assets/uLipSync/Runtime/uLipSyncTexture.cs:126)
uLipSync.uLipSyncTexture.UpdateLipSync () (at Assets/uLipSync/Runtime/uLipSyncTexture.cs:96)
uLipSync.uLipSyncTexture.Update () (at Assets/uLipSync/Runtime/uLipSyncTexture.cs:68)

It seems to be related to textures, although I've ensured that all textures are correctly implemented. I've also checked how the mouth and 'body' variables are defined. While it all seems straightforward, this error persists. Does anyone have insight into resolving it?

uLipSync 3.0.2 not working with WebGL in Unity 2022.3

Hi! I'm trying to use the plugin in a WebGL build, but the lips are not moving.

Test procedure:

  1. Start a brand new project, install the 3.0.2 unity package.
  2. Comment out anything microphone related, so the build can succeed.
  3. Do a WebGL build of the scene called 01-1. Play Audio Clip

Result:
Lips are not moving. No errors are shown in the browser console, apart from a warning:

An AudioContext was prevented from starting automatically. It must be created or resumed after a user gesture on the page

Tested on the latest Firefox and Chrome browsers (Windows 10).

Notes:

  • Lips move correctly in the editor and in Windows standalone builds.
  • I've read the internet high and low, and it looks like the Job System should be supported on WebGL; it's just that the job count is set to 0 so everything runs on the main thread.
  • I tried setting the max job count to 0 in the Unity editor to see if that makes any difference, but the editor works fine with that setting as well. (I did it by setting JobsUtility.JobWorkerCount = 0; in uLipSync.cs's Awake method.)

Has anyone solved this issue?

Output to Animator (parameters)

An option to read the parameters from the Animator Controller and send the values directly to the Animator instead of to the blendshapes.

Possible Unreal Plugin?

This Unity Plugin looks incredible! Any thoughts or plans on bringing it over to Unreal Engine as well? :)

Why can't I add blendshapes?

When I click "add new blendshape" and change the default phoneme name, the blendshape dropdown shows nothing. Can you give me some suggestions? Is it that I don't have a blendshape? How do I create a blendshape that the plugin recognizes?

Suggestion: Add "[ExecuteAlways]" to uLipSync so it can be used with Timeline in Edit mode as well

I tried adding [ExecuteAlways] to the uLipSync class to enable Edit mode usage, and it seems to work (at least the callback component; I haven't tried the blending).

I haven't checked the other parts of uLipSync, but some incompatibility with the Timeline seems to exist (I made a separate issue, #3), so I'm guessing it would require additional testing. But I could definitely see the use for syncing in Edit mode when building cutscenes etc. for Timeline.

Runtime Microphone Detection breaks down

Infrequently I come across a bug where the microphone detection stops working, even after reloading the scene or deleting the objects related to uLipSync and re-creating them. The only fix I found is to exit play mode and re-enter it.

This leads me to believe it's a problem with some static variables deeper in uLipSync's microphone detection. Any information on where I should look to fix such a problem is appreciated!

I don't get any errors when this occurs.

Microphone input not working in iOS

Hello. This repo is great. But when I try to use the microphone for lip-sync in an iOS app, it does not work.
I also tried the provided samples and built them directly for iOS, with no luck either.

How to set Blendshape automatically

Hello, I'm trying to load or instantiate a VRM character at runtime and lip-sync it to audio at runtime.
If I use a character already in the scene, it works without any issue, as I have already assigned the blendshapes to the corresponding phonemes.
But lip-sync does not work when I instantiate or load a new character.
I found this is because the new character's facial blendshapes are not assigned to the phonemes in uLipSyncBlendShape.
How can I assign the blendshapes to phonemes automatically?
Since I use VRoid characters, they all have the same blendshapes, such as Fcl_MTH_A, Fcl_MTH_I, Fcl_MTH_U.

Mouth lip-sync did not work with uLipSync

What I tried (Unity version 3.16f1):

  • Created a new 3D project and installed UniVRM 0.108 (which I normally use for displaying VRM models).
  • Installed Burst and Mathematics from the Package Manager.
  • Installed uLipSync and configured it following the developer's video, but only the mouth doesn't move; I just hear the audio. (The uLipSync data is generated.)
    • I also tried installing uLipSync from the GitHub Zip, the scoped registry, and the Git URL.
  • I found this method → https://www.youtube.com/live/4b5fn84RM6E?si=jQhZ5zQusAoUDbcr and set up uLipSync accordingly, but again the mouth doesn't respond.
    • Unlike the developer's approach, I attached the audio, the generated data, objects, and so on to a Timeline, but only the lip-sync failed to work.
    • The generated uLipSync lip-sync data correctly shows its waveform and colors, but the mouth did not move when the data was placed on the Timeline. (I switched to the Male/Female profiles; the waveform and colors still seem to be generated.)
  • I also configured the blendshapes following the developer and the streamer in the URL above, but only the mouth doesn't work (the audio is audible).
  • I imported the samples at the bottom of the uLipSync package and ran the VRM 0.X sample, but only the mouth doesn't move.
  • I also tried uLipSync's microphone: I can hear the mic audio, but the mouth doesn't move.
  • I tried short voice clips (VOICEVOX audio), but the mouth didn't move.
  • I tried a VRoid model, and the same thing happened with a different model.

I tried all of the above on version 3.16 but could not get it to work. I haven't tried other versions; is there any solution?

Details about MFCC computation required

Hi,

I am a signal processing engineer; could you provide details on the following:

  1. Sampling rate (I read that you downsample to 16 kHz).
  2. For real-time use, what is the length of the audio chunk used to extract MFCCs, and what is the overlap size?

Regards,
Khubaib

VRM SetUp failing

Hi, I just installed UniVRM and uLipSync, but I'm getting errors about the missing namespaces UniVRM10 and Vrm10Instance.

I don't know how to get it working; I tried different ways of importing the packages, but I get the same errors.

Followed setup instructions, got namespace missing errors.

Trying to set this up for a job. I followed the installation instructions, and it gave me missing namespace errors, all related to "VRM". I then tried the first setup instruction, which says that if you used UPM, the Unity Package Manager (which I did), you should also import Samples / 00. Common. So I imported 00. Common under Samples in the uLipSync package, but that only added more errors about assembly names already existing. I also don't see any option to remove 00. Common, only Reimport.

Here are the errors: https://postimg.cc/0KJmKpd2

Exporting texture using FaceBuilder for Blender

When a texture created in FaceBuilder is exported to Unity with shape keys, the character's SkinnedMeshRenderer does not have blendshapes corresponding to the phonemes expected by the uLipSyncBlendShape component; the blendshapes are missing. What should I do in this case?

About Unity version support

Hello.

I installed uLipSync by adding the Git URL (UPM) https://github.com/hecomi/uLipSync.git#upm to the Package Manager. However, I got an error message.

The problem is at line 160 of uLipSync:

if (!_ratios.TryAdd(phoneme, ratio))

I found out that Dictionary's TryAdd method is part of .NET Standard 2.1:
https://docs.microsoft.com/ja-jp/dotnet/api/system.collections.generic.dictionary-2.tryadd?view=net-6.0

.NET Standard 2.1 support in Unity seems to be available in Unity 2021.2 or later. To begin with, is the supported Unity version for uLipSync 2021.2 or later? Or, if I run it on 2020.3.27f (LTS), is there another way to solve the error?

The first parameter set in uLipSyncAnimator does not work.

The first parameter set in uLipSyncAnimator does not seem to work.

Investigating the cause, the uLipSyncAnimator.AnimatorInfo.index set by uLipSyncAnimatorEditor is -1, so it is rejected by the if statement in uLipSyncAnimator.OnApplyAnimator.

As a fix, correcting the ±1 adjustment in the index-related handling starting at line 157 of uLipSyncAnimatorEditor.DrawParameterListItem would resolve it, but doing so might cause problems for existing users when they update.

Could you suggest a solution, or provide a fix?


Future Improvements

  • Improve the downsampling algorithm (avoid aliasing)
  • Review the number of mel filter banks (24 -> 80?)
  • Move the texture creation function in uLipSyncMfccTextureCreater to Core and improve editor performance

Compatibility with Spine runtime?

This project is very good! The ability to detect vowels from live audio is amazing!
I'm currently trying to build a project with the Spine runtime, and there seems to be no support for it.
I may adapt the script myself, but I wonder whether this project will support Spine.

uLipSyncAnimator

When selecting the asset on disk with uLipSyncAnimator, this error appears in the console and the interface breaks.

Animator is not playing an AnimatorController
UnityEngine.StackTraceUtility:ExtractStackTrace ()
uLipSync.uLipSyncAnimatorEditor:DrawParameterListItem (UnityEngine.Rect,int) (at Assets/talespin-core/Plugins/uLipSync/Editor/uLipSyncAnimatorEditor.cs:147)
uLipSync.uLipSyncAnimatorEditor:<DrawAnimatorReorderableList>b__6_1 (UnityEngine.Rect,int,bool,bool) (at Assets/talespin-core/Plugins/uLipSync/Editor/uLipSyncAnimatorEditor.cs:118)
UnityEditorInternal.ReorderableList:DoListElements (UnityEngine.Rect,UnityEngine.Rect) (at /Users/bokken/buildslave/unity/build/Editor/Mono/GUI/ReorderableList.cs:946)
UnityEditorInternal.ReorderableList:DoLayoutList () (at /Users/bokken/buildslave/unity/build/Editor/Mono/GUI/ReorderableList.cs:723)
uLipSync.uLipSyncAnimatorEditor:DrawAnimatorReorderableList () (at Assets/talespin-core/Plugins/uLipSync/Editor/uLipSyncAnimatorEditor.cs:130)
uLipSync.uLipSyncAnimatorEditor:OnInspectorGUI () (at Assets/talespin-core/Plugins/uLipSync/Editor/uLipSyncAnimatorEditor.cs:49)
UnityEngine.GUIUtility:ProcessEvent (int,intptr,bool&)


To prevent this error, I would suggest checking whether the Animator is enabled. Or is there another way to fix this while still being able to see and edit the parameters?

if (EditorUtil.Foldout("Animator Controller Parameters", true))
{
    ++EditorGUI.indentLevel;
    if (anim.animator != null && anim.animator.isActiveAndEnabled)
    {
        DrawAnimatorReorderableList();
    }
    else
    {
        EditorGUILayout.HelpBox("Animator is not available! To edit parameters open the prefab or have game object in scene.", MessageType.Warning);
    }
    --EditorGUI.indentLevel;
    EditorGUILayout.Separator();
}

how to stop lip-sync

Hello, I'm currently using uLipSync with uLipSyncWebGL.
I put the lip-sync components in an object called Characters and place a VRM avatar inside it. It works fine.
However, when I change the scene, I get the log SendMessage: object Characters not found! in the JavaScript console every frame.
I guess this is because there is no object called Characters in the other scenes,
but I'm not sure why I'm getting this log. I thought the scene would be destroyed when moving to another scene using SceneManager.LoadScene.
I tried to disable the scripts using <uLipSync.uLipSyncBlendShape>.enabled = false and <uLipSync.uLipSync>.enabled = false before changing the scene, but then I got an empty log every frame.
It seems the lip-sync keeps working even after the scene change. How can I stop it?
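
One possible way to shut everything down before loading a new scene is sketched below (the Characters hierarchy, the method name, and the choice of components to disable are assumptions based on the setup described above, not an official API):

using UnityEngine;
using UnityEngine.SceneManagement;

public class StopLipSyncOnSceneChange : MonoBehaviour
{
    // Stops audio playback and disables every uLipSync component under this
    // object, then loads the next scene.
    public void LoadSceneWithoutLipSync(string sceneName)
    {
        foreach (var source in GetComponentsInChildren<AudioSource>())
        {
            source.Stop();
        }
        foreach (var ls in GetComponentsInChildren<uLipSync.uLipSync>())
        {
            ls.enabled = false;
        }
        foreach (var bs in GetComponentsInChildren<uLipSync.uLipSyncBlendShape>())
        {
            bs.enabled = false;
        }
        SceneManager.LoadScene(sceneName);
    }
}

Attached to the Characters root, this would be called in place of a direct SceneManager.LoadScene call.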

Can uLipSync be used for 3D characters?

I would like to use uLipSync with certain 3D models such as Ready Player Me avatars and some other custom 3D models, but these models don't seem to be compatible with the plugin. Is there any tutorial on this?

On the other hand, when background music is playing, the character's mouth visemes are significantly interfered with. Is there any way to suppress background noise/music when running in microphone mode?

Thanks in advance for your sincere help.

Calculate phoneme MFCCs in Editor mode

May I ask a question? I have a set of WAV files with accompanying phoneme files (which contain each phoneme's start and end time within the WAV file). How can I use this set of WAV files to calibrate the MFCCs in a Profile in Editor mode? I'd appreciate a reply, thanks a lot!

Memory leak in asset import worker

When I enable leak detection with full stack traces, I get the following error. I expect the array in question needs to be disposed somewhere that also gets called in asset import workers:

[Worker0] A Native Collection has not been disposed, resulting in a memory leak. Allocated from:

at new Unity.Collections.NativeArray(int length, Allocator allocator, NativeArrayOptions options)
at void uLipSync.MfccData.Allocate() in D:/repos/babysteps/babysteps/Library/PackageCache/com.hecomi.ulipsync@5b08f31/Runtime/Core/Profile.cs:39
at void uLipSync.Profile.OnEnable() in D:/repos/babysteps/babysteps/Library/PackageCache/com.hecomi.ulipsync@5b08f31/Runtime/Core/Profile.cs:110
at Object UnityEditor.AssetDatabase.LoadMainAssetAtPath(string)
at void UnityEditor.Search.AssetIndexer.IndexProperties(in int documentIndex, in string path, in bool hasCustomIndexers)
at void UnityEditor.Search.AssetIndexer.IndexDocument(string path, bool checkIfDocumentExists)
at void UnityEditor.Search.SearchIndexEntryImporter.OnImportAsset(AssetImportContext ctx)
at void UnityEditor.AssetImporters.ScriptedImporter.GenerateAssetData(AssetImportContext ctx)
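
For context, a common pattern for pairing a persistent NativeArray with explicit disposal in a ScriptableObject is sketched below (class and field names are illustrative assumptions, not uLipSync's actual Profile code):

using Unity.Collections;
using UnityEngine;

public class MfccDataSketch : ScriptableObject
{
    NativeArray<float> _mfcc;

    void OnEnable()
    {
        // Allocate with Allocator.Persistent because the data outlives a frame.
        if (!_mfcc.IsCreated)
        {
            _mfcc = new NativeArray<float>(12, Allocator.Persistent);
        }
    }

    void OnDisable()
    {
        // Dispose here so asset import workers, which load and then unload
        // the asset, release the native allocation instead of leaking it.
        if (_mfcc.IsCreated)
        {
            _mfcc.Dispose();
        }
    }
}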

NonReorderable and NonReorderableAttribute Namespaces not found

Hello

This may just be a simple fix, but I can't get uLipSync to work due to these errors:
"The type or namespace name 'NonReorderable/NonReorderableAttribute' could not be found."

I have Burst and Mathematics installed, so I'm not sure what I am missing.

I am using Unity 2019.4.31

Thanks!

Plugin not Functioning on iOS 16.4

I have been attempting to port a game to iOS. So far this plugin has been working exceptionally well on macOS, Windows, and Android with no major issues. On iOS, however, I have noticed that the character shows no lip movement.

I am using the runtime analysis to drive the lip movements, and have made sure that the required packages such as the Burst compiler are set up appropriately (they target iOS 16.4, and I updated my phone to that version). I'm seeing no clear error logs that hint at what may be happening, and hope you can offer some insight into why this plugin struggles on the iOS platform.

If you need additional screenshots or logs I will happily provide. I am a massive fan of this plugin and hope I can continue to use it for my project.

Lip-sync won't play in Play mode when the AudioSource is connected to Timeline

Using the following workflow, I can't get uLipSync to play in Play mode:

  • Create a Timeline.
  • Set up the uLipSync object.
  • Create an Audio track on the Timeline.
  • Drag the AudioSource onto the Timeline's Audio track.
  • Drag an audio clip onto the track.

At this point, the lip-sync plays correctly when I play the Timeline (I added [ExecuteAlways] to uLipSync.cs to be able to use it in Edit mode).

Now hit Play to run the Timeline in Play mode: the mouth doesn't move at all. The callback on uLipSync triggers, and OnAudioFilterRead seems to trigger as well (I put a debug log there), but LipSyncInfo.phoneme is empty or null in the callback.

I am only using the lip-sync callback for analysis, not the mouth-blending component; I handle that in my own script.

What could cause this incompatibility with Timeline?

Feature: a param that sends a float when there is no audio

Oculus has a sil parameter that outputs 1 when there is no audio. You can create a silence parameter in uLipSync, but its value also drops to 0 when there is no audio at all, so it would be nice to have a fallback for this.

Is this also something we want for the other systems (blendshape, textures, etc.), or only the animator? Should it be processed in the core, or can I build something in the animator?
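
A minimal sketch of one way to approximate a sil value today from the update callback (the volume field, threshold, and parameter name are assumptions for illustration, not a built-in feature):

using UnityEngine;

public class SilParameterSketch : MonoBehaviour
{
    [SerializeField] Animator animator;
    [SerializeField] float silenceThreshold = 0.01f; // illustrative value

    // Hook this up to the On Lip Sync Updated (LipSyncInfo) event.
    public void OnLipSyncUpdate(uLipSync.LipSyncInfo info)
    {
        // Emit 1 when the analyzed volume is effectively zero, mirroring
        // Oculus's sil parameter; otherwise emit 0.
        float sil = info.volume < silenceThreshold ? 1f : 0f;
        animator.SetFloat("sil", sil);
    }
}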

Mapping shape keys error

After creating shape keys corresponding to A, I, U, E, O in Blender and importing the model into Unity, those shape keys do not appear as options in the phoneme section of uLipSyncBlendShape, so I am having trouble assigning a blendshape to each phoneme.
I created the texture using FaceBuilder for Blender before exporting to Unity.
