uLipSync

MFCC-based LipSync plug-in for Unity using Job System and Burst Compiler

uLipSync is an asset for lip-syncing in Unity. It has the following features:

  • Utilizes Job System and Burst Compiler to run faster on any OS without using native plugins.
  • Can be calibrated to create a per-character profile.
  • Both run-time analysis and pre-bake processing are available.
  • Pre-bake processing can be integrated with Timeline.
  • Pre-bake data can be converted to AnimationClip.

Features

  • LipSync
  • Profile
  • Real-time Analysis
  • Mic Input
  • Pre-Bake
  • Timeline
  • AnimationClip
  • Texture Change
  • VRM Support

Install

  • Unity Package
    • Download the latest .unitypackage from the Releases page.
    • Import Unity.Burst and Unity.Mathematics from Package Manager.
  • Git URL (UPM)
    • Add https://github.com/hecomi/uLipSync.git#upm to Package Manager.
  • Scoped Registry (UPM)
    • Add a scoped registry to your project.
      • URL: https://registry.npmjs.com
      • Scope: com.hecomi
    • Install uLipSync in Package Manager (see the manifest sketch below).
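
For reference, the resulting entry in Packages/manifest.json looks roughly like the following (a minimal sketch; the registry name is arbitrary, and uLipSync itself is then installed from the Package Manager UI):

{
  "scopedRegistries": [
    {
      "name": "npmjs",
      "url": "https://registry.npmjs.com",
      "scopes": ["com.hecomi"]
    }
  ]
}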

How to Use

Mechanism

When a sound is played by an AudioSource, the audio buffer is passed to the OnAudioFilterRead() method of any component attached to the same GameObject. We can modify this buffer to apply sound effects like reverb, but since we also know exactly what waveform is being played, we can analyze it to calculate Mel-Frequency Cepstral Coefficients (MFCC), which represent the characteristics of the human vocal tract. In other words, if the calculation is done well, you get parameters that indicate "ah" while an "a" waveform is playing and "eh" while an "e" waveform is playing (consonants such as "s" can be analyzed in addition to vowels). By comparing these parameters with pre-registered parameters for each of the "a", "i", "u", "e", "o" phonemes, we can calculate the similarity between the current sound and each phoneme, and use that information to drive the blend shapes of a SkinnedMeshRenderer for accurate lip-syncing. If you feed microphone input into the AudioSource, you can also lip-sync to your own voice.
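
As a point of reference, this is what the hook looks like from a script's perspective; a minimal sketch (not uLipSync's actual implementation) of a component that taps the buffer:

using UnityEngine;

// Attach to the same GameObject as the AudioSource.
public class AudioBufferTap : MonoBehaviour
{
    // Unity calls this on the audio thread with the interleaved samples
    // about to be played; uLipSync analyzes a buffer like this to get MFCCs.
    void OnAudioFilterRead(float[] data, int channels)
    {
        // data.Length / channels frames, with `channels` samples per frame.
        // Writing to `data` changes the played sound (e.g. for effects);
        // reading it lets you analyze the waveform without altering it.
    }
}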

The component that performs this analysis is uLipSync, the data containing the phoneme parameters is Profile, and the component that moves the blendshapes is uLipSyncBlendShape. There is also a uLipSyncMicrophone component that plays audio from the microphone. Here's an illustration of how they fit together.

Setup

Let's set up using Unity-chan. The sample scene is Samples / 01. Play AudioClip / 01-1. Play Audio Clip. If you installed this from UPM, please import the Samples / 00. Common sample (which contains the Unity-chan assets).

After placing Unity-chan, add an AudioSource component to any GameObject where the sound is to be played, and set an AudioClip on it to play Unity-chan's voice.

Next, add a uLipSync component to the same GameObject. For now, select uLipSync-Profile-UnityChan from the list and assign it to the Profile slot of the component (if you assign a different profile, such as Male, the lip sync will not work properly).

Next, set up the blendshapes that receive the analysis results and move. Add uLipSyncBlendShape to Unity-chan's root object and set the target SkinnedMeshRenderer to MTH_DEF. Then go to Blend Shapes > Phoneme - BlendShape Table and add 7 items, A, I, U, E, O, N, and -, by pressing the + button ("-" is for noise). Then select the blendshape corresponding to each phoneme, as shown in the following image.

Finally, to connect the two: in the uLipSync component, go to Parameters > On Lip Sync Updated (LipSyncInfo), press + to add an event, and drag and drop the GameObject (or component) with the uLipSyncBlendShape onto the slot that says None (Object). Then open the pull-down, find uLipSyncBlendShape, and select OnLipSyncUpdate.

Now when you run the game, Unity-chan will move her mouth as she speaks.

Adjust lipsync

The recognized volume range and the response speed of the mouth can be set in the Parameters section of the uLipSyncBlendShape component.

  • Volume Min/Max (Log10)
    • Set the minimum volume (mouth closed) and maximum volume (mouth fully open) to be recognized. The values are Log10, so 0.1 is -1 and 0.01 is -2 (see the conversion sketch below).
  • Smoothness
    • The response speed of the mouth.

As for the volume, the Runtime Information section of the uLipSync component shows the current, maximum, and minimum volume, so use it as a reference when setting these values.
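
Since the inspector expects Log10 values, converting a linear amplitude to the value to enter is a one-liner with Mathf.Log10; a minimal sketch (the helper name is illustrative):

using UnityEngine;

public static class VolumeLog10Util
{
    // Converts a linear amplitude into the Log10 value the inspector expects:
    // 0.1f -> -1, 0.01f -> -2, 0.05f -> about -1.3.
    public static float ToLog10(float linearAmplitude) => Mathf.Log10(linearAmplitude);
}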

AudioSource Position

In some cases, you may want to attach the AudioSource to the mouth position and uLipSync to another GameObject. In this case, add a component called uLipSyncAudioSource to the same GameObject as the AudioSource and set it in uLipSync's Parameters > Audio Source Proxy. Samples / 03. AudioSource Proxy is a sample scene.

Microphone

If you want to use a microphone as an input, add uLipSyncMicrophone to the same GameObject as uLipSync. This component will generate an AudioSource with the microphone input as a clip. The sample scene is Samples / 02-1. Mic Input.

Select the device to be used for input from Device; if Is Auto Start is checked, recording starts automatically. To start and stop microphone input at runtime, press the Stop Mic / Start Mic button shown in the UI below.

If you want to control it from a script, use uLipSync.MicUtil.GetDeviceList() to identify the microphone to be used, and pass its MicDevice.index to the index of this component. Then call StartRecord() to start it or StopRecord() to stop it.
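
A minimal sketch of that flow (it assumes the MicDevice entries also expose a name field for matching; only index, StartRecord(), and StopRecord() are described above):

using UnityEngine;
using uLipSync;

public class MicController : MonoBehaviour
{
    public string preferredDevice = "Built-in Microphone"; // hypothetical device name

    void Start()
    {
        var mic = GetComponent<uLipSyncMicrophone>();

        foreach (var device in MicUtil.GetDeviceList())
        {
            if (device.name != preferredDevice) continue;
            mic.index = device.index; // pass MicDevice.index to the component
            break;
        }

        mic.StartRecord(); // use StopRecord() to stop
    }
}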

Note that the microphone input is played back in Unity slightly later than your actual speech. If you want to use the voice captured by separate broadcasting software, set Parameters > Output Sound Gain to 0 in the uLipSync component instead of muting the AudioSource; if the AudioSource volume is set to 0, the data passed to OnAudioFilterRead() is silent and cannot be analyzed.

In the uLipSync component, go to Profile > Profile, select a profile from the list (Male for male voices, Female for female voices, etc.), and run the scene. However, since these profiles are not personalized, the accuracy may not be good. In the next section, we will see how to create calibration data that matches your own voice.

Calibration

So far we have used the sample Profile data; in this section, let's see how to create data adjusted for other voices (a voice actor's data or your own voice).

Create Profile

Clicking the Profile > Profile > Create button in the uLipSync component creates the data in the root of the Assets directory and assigns it to the component. You can also create it from the Project window via right-click > Create > uLipSync > Profile.

Next, register the phonemes you want recognized in Profile > MFCC > MFCCs. Basically AIUEO is fine, but it is recommended to also add a phoneme for breath ("-" or another appropriate character) so that breathing is not picked up as a voiced phoneme. You can use any alphabet, hiragana, katakana, etc., as long as the registered characters match those in uLipSyncBlendShape.

Next, we will calibrate each of the phonemes we have created.

Calibration using Mic Input

The first way is to use a microphone; add uLipSyncMicrophone to the object beforehand. Calibration is done at runtime, so start the game to analyze the input. Press and hold the Calib button to the right of each phoneme while speaking that phoneme's sound into the microphone, e.g. "aaaaa" for A, "iiiii" for I, and so on. For the noise phoneme, stay silent or blow into the microphone.

If you set uLipSyncBlendShape beforehand, it is interesting to see how the mouths gradually match.

If your way of speaking varies, for example between your natural voice and falsetto, you can register multiple phonemes with the same name in the Profile and calibrate each of them accordingly.

Calibration using AudioClip

Next is the calibration method using audio data. If you have a recording that says "aaaaa" or "iiiii", play it in a loop and press the Calib button in the same way. In most cases, however, no such audio exists, so instead we calibrate by trimming the "a"-like or "i"-like part of existing audio and looping it. A useful component for this is uLipSyncCalibrationAudioPlayer, which loops the selected section of the waveform while slightly cross-fading it.

Select the part that seems to say "aaaaa" by dragging the section boundaries, then press the Calib button for each phoneme to register its MFCC in the Profile.

Calibration Tips

When calibrating, pay attention to the following points.

  • Perform microphone calibration in an environment with as little noise as possible.
  • Make sure the registered MFCCs are as constant as possible.
  • After calibration, check the result several times, and re-calibrate phonemes that don't work or register additional ones.
    • Multiple phonemes of the same name can be registered, so if recognition fails when your voice tone changes, try registering more of them.
    • If phonemes don't match, check whether you registered the wrong phoneme.
    • If a phoneme with the same name has a completely different color pattern in its MFCC, it may be wrong (the same phoneme should have a similar pattern).
  • Collapse the Runtime Information section when checking results after calibration.
    • The editor is redrawn every frame while it is open, so the frame rate may drop below 60.

Pre-Bake

So far, we have looked at runtime processing. Now let's look at producing lip-sync data through pre-calculation.

Mechanism

If the audio data is available in advance, the analysis results for each frame can be calculated beforehand, so we bake them into a ScriptableObject called BakedData. At runtime, instead of analyzing the audio with uLipSync, we use a component named uLipSyncBakedDataPlayer to play back this data. The component notifies the analysis results through an event just like uLipSync, so you can register uLipSyncBlendShape to realize lipsync. This flow is illustrated in the following figure.

Setup

The sample scene is Samples / 05. Bake. You can create a BakedData from the Project window by going to Create > uLipSync > BakedData.

Here, specify the calibrated Profile and an AudioClip, then click the Bake button to analyze the clip and create the data.

If it works well, the data will look like the following.

Set this data to the uLipSyncBakedDataPlayer.

Now you are ready to play. If you want to check it in the editor, press the Play button; if you want to play it from another script, just call Play(), as in the sketch below.
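
A minimal sketch of triggering playback from another script:

using UnityEngine;
using uLipSync;

public class PlayBakedLipSync : MonoBehaviour
{
    void Start()
    {
        // Plays the BakedData assigned to the player on the same GameObject.
        GetComponent<uLipSyncBakedDataPlayer>().Play();
    }
}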

Parameters

By adjusting the Time Offset slider, you can shift the timing of the lipsync. With runtime analysis it is not possible to open the mouth before the voice is heard, but with pre-calculated data the mouth can open slightly earlier, which looks more natural.

Batch conversion (1)

In some cases, you may want to convert all the character voice AudioClips to BakedData at once. In this case, please use Window > uLipSync > Baked Data Generator.

Select the Profile you want to use for batch conversion, and then select the target AudioClips. If the Input Type is List, register the AudioClips directly (dragging and dropping a multiple selection from the Project window is easy). If the Input Type is Directory, a file dialog opens where you can specify a directory, and the AudioClips under that directory are listed automatically.

Click the Generate button to start the conversion.

Batch conversion (2)

When data has already been created, you may want to redo the calibration and update the profile. For this case, each Profile has a Reconvert button in its Baked Data tab, which reconverts all the BakedData that uses that Profile.

Timeline

You can add dedicated tracks and clips for uLipSync in the Timeline. We then need to bind which objects are driven by the data from the Timeline. For this, a component named uLipSyncTimelineEvent receives the playback information and notifies uLipSyncBlendShape. The flow is illustrated below.

Setup

Right-click in the track area in the Timeline and add a dedicated track from uLipSync.Timeline > uLipSync Track. Then right-click in the clip area and add a clip from Add From Baked Data. You can also drag and drop BakedData directly onto this area.

When you select a clip, you will see the following UI in the Inspector, where you can replace the BakedData.

Next, add a uLipSyncTimelineEvent to some GameObject and set up the binding so that the lipsync can be played; register the uLipSyncBlendShape in its On Lip Sync Update (LipSyncInfo) event.

Then select the GameObject with the PlayableDirector, and drag and drop the GameObject holding uLipSyncTimelineEvent into the binding slot of the uLipSync Track in the Timeline window.

Now the lipsync information is sent to uLipSyncTimelineEvent, and the connection to uLipSyncBlendShape is established. Playback also works while editing, so you can tune it against the animation and sound.

Timeline Setup Helper

Window > uLipSync > Timeline Setup Helper

This tool automatically creates BakedData corresponding to the clips registered in an Audio Track and registers them in a uLipSync Track.

Animation Bake

You can also convert BakedData, the pre-calculated lip-sync data, into an AnimationClip. Saving it as an animation makes it easy to combine with other animations, integrate into an existing workflow, and adjust later by moving keys. The sample scene is Samples / 07. Animation Bake.

Setup

Select Window > uLipSync > Animation Clip Generator to open the uLipSync Animation Clip Generator window.

To run the animation bake, open a scene where a uLipSyncBlendShape component has been set up, then assign the components in the scene to the fields of this window.

  • Animator
    • Select an Animator component in the scene.
    • An AnimationClip will be created in a hierarchical structure starting from this Animator.
  • Blend Shape
    • Select a uLipSyncBlendShape component that exists in the scene.
  • Baked Data List
    • Select the BakedData assets that you want to convert into AnimationClips.
  • Sample Frame Rate
    • Specify the sampling rate (fps) at which you want to add the keys.
  • Threshold
    • A key is added only when the weight changes by more than this value.
    • The maximum weight is 100, so 10 means a 10% change in weight.
  • Output Directory
    • Specify the directory to output the baked animation clips to.
    • If this field is empty, the clips are created directly under Assets (root).

The following image is an example setup.

Varying Threshold across 0, 10, and 20 gives the following results.

Texture

uLipSyncTexture allows you to change textures and UVs according to the recognized phonemes. Samples / 08. Texture is a sample scene.

  • Renderer
    • Specify the Renderer of the material you want to update.
  • Parameters
    • Min Volume
      • The minimum volume value (log10) to update.
    • Min Duration
      • This is the minimum time to keep the mouth in the same texture / uv.
  • Textures
    • Here you can select the textures you want to assign.
    • Phoneme
      • Enter the phoneme registered in the Profile (e.g. "A", "I").
      • An empty string ("") will be treated as if there is no audio input.
    • Texture
      • Specify the texture to be changed.
      • If not specified, the initial texture set in the material will be used.
    • UV Scale
      • UV Scale. For tiled textures, specify this value.
    • UV Offset
      • UV offset. For tiled textures, specify this value.

Animator

uLipSyncAnimator can be used to lip-sync via an AnimatorController. Create a layer with an Avatar Mask applied only to the mouth as shown below, and set up a Blend Tree so that each mouth shape is driven by a parameter.

Then set the phonemes and the corresponding AnimatorController parameters in uLipSyncAnimator as follows.

The sample scene is Samples / 09. Animator.

VRM Support

VRM is a platform-independent file format designed for use with 3D characters and avatars. In VRM 0.X, blendshapes are controlled through VRMBlendShapeProxy, while in version 1.0, blendshapes are abstracted into Expressions and controlled via VRM10ObjectExpression.

VRM 0.X

While uLipSyncBlendShape controls the blendshapes of a SkinnedMeshRenderer directly, a modified component named uLipSyncBlendShapeVRM controls VRMBlendShapeProxy instead.

VRM 1.0

By using uLipSyncExpressionVRM, you can control VRM10ObjectExpression.

Sample

For more details, please refer to Samples / VRM. In this sample, uLipSyncExpressionVRM is used for the setup of VRM 1.0.

Runtime Setup

If you generate a model dynamically, you need to set up and connect uLipSync and uLipSyncBlendShape by yourself. A sample for doing this is included as 10. Runtime Setup. You will dynamically attach these components to the target object and set them up as follows:

using System.Collections.Generic;
using UnityEngine;

// Example MonoBehaviour (the class name is arbitrary) that attaches and
// wires up the components at runtime.
public class RuntimeSetupExample : MonoBehaviour
{
    [System.Serializable]
    public class PhonemeBlendShapeInfo
    {
        public string phoneme;
        public string blendShape;
    }

    public GameObject target;
    public uLipSync.Profile profile;
    public string skinnedMeshRendererName = "MTH_DEF";
    public List<PhonemeBlendShapeInfo> phonemeBlendShapeTable = new List<PhonemeBlendShapeInfo>();

    uLipSync.uLipSync _lipsync;
    uLipSync.uLipSyncBlendShape _blendShape;

    void Start()
    {
        // Set up uLipSyncBlendShape.
        var targetTform = uLipSync.Util.FindChildRecursively(target.transform, skinnedMeshRendererName);
        var smr = targetTform.GetComponent<SkinnedMeshRenderer>();

        _blendShape = target.AddComponent<uLipSync.uLipSyncBlendShape>();
        _blendShape.skinnedMeshRenderer = smr;

        foreach (var info in phonemeBlendShapeTable)
        {
            _blendShape.AddBlendShape(info.phoneme, info.blendShape);
        }

        // Set up uLipSync and connect it with uLipSyncBlendShape.
        _lipsync = target.AddComponent<uLipSync.uLipSync>();
        _lipsync.profile = profile;
        _lipsync.onLipSyncUpdate.AddListener(_blendShape.OnLipSyncUpdate);
    }
}

Then attach this component to a GameObject, fill in the necessary information, and turn it into a Prefab or similar. The sample includes a setup for a regular SkinnedMeshRenderer and one for VRM 1.0.

UI

When you want to create, load, or save a Profile at runtime, or add phonemes and calibrate them, you will need a UI. A simple example of this is included as 11. UI; by modifying it, you can create your own custom UI.

Tips

Custom Event

uLipSyncBlendShape is for 3D models and uLipSyncTexture is for 2D textures, but if you want to do something different, you can write your own component. Prepare a component with a method that receives a uLipSync.LipSyncInfo and register it to the OnLipSyncUpdate(LipSyncInfo) event of uLipSync or uLipSyncBakedDataPlayer.

For example, here is a simple script that outputs the recognition result using Debug.Log().

using UnityEngine;
using uLipSync;

public class DebugPrintLipSyncInfo : MonoBehaviour
{
    public void OnLipSyncUpdate(LipSyncInfo info)
    {
        if (!isActiveAndEnabled) return;

        if (info.volume < Mathf.Epsilon) return;

        Debug.Log($"PHONEME: {info.phoneme}, VOL: {info.volume}");
    }
}

LipSyncInfo is a structure that has members like the following.

public struct LipSyncInfo
{
    public string phoneme; // Main phoneme
    public float volume; // Normalized volume (0 ~ 1)
    public float rawVolume; // Raw volume
    public Dictionary<string, float> phonemeRatios; // Table that contains the pair of the phoneme and its ratio
}

Import / Export JSON

There is a function to save and load the profile to/from JSON. From the editor, specify the JSON you want to save or load from the Import / Export JSON tab, and click the Import or Export button.

If you want to do it in code, you can use the following code.

var lipSync = GetComponent<uLipSync>();
var profile = lipSync.profile;

// Export
profile.Export(path);

// Import
profile.Import(path);

Calibration at Runtime

If you want to perform calibration at runtime, you can do it by making a request to uLipSync with uLipSync.RequestCalibration(int index) as follows. The MFCC calculated from the currently playing sound will be set to the specified phoneme.

var lipSync = GetComponent<uLipSync>();

for (int i = 0; i < lipSync.profile.mfccs.Count; ++i)
{
    var key = (KeyCode)((int)(KeyCode.Alpha1) + i);
    if (Input.GetKey(key)) lipSync.RequestCalibration(i);
}

Please refer to CalibrationByKeyboardInput.cs to see how this actually works. Also, in a built app it is better to save and restore the profile as JSON, because changes to a ScriptableObject cannot be persisted after building.
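
A minimal sketch of persisting a runtime calibration in a build, using the Export() / Import() calls shown above (the file location and component wiring are illustrative):

using System.IO;
using UnityEngine;

public class ProfilePersistence : MonoBehaviour
{
    uLipSync.uLipSync _lipSync;
    string SavePath => Path.Combine(Application.persistentDataPath, "profile.json");

    void Start()
    {
        _lipSync = GetComponent<uLipSync.uLipSync>();
        if (File.Exists(SavePath)) _lipSync.profile.Import(SavePath); // restore previous calibration
    }

    void OnApplicationQuit()
    {
        _lipSync.profile.Export(SavePath); // persist the current calibration
    }
}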

Update Method

Update Method can be used to adjust the timing of updating blendshapes with uLipSyncBlendShape. The description of each parameter is as follows.

Method             | Timing
-------------------|------------------------------------------------------
LateUpdate         | LateUpdate (default)
Update             | Update
FixedUpdate        | FixedUpdate
LipSyncUpdateEvent | Immediately after receiving LipSyncUpdateEvent
External           | Updated from an external script (ApplyBlendShapes())
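
With External selected, you call ApplyBlendShapes() yourself from your own script; a minimal sketch (assuming Update Method is set to External in the inspector):

using UnityEngine;
using uLipSync;

public class ExternalLipSyncDriver : MonoBehaviour
{
    public uLipSyncBlendShape blendShape;

    void LateUpdate()
    {
        // With Update Method = External, nothing updates the blendshapes
        // automatically; apply the latest lipsync result here instead.
        blendShape.ApplyBlendShapes();
    }
}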

Mac Build

When building on a Mac, you may encounter the following error.

Building Library/Bee/artifacts/MacStandalonePlayerBuildProgram/Features/uLipSync.Runtime-FeaturesChecked.txt failed with output: Failed because this command failed to write the following output files: Library/Bee/artifacts/MacStandalonePlayerBuildProgram/Features/uLipSync.Runtime-FeaturesChecked.txt

This may be related to the microphone access code; it can be fixed by entering something in Project Settings > Player > Other Settings > Mac Configuration > Microphone Usage Description.

Transition from v2 to v3

From v3.0.0, the values of MFCC have been corrected to more accurate values. As a result, if you are transitioning from v2 to v3, you will need to recalibrate and create a new Profile.

3rd-Party License

Unity-chan

The examples include Unity-chan assets.

© Unity Technologies Japan/UCL
