
🔺 HVR Basis Face Tracking (Update)


HVR Basis Face Tracking has been updated.

It should work better with ARKit avatars, as we were not requesting some face tracking traits correctly, so VRCFaceTracking would not send them to us. No avatar reupload is needed.

This update includes some performance improvements and some code preparation for future use by HVR Basis AvatarOptimizer, so that avatar optimization tools may detect which blendshapes should not be removed from the mesh.

Backend changes

Avoid string comparisons in HVR addresses

  • Reduce the number of string comparisons by using int addresses instead of string addresses (see the sketch after this list).
  • Addresses are mapped to their int equivalents as early as possible.
  • Modify AcquisitionService and all scripts that depend on it to use int addresses.
  • Add HelpUrls to user-facing HVR behaviours.
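
To illustrate the approach (a hedged sketch, not the actual AcquisitionService code; all names are hypothetical), interning addresses once lets every hot path compare ints instead of strings:

```csharp
// Hedged sketch: addresses are mapped to stable int handles once, as early as
// possible, so per-frame code compares ints instead of strings.
using System.Collections.Generic;

public class AddressRegistry
{
    private readonly Dictionary<string, int> _addressToInt = new Dictionary<string, int>();
    private readonly List<string> _intToAddress = new List<string>();

    // Called as early as possible (e.g. during initialization), never per-frame.
    public int Intern(string address)
    {
        if (_addressToInt.TryGetValue(address, out var existing)) return existing;

        var id = _intToAddress.Count;
        _addressToInt[address] = id;
        _intToAddress.Add(address);
        return id;
    }

    // For display and debugging only; hot paths keep the int handle.
    public string Resolve(int id) => _intToAddress[id];
}
```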

Try to fix combined face tracking parameters not being requested

  • Some of the face tracking parameters that were needed by the default face tracking templates were not requested from VRCFaceTracking.
  • Update the JSON file (for older versions of VRCFaceTracking) and the OSCQuery response to match.
  • Fix UnifiedExpressions-Advanced-Strict incorrectly referencing addresses without the v2 prefix.

Expose functions for HVR Basis AvatarOptimizer

  • Expose public functions on BlendshapeActuation and AutomaticFaceTracking for future use by HVR Basis AvatarOptimizer.

✨ Lightbox Viewer 2.5.0-beta.1

Lightbox Viewer has been updated to support URP projects.

Lightbox Viewer is now compatible with the following render pipeline configurations:

  • BIRP Light Probes.
  • BIRP VRCLightVolumes by RED_SIM.
  • URP Adaptive Probe Volumes.
  • URP Light Probes.

Features:

  • Detect and support URP projects (see the detection sketch after this list). When the current project uses URP:
    • Don't suggest installing the Post-Processing V2 package.
    • Never switch to VRCLightVolumes, even if that package is installed in the project.
    • Switch to a different scene based on whether the Quality settings use Light Probes instead of Adaptive Probe Volumes.
    • Detect when Quality settings change, and switch between the Light Probes scene and the Adaptive Probe Volumes scene accordingly.
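
As a hedged illustration of how such detection could work (this is not Lightbox Viewer's actual code), Unity's public render pipeline APIs can tell whether the active pipeline is URP, keeping in mind that Quality settings can override the default pipeline asset:

```csharp
// Hedged sketch: detect whether the project currently uses URP.
// A tool could re-evaluate this periodically (e.g. on EditorApplication.update)
// to react when the Quality settings switch pipeline assets.
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

public static class PipelineDetectionSketch
{
    public static bool IsUrp()
    {
        // The quality-level asset takes precedence over the graphics settings.
        RenderPipelineAsset pipeline = QualitySettings.renderPipeline;
        if (pipeline == null) pipeline = GraphicsSettings.defaultRenderPipeline;
        return pipeline is UniversalRenderPipelineAsset;
    }
}
```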

Changes:

  • Attempt an optimization that removes the need for copying the texture from the GPU to the CPU.
  • Change the Edit mode strategy for performance (a sketch of the change-detection idea follows this list):
    • Previously, Edit Mode copied the avatar every frame, so that the original avatar would never accidentally be modified.
    • The avatar is now copied to a hidden, non-saved GameObject in the hierarchy, and that copy is reused as long as the reference to the Object To View does not change.
    • Use object change events to detect when anything changes in the scene. If any change pertains to something other than a transform, we destroy the copied object and create a new one.
    • This copied object is moved 10,000 units downwards between render cycles.
    • Don't disable the original avatar between render cycles, because re-enabling the avatar is slow.
    • In the copy, remove the Animators, because they take time to bind and we don't need them.
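
As a hedged sketch of the change-detection idea (illustrative only, not Lightbox Viewer's actual implementation), UnityEditor.ObjectChangeEvents can keep the cached copy across transform-only edits and rebuild it on anything else:

```csharp
// Hedged sketch: transform-only edits keep the cached avatar copy;
// any other scene change destroys it so it can be recreated.
using UnityEditor;
using UnityEngine;

[InitializeOnLoad]
public static class CopyInvalidationSketch
{
    static CopyInvalidationSketch()
    {
        ObjectChangeEvents.changesPublished += OnChangesPublished;
    }

    private static void OnChangesPublished(ref ObjectChangeEventStream stream)
    {
        for (var i = 0; i < stream.length; i++)
        {
            if (stream.GetEventType(i) == ObjectChangeKind.ChangeGameObjectOrComponentProperties)
            {
                stream.GetChangeGameObjectOrComponentPropertiesEvent(i, out var data);
                // Pure transform edits do not invalidate the copy.
                if (EditorUtility.InstanceIDToObject(data.instanceId) is Transform) continue;
            }
            InvalidateCopy();
            return;
        }
    }

    private static void InvalidateCopy()
    {
        // Destroy the hidden, non-saved copy here; it is recreated lazily.
    }
}
```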

Fixes:

  • Try to fix an issue where the depth was affecting how the view was rendered in a Unity 6.2 Basis project.

🗒️ Open documentation.

🔺 HVR Basis Face Tracking (Update)


HVR Basis Face Tracking has been migrated to the new Basis networking system.

Instructions for previous Face Tracking avatar users:

  • If you have previously used Face Tracking in Basis, do the following:

  • Remove all components that were part of the previous Face Tracking instructions. That includes:

    • Remove the Feature Networking component
    • Remove the HVR Avatar Comms component
    • Remove the Eye Tracking Bone Actuation component
    • Remove the Blendshape Actuation component
    • Remove the OSC Acquisition component
    • Remove the Acquisition Service component
  • To install the new Face Tracking, do the following:

    • Create a new GameObject inside your avatar. Give it a name of your choice, like FaceTracking.
    • Add the Automatic Face Tracking component.
    • In the inspector of the Automatic Face Tracking component, press the "Create VRCFaceTracking JSON file" button.
    • You will need to do this even if you have used previous versions of Face Tracking in Basis.

The new face tracking has been modified to work with the Steam version of VRCFaceTracking (it now has OSCQuery), but it should also work with old versions of VRCFaceTracking that were not installed from Steam.

Backend changes

Migrate face tracking to newest Basis networking

  • Migrate to newer Basis avatar networking systems:
    • Remove GUID-based networking identifiers, which were a remnant of the previous networking system, where BasisAvatarMonoBehaviour did not exist and all packets had to be routed manually to their destination components.
    • Migrate networking to the new BasisAvatarMonoBehaviour networking system, which handles packet routing on its own by transmitting packets based on the component rather than based on the avatar.
  • Add Automatic Face Tracking component:
    • The Automatic Face Tracking component is now the entry point to adding HVR face tracking capabilities to the avatar.
    • This component automatically detects all SkinnedMeshRenderer components on the avatar that have ARKit or Unified Expressions blendshapes on them.
    • By default, this component uses the basic face tracking definition files that are bundled with the application, but these can be overridden or supplemented by the user.
      • Due to this, the face tracking assets are now addressables.
    • This component automatically starts an OSC service on the wearer.
  • Improve the handling of face tracking data:
    • Add Address Overrides to the actuation definition file, so that we can define default values for the eyelids.
    • Work towards sharing the interpolation of addresses between Eye Tracking and Face Tracking.
    • Actuators now introspect which blendshapes actually exist on the target renderers, and will no longer try to network any actuator that has no associated blendshape (see the sketch after this list).
    • Add more debugging in the blendshape actuation definition file editor.
    • Fix TongueRoll actuator targeting the wrong blendshape.
  • Move the package to dev.hai-vr.basis.comms/
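
A hedged sketch of the introspection idea (names here are illustrative, not from the HVR codebase): an actuator is only kept and networked if its blendshape exists on at least one target renderer.

```csharp
// Hedged sketch: filter out actuators whose blendshape does not exist on
// any of the target renderers.
using System.Linq;
using UnityEngine;

public static class ActuatorFilteringSketch
{
    public static bool HasAnyBlendshape(SkinnedMeshRenderer[] targets, string blendshapeName)
    {
        // Mesh.GetBlendShapeIndex returns -1 when the blendshape does not exist.
        return targets.Any(renderer => renderer.sharedMesh != null
            && renderer.sharedMesh.GetBlendShapeIndex(blendshapeName) >= 0);
    }
}
```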

Change VRCFaceTracking setup process

  • In the Automatic Face Tracking component, the user now presses a button to create the face tracking JSON file used by VRCFaceTracking.
  • We want to ensure we overwrite any previous version of face tracking that may have already been installed by users of the previous system, so we're changing the dummy avatar ID to a different one. That way, we're sure this is really the file that will be loaded by VRCFaceTracking.

Fix: always update local eye angles

  • Update local eye angles, even if no address has changed.
  • This should fix an issue where the eyes were always looking forward.

Add a fake OSCQuery service to cope with the Steam version of VRCFaceTracking

  • The Steam version of VRCFaceTracking 5.3.2.0 does not accept /avatar/change events; it relies only on OSCQuery for initialization.
  • We now impersonate the OSCQuery service by doing the following (a minimal sketch of the HTTP part follows this list):
    • A) Open an HTTP service on a random port:
      • When queried for any URL that ends with /avatar, it replies with a pre-determined message (check out response-avtr.json).
      • When queried for any other URL, it replies with another pre-determined message (check out response.json).
    • B) Advertise that port as a service on mDNS as _oscjson._tcp, with the instance name VRChat-Client-XXXXXX, where XXXXXX is a random number between 100000 and 999999.
    • C) Query for _oscjson._tcp once when the service starts.
    • A and B are sufficient for VRCFaceTracking to detect our program if the OsushiQuery service is already running when VRCFaceTracking starts.
    • C is needed to handle the case where VRCFaceTracking is already running before our program starts the OsushiQuery service.
    • The HTTP service completely violates the OSCQuery protocol and is not intended to be read by any program other than VRCFaceTracking.
  • The JSON file in the user's directory is still needed for this to function properly.
  • This also means that the application requests a pre-determined list of parameters from VRCFaceTracking, as opposed to the JSON file which could previously be updated separately.
  • Add the MeaMod.DNS package, as instructed by dooly.
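
For illustration, the HTTP half of such an impersonation could be sketched with .NET's HttpListener as follows (a hedged sketch, not the actual HVR implementation; the mDNS advertisement through MeaMod.DNS is elided):

```csharp
// Hedged sketch: reply to any URL ending in /avatar with one pre-determined
// JSON document, and to everything else with another.
using System.Net;
using System.Text;
using System.Threading.Tasks;

public class FakeOscQueryHttpService
{
    private readonly HttpListener _listener = new HttpListener();
    private readonly string _avatarJson; // contents of response-avtr.json
    private readonly string _rootJson;   // contents of response.json

    public FakeOscQueryHttpService(int port, string avatarJson, string rootJson)
    {
        _avatarJson = avatarJson;
        _rootJson = rootJson;
        _listener.Prefixes.Add($"http://127.0.0.1:{port}/");
    }

    public async Task StartAsync()
    {
        _listener.Start();
        while (_listener.IsListening)
        {
            var context = await _listener.GetContextAsync();

            var path = context.Request.Url.AbsolutePath;
            var body = path.EndsWith("/avatar") ? _avatarJson : _rootJson;

            var bytes = Encoding.UTF8.GetBytes(body);
            context.Response.ContentType = "application/json";
            context.Response.OutputStream.Write(bytes, 0, bytes.Length);
            context.Response.Close();
        }
    }
}
```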

Update acquisition server to accept VRCFaceTracking being initialized from OSCQuery

  • Since VRCFaceTracking is now initialized from OSCQuery, we can no longer use addresses that don't start with "/avatar/parameters" like we used to.
  • Rectify the paths in the JSON file to match the OSCQuery paths.
  • Strip that prefix from the messages we receive to restore the previous behaviour (see the sketch below).
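
A minimal sketch of that normalization (illustrative naming, not the actual acquisition server code):

```csharp
// Hedged sketch: strip the "/avatar/parameters" prefix from incoming OSC
// addresses so the rest of the pipeline sees the same addresses as before.
public static class AddressNormalizationSketch
{
    private const string Prefix = "/avatar/parameters";

    public static string Normalize(string address)
    {
        return address.StartsWith(Prefix) ? address.Substring(Prefix.Length) : address;
    }
}
```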

✨ Lightbox Viewer URP V1.0.0-alpha.5

Lightbox Viewer URP for the Basis Framework has been added as a new package.

This package is meant to be used on URP projects in Unity 6, with Adaptive Probe Volumes enabled.

This early version does not support post-processing, and it does not support traditional Light Probes.

🗒️ Open documentation.

⚙️ Animator As Code V1.2.0 / Animator As Code VRChat V1.1.2 / Prefabulous for VRChat V2.2.0 / Auto-reset OSC config V1.1.6 / ComboGestureExpressions V3.3.1

As I am no longer focusing on VRChat content creation, the following packages will now install in ALCOM without warnings, even if VRChat introduces breaking changes:

  • Animator As Code - VRChat V1.1.2
  • Animator As Code - VRChat Destructive Workflow V1.1.2
  • Animator As Code - Modular Avatar V1.1.2
  • Prefabulous for VRChat V2.2.0
  • Auto-reset OSC config V1.1.6
  • ComboGestureExpressions V3.3.1

The VRChat Avatars package upper-bound dependency has been raised to VRChat SDK 999.

Prefabulous for VRChat V2.2.0

  • Now requires Animator As Code 1.2.0 or above, in order to use the asset container provider API.
  • Add support for cross-application build compatibility being introduced in NDMF 1.8.
    • This explicitly prevents the VRC-specific plugins from being executed in Resonite builds.

Animator As Code V1.2.0

  • Add support for third-party asset container management.
    • In preparation for the introduction of IAssetSaver in NDMF 1.6.0, add the ability to delegate the management of the asset container to a third party.
      • No new dependencies are added.
      • By default, the behaviour of Animator As Code is the same as V1.1.0.
    • Add new optional field AacConfiguration.AssetContainerProvider to specify an asset container provider.
    • Add new interface IAacAssetContainerProvider to abstract asset container management.

The above changes have been contributed by kb10uy (KOBAYASHI Yū) (first contribution).

The NDMF example in the Getting started page has been updated to demonstrate integration with this new API.

As specified in the changelog for the official release of Animator As Code V1, breaking changes were planned, and are applied starting with this version:

  • AacFlSettingKeyframes constructor is now private.
    • For compatibility reasons, it was public for the duration of V1.1.x, and was already marked as obsolete in V1.1.x.
  • The methods AacFlBase.InternalConfiguration and AacFlBase.InternalDoCreateLayer are now private.
    • For compatibility reasons, they were public for the duration of V1.1.x, and were already marked as obsolete in V1.1.x.
    • The class AacAccessorForExtensions replaces them.

These breaking changes are meant to be the last breaking changes for the lifetime of Animator As Code V1.

  • Make AAC 1.2.0 a requirement, as AAC 1.2.0 adds additional argument checks for compatibility with Vixen.
  • Include contributions from @TayouVR and @yewnyx that make this package better installable via git URL.

✨ Lightbox Viewer V2.4.0

  • Add Collections of lightboxes. Users may now switch from the default six lightboxes to other more specialized ones.
    • Add special lightboxes specifically for Light Volumes. This requires the Light Volumes package to be installed.
    • Expose the Spotlight Cookie lightbox that was previously hidden.
  • Previously, lights inside the object being viewed were disabled. This is no longer the case by default.
  • Add option to disable lights inside the object being viewed.
  • Add option to support shaders that require a Depth Texture.
    • This uses Poiyomi's DepthGet light settings, but repackaged with a different GUID to avoid installation conflicts.
  • The Advanced menu is now a sidebar.
  • Some settings are now saved editor-wide, instead of being reset when the Lightbox Viewer window is closed.
    • This includes Counter Rotate, Post Processing, Vertical Displacement, Mute Lights Inside Object, Support Depth Texture.


✨ Lightbox Viewer V2.3.0

Add support for Light Volumes if it is installed in the project.

The lightboxes themselves do not change, so the differences are subtle. The Pink scene is the most notable because the left hand will be lit pink and the right hand will be lit purple.

warning

If you want the previews to use light volumes, please understand that unlike avatar uploads, it is not sufficient to have a compatible shader like lilToon 1.10; you need the actual Light Volumes package installed in your avatar project.


⚙️ Prefabulous Universal V2.2.0-alpha.0 / PPC V2.1.0-alpha.0 / FaceTra compatibility patch

This update is targeted specifically at Modular Avatar for Resonite users.

This update enables all Prefabulous Universal components to execute during builds targeting the Resonite app.

If you use Modular Avatar for Resonite, it is very likely you already have prerelease packages enabled in ALCOM, meaning you may update Prefabulous Universal to V2.2.0-alpha.0 using ALCOM.

If you use FaceTra Shape Creator, you need to download a patch linked here, no log-in required.

Prefabulous Universal 2.2.0-alpha.0

Support Modular Avatar for Resonite:

  • Add support for cross-application build compatibility being introduced in NDMF 1.8.
  • Conditionally compiled when NDMF 1.8.0-alpha.0 is installed:
    • Prefabulous Universal still needs to be compatible with VNyan (Unity 2020) and Warudo (Unity 2021), meaning we still need to support NDMF 1.4 (last known working version in Unity 2020).

Prefabulous for Platform Conversions 2.1.0-alpha.0

  • Fix ConvertVRCConstraintsToUnityConstraints to run after Modular Avatar creates constraints:
    • Run after Modular Avatar, because MA Convert Constraints and MA World Fixed Object create VRCConstraint.
    • Also, remove this component during the build to help with missing script issues (e.g. this better handles CVR).
    • The above changes have been contributed by Narazaka (first contribution).
  • Integrate with NDMF 1.8 cross-application builds:
    • Add support for cross-application build compatibility being introduced in NDMF 1.8.
    • MA/NDMF for Resonite does not support Unity constraints at the time of writing, but as far as I am aware, there is no harm in enabling this conversion plugin pass for all platforms.
    • Conditionally compiled when NDMF 1.8.0-alpha.0 is installed:
      • Prefabulous for Platform Conversions still needs to be compatible with VNyan (Unity 2020) and Warudo (Unity 2021), meaning we still need to support NDMF 1.4 (last known working version in Unity 2020).

✨ Activate with Skinned Offsets (Constraint Tools V1)

A new menu option named Activate with Skinned Offsets has been added to the inspector of the Parent Constraint and VRC Parent Constraint components.

Pressing this menu option behaves similarly to pressing the Activate button on the Parent constraint, but calculates different offsets: these offsets make the Parent constraint behave more like weight painting/mesh skinning.

This is the same algorithm that is used by the Skinned Mesh Constraint Builder component. If you are already using that component, you do not need this menu option.


🗒️ Open documentation

⚠️ Advance notice for VRChat Creator Companion changes

Starting from the 1st of February 2025, my repository listing will no longer officially support the VRChat Creator Companion package manager. Instead, it will support the ALCOM package manager.

What happens if I continue using the VRChat Creator Companion to install this repository?

In many cases you might still be able to install the packages from the VRChat Creator Companion.

However, if there are any issues occurring within the VRChat Creator Companion during the installation process, they will be considered issues within the VRChat Creator Companion software itself, and workarounds will no longer be provided.

What should I do if I still use the VRChat Creator Companion, and an error occurs in VCC?

If you still use the VRChat Creator Companion, and an error occurs in VCC during the installation of some packages from this repository, then:

Install the ALCOM package manager

I recommend that you install the ALCOM package manager.

ALCOM (GitHub) is a community-created package manager. It is functionally equivalent to the VRChat Creator Companion, but contains fixes to many bugs that had been reported in the VRChat Creator Companion for years.

Report the VCC bugs to VRChat

Report the bugs you have encountered in VCC directly to VRChat Inc. through either the feedback forum, or the GitHub issues.

The bug you have encountered has most likely been reported already in 2022 or 2023.

What if I already use ALCOM?

No action is needed if you already use ALCOM. Check out anatawa12's GitHub page.

✨ Transfer Blendshape (Starmesh V1.7.0)

✨ New component: Transfer Blendshape

The Starmesh Op. Transfer Blendshape component attempts to create a blendshape on a costume that mimics the movement of a blendshape from another mesh.

Try using this to transfer nail deformations, chest deformations, or hip deformations (e.g. Hip_big) from a base mesh to a costume.

Combine it with other Selectors to limit the areas affected by the transferred deformations.

warning

This component may not always produce good results.

In addition, if you use other tools that attempt to fit a costume on an avatar it was not designed for (e.g. fitting a costume made for Manuka on a Lime base body), it will probably not work, as this operator expects the mesh data to overlap, regardless of how bones are arranged in the scene.
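
To give an intuition of why overlap matters, here is a hedged sketch of the general nearest-vertex idea behind this kind of transfer; Starmesh's actual algorithm may differ:

```csharp
// Hedged sketch: for each costume vertex, copy the blendshape delta of the
// nearest base-mesh vertex. Brute-force O(n*m); real tools would accelerate this.
using UnityEngine;

public static class BlendshapeTransferSketch
{
    // baseVerts/baseDeltas: rest positions and blendshape deltas of the base mesh.
    // costumeVerts: rest positions of the costume, expressed in the same space.
    public static Vector3[] TransferByNearestVertex(
        Vector3[] baseVerts, Vector3[] baseDeltas, Vector3[] costumeVerts)
    {
        var result = new Vector3[costumeVerts.Length];
        for (var i = 0; i < costumeVerts.Length; i++)
        {
            var best = 0;
            var bestSqr = float.PositiveInfinity;
            for (var j = 0; j < baseVerts.Length; j++)
            {
                var sqr = (costumeVerts[i] - baseVerts[j]).sqrMagnitude;
                if (sqr < bestSqr) { bestSqr = sqr; best = j; }
            }
            // If the meshes do not overlap in space, the nearest base vertex is
            // meaningless and the transferred delta will look wrong.
            result[i] = baseDeltas[best];
        }
        return result;
    }
}
```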

🗒️ Open documentation

⚙️ ComboGestureExpressions V3.3.0

  • Add "Ignore Analog Fist" option in the compiler. When enabled, the Fist animations play without having to press the trigger.
  • ComboGestureExpressions now requires Animator As Code V1.1.0 minimum, and accepts any version above that.

⚙️ Prefabulous for VRChat V2.2.0-beta.1

  • Now requires Animator As Code 1.2.0-beta.1 or above, in order to use the asset container provider API.
  • If NDMF 1.6.0 is installed, Prefabulous for VRChat will use the new APIs in NDMF to save generated assets.

⚙️ Animator As Code V1.2.0-beta.1

The Animator As Code V1 API has been updated to improve support for third-party asset container management.

  • There are no additional dependencies introduced.
  • By default, the behaviour of Animator As Code is the same as V1.1.0.
  • Add new optional field AacConfiguration.AssetContainerProvider to specify an asset container provider.
  • Add new interface IAacAssetContainerProvider to abstract asset container management.

This change has been contributed by kb10uy (KOBAYASHI Yū) (first contribution).

The NDMF example in the Getting started page has been updated to demonstrate integration with this new API.

As specified in the changelog for the official release of Animator As Code V1, the breaking changes that were planned for V1.2.0 have been applied.

These breaking changes are meant to be the last breaking changes for the lifetime of Animator As Code V1.

For more details, see the full changelog.

🔍 View changelog

⚙️ Starmesh V1.6.0

A new version of Starmesh has been released.

Starmesh now has code that is shared with FaceTra Shape Creator under the MeshLib package; this is the main reason this update has been published.

This update contains fixes in the order Selectors and Operators are processed, so that Op. Paint New Bone does not interfere with operators that use Select Bones.

🔍 View changelog

✨ FaceTra Shape Creator V0.9.0

A new version of FaceTra Shape Creator has been released.

There are a lot of changes, described in full in the changelog.

In summary:

  • The performance of the application in Edit mode has been significantly improved.
  • Improvements in tailoring:
    • You can now enable Tailoring and override pushing and pulling vertices per-shape.
    • The result of tailoring can now be exported into a new file.
    • Tailoring can now optionally use the same blendshapes as those from the Adaptive file.
  • Improvements in shapes:
    • Most mouth shapes can now have blendshapes added to them. They will use the Mouth Divider.
    • The functionality to push and pull vertices is now optional per-shape, and can be turned off entirely.
    • Upper teeth can now be included in the deformation of Jaw Open, Jaw Left/Right, and Jaw Forward.
  • Improvements in the calibration process:
    • You can now define a position offset (e.g. when a model has been re-saved with high heels).
    • You can now explicitly define base blendshapes that will serve as the rest pose of the face.
    • Vertex selection UI has been improved.
    • The blendshape used in the calibration process to define Mouth, Teeth, and Tongue vertices is now separate from the blendshape used in Jaw Open.
    • Add a new construction line "Eye Visualization", used to visualize the Eye divider.
  • Add new shape: Nose Sneer Left/Right.

warning

FaceTra now shares code with Starmesh under the MeshLib package, which is included in both products.

If you use Starmesh, please update Starmesh to V1.6.0.

🔍 View changelog

🌃 Summary

Hello!

I've gone through a lot of projects lately, and here's a summary of what has been happening.

My development time is divided between the maintenance of Free tools, Patreon exclusive tools, and the continuation of research.

(This page is a clone of the post published on Patreon)

Free tool updates 🌊

☀️ Animator As Code V1 has been released. This update brings Sub-State Machines, Blend Trees, support for non-VRChat projects, better integration with non-destructive workflows, integration with Modular Avatar, support for VRCAnimatorPlayAudio, and prettier support for VRCParameterDriver. The V1 API is now considered stable, and there will be no more breaking changes. (Read more)

☀️ Last month, I released LetMeSee, a new tool that lets you see your content in VR, with the Unity Editor in Edit Mode. I had initially created this tool so that I could test VR toon shaders on a non-VRChat Unity project that uses the Universal Render Pipeline, but it works just as well for more traditional projects. (Read more)

⚙️ In Prefabulous Universal, Generate Twist Bones now creates VRC Constraints if the VRChat SDK is installed. In addition, Delete Polygons, Assign UV Tile, and Replace Textures should now run before VRCFury.

☀️ It is almost guaranteed that in the next few months, some complex clothing (e.g. kimonos and yukatas) will be sold with VRC Constraints directly in them. For this reason, I've added a new component, Convert VRC Constraints to Unity Constraints, to provide options for users who wish to use avatars in other Unity applications, VTubing apps, or social VR platforms.

Animation Viewer now supports CTRL-K, the Unity Search window. (Read more)

Patreon exclusive updates ⭐

Double Hip Tracker is receiving an update in V1.3.0 that changes the behaviour of the double trackers. It's an option, default ON.

In short, we will measure the distance that separates the two trackers. If that distance changes too much, one of the trackers probably flew off. We will now try to detect that, in addition to the usual method. (Read more)
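
As a hedged sketch of that distance check (the threshold is illustrative, not Double Hip Tracker's actual value):

```csharp
// Hedged sketch: if the distance between the two trackers deviates too much
// from the reference distance, one of them probably flew off.
using UnityEngine;

public static class TrackerSanitySketch
{
    public static bool ProbablyFlewOff(
        Vector3 trackerA, Vector3 trackerB, float referenceDistance, float tolerance = 0.15f)
    {
        var currentDistance = Vector3.Distance(trackerA, trackerB);
        return Mathf.Abs(currentDistance - referenceDistance) > tolerance;
    }
}
```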

Also, when both trackers are lost, the double tracker will now freeze in place, instead of flying off.

There are other changes described in the full update post.

Vixen is receiving a small update in V1.3.0 so that you can change the boundary values of PhysBones, Contacts, and OSC. This is especially useful to change the range of activation for a PhysBone angle, the range of a PhysBone squish, or permit using two different ranges using one proximity Contact. That, along with a few fixes. (Read more)

✨ In IconGen V1.1.0, you can now export all of your decorated icons to PNG, so that you can redistribute decorated icons to other users. (Learn how)

If you're a Patreon supporter, you should download them now!

More updates are currently being worked on for FaceTra Shape Creator, Starmesh, and Vixen.

Research 🧪

🧪 Project H-View: I have shared on GitHub a personal project: an ImGui.NET application capable of displaying the entire Expressions Menu in a compact layout, which makes extensive use of OSCQuery. (Read more) (GitHub)

It also has an early implementation of that ImGui application being rendered directly into a SteamVR Overlay! (Video)

I've always wanted to try controlling tools like VRCLens and VirtualLens2 as an OSC application, and also learn how to build an ImGui overlay using the OpenVR API, so I'm happy having finally taken the time to do this.

If you are a C# developer, you may be able to make use of that project (it's under the MIT License).

🧪 Project Nochat: UdonSharp has been mainstream for so long that we may have forgotten that C# files (.cs) were never intended to be executed directly inside VRChat worlds. The ability to write C#, and have it work in VRChat worlds, is a feature that is entirely community-driven.

Therefore, VRChat prefabs that use UdonSharp do not depend on any of VRChat's intellectual property at all.

I've taken the opportunity to try running UdonSharp prefabs in a plain Unity project with VR controls, without VRChat. It works! Zero lines of executable code from VRChat needed, as Udon is completely unnecessary. (Video)

The fact that UdonSharp prefabs do not require VRChat to run is relevant because I want to create and enable experiences outside the limitations of the VRChat platform. Porting content could make it possible to experience content we're already familiar with, but with different virtual environment capabilities. I hope I'll be able to share more with you on this subject. (Read more)

🧪 Project Myrddin: This project is very similar to Project Nochat, however, this one is an attempt to run the VRChat SDK without Udon, with the option to run ClientSim with VR controls. UdonSharp components would run as C#, without the Udon VM. This way, one may be able to use traditional IDE debugging features directly on UdonSharp content (breakpoints, instruction stepping, hot code reload without leaving Play mode, etc.).

VR controls in-editor would let you grab world pickups with actual VR controllers and interact with them, opening the possibility of iterating faster on VR content just like a normal Unity game developer would. (Video)

Thank you ⭐

Your support makes it sustainable to continue the development of all these projects as a full-time occupation. Thank you so much! ⭐