NextHand enables users to create virtual replicas of their hands in augmented or virtual reality, perfect for training, remote collaboration, and other use cases requiring realistic hand presence.
NextHand is an emerging software solution designed to bring highly realistic hand presence into augmented reality (AR) and virtual reality (VR) applications. It uses depth sensors and computer vision algorithms to scan a user's actual hands and dynamically recreate them as lifelike 3D hand models.
These scanned hand models have a wide range of potential applications. In VR, NextHand models can replace traditional motion-tracked controller models: instead of seeing an abstract controller, users see and directly control a recreation of their own hands. This creates a strong sense of embodiment and allows for natural hand-based interactions.
In AR, the 3D hand models can reach into the real environment for seamless blended-reality experiences. Users train the software by performing various hand poses, gestures, and movements. NextHand then builds a personal motion database so the hand models animate and behave exactly as the user's real hands do.
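The "personal motion database" idea can be sketched as a simple store of labeled landmark sets matched by distance. This is an illustrative assumption only; the class and method names below (`PoseDatabase`, `record`, `match`) and the flat list-of-points landmark format are hypothetical, not NextHand's actual API.

```python
import math

def pose_distance(a, b):
    """Sum of Euclidean distances between corresponding 3D landmarks."""
    return sum(math.dist(p, q) for p, q in zip(a, b))

class PoseDatabase:
    """Toy stand-in for a per-user pose database built during training."""

    def __init__(self):
        self._poses = {}  # label -> list of (x, y, z) landmark tuples

    def record(self, label, landmarks):
        # Store a pose captured while the user performs it during training.
        self._poses[label] = landmarks

    def match(self, landmarks):
        # Return the label of the stored pose closest to the live frame.
        return min(self._poses,
                   key=lambda lbl: pose_distance(self._poses[lbl], landmarks))

# Usage: record two toy poses (3 landmarks each), then classify a live frame.
db = PoseDatabase()
db.record("open_palm", [(0, 0, 0), (1.0, 0, 0), (2.0, 0, 0)])
db.record("fist", [(0, 0, 0), (0.2, 0, 0), (0.4, 0, 0)])
print(db.match([(0, 0, 0), (0.9, 0.1, 0), (1.8, 0, 0)]))  # prints "open_palm"
```

A real system would use many landmarks per hand, normalize for hand size and orientation, and interpolate between stored poses rather than snapping to the nearest one.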
Use cases span gaming, design, remote collaboration, simulation and training, sign language translation, accessibility, and more. NextHand removes the need for gloves, trackers, and other cumbersome equipment, enabling hand presence in VR/AR with vision-based cameras alone. With further development, it may someday deliver realistic virtual hand presence without any external sensors beyond standard cameras.