
Study Unlocks Facial Expression Control for More Inclusive XR Experiences

by Geny Caloisi

A new study by researchers at the University of Glasgow and the University of St. Gallen suggests that facial expressions could soon be key to unlocking greater accessibility in virtual and augmented reality (VR/AR) environments. Using Meta’s Quest Pro headset, the team identified seven simple facial movements that allowed users to navigate games and web content—no hand controllers required.

The study will be formally presented at CHI 2025 in Yokohama this April. Its findings point toward promising developments for users with physical disabilities who may struggle with current XR interaction models that rely on full-body motion or handheld devices.

XR stands for Extended Reality, an umbrella term for immersive technologies that blend the physical and digital worlds. It includes:

  • Virtual Reality (VR): Fully immersive digital environments that replace the physical world, typically experienced through headsets like Meta Quest, HTC Vive, or PlayStation VR.
  • Augmented Reality (AR): Overlays digital content onto the real world, viewed through smartphones or headsets such as Microsoft HoloLens and Apple Vision Pro.
  • Mixed Reality (MR): Merges real and virtual environments, allowing digital and physical objects to interact in real time.

XR collectively refers to VR, AR and MR, and is widely used in discussions of immersive technology, particularly when the lines between these categories are blurred or when developers are designing for cross-platform compatibility.

The project tested 53 Facial Action Units (FAUs)—the software-recognised expressions used to animate avatars—on 20 non-disabled volunteers. Participants repeated each expression and rated it for comfort and usability. Seven FAUs emerged as both easy to perform and reliably recognised: opening the mouth, squinting each eye, puffing each cheek, and pulling each corner of the mouth sideways.

To test real-world functionality, the team built a neural network that interpreted these expressions with 97% accuracy. They then trialled the system in two scenarios: a VR game and an AR web browser, both controlled entirely by facial gestures. Users reported the experience was intuitive and effortless, especially when navigating web pages.
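
The article does not describe the model architecture or the headset's face-tracking interface, but the pipeline it outlines—continuous FAU intensity readings classified into a small set of discrete commands—can be sketched roughly as follows. This is an illustrative sketch only: the scikit-learn classifier, the synthetic training data, and the FAU_NAMES list are assumptions based on the expressions named above, not the researchers' actual implementation or the Quest Pro API.

```python
# Illustrative sketch: classify per-frame facial action unit (FAU) intensities
# into one of seven hands-free commands. Feature layout, labels and training
# data are synthetic stand-ins, not the study's real model or headset data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical feature order: one intensity value (0..1) per tracked expression.
FAU_NAMES = [
    "jaw_drop",             # opening the mouth
    "eye_squint_left",
    "eye_squint_right",
    "cheek_puff_left",
    "cheek_puff_right",
    "mouth_stretch_left",   # pulling a corner of the mouth sideways
    "mouth_stretch_right",
]

rng = np.random.default_rng(0)

def synth_samples(n_per_class=200, noise=0.08):
    """Generate fake training data: each 'gesture' drives one FAU high."""
    X, y = [], []
    for label in range(len(FAU_NAMES)):
        base = rng.uniform(0.0, 0.15, size=(n_per_class, len(FAU_NAMES)))
        base[:, label] = rng.uniform(0.6, 1.0, size=n_per_class)  # active FAU
        X.append(base + rng.normal(0, noise, size=base.shape))
        y.extend([label] * n_per_class)
    return np.vstack(X), np.array(y)

X_train, y_train = synth_samples()

# A small feed-forward network standing in for the study's classifier.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)

# At runtime, each frame of FAU intensities maps to a discrete command.
frame = np.zeros((1, len(FAU_NAMES)))
frame[0, FAU_NAMES.index("cheek_puff_left")] = 0.85
print("Predicted gesture:", FAU_NAMES[clf.predict(frame)[0]])
```

In a real system, the classifier would consume live expression weights streamed from the headset's face-tracking SDK, and predictions would likely need debouncing or a short dwell time before triggering a navigation action, to avoid firing commands on incidental expressions.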

Dr Graham Wilson, from the University of Glasgow’s School of Computing Science, said the research challenges the assumption that XR must rely on dextrous or full-body input: “We’ve shown that off-the-shelf headsets can support accurate, hands-free control—potentially transforming access for people with disabilities.”

His colleague, Dr Mark McGill, added: “Our next step is to test this technology with people who have motor or muscular impairments, providing developers with data to expand inclusive input design.”
