Supporting Text Entry in Virtual Reality with Large Language Models

Abstract

Text entry in virtual reality (VR) often suffers from low efficiency and high task load. Prior research has explored various solutions, including specialized keyboard layouts, tracked physical devices, and hands-free interaction, yet these efforts often fall short of real-world text entry efficiency or introduce additional spatial and device constraints. This study leverages the context-perception and text-prediction capabilities of large language models (LLMs) to improve text entry efficiency by reducing users' manual keystrokes. Three LLM-assisted text entry methods are introduced: Simplified Spelling, Content Prediction, and Keyword-to-Sentence Generation, aligned with user cognition and the contextual predictability of English text at the word, grammatical-structure, and sentence levels. In user experiments covering various text entry tasks on an Oculus-based VR prototype, these methods reduce manual keystrokes by 16.4%, 49.9%, and 43.7%, translating to efficiency gains of 21.4%, 74.0%, and 76.3%, respectively. Importantly, they do not increase manual corrections compared with manual typing, while significantly reducing physical, mental, and temporal loads and improving overall usability. Long-term observations further reveal users' strategies for applying these LLM-assisted methods, showing that growing proficiency with the methods reinforces their positive effects on text entry efficiency.
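
The abstract does not describe the underlying implementation, so the sketch below is only one way the three assistance modes could be framed as LLM prompts. The helper llm_complete, the prompt wording, and the function names are assumptions for illustration, not the authors' code.

```python
# Illustrative sketch (not the paper's implementation) of how the three
# LLM-assisted entry modes might map onto prompts to a text-generating model.

def llm_complete(prompt: str) -> str:
    """Hypothetical LLM call; in a real system this would wrap a chat-capable
    model API or a locally hosted model."""
    raise NotImplementedError("Plug in an actual LLM client here.")

def simplified_spelling(partial_word: str, context: str) -> str:
    """Word-level assistance: expand an abbreviated or partial spelling."""
    return llm_complete(
        f"Context so far: {context}\n"
        f"The user typed the shortened word '{partial_word}'. "
        "Return only the single most likely intended word."
    )

def content_prediction(text_so_far: str) -> str:
    """Grammatical-structure-level assistance: predict the next few words."""
    return llm_complete(
        "Continue the following text with the most likely next few words, "
        f"returning only the continuation:\n{text_so_far}"
    )

def keywords_to_sentence(keywords: list[str]) -> str:
    """Sentence-level assistance: expand a few keywords into a full sentence."""
    return llm_complete(
        "Compose one natural English sentence that expresses the intent "
        "behind these keywords: " + ", ".join(keywords)
    )
```

In such a setup, each accepted suggestion replaces keystrokes the user would otherwise type manually, which is the mechanism behind the keystroke reductions reported in the abstract.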

Publication
In 2024 IEEE Conference on Virtual Reality and 3D User Interfaces (IEEE VR)
Team

陈柳青, doctoral supervisor
Main research interests: intelligent design, intelligent interaction, design big data, creative design, AR/VR, user experience, web front-end/UI.

蔡愚, PhD student (class of 2022)

丁世贤, master's student (class of 2024)

唐怡琳, master's student (class of 2022)