Extended Reality
Explore our Research Topics
Frank Biocca
Research Areas: Virtual and augmented reality systems, components for brain-computer interfaces, real-time public opinion measurement

Design of Virtual Environments and Interfaces to Support Information, Perception and Cognition
Our research examines the design of virtual and augmented reality hardware, software, interaction techniques and applications that augment or change user thinking and cognitive performance. This research is done with teams within the distributed Media Interface and Network Design (M.I.N.D.) Labs.
Jacob Chakareski
Research Areas: Immersive communication, augmented/virtual reality

Virtual Human Teleportation
Virtual reality and 360-degree video are emerging technologies that can enable virtual human teleportation to any remote corner of the globe. This requires ultra-low latency, gigabit-per-second wireless speeds and data-intensive computing. Our research investigates synergies at the intersection of 6DOF 360-degree video representation methods, edge computing, UAV-IoT, millimeter-wave and free-space optics wireless technologies. The latter two transmit data at much higher electromagnetic wave frequencies, enabling the ultra-high data rates and ultra-low latencies required by next-generation societal VR applications.

Real-Time Structure-Aware Reinforcement Learning
Reinforcement learning (RL) provides a natural paradigm for decision-making in diverse emerging applications that operate in unknown environments with limited data of unknown stochastic characteristics. Paramount to the effective operation of ultra-low-latency applications such as IoT sensing, autonomous navigation and mobile virtual and augmented reality is the ability to learn the optimal operating actions online and as quickly as possible. Existing state-of-the-art RL methods either take too long to converge or are too complex to deploy. Our research examines novel structure-aware RL methods that integrate basic system knowledge to compute learning action updates across multiple states, or even the entire state space of the problem of interest, in parallel (see the sketch following this profile). To address the computational complexity this introduces, our methods integrate analysis that helps effectively trade off learning acceleration against computing complexity.

Societal Applications
Our research focuses on interdisciplinary synergies to enable next-generation applications. For instance, a National Institutes of Health project at the intersection of networked virtual reality, artificial intelligence and low-vision rehabilitation aims to deliver novel, previously inaccessible and unaffordable health care services broadly and affordably. Other projects include the integration of virtual reality, real-time reinforcement learning and soft exoskeletons for future physical therapy, and the synergy of UAV-IoT and VR toward next-generation forest fire monitoring.
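To make the parallel-update idea concrete, here is a minimal Python sketch, not the group's actual algorithm: it assumes a small discrete MDP whose transition and reward structure (P, R) is known, and uses that structure to refresh value estimates for every state-action pair at once, in contrast to classic Q-learning, which updates only the single state it just visited. The toy MDP and all names are illustrative assumptions.

```python
import numpy as np

# Illustrative toy MDP (an assumption, not a real application model).
n_states, n_actions, gamma = 50, 4, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))             # R[s, a]

def structure_aware_sweep(Q):
    """One synchronous backup over the entire state-action space.

    Because the system structure (P, R) is known, every Q[s, a] can be
    updated in parallel with a single vectorized computation, rather
    than one visited state at a time.
    """
    V = Q.max(axis=1)           # greedy state values, shape (n_states,)
    return R + gamma * P @ V    # batched matmul -> (n_states, n_actions)

Q = np.zeros((n_states, n_actions))
for sweep in range(500):
    Q_new = structure_aware_sweep(Q)
    done = np.abs(Q_new - Q).max() < 1e-6  # all states converged
    Q = Q_new
    if done:
        break
print(f"converged after {sweep + 1} sweeps")
```

The trade-off the profile mentions is visible here: each sweep updates all states at once and convergence takes few sweeps, but a sweep costs O(|S|^2 |A|), which is why such methods must balance learning acceleration against computing complexity.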
Salam Daher
Research Areas: Augmented, virtual and mixed reality; physical-virtual; 3D graphics; virtual humans; synthetic reality; modeling, simulation and training; distance simulation; healthcare applications and virtual patients

Mixed Reality Simulations to Improve Training
Our research focuses on creating simulations using computer graphics, multimedia and mixed reality to improve training in different domains, including health care simulation. We are especially interested in research involving virtual humans and multisensory experiences. We developed a new class of augmented reality patient simulators called physical-virtual patients that allow health care educators to interact with a life-size simulated patient providing real-time physical tactile cues such as temperature and pulse; auditory cues such as speech and heart sounds; rich dynamic visual cues such as facial expressions indicating pain or emotions; and changes in appearance such as skin color and wounds.

Training Caregivers of Virtual Geriatric Patients
We are developing a new generation of Virtual Geriatric Patients (VGPs). The VGPs are realistic, embodied, conversational virtual humans who are aware of their surroundings. They are displayed in mixed reality training scenarios aimed at improving caregivers' perceptions, attitudes, communication and care toward older adults. This research is supported by a grant from the National Science Foundation Future of Work at the Human-Technology Frontier program.

3D Graphics for Wound Visualization, Measurements and Tracking
Our research focuses on visualizing wounds in 3D for accurate measurements, reduced measurement variability and improved tracking of patients' progress (a toy example of mesh-based wound measurement follows this profile). In the clinical setting, this translational research can reduce errors, improve healing estimates and improve patient outcomes. In the training setting, the technology can improve health care trainees' skills in wound assessment, especially when combined with mixed reality. This research is partially supported by the New Jersey Health Foundation.

Interactive Remote Simulation for Healthcare Training
During the pandemic, health care educators rushed to share pre-existing or self-recorded videos with their students as a makeshift "simulation," yet content must be interactive to qualify as simulation. Our team developed software called Anywhere Simulation (AwSIM) that allows health care educators to add interactivity to their existing content (e.g., videos, images, text), create new health care scenarios and run the simulations remotely with their students. The software is content-independent and easy to use. We ran multiple studies with nursing students using AwSIM and found that adding interactivity promotes teamwork, perception of authenticity and higher levels of thinking; the software also has a high technology acceptance rate among students. We are working on creating and evaluating an immersive standalone version that trainees can use on a flat screen or with a head-mounted display, with or without a facilitator.
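As a concrete illustration of 3D wound measurement, here is a minimal Python sketch; it is an assumption for illustration, not the group's pipeline. It takes a triangle mesh (as a real system might reconstruct from a 3D scan) plus a per-triangle wound label, and sums triangle areas to obtain wound surface area, a quantity that 2D photo-based methods distort with perspective.

```python
import numpy as np

def wound_area_cm2(vertices, faces, wound_mask):
    """Sum the areas of mesh triangles labeled as wound tissue.

    vertices:   (V, 3) float array of coordinates, in centimeters
    faces:      (F, 3) int array of vertex indices per triangle
    wound_mask: (F,) bool array, True where a triangle lies on the wound
    (All inputs are hypothetical; a real system would produce them from
    a 3D scan plus a segmentation step.)
    """
    tri = vertices[faces[wound_mask]]   # (W, 3, 3) wound triangles
    e1 = tri[:, 1] - tri[:, 0]          # first edge of each triangle
    e2 = tri[:, 2] - tri[:, 0]          # second edge of each triangle
    # Triangle area = half the magnitude of the edge cross product.
    return 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1).sum()

# Usage: a single 3-4-5 right triangle marked as wound -> area 6.0 cm^2.
v = np.array([[0.0, 0.0, 0.0], [3.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
f = np.array([[0, 1, 2]])
print(wound_area_cm2(v, f, np.array([True])))
```

Tracking a patient's progress then amounts to comparing this measurement across scans over time, which sidesteps the inter-rater variability of manual ruler-based estimates.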
Margarita Vinnikov
Research Areas: Immersive and collaborative cross reality, navigation, gaze/body tracking

Immersive Cross-Reality Applications
We develop virtual, augmented and mixed reality applications and serious games, specializing in eye and body tracking as well as multi-sensory augmentations. Specific topics include the design, development and evaluation of novel XR and cross-modal (visual, audio and/or haptic) user experiences through simulations such as walking in European cities and driving in New Jersey and New York. We also build augmented-reality collaborative applications.

Multi-User Gaming and Collaborative Platforms
Virtual collaboration has received a lot of attention recently, as many people were forced to work away from their usual workspaces by the COVID-19 pandemic. Providing a realistic environment where people can reliably and efficiently collaborate on tangible objects and models will help many businesses; it is primarily relevant to city planners, the military and law enforcement, and educational settings. We are also interested in a multi-device calibration platform spanning various augmented and mixed reality devices such as mobile phones, HoloLens and Magic Leap.

Visualization of Large Datasets in Virtual and Augmented Reality
Large datasets such as ontology trees or visibility graphs pose many challenges when loaded into virtual or augmented reality devices, for example, the continuous loading of data onto a mobile device (one possible strategy is sketched below). Similarly, there are no established methods for the most user-friendly way to visualize large data clouds. Hence, we combine various computer science algorithms with user studies to develop the most efficient ways to visualize large datasets.
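One common way to handle continuous loading on memory-limited XR devices is distance-based chunk streaming. The Python sketch below is an assumption-laden illustration, not the lab's system: the dataset is pre-split into spatial chunks, and each frame only the chunks nearest the viewer stay resident, within a fixed budget. The class and parameter names are hypothetical.

```python
import numpy as np

class ChunkStreamer:
    """Keep only the nearest data chunks loaded, under a fixed budget.

    (Hypothetical illustration: chunk_centers and budget are assumptions.)
    """

    def __init__(self, chunk_centers, budget=32):
        self.centers = np.asarray(chunk_centers)  # (N, 3) chunk centroids
        self.budget = budget                      # max chunks resident at once
        self.resident = set()                     # ids of chunks currently loaded

    def update(self, camera_pos):
        """Return (to_load, to_unload) chunk ids for this frame."""
        dists = np.linalg.norm(self.centers - camera_pos, axis=1)
        wanted = set(np.argsort(dists)[: self.budget].tolist())
        to_load = wanted - self.resident      # newly needed chunks to fetch
        to_unload = self.resident - wanted    # distant chunks to evict
        self.resident = wanted
        return to_load, to_unload

# Usage: 1,000 chunks in a 100 m cube; query from the cube's center.
rng = np.random.default_rng(1)
streamer = ChunkStreamer(rng.uniform(0.0, 100.0, size=(1000, 3)), budget=32)
to_load, to_unload = streamer.update(np.array([50.0, 50.0, 50.0]))
print(len(to_load), "chunks to load,", len(to_unload), "to unload")
```

The fixed budget bounds device memory regardless of dataset size; production systems typically add per-chunk levels of detail and hysteresis so chunks near the budget boundary do not thrash between loading and unloading.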