The focus on increasing brand loyalty and creating value from long-term relationships is leading many of the world’s most innovative brands straight to humanized AI. Digital People possess the emotional intelligence to deliver a more empathetic customer service experience, generate enriching digital content, and unlock elements of scale — all through a visual interface that reacts appropriately and interacts collaboratively. Check out this session to learn how Digital People can surface data on your customers’ needs, preferences, and emotional states, insight that becomes a competitive advantage and a source of value for your business.
Humans have been communicating and working together since the beginning of time. We smile, gesture, and use our eyes, hands, and posture to communicate meaning. We come to life when we meet face-to-face. But being together face-to-face has become even harder in today's global environment. How do you replicate that experience - not the exactness of it, but the essence of it - in a fully immersive setting? What are the pros and cons of the wide array of virtual interactions, including avatars, holograms, blended experiences, 3D content, AI, and digital twins? In this talk, the leaders behind Webex Hologram, Elizabeth Bieniek, Director of Engineering Innovation, and Cullen Jennings, CTO/Fellow, share their insights from building the industry's first holographic collaboration system, describe what it's truly like to interact with holographic colleagues, and offer their predictions on what the future in this space may hold for us all.
Digital identities will determine where, how, and to whom people appear in the metaverse. In these immersive, embodied spaces, avatars will add a new layer of representation and self-expression to our understanding of digital identity. As we work together to build the future metaverse, it will be critical to define and understand people’s expectations around representation, identity, and expression across both 2D and immersive surfaces. This session will explore our approach to tackling hard questions around identity, representation, and self-expression as we work to build the metaverse responsibly.
Join us for some of the latest behind-the-scenes research from Google's Advanced Technology and Projects (ATAP) group. We introduce a new full-body interaction technique that uses miniaturized radar technology and advanced machine learning to detect and recognize spatial relations as they naturally occur around devices (https://www.youtube.com/watch?v=r-eh2K4HCzI). By continuously sensing spatial behaviors and movements, ambient computing devices can anticipate our intent and participate in our everyday lives in a more considerate and respectful way: they can reduce the need for direct interactions, or automate some actions in the background so that the user doesn’t have to attend to them at all, ultimately freeing us from unnecessary tasks.
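To make the idea of "sensing spatial behaviors to anticipate intent" concrete, here is a minimal illustrative sketch: classifying a short window of radar range readings into coarse spatial behaviors. All names, thresholds, and the rule-based logic are assumptions for illustration, not the actual ATAP/Soli implementation, which uses learned models.

```python
# Hypothetical sketch: inferring approach intent from a stream of radar
# range readings. Thresholds and labels are illustrative assumptions.

def classify_spatial_behavior(ranges_m, dt_s=0.1,
                              near_m=0.5, approach_mps=0.2):
    """Label a window of radar range samples (meters) as 'near',
    'approaching', 'leaving', or 'idle' using distance and radial velocity."""
    if len(ranges_m) < 2:
        return "idle"
    # Average radial velocity over the window (negative = moving closer).
    velocity = (ranges_m[-1] - ranges_m[0]) / (dt_s * (len(ranges_m) - 1))
    if ranges_m[-1] < near_m:
        return "near"
    if velocity < -approach_mps:
        return "approaching"
    if velocity > approach_mps:
        return "leaving"
    return "idle"

# A device could, e.g., wake its display on 'approaching' and dim on 'leaving'.
print(classify_spatial_behavior([2.0, 1.8, 1.6, 1.4]))  # approaching
```

In a real system the hand-written thresholds would be replaced by a model trained on labeled radar signatures, which is what lets the interaction scale beyond simple proximity.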
The excitement around Digital People, or Virtual Beings, has created a buzz around how people will interact in the future. For this excitement to become reality, advancements in CGI and AI-based animation must scale the technology to millions of users and use cases. Discover why so many global brands are leveraging this technology to better connect with customers, increase engagement and brand loyalty, gain a competitive advantage, and provide the most engaging experiences in the digital and hybrid worlds.
Most research featuring AR and brain-computer interface (BCI) systems does not take advantage of the opportunity to integrate the two streams of data. In this project, we propose a prototype that combines an existing AR headset, the HoloLens 2, with a BCI system, and we perform several tasks to validate the concept. In the first experiment, we reduced user distraction by surfacing information about the user's current attentional state, stress, and cognitive load. In the second prototype, we propose a system based on covert visuospatial attention (CVSA) – the process of focusing attention on different regions of the visual field without overt eye movements – so the experiment did not rely on any stimulus-driven responses. Fourteen participants tested both systems over the course of two days.
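One way an AR system might act on a BCI-estimated cognitive state, as in the first experiment described above, is to gate non-essential content when the user is heavily loaded. The sketch below is an assumed illustration of that idea; the function name, data shape, and threshold are hypothetical and not the study's actual system.

```python
# Illustrative sketch (assumed names/thresholds): defer low-priority AR
# notifications when a BCI-estimated cognitive-load score is high.

def filter_notifications(notifications, cognitive_load, load_threshold=0.7):
    """Return (shown, deferred). When estimated cognitive load exceeds
    the threshold, only high-priority items are shown immediately."""
    if cognitive_load <= load_threshold:
        return list(notifications), []
    shown = [n for n in notifications if n.get("priority") == "high"]
    deferred = [n for n in notifications if n.get("priority") != "high"]
    return shown, deferred

msgs = [{"text": "Meeting in 5 min", "priority": "high"},
        {"text": "App update available", "priority": "low"}]
shown, deferred = filter_notifications(msgs, cognitive_load=0.9)
print([m["text"] for m in shown])  # ['Meeting in 5 min']
```

Deferred items would be re-queued once the load estimate drops, which is the sense in which integrating the BCI data stream lets the AR interface reduce distraction rather than add to it.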