Join Kasper Tiri as he opens the floor for a discussion with developers about interoperability. Is it essential for the future of immersive technology? This is an opportunity to chat with others in the industry who are passionate about this topic.
Today, virtual reality lets us immerse ourselves in vibrant digital spaces with a surprising sense of social presence: the feeling that you're right there with someone else, no matter where in the world they happen to be. But technology that opens up new possibilities can also be used to cause harm, especially when your virtual identity is disconnected from your real identity. In VR, we sometimes see abusive behaviors such as harassment, bullying, grooming, and racial or sexual discrimination.
How do we prevent bad behaviors and engender pro-social behaviors in VR? How do we encourage people to trust and respect each other when they remain anonymous? In this talk, I'll walk through how we can build trust systems in a virtual environment, how the social norms of virtual social spaces compare with those of real life, and what principles we should follow to ensure we are building for the inclusivity and safety of all types of people.
The AR Cloud will become the most important communication medium of our time, providing connections and context between people, places, and connected machines. We will discuss available SDKs and cover the essential elements, components, and tools required to develop public and private spatial computing applications. We'll walk through real-world examples created with the YOUAR Cloud SDK to illustrate how to create, use, and leverage AR Cloud infrastructure at your facility today.
Augmented reality applications hold immense potential to bring people outdoors and let them share experiences with each other. However, to create seamless mixed reality experiences, AR apps require our devices to perceive and comprehend our environment. In this presentation, I focus on technical approaches that give AR apps the ability to localize and identify other devices, which in turn enables direct multiplayer AR experiences in which players can interact with each other. Vizar Technologies proposes an image processing approach, in which light pulses are emitted in specific light signatures, to provide direct multiplayer AR on smartphones. Our approach is compared with other common approaches, including GNSS combined with gravimetric and geomagnetic sensors, facial recognition, and add-on devices. As an outlook, a few examples of possible future multiplayer AR experiences are highlighted.
Leaders from the Korean XR ecosystem share the latest news and information on innovations of Korean developers.
The historical advancement of technology has sometimes distanced and even dehumanized how we work, live, and play. Fortunately, the 5th Industrial Revolution heralds a journey that could yield more balanced and empowered human-machine interactions.
This powerhouse panel will explore the ways we are turning technology into an ally, showing how machines and humans can - and must - collaborate for a sustainable future. We will debate how immersive tech has the potential to profoundly impact sustainability agendas across business, politics, academia, and communities.
The opportunity for creators, educators, and policymakers to leverage AR, VR, and other immersive experiences for unprecedented solutions has never been so exciting. Whether it's around climate change, DE&I, social equity, or emotional and psychological wellbeing, there is much to solve with the power of innovation at our fingertips. Join us to ponder, consider, delight in, and celebrate the possibilities.
Quietly, WebXR has been enabling a fast and accessible new class of XR development. Moreover, a platform that allows self-distribution and control over content will become even more important as device fragmentation increases. Learn how the development and deployment of XR web apps makes the web the ultimate development platform.
For the past several months, DALL·E, a web-based image creation tool from OpenAI, and MidJourney (both using GPT-3) have emerged as a topic of discussion for creatives. Both DALL·E and MidJourney generate 2D artifacts. In her current research, Jasmine Roberts has extended GPT-3 to create and modify 3D assets for use in augmented and virtual reality.
This is an effort to spark discussion of how we might set a course for better XR + AI model interactions. This talk is intended for those considering, or in the process of, implementing 3D generation systems. The principles given in this talk should lead to the development of more human-friendly tools and processes and mitigate fears surrounding the displacement of creators. The audience will come away with useful techniques for incorporating language models into their XR experiences.
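One recurring pattern when wiring a language model into an XR experience is validating the model's output before it touches the scene graph. The sketch below assumes the model was prompted to return a JSON list of asset placements; the schema, asset whitelist, and function names are illustrative assumptions, not techniques from the talk itself.

```python
# Hypothetical sketch: sanitize a language model's JSON reply into safe
# scene-placement commands for an XR app. Schema and names are made up
# for illustration.
import json

ALLOWED_ASSETS = {"tree", "rock", "lamp"}  # whitelist keeps generation safe


def parse_scene(model_output: str) -> list[dict]:
    """Validate model output before letting it modify the scene graph."""
    placements = json.loads(model_output)
    scene = []
    for p in placements:
        # Drop anything with an unknown asset name or a malformed position.
        if p.get("asset") in ALLOWED_ASSETS and len(p.get("position", [])) == 3:
            scene.append({"asset": p["asset"],
                          "position": [float(x) for x in p["position"]]})
    return scene


# Example reply a model might produce; the "ufo" entry is filtered out.
reply = ('[{"asset": "tree", "position": [0, 0, -2]},'
         ' {"asset": "ufo", "position": [1, 2, 3]}]')
```

Treating the model as an untrusted input source, rather than piping its output straight into the renderer, is what keeps such tools human-friendly and predictable.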
Table stakes for "The Metaverse" are to allow each user to be immersed in a virtual world that evolves with them. For too long this has been considered impractical because of the budget implications intrinsic to building such worlds manually. New advances in AI are shifting the calculus in a major way. In particular, new techniques make it possible to drastically scale up asset creation, improve runtime performance, and, most importantly, create a living world.
Doug Engelbart expressed the idea of augmenting human intellect, not just for individuals, but as a collaborative experience. We have been working on Croquet with the goal of creating a foundation that would enable such an augmentation of our ability to create, explore, understand, and solve problems collaboratively. We define the augmented conversation as follows:
1 - A discussion within a group of users that is extraordinarily enhanced with the kinds of tools and capabilities that are only available with a computer.
2 - A computer AI is a full participant in this conversation. It allows us to jointly discuss and explore complex systems, data sets, and simulations as naturally and easily as we converse today.
3 - There is a guarantee of shared truth. The simulation I see must be exactly the same as the one you see. Anything I do to affect the shared simulation must also be accurately shared. If this is not the case, then there is no guarantee that what you see and understand is the same as what I see and understand, so you are unable to trust that communication channel.
In the very near future, the virtual objects that populate our world must be as responsive and alive as physical objects. There won't be a physical reality and a virtual or augmented reality. Soon, there will only be a seamless, multi-user reality.