In this session, Benny will showcase how a 130-year-old brand has found ways to innovate its physical experience by integrating digital experiences.
In this session, leading experts will discuss the state of the volumetric capture ecosystem. What are the challenges of volumetric capture, from talent sourcing and scheduling to output and post-processing? Non-standard formats have been a major challenge, from playback to distributing content to viewing hardware. How could 5G help with distribution?
The new medium of immersive computing offers so many opportunities for expression, and at the same time puts creatives in a tricky spot when they have to consider all the parameters that are beyond their control. When every user has different ergonomic and environmental preferences, it's important for us to design and build dynamic experiences that fit into the audience's context. We will explore why proceduralism and machine learning play a crucial role in growing content and experiences in the new spatial medium. Following up with examples and case studies, we will review the insights and lessons learned on how to use technology to augment new forms of hardware input and creative direction.
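To make the idea of context-driven proceduralism concrete, here is a minimal sketch (a toy illustration, not the speakers' actual pipeline; the function name, reach measurements, and layout rules are all assumptions) of procedurally placing UI panels based on a user's posture and reach rather than hard-coded positions:

```python
import random

def layout_ui(seed, reach_radius_m, standing=True):
    """Toy procedural layout: panel distance and height scale with the
    user's measured reach and posture instead of fixed coordinates."""
    rng = random.Random(seed)           # seeded, so layouts are reproducible
    base_height = 1.5 if standing else 1.1  # hypothetical eye heights (meters)
    panels = []
    for i in range(3):
        angle = -30 + 30 * i            # spread panels across a 60-degree arc
        # keep panels within ~90% of reach, capped at a comfortable 0.6 m,
        # with slight jitter so repeated sessions feel less mechanical
        distance = min(0.9 * reach_radius_m, 0.6) + rng.uniform(-0.05, 0.05)
        panels.append({"angle_deg": angle,
                       "distance_m": round(distance, 3),
                       "height_m": base_height - 0.2})
    return panels

# A seated user with a hypothetical 0.7 m reach:
panels = layout_ui(seed=42, reach_radius_m=0.7, standing=False)
```

The same generator adapts automatically when the reach measurement or posture changes, which is the core of fitting content to each audience member's context.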
In theatre and film, the 'mise en scène' refers to the 'placing on stage', or the arrangement of design and scenic elements. Often this takes place within the frame of a stage or screen, but in XR, the literal frame of storytelling is removed. This session will go through the elements of mise en scène as seen in theatre, and best practices for applying them in XR.
Creating and deploying beautiful and meaningful experiences for mass-market audiences can be challenging. How can large organizations lead the change? How can startups and independent developers succeed in the consumer market with applications that go beyond just fun games and bring meaningful and impactful experiences to users? How can developers create new experiences with real benefits for large audiences, without the high budgets of AAA video game teams, and without having to build from scratch, deploy, and promote standalone applications in app stores? How can we build and expand the ecosystem?
Throughout history our tools have defined the shape of our world—and so it will be with the augmented world. If a tool makes something easy, creators will explore and innovate in that space. If a tool makes something difficult, they’ll take their first idea and move on. In this session we’ll break down the unique challenges and opportunities for the XR tools that will empower a diverse group of creators to shape the next generation of augmented worlds.
Knowing how people will react to and engage with your spatial computing experience is an important part of ensuring success. Researching with your target users is the best way to do this, but user research on new and emerging platforms can be challenging, particularly as we move "beyond screens" to smart and spatial computing interfaces. Existing user research methods may be less effective, meaning we need to experiment with new ways to understand how successful the design of an experience will be. Using examples, we will explore the hurdles of researching new and emerging technology and discuss techniques for gathering great user research insights. This talk is aimed at product managers, designers, and developers of spatial computing experiences who want to run their own user research.
A deep dive into Google's Immersive Arts team, a group of designers and technical artists creating original Augmented Reality and Virtual Reality experiences.
From black and white to color, and desktop to mobile, our electronic devices are visual mediums that have rapidly evolved and advanced over the past century. New device technologies continue to add clarity and mobility, enhancing our viewing experience to be more convenient and closer to our reality. Today, more than ever, these devices have become indispensable information vehicles that are ingrained into every aspect of our daily lives.
The average person is now so acclimated to having thousands of digital interactions each day that they’re hungry for a more immersive experience. They’re ready for the next medium that will seamlessly enhance their lives and bridge the gap between real life and technology (without a headset). They’re ready for light fields.
In this talk, David will discuss light fields as the next-generation medium. He will explain how this emerging technology provides users with a fully interactive, lifelike viewing experience by rendering images and videos with 3D depth and complex, realistic light effects such as sparkles, texture, and highlights. He will also discuss how this type of technology will change the game for businesses by empowering them to provide end users with more immersive and engaging experiences across industries including auto, retail, medical, and education.
3D face scanning is the first step in cloning a person and bringing them into a virtual environment with high fidelity. High-quality 3D face scanning has typically been done with specialized and complicated photogrammetry systems. The emergence of low-cost, high-quality mobile depth sensors, such as those in the Apple iPhone X, has helped fuel the advance of 3D scanning and put scanning capabilities in the hands of millions of users. The talk will discuss different types of mobile depth sensors, including structured light, ToF, and active stereo, and compare their pros and cons for close-range 3D face scanning. The talk will also present the unique challenges of 3D face scanning and discuss its applications in fields such as medical, dental, eyewear, entertainment, and 3D printing.
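As background for how a depth sensor's output becomes 3D geometry, here is a minimal sketch (assuming a standard pinhole camera model with made-up intrinsics; not tied to any specific sensor SDK) of back-projecting a depth map into a point cloud, the basic building block of scanning pipelines like those above:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters) into camera-space 3D points
    via the pinhole model: x = (u - cx) * z / fx, y = (v - cy) * z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no valid depth

# Toy 2x2 depth map with one missing (zero) sample; intrinsics are hypothetical.
depth = np.array([[1.0, 1.0],
                  [0.0, 2.0]])
pts = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
```

Structured-light, ToF, and active-stereo sensors differ in how they produce the depth map, but all of them feed a back-projection step of roughly this shape before fusion and meshing.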
Learn how Artie’s technology enables developers to create games in Unity that have next-gen interactive features, including speech recognition and computer vision.
The Augmented Conversation is a dialog between human and AI participants that enables them to imagine, describe and create live virtual objects and simulations that they can interactively and simultaneously explore and modify. The AI is a full participant in this exploration – listening and watching the human participants and responding instantly to reify the ideas and concepts that they discuss.
This isn’t an app. This is live collaboration – a foundation for how the next devices will mediate our engagement with others and with the world. Not only will these devices replace your smartphone, but they will also replace your PC. The new devices will and must become super workstations that live up to the promise of delivering the “bicycle for the mind”. They will enable you to work with ideas and concepts that are simply not possible today, and you'll be able to share those ideas at any time with anyone.
Since the mid-1990s, a significant scientific literature has evolved regarding the mental/physical health outcomes from the use of what we now refer to as Clinical Virtual Reality (VR). While the preponderance of clinical work with VR has focused on building immersive virtual worlds for treating anxiety disorders with exposure therapy, providing distracting immersive experiences for acute pain management, and supporting physical/cognitive rehabilitation with game-based interactive content, there are other emerging areas that have extended the impact of VR in healthcare. One such area involves the evolution of conversational virtual human (VH) agents. This has been driven by seminal research and development leading to the creation of highly interactive, artificially intelligent and natural language capable VHs that can engage real human users in a credible fashion. No longer at the level of a prop to add context or minimal faux interaction in a virtual world, VH representations can now be designed to perceive and act in a 3D virtual world, engage in face-to-face spoken dialogues with real users, and in some cases, can exhibit human-like emotional reactions. This presentation will provide a brief rationale and overview of research that has shown the benefits derived from the use of virtual humans in healthcare applications. Research will be detailed, reporting positive outcomes from studies using VHs in the role of virtual patients for training novice clinicians, as job interview/social skill trainers for persons on the autism spectrum, and as online healthcare support agents with university students and military Veterans. The computational capacity now exists to deliver similar VH interactions by way of mobile device technology.
This capability can support the “anywhere/anytime” availability of VH characters as agents for engaging users with healthcare information and could provide opportunities for improving access to care and emotional support for a wide range of wellness and clinical applications for a variety of populations. This work will be discussed along with a look into the future of this next major movement in Clinical VR.
For more information on this topic, please visit our website: http://medvr.ict.usc.edu/ and YouTube channel: https://www.youtube.com/user/AlbertSkipRizzo/videos?view=0&sort=dd&shelf_id=1