With the emergence of wearable tech and augmented reality, historically disconnected frontline workers can now be woven into the digital fabric of the business and their activities captured in real time. Given the inherently noisy signal that characterizes human activity, Artificial Intelligence (AI) offers a way to convert this digital clutter into insights into the largest areas of opportunity in productivity, quality, training, and operations. The outcomes will surprise you.
The AI/AR industry has attracted unprecedented attention from academia and industry, drawing enormous investments of talent and resources. However, academic achievements do not translate directly into products, which must be adjusted and optimized in technology, engineering, product design, and more according to specific application scenarios. This talk will share some of the difficulties, misconceptions, and lessons learned in commercializing AR, based on HiScene’s practice.
MAXST, a persistent challenger, proudly specializes in Augmented Reality, working with prominent Korean enterprises such as Hyundai, Samsung Display, Samsung Electronics, and KT. MAXST’s cloud-based AR platform pursues innovation in AR development tools, AR production costs, and profitability. We also believe Visual SLAM is a critical foundation for the future of AR. We will share our stories and insights into the immersive world of today and tomorrow.
XR has the potential to become one of the world’s most disruptive computing platforms in the coming decade. Over the past year, we’ve seen significant strides in technology, and with the introduction of new XR form factors such as standalone headsets, tethered viewers, and dual-mode HMDs, there has been an acceleration of XR opportunities, use cases, and adoption. In this keynote presentation, Hugo Swart, Head of XR at Qualcomm Technologies, Inc., will speak to cutting-edge technological developments and how Qualcomm has met previous challenges, with a forward outlook on how strong ecosystem collaboration and the proliferation of 5G and AI will fast-track the industry timeline to a world of ubiquitous XR.
LUNCH -- Food and beverages are available for purchase at the following food stands located within the Santa Clara Convention Center: Pete’s Coffee Cart, Great America Lobby Food Court, Concession Stand C (located inside the exhibit hall) and Concession Stand A (located inside the exhibit hall).
Circuit Stream AR developer Nakisa Donnelly shares her development experience getting started with Magic Leap and also building more complex applications. She'll dive into how Unity works with the ML1 and explain how to implement core features like haptic feedback, spatial audio, and gestures.
The Reality Virtually Hackathon at the MIT Media Lab is the largest spatial computing hackathon in the world, spanning five days of learning, ideation, designing, and building. Extending the definition of a hackathon to prototyping at scale, Reality Virtually is designed to crowdsource new ideas and innovation, apply XR to new fields, and impact participants’ lives through new jobs and the creation of new companies.
Submissions Close April 1st
Recently we’ve been hearing a lot of buzz around developments, prospects, and also failures in AR and MR. The real questions that need to be addressed are how these successes are measured, what the reasons for failure are, and who is taking into account what the user really wants and needs. So, what’s next? A new chapter in human-machine interaction. We are moving from a nearly imperceptible interface to an interface you won’t even be conscious of. We are going beyond interface interaction. The big question with interfaces is this: you are offered nearly unlimited data to augment or mix with your reality, but how do you let the machine know which data you want? When you are offered choices, how do you select one? From the early joystick, we’ve come to state-of-the-art data selection such as the blinking eye. But what’s the next method of data selection? How will we make choices in the future? We will give you the answers at AWE 2019.
It is commonly accepted that picture quality and visual fidelity are crucial factors for a truly immersive VR/AR experience. While increasing the pixel density is technologically feasible, the optical performance of VR/AR HMD lenses remains a problem due to lens design constraints and the laws of physics. Besides the requirement that the lenses be thin and lightweight, an HMD optical system includes a moving element, the eye pupil, which makes achieving high picture quality even more challenging, if not irresolvable. The proposed solution is a computational lens correction that achieves higher resolution, suppresses chromatic aberrations, and enables a wider field of view, while adding no extra size or weight to the optical system. By adapting to changes in the optical flow caused by eye pupil movement, this lens correction also maintains high quality at any gaze direction, allowing the eye pupil to move off the optical axis of the lens. Objective quality measurements show more than a twofold increase in resolution and a tenfold reduction in chromatic aberrations.
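The core idea behind computational lens correction can be sketched as a per-channel inverse warp: because the lens bends each wavelength slightly differently (chromatic aberration), each color channel is pre-distorted with the inverse of that channel's distortion so the lens itself undoes the warp. The radial distortion model and coefficients below are hypothetical illustrations, not the presenter's actual algorithm:

```python
import numpy as np

# Hypothetical per-channel radial distortion coefficients for a lens.
# Chromatic aberration: each wavelength is distorted by a different amount.
K = {"r": 0.22, "g": 0.20, "b": 0.18}

def lens_distort(xy, k):
    """Simple radial distortion model: p' = p * (1 + k * r^2)."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1.0 + k * r2)

def predistort(xy, k, iters=10):
    """Numerically invert the distortion via fixed-point iteration,
    so that lens_distort(predistort(p, k), k) ~= p."""
    out = xy.copy()
    for _ in range(iters):
        r2 = np.sum(out**2, axis=-1, keepdims=True)
        out = xy / (1.0 + k * r2)
    return out

# Render each color channel through its own inverse warp; after passing
# through the lens, all three channels land back on the same pixel.
p = np.array([[0.5, 0.3]])
for ch, k in K.items():
    corrected = lens_distort(predistort(p, k), k)
    assert np.allclose(corrected, p, atol=1e-6)
```

A production system would also re-fit the warp as the pupil moves off-axis, as the talk describes, since the effective distortion changes with gaze direction.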
As the speed of technology continues to accelerate automation in the manufacturing world, the inevitable question of whether the “human touch” will become obsolete is top of mind for workers. The fear that robots and smart technologies will take everyone's jobs is prevalent, but not necessarily true. AR has the power to be a job creator, not a job eliminator. Its ability to make anyone an instant expert can in fact increase job security by quickly helping workers become more proficient in tasks. Learn about real-world use cases where AR is making people better, and safer, at their jobs, and explore why enterprises that create a workplace that’s augmented, not automated, will be the leaders of tomorrow.
Storytelling strategies vary across platforms, media, and genres. As technology changes how we define storytelling, we’ve found ourselves as test subjects within different experiences. Are today’s game consoles tomorrow's set-top boxes? Is the future of narrative a real-time VR experience or a branching narrative? Or is leaning back making a comeback? Dive into this interactive session that questions where the line is for the future of storytelling. Presented by The Television Academy’s Interactive Media Peer Group.
An overview of the current state of hand tracking and gesture recognition in the AR/VR market: the importance, challenges, tradeoffs, hardware and software requirements, use cases, trends, and a vision of the future.
Learn how Bosch is using Augmented Reality to facilitate technical training.
The partnership between XpertEye and Proceedix offers a best-in-class solution to empower smart operators and technicians. The combination leverages both mobile technology and smart glasses for remote assistance as well as instruction and inspection execution.
A discussion of the current AR/MR Smart Glasses landscape, their major use cases in market now and what the future looks like for AR/MR Smart Glasses for consumer & enterprise deployment.
Artificial intelligence is increasingly prevalent in every aspect of our lives – and its impact on the entertainment industry is taking shape. As the video space becomes increasingly saturated and engagement rates continue to drop, AI-powered avatars are becoming the “what comes next” in mobile entertainment. Armando will speak about this new form of interactive content: how AI avatars are becoming increasingly lifelike, how emotional insights gleaned from AI avatars can change interactions with computers as we know them, and how the intersection of these innovations offers creators an opportunity to deploy AI avatars across the entertainment industry.
Thinking about launching your own enterprise AR strategy? Worried that it all sounds too technical? In just 15 minutes we can show you how easy it is to author your own AR application for industrial use cases with REFLEKT ONE. It will all be done in 3 simple steps: import, edit and publish. We will cover all the basics from importing your existing 3D and XML data to placing Points of Interest (POIs) and creating your first step-by-step procedure with animations. What’s more, we will even push it straight to a device so that it can be used immediately! No programming skills needed and beginners welcome.
Most software developers today have “mobile-first” tattooed somewhere on their bodies because we humans currently explore our world primarily through portable devices. While the internet has caught up with this behavior, the physical world has not. Street signs, airports, and TVs could all do a better job delivering mobile and digital moments. Geenee helps mobile devices see the world around them through lightweight and robust image recognition that triggers contextual content at the moment of relevancy, from ticket sales and film trailers to augmented reality stories and storefronts. Geenee provides rich mobile results through a scalable, web-based platform with no downloaded app required. We’re working to make accessing contextual content directly from the world around you easy, rewarding, and meaningful.