What happens to our data and our privacy when AI-powered XR glasses become everyday wear? What does ‘identity’ mean when we can digitally customize or augment our physical appearance - or be augmented by others? And what happens when brain-computer interfaces introduce this perception-altering technology directly into our minds? A recent controversy over a “chubby filter” on TikTok sparked calls for a ban. But what if other people could apply the “chubby” filter to YOUR body, live and in real time, as they passed you on the street? What if they could filter or re-skin your identity in the privacy of their own XR glasses, without your knowledge or permission?
The world’s most powerful companies are investing billions in physical-digital infrastructures built on scans of the real world. Companies like Niantic are already training world-mapping AIs on datasets contributed by millions of users. And soon, all-day-wear reality-augmenting glasses will enable wealthy consumers to view the world through AI-powered interfaces, filters and reskins. In the words of Niantic’s CEO, consumers might soon be able to “theme the world like it’s Nintendo everywhere.” But unlike the open protocols underpinning the Internet, the infrastructure for global-scale spatial computing will likely be proprietary and profit-driven - and it is already being built.
With brain-computer interfaces on the horizon, these issues may soon affect not just our concept of identity - but our concept of autonomy. Bringing together perspectives from design, industry, policy and law from across Europe, our panel will debate the commercial and consumer ethics of AI, XR and BCI, and we’ll ask: what happens when reality itself becomes opt-in?