The Extraordinary Story of Imaginary Space
Head of Immersive, Pablo Colapinto on the next era of spatial storytelling
“A whole different world, interwoven with my world.” – Billie Eilish
As if inviting us to step into a book, world-making systems built on Artificial Intelligence and Mixed Reality tools are asking variations of the same hard question – if you could see anything, what would it be? Slowly at first – then with increasing urgency – this question elicits an exciting tempest of possibilities, and a responsibility to care about the outcome. In this article, I discuss this sea change in human-computer interaction, where the fantastic can become real and vice versa, and how it is a natural consequence of spatial thinking.
Looking around, you see two rather extraordinary things. First, a blooming proliferation of imagined characters, clothing, objects, and spaces, each offering new physical, emotional, and social experiences that forecast a hypothetical mode of Mixed Reality, where the real and virtual merge in your psyche to create something that is both and neither. We suspend our disbelief that the space is real, so that we can step into the story.
Noodle performing in Times Square as part of Gorillaz Presents.
Gorillaz perform for their adoring fans in the real world.
Murdoc performing in Times Square as part of Gorillaz Presents.
A constellation of technologies supports this suspension, like the new generation of Visual Positioning Systems, which act as a sort of hyper-GPS – Google’s Geospatial API, Apple’s ARGeoAnchors, Snap’s Custom Landmarkers, and Niantic’s Lightship ARDK all use computer vision to make it easier to place a character anywhere in the real world for others to see through their phone’s camera. With additional machine learning models, knowing where your camera is and where it is pointing is supplemented by understanding what you are looking at – trees, doorways, windows, and tables become recognizable, providing a logical layer for introducing choreography, stagecraft, and drama in real time.
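To make the hyper-GPS idea concrete, here is a minimal sketch of the geometry beneath geospatial anchoring: converting a target latitude/longitude into a local east-north-up offset from the viewer, so a character can be staged a known number of meters away. The coordinates and the spherical-earth shortcut are illustrative assumptions – real VPS stacks refine raw GPS with computer vision against a 3D map of the world.

```python
import math

# Hedged sketch of the geometry beneath "hyper-GPS": convert a target
# latitude/longitude into a local east-north (ENU) offset, in metres,
# from the viewer. Real VPS stacks (Geospatial API, ARGeoAnchors) refine
# raw GPS with computer vision; this uses a spherical-earth approximation
# that is adequate at city scale. Coordinates below are illustrative.

EARTH_RADIUS_M = 6_371_000  # mean Earth radius

def enu_offset(viewer_lat, viewer_lon, target_lat, target_lon):
    """Approximate (east, north) metres from viewer to target."""
    lat0 = math.radians(viewer_lat)
    east = EARTH_RADIUS_M * math.radians(target_lon - viewer_lon) * math.cos(lat0)
    north = EARTH_RADIUS_M * math.radians(target_lat - viewer_lat)
    return east, north

# Stage a character roughly 100 m north of a viewer near Times Square.
east, north = enu_offset(40.7580, -73.9855, 40.7589, -73.9855)
print(round(north))  # about 100 (metres)
```

Once an offset like this is known, the AR runtime’s job is to keep the anchor visually locked in place as the camera moves – which is where the computer-vision half of a VPS earns its keep.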
“The Fall”, directed by Mischa Rozema.
Meanwhile, new wearable devices (Meta’s Quest Pro, ByteDance’s Pico, Lenovo’s ThinkReality glasses powered by Qualcomm’s Snapdragon chipset, the Magic Leap 2, the elusive Apple eyewear of 2023) promise to go beyond the phone to aid in both the realization of fictional spaces and the fictionalization of real spaces. Your most intimate companion is currently your phone, but will it always be called that?
That’s the first extraordinary thing you see – new contexts for your stories, new characters in them, and new communities around them. I call these components the Three C’s. Together, they enable user experiences that are tactile, interactive, and participatory.
The Three C’s
The second extraordinary thing is the rapid vine-like growth of AI content generation as it unleashes its raw creative power, leaping from the mouth of spring with tools such as Midjourney, DALL-E, and Stable Diffusion, daring us to doubt that it is ready to assist in imagining these fictions. All day and night, “Prompt Engineers” on Discord channels feed Midjourney descriptive text and watch it spit out images that are simultaneously original and wholly derivative; alternate realities emerge constantly on social feeds as if from the foaming ocean in Stanislaw Lem’s Solaris. From a dream-like trance, we can cast spells that resolve blurry data into crisp AI-generated 3D objects, audio, character movements, videos, code . . . Wait – has everything already changed?
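“Resolving blurry data into crisp” output is not just a metaphor: diffusion models literally start from noise and remove it step by step. Below is a toy sketch of that reverse process; the stand-in “model” simply nudges the sample toward a known target so the loop is runnable, where a real system like Stable Diffusion uses a trained neural network to predict the noise at each step.

```python
import numpy as np

# Toy sketch of the iterative denoising at the heart of diffusion models.
# A real system (Stable Diffusion, DALL-E) uses a trained neural network
# to predict the noise at each step; here a stand-in "model" nudges the
# sample toward a known target so the loop runs. Illustrative only.

rng = np.random.default_rng(0)
target = np.linspace(-1.0, 1.0, 8)   # stands in for a "crisp" final image
x = rng.standard_normal(8)           # start from pure noise

for t in range(50):                  # the reverse (denoising) process
    predicted_noise = x - target     # toy surrogate for the learned network
    x = x - 0.1 * predicted_noise    # remove a little noise each step
    # (real samplers also re-inject a scaled noise term at each step)

print(np.abs(x - target).max() < 0.1)  # True: the noise has resolved
```

Fifty small corrections turn static into structure – the same arc, at toy scale, as a prompt resolving into an image.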
In the studio, it is easy to see that the two extraordinary things – AI-generated content and the bundle of AR/VR/XR technologies that extend our experience – are headed straight for each other in dramatic crescendos of practical fiction. Every morning, a new algorithm for imagining worlds; every evening, a new device for entering them. The pace of imagination has picked up speed, as has the number of imaginers and the number of ways into the story. It is a breathtaking path towards boundless creativity, to be made with a whisper of words and a new pair of glasses.
“Bury It” – Chvrches, directed by Mighty Nice
These technologies are ushering in – or were themselves ushered in by – a new era of consumer-as-creator. As consumers, we can bring our own customized avatars into new spaces; we can collaborate and be co-present with others in fictional environments; we can enjoy immersive music experiences and sports with the richness of live data; we can discover new ways to spend time with others, whether real, imagined, or generative. Whether artisans or not, we are all invited to participate in building this new space. It is distributed creatively, diffused technically, and (some would argue) decentralized productively.
The expanded everywhere-theater for making and consuming entertainment is thrilling – and can be intimidating. Content providers, brands, creatives, and product managers are all seeking help from our studio in designing new immersive worlds, emotional interactions, and social participation.
The problems we solve in the studio are remarkable: What gestures should I make with my hands to control an imaginary object? How can we work on it together, remotely? How do we activate a new community? What might a fictional character be doing in your living room while you are in the kitchen? What does it mean to have a city-scale fictional experience? What happens when our two imaginary characters meet in the real world? How will I weave it into my daily routine? How does it start and when does it end?
These energizing design problems reveal a foundational change to what we consider to be UX. User experiences are increasingly spatial, which means they are shared with others and spill into each other. The fuzzy landscape of these multi-modal interactions is often simply called the “3D internet”. However, this pat characterization evokes a kind of Wreck-It Ralph sloppiness. It belies the fact that we have moved beyond stereoscopic visual effect and into a whole new milieu, where we now must face physical, emotional, and social questions at the edges of presence. These questions are hard, and best informed by engaging in new partnerships, new spaces, and new strategies, across a diverse collection of stakeholders and age groups. Some call the new UX the Metaverse, and that’s an okay name only because it is undefinable, encompassing everything that is not.
The Legend of Vox Machina: Grog and Scanlan hosted a live Q&A with 20,000 fans.
To ground our thinking, I call the work we do day to day in the immersive studio a spatial practice. In this whirlwind of colliding futures, I have found it helpful to remember that it is our developing understanding of space itself that is the cornerstone of these new experiences. Real physical space – the ether – is finally being recognized as having true creative agency in our collective minds. It is forcing us to make the hard decision about what we want to see happen in it. Understanding the immediate space around you – from your wall to your window to the palm of your hand – is the bedrock of all this futuristic thinking. Everything is part of the story. Everything is a piece of the puzzle. Nothing can be taken for granted.
Put another way, a recalibration of human-computer interaction has awakened a latent space all around us, a neverending story of possibilities. Suddenly everything is important, every nook and cranny must be seen, every object itemized, recorded, remembered, correlated, and assigned a statistical likelihood of being a cat. On your table, holographic-like visuals might illuminate a mundane task, allow you to enjoy a game, or connect you to a loved one. The air is thick with portent. The future is fuzzy – diffuse – where anything can happen. And likely anything will happen with the right pair of glasses and the right sequence of words. If it feels like you are living in science fiction, it’s because you are writing it.
AI generated image: space, woodcut, psychedelic colors via Midjourney.
AI generated image: My evening hike, re-imagined by Midjourney. Which path should we take?
AI generated image: Durer etching of a giant robot bird.
We are living in an aha moment – a dramatic twist in the plot where suddenly everything makes sense – or suddenly none of it does. Indeed, the activation of space itself as a player in our fictions is more of a reckoning than a revolution. Intuitively, we’ve been speaking the language of space all along. Its components and configurations (points, corners, creases, folds, convolutions, inversions, transversions, knots, dimensions) drive our very ability to organize our thoughts. As rational beings, we measure our reality through relationships – between objects, between people – and it stands to reason that space itself is a foundational tool with which we think. The space between us is not empty. There is no such thing as empty space.
Title sequence to the Netflix film The House, directed by Nicolas Ménard and Manshen Lo.
The geometric principles of ‘rationality’ lie in our measured thoughts, and to an almost absurd degree, are embedded throughout the English language. We ‘form’ thoughts, we make ‘points’, our ideas take ‘shape’, truth has an ‘extent’. Tricky concepts are ‘complicated’ – that is, literally ‘folded’ together, or tangled. To compute is to ‘put together’ (e.g. side-by-side) – and the entire world of mathematics, humanity’s flintstone, is bounded by geometry; as the historian of science Arthur Koestler notes, we still call numbers ‘figures’, an acknowledgment that dates back to Pythagoras. ‘Structures’ are spatial layouts we use for buildings, organizations, grammars, concepts, poems, and stories. Life itself is bundled up with space. We need space to think.
Space is the computer. It can help us “reason upon things” – and it’s been a long time coming. In practical terms, giving space its due means creating with techniques in our table of spatial storytelling elements – like VPS and SLAM, semantic segmentation, motion capture, and live data – as well as latent spaces cultivated by AI diffusion models. It means thinking like an architect to design habitats, and like a choreographer to design encounters. It means developing a spatial practice, and sharing it with others.
With AI and XR, I hope we will learn how to better use the instrument of space itself, so that we might make these improvements to our capacity for reason, not just to add entertainment and utilities to our day-to-day lives, but to actually expand our ability to think and feel and create and share. Growing these skills is a big deal – and it is a tall task to do so responsibly – and the community of creators at our studio tackles it every day with the understanding that we must do it together.
“Create the world you want to see” is Snap Spectacles’ byline. Whenever I see that lovely invitation to augment reality, I am instantly reminded of the half-promise / half-warning text on the back of the medallion from The Neverending Story – it says “Do what you wish”. In that story, imagination is under threat from The Nothing – a non-space – but imagination is not the same as desire. Each wish granted by the medallion costs a real-world memory.
In our own story of space, as new worlds merge with the real one, we must remain rational creators. A digital biogenesis is afoot, with virtual worlds being born from the real one while the real one is responding to it in turn. These worlds are connected now more than ever before – Omne vivum ex vivo (all life comes from life), including virtual life. As the spaces merge, ultimately, it is your space we are talking about here, wherever you are, and the space you share with others. I encourage you to reach into it and start to imagine what it could become. Do what you wish.