As technologies like artificial intelligence (AI), augmented and virtual reality (AR/VR), big data, 5G, and the internet of things (IoT) advance over the next generation, they will reinforce and spur one another. One plausible scenario is a physical world so enhanced by personalized, AI-curated digital content (experienced with what we today call augmented reality) that the very notion of reality is called into question.
Immersion can change how we interact with content in fundamental ways. For example, a fully immersive AR environment of the future, achieved with a wide-field-of-view headset and filled with live content integrated into the built environment, would be designed to create in the user the illusion that everything being sensed is "real." The evolution toward this kind of environment raises a host of ethical questions, particularly about the AI that would underlie such an intelligent and compelling illusion.
When watching a movie, the viewer is physically separated from the illusion: the screen is framed, explicitly distinct from the viewer. The frame is a feature of traditional art forms; from the book to the painting to the skyscraper, each is bounded, physically defined, and explicitly separated from its audience.
But with digital eyewear, things change. Digital eyewear moves the source of digital mediation from the screen (roughly 20 feet away) to the human face, at zero distance, all but eliminating the frame. It raises inevitable questions about what constitutes "reality" when much of one's sensory input is superimposed on the physical world by AI. At that stage of the technology's evolution, one could still simply opt out by removing the eyewear. Although almost indistinguishable from the physical world, that near-future illusion would still be clinging precariously to the human face.
The next step would be moving the source of the digital illusion into the human body – a distance of less than zero – through contact lenses, implants, and ultimately direct communication. At that point, the frame is long gone. The digital source commandeers the senses, and it becomes very hard to argue that the digital content isn’t as “real” as a building on the corner – which, frankly, could be an illusion itself in such an environment. Enthusiasts will probably argue that our perception is already an electrochemical illusion, and implants merely enhance our natural selves. In any case, opting out would become impractical at best. This is the stage of the technology that will raise practical questions we have never had to address before.
At that point, what is real? How much agency do we humans lose when we make decisions based on AI-generated content and guidance that may or may not be working at cross-purposes to our needs? How would we even know? In the longer term, what happens to our desire to control our own lives when we get better outcomes by letting AI make those decisions for us? What if societal behavior became deliberately manipulated for the greater good, as interpreted by one entity? If efficiency and order were to supersede all other criteria as ideal social values, how could an AI-driven AR capability be dissuaded from manipulating individual behavior to those ends? What happens to individual choice? Is a person capable of being good without the option to be bad?
Perhaps the discussion surrounding the next generation of AI-informed AR could consider the possibility that the ethical questions change as the source of digital content gets closer to the human body and ultimately becomes a part of it. It’s not simply a matter of higher-fidelity visuals. First, the frame disappears, which raises new questions of illusion and identity. Then, the content seems to come from within the body, which diminishes the possibility of opting out and raises further questions about agency and free will.
This combination of next-generation technologies might well find its ultimate expression after we have collectively engaged questions of philosophy and brought them right into the worlds of software development and corporate strategy.
Movies, advertising, and broadcasting have always been influential, but they never produced the confusion between content and self that we are likely to see in the next generation. Having these conversations about ethics and thinking through the implications of new technologies early in their development (i.e., right now) could help guide this remarkable convergence in a way that benefits humanity, modeling a world that reflects our best impulses.
Jay Iorio is the Innovation Director for the IEEE Standards Association.
Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.