The last several decades have seen sweeping changes to the kinds of technology available to consumers. It’s easy to forget, as we fiddle with our smartphones and buy cheap drones for recreational flight, how quickly these shifts have taken place. For example: the television, one of the most standard-issue examples of consumer electronics, started to become popular about 70 years ago. Many people alive today have firsthand memories of the pre-TV era. Likewise, some technologies that seem like exotic science fiction concepts today will no doubt be considered mundane facts of life by our children — or even by us in the next few decades.
It’s a fairly safe and common assumption that the world will undergo such transformations in the years to come, but the specifics of those changes are anyone’s guess. The potential impacts of the following emergent technologies in particular — if they should achieve the kind of widespread realization that the television has enjoyed — would be life-changing. Many of these technologies were broadly considered the province of science fiction just a few decades ago, but it’s entirely conceivable that these former pipe dreams will become essential parts of our daily lives in the near future.
Let’s consider how various technologies could fundamentally alter our own lives (both for the good, and for the bad). Here are a few visions of the future that you may actually end up living in, each driven by a core technology that appears to be on the horizon.
Automated cars piloted by computers instead of humans already exist, but if this technology becomes ubiquitous — if self-driving cars replace most or all conventional models, for instance — they could dramatically change life for those with access to them. It’s easy to imagine some of the changes such a development could bring about: for example, advanced self-driving cars may eventually greatly reduce the incidence of car accidents, largely by removing user error from the equation. Given that some 1.3 million people die every year in auto accidents, that’s a big deal. (Computers don’t get distracted by yelling kids in the back seat or indulge in too many drinks at the company holiday party.)
The rise of self-driving cars could bring about less obvious changes too. In such a scenario, individual car ownership might become obsolete for most people, supplanted by the ability to summon one of any number of roving self-driving vehicles for a ride at a moment’s notice. This setup would not be dissimilar to how many people use services like Uber and Lyft today, but potentially could become cheaper and more efficient. Should individual ownership fall by the wayside, urban planning might change as well. Reduced demand for parking lots and garages for private vehicles could lead to their removal from city centers, where space is hard to come by.
The shape of cars themselves could change as well. Without the need to accommodate a human driver, cars could be designed to be much smaller and therefore more energy-efficient, with faster acceleration and greater maneuverability that computer navigation systems may ultimately be able to exploit more safely than human drivers could.
If autonomous cars end up the only vehicles on the road, the computerized efficiency and inter-car coordination they may possess could allow them to drive at dramatically higher speeds than cars do today (e.g. all accelerating and decelerating in unison). This would help them avoid traffic jams, and the combination of perks could make it far easier for people to commute to cities from far-flung exurbs, allowing commuters to live in areas where housing is typically far cheaper than in urban locales without sacrificing convenience. Hence, if migration away from city centers becomes more feasible, it may well become more appealing.
However, the rise of self-driving cars presents dangers, too. Full adoption of this technology would likely put legions of truckers, cabbies, and other professional drivers out of work, potentially driving mass unemployment and depressing wages — at least in the short term before people can retrain or find other work. Driverless cars might also be vulnerable to hackers: a petty meddler might sabotage or co-opt an individual car, but more serious hacking attacks against such a system might disable entire fleets. Worse yet, such attacks could deliberately cause large crashes, or otherwise weaponize the vehicles themselves.
Like self-driving cars, augmented reality technology is already here. You likely remember the explosive popularity of Pokémon Go, a smartphone game that involves capturing digital monsters in the real world. The decidedly less popular Google Glass device was also an early attempt at augmented reality. The premise of this technology involves superimposing digital media features, such as images or sounds, over the real world in a way that enhances your experience of it.
It’s easy to imagine the direction that augmented reality technology might go should it really take off; we’re already experiencing it to a degree, using smartphones to access extra information about things we encounter on the go. Now imagine if the wealth of information you might want to access on your phone were displayed automatically on a clear eyepiece that you wear habitually, or, more futuristically, on a virtual retinal display. For example, you might look at a used car for sale and instantly receive a blue-book estimate of its value. Or, you might learn who made a piece of art that catches your eye without having to lift a finger. And that guy at the party whose name you didn’t catch might receive a virtual name tag to spare you some awkwardness — and with information from your social media accounts, you could even get an instant analysis of whom you know in common.
And that’s just practical stuff. Imagine playing a board game that only you and the other players can see, or a sport with virtual obstacles projected on a real-life playing field. Of course, just like smartphones, augmented reality devices could impose substantial social costs on their users. Information overload is a very real risk if you have a screen in front of your face at all times. So is sheer distraction: smartphones already cause problems in this capacity, but imagine trying to have a conversation with someone who has information popping up in front of their face while you speak. The addictive capacity of “always on” eyepieces could also be troubling.
Today, many people are addicted to their smartphones, and the problem could worsen if information appeared in real time, without even requiring you to reach into your pocket. What’s more, constant access to hands-free information might also produce overreliance among the most habitual users. Since augmented reality devices may also be designed to record virtually anything they are pointed at, they may present privacy concerns. If some models of AR devices saved everything they recorded by default, then any conversation you had with someone wearing such a device could become a matter of permanent record.
The idea of virtual reality is a science fiction staple (think The Matrix), which can ironically make it harder to take seriously as a prospect in real life. But like the other technologies we’ve discussed so far, virtual reality devices already exist in some preliminary forms, such as the Oculus Rift VR headset and the VIVE. It is entirely possible that more thoroughly fleshed-out versions of virtual reality could become as commonplace as televisions are today within the next several decades.
In its most futuristic sci-fi presentations — again, think of The Matrix — virtual reality is totally indistinguishable from actual reality. The user experiences a completely realistic range of motion and set of sensory inputs. This degree of perfection isn’t likely to be realized any time soon, but popular interest in VR could conceivably drive the near-term development of visual, aural, and tactile feedback that approaches immersive realism. Eventually, the illusion could become convincing enough that your brain no longer automatically registers the VR experience as artificial.
Speculation about this technology tends to focus on its applications as a method of delivering media, such as VR video games where you physically play the role of the characters, or VR movies where you can move invisibly around the action as it occurs. But a truly robust and widely available VR cyberspace could have more profound implications for the way people live their lives.
VR could allow people to save vast amounts of time on travel: offices and even some retail stores could become unnecessary, and distant friends and relatives could be reached for visits almost instantly (provided it was a reasonable simulacrum of actual physical presence). Imagine having a nearly totally realistic coffee shop conversation (minus the taste of the coffee) with a close friend who lives on the other side of the world. Or better yet, why not fly together over some mountains, or visit virtual copies of the wonders of the world, or experience a simulation of Mars together?
Taken to its logical extreme, VR could supplant the need to go out in public at all. In conjunction with automated transport technologies such as driverless cars and drones, which could cheaply deliver anything you needed directly to you, what we think of as the public sphere could gradually shift into the virtual world. Such a transformation has the potential to completely reshape life for those who live through it. VR users could conceivably design their own online appearances (what age, gender, or animal do you want to look like?), break the normal laws of physics, teleport, enjoy immunity from physical harm, or acquire new — albeit digital — goods for free. In these ways, virtual reality has a number of advantages over actual reality.
Whatever exotic benefits an extensive virtual reality might confer, it also presents substantial risks. Such a dramatic transformation of the way humans typically interact could affect the content of those interactions in unpredictable ways. It could also impede people’s ability to interact normally when in the same room in reality. What’s more, some people can’t shake the feeling that even if VR social experiences felt mostly real, something valuable from “real” interactions would be lost.
The notion of identity may also be compromised in VR. How do you know for sure who you are talking to? As the internet has proven, anonymity can be dangerous. Predicating the majority of human interaction on such a system could be highly problematic if it were to fail or fall under the influence of those who sought to do harm. Additionally, companies that want to persuade us or change our behavior for their own benefit currently have a relatively limited ability to do so compared with the full immersive control that VR could offer. Imagine if your fully immersive VR experiences were controlled by companies trying to sell you a product or service.
Though the phrase “genetic modification” still evokes a futuristic frisson, the practice of deliberately altering the genetic makeup of living organisms goes all the way back to the earliest days of human agriculture and animal husbandry. But thus far, humans have (understandably) been skittish about applying their increasingly sophisticated means of genetic modification to themselves. However, as the technology becomes more and more potent, the possible benefits may convince large numbers of people to overcome their squeamishness.
The prospects for genetically modifying humans are tantalizingly broad. If such a process becomes cheap and widespread, it could eventually lead to the elimination of most hereditary maladies — everything from trivial concerns like male-pattern baldness to serious afflictions like sickle-cell disease and Huntington’s disease. It could also be used to reduce genetic predispositions toward some of the most common conditions that afflict humans and ultimately end their lives, like heart disease and cancer.
The benefits would likely not stop at disease prevention. In a truly advanced form, genetic engineering could allow parents to essentially “customize” their children — potentially even beyond simple aesthetic choices like build and hair color. This process could conceivably produce more intelligent offspring, as well as healthier, taller, more athletic, and more attractive ones. Since a substantial portion of personality appears to be genetic, parents might conceivably attempt to control the personalities of their children (e.g. making them more conscientious, less anxious, or more altruistic). The second-order consequences of such a change in the population’s genetic makeup could be immense and far-reaching.
Needless to say, meddling in human genetics is risky — it could produce unpredictable knock-on health effects for those who receive such treatment, or have unintended implications for personality development. For example: perhaps some traits we find desirable are unexpectedly linked to other ones that are undesirable. While the imagination tends to turn to body horror and apocalyptic scenarios when considering these possibilities, other adverse effects from human genetic modification could be economic or social, at least at first. If this process is initially expensive, its earliest adopters would likely be the wealthy — who already enjoy substantial economic advantages, including better health outcomes.
Reinforcing these advantages with access to genetically modified children would run a strong risk of further entrenching existing inequalities between those who could afford such treatments and those who could not. Even if such technologies were banned in some countries, their availability in others could create opportunities for anyone with enough money to pursue them.
Of all transformative technological possibilities, space travel is perhaps the most iconic. Countless thousands of science fiction tales have involved or even focused on it, back to the very beginnings of the genre. While faster-than-light interstellar travel as seen in franchises like Star Trek remains an exceedingly distant prospect (most physicists believe it would violate the laws of physics), humans have been taking steps towards travel within our solar system for over 50 years. The Apollo missions of the ’60s and ’70s demonstrated that such travel is indeed possible — albeit difficult.
Colonization of other worlds and moons, on the other hand, has yet to be attempted. But with the likes of Elon Musk’s SpaceX planning for a Martian colony to be established in 2025, the prospect could prove less distant than it seems.
Because of the immense technical constraints and costs associated with interplanetary colonization, it is unlikely that the next several decades will see huge numbers of people relocating to the moon or Mars. The earliest colonies are likely to be very small and scientific in nature, dedicated to closely studying the conditions of their inhospitable surroundings and developing sustainable systems. However, other space colonies could be more commercial: a small but growing industry is making plans to mine asteroids for water, precious metals, and other resources.
Both types of colonies could be a driving force behind a host of technological advancements, and could even resolve some of the material shortages that human civilization is liable to face on Earth in the coming decades. If even comparatively small colonies grow numerous enough, they could prove an important bulwark against a species-destroying disaster on our home world.
Space travel is unlikely to produce the kind of catastrophic backfire that many of the other technologies we’ve discussed might — aside from the unlikely possibility of an exotic space-borne disease hitching a ride back to Earth with returning colonists. But if things go badly in the hostile conditions of future space colonies, that would certainly constitute a major tragedy for humanity.
Slowing or halting the natural aging process is a very old human interest. The legend of the Fountain of Youth dates back thousands of years. Countering the aging process itself has not been an explicit focus for many in modern medicine, but as medical science and technology have improved, average lifespans have grown dramatically. The question of whether biological aging itself can be arrested remains controversial, but a small community of scientists — such as those working at the SENS Research Foundation — is already attempting to find ways to do so.
If they should succeed, and anti-aging treatments become available far and wide, their effects could be profound. Such treatments wouldn’t likely confer anything close to immortality — humans would still be vulnerable to disease, body system failures, accidents, violence, and so forth. Though it’s worth noting that aging is the chief risk factor in a wide range of ailments, many of which would become much less prevalent if the population were to stop naturally aging.
But if deaths related to aging became rare enough, the shift could still have vast ramifications in terms of how humans plan their lives. Careers and marriages could last for centuries instead of decades. Massive projects that would have stretched across multiple generations could be seen through by their original staff. Entire industries dedicated to the palliation of the elderly might wither away for lack of clientele.
Naturally, any changes to the fundamental facts of human life on this scale should be treated with extreme caution. One obvious risk associated with the end of natural aging is overpopulation. If the birth rate held constant without nearly as many natural deaths to offset it, the world would become far more crowded in rapid fashion. Such a shift could require a widespread reevaluation of the wisdom of having children, and given that the drive to procreate is as fundamental to human psychology as the knowledge of our own mortality, that reevaluation would likely prove very difficult indeed.
Further, as with human genetic modification, this technology would likely be too expensive for any but the wealthy to use initially, further entrenching their existing economic advantages. This eventuality would put the young poor at an even greater disadvantage than they currently face. What’s more, without older generations dying off, younger people may not have the same ability to rise in power and authority over time. This could potentially lead to a social structure where the very old (but not actually “aged”) control most of the power in society, with little opportunity available for the young.
Like genetic modification and virtual reality, artificial intelligence is a futurism staple that sometimes comes with apocalyptic connotations. Lists of existential threats to humanity typically include multiple AI-related scenarios. But super-intelligent AIs aren’t likely to be the first type of AI that humanity makes widespread use of. Indeed, narrow AI applications are already springing up everywhere today.
The AIs of the near future are likely to be much smarter than forerunners like the Siri and Alexa digital assistants, with their frequently comical misunderstandings of basic instructions. Nonetheless, the prevalence of such AIs could spell big social changes. Various forms of automation have already begun to take over tasks formerly performed by humans, such as those found in certain manufacturing jobs.
Even AIs that are substantially less intelligent than a typical human could potentially be specialized to fill a vast number of clerical or service positions (and manual labor gigs, in conjunction with physical machines) that human workers currently occupy. If this change were massive and rapid enough, it could entirely reorganize, or even undo, the concept of “employment” as we understand it. Some technologists believe that the contemporary expectation of lifelong labor for almost all able people would ideally be replaced by a universally provided basic income, funded by the profits of AI labor. On the other hand, others argue that the history of technology has always been one of jobs disappearing — yet that has not historically forced massive unemployment on us. The question may not be whether automation occurs, but how much, and how quickly, it occurs.
Our pocket AI assistants wouldn’t likely go away as AI technology further develops — they’d just become much more competent, able to manage much of the basic schedule-making and logistical hassle that most people currently handle for themselves. In fact, some of today’s back-and-forth with technology might soon prove unnecessary; AI assistants might learn to accurately predict our desires and make some low-level decisions for us based on that knowledge, without us instructing them to do so.
The prospect of a widespread AI revolution, however, does merit caution. Even in the relatively rosy and thoroughly non-apocalyptic scenario described here, the transition from an economy predicated primarily on human labor to an AI economy would need to be managed carefully to avoid periods of mass unemployment-driven poverty, and to prevent subsets of the populace from arranging the new setup to disproportionately benefit their own interests (e.g. a small number of people or companies that own most of the AI labor). As the rise of pervasive mobile internet has so colorfully taught us, any information technology that vast swathes of people come to rely on must be closely safeguarded against deliberate manipulation.
Further, intelligent AIs of below human intelligence could eventually pave the way for human-level AIs, and even for super-intelligent AIs. These continued advancements in AI technology have the potential to change the course of human history forever. Our control of the world today is not based on strength or speed — indeed, many animals are stronger and faster than us — but rather on our superior intelligence. If beings far more intelligent than us come to exist, it may well be they (or whoever is in control of them) who will largely determine the future of our world.
For more from Doug Moore and Spencer Greenberg, visit ClearerThinking.org
Disclaimer: The views and opinions expressed are solely those of the author. They do not necessarily represent the views of Futurism or its affiliates.