Few people had heard of bioterrorism before 9/11. But shortly after the September 11th terrorist attacks, a wave of anthrax mailings turned public attention toward a new weapon in the terrorist arsenal: bioterrorism. US federal prosecutors concluded that an army biological researcher was responsible for mailing the anthrax-laced letters, which killed 5 people and sickened 17 others in 2001. The cases generated enormous media attention, and fear of a new kind of terrorist warfare spread.
However, as with most media frenzies, the one about bioterrorism faded quickly.
But looking toward the future, I believe we are not paying as much attention to it as we should. Scary as it may be, we have to prepare ourselves for the worst. It is the only way we can mitigate the damage of harmful abuses if (and when) they arise.
Ultimately, this means investing in research on the policy and governance surrounding a host of new technologies. That is where some of the most pressing concerns lie.
In the future, brain implants may grant humans superpowers: chips that let us hear a conversation from across a room, see in the dark, control our moods, restore lost memories, or “download” skills as in The Matrix trilogy. In the wrong hands, however, implantable neuro-devices could also be used as weapons.
Once microchips are implanted in our brains to enhance cognitive capabilities, they could serve as a platform for hackers to cause damage from a distance. Attackers could switch functionalities on, shut devices down, or bombard the brain with harmful signals. They could even influence what you think and, by extension, how you act.
Fortunately, there are several initiatives that seek to understand exactly how such technologies might work, which could give us the knowledge needed to keep a step ahead.
As the medical wearable and sensor market starts to truly boom, it is logical to think ahead to what might follow this “wearable revolution.” I think that the next step will be insideables, digestables, and digital tattoos.
“Insideables” are devices implanted into the body, generally just under the skin. In fact, some people already have such implants, which they use to unlock a laptop, a smartphone, or even a garage door. “Digestables” are pills or tiny gadgets that can be swallowed, which could do things like track digestion and the absorption of drugs. “Digital tattoos” are tattoos with “smart” capabilities; they might easily measure all of our health parameters and vital signs.
All of these teeny-tiny devices might be misused—some could be used to infuse lethal drugs into an organism or strip a person of their privacy. That is why it is of the utmost importance to pay attention to the security of these devices. They can be vulnerable to attacks, and our lives will (quite literally) depend on the safety precautions of the company developing the sensors. That may not sound too comforting—putting your health in the hands of a company—but microchip implants are heavily regulated in the US, and so we are already looking ahead to issues surrounding this advancement.
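The “safety precautions” at stake here are mostly ordinary software security discipline. As a minimal, hypothetical sketch (no real implant’s update mechanism is shown, and the firmware bytes and hash are invented for illustration), a device could refuse a firmware update unless its digest matches one published by the manufacturer:

```python
import hashlib

# Hypothetical: the digest a manufacturer publishes alongside each release
TRUSTED_SHA256 = hashlib.sha256(b"official firmware v1.2").hexdigest()

def firmware_is_trusted(image: bytes, trusted_hash: str = TRUSTED_SHA256) -> bool:
    """Accept an update only if its SHA-256 digest matches the published one."""
    return hashlib.sha256(image).hexdigest() == trusted_hash

# A genuine image passes; a tampered one is rejected
assert firmware_is_trusted(b"official firmware v1.2")
assert not firmware_is_trusted(b"official firmware v1.2 + malware")
```

Real devices go further, verifying a cryptographic signature rather than a bare hash so that attackers cannot simply publish their own digest, but the principle is the same: never run code the manufacturer did not vouch for.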
In the future, nanoscale robots could live in our bloodstream or in our eyes and prevent any diseases by alerting the patient (or doctor) when a condition is about to develop. They could interact with our organs and measure every health parameter, intervening when needed.
Nanobots are so tiny that it would be almost impossible to notice if someone, for example, slipped one into your glass and you swallowed it. Some people fear that such tiny devices would make total surveillance feasible. Nanobots could also conceivably be used to deliver toxic or even lethal drugs to the organs.
By researching now how to detect when nanobots are in use, we could prevent their misuse in the future.
Robots are quickly becoming ubiquitous in a number of industries. Surgical robots constitute one of the most important branches. For example, the da Vinci Surgical System enables a surgeon to operate with enhanced vision, precision, and control. However, these types of robots have security and privacy implications that have not yet been explored in detail. And there are already signs that they should be.
Last year, MIT Technology Review reported that researchers at the University of Washington had successfully demonstrated a cyberattack against medical telerobots. Imagine what might happen if a hacker disrupted an operation by interfering with the communication link between the robotic scalpel and the human giving it commands. Proper encryption and authentication cannot foil every kind of attack, but companies need to invest in them now to make sure operations are safe.
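“Authentication” here typically means something like a message authentication code attached to every command packet, so the robot can reject anything tampered with in transit. As a hedged illustration (this is not the actual protocol of any surgical robot, and the key and command format are invented), a keyed HMAC works like this:

```python
import hmac
import hashlib
from typing import Optional

SHARED_KEY = b"example-key-provisioned-at-setup"  # hypothetical pre-shared key

def sign_command(command: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can verify integrity."""
    tag = hmac.new(key, command, hashlib.sha256).digest()
    return command + tag

def verify_command(packet: bytes, key: bytes = SHARED_KEY) -> Optional[bytes]:
    """Return the command if the tag checks out, otherwise None."""
    command, tag = packet[:-32], packet[-32:]
    expected = hmac.new(key, command, hashlib.sha256).digest()
    # compare_digest performs a constant-time comparison to resist timing attacks
    return command if hmac.compare_digest(tag, expected) else None

packet = sign_command(b"move_arm x=1 y=2")
assert verify_command(packet) == b"move_arm x=1 y=2"

tampered = packet[:5] + b"9" + packet[6:]  # an attacker flips one byte in transit
assert verify_command(tampered) is None
```

A tag like this stops an attacker from forging or altering commands without the key, though it does nothing against replayed or delayed packets; real teleoperation protocols layer on sequence numbers and transport encryption as well.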
Community labs, such as The Citizen Science Lab in Pittsburgh, are getting more and more popular. The aim of these laboratories is to spark more interest in life sciences in citizens—from small children to pensioners. In these labs, people can (for the most part) work on whatever they want, from producing a drug to using genome editing. However, such DIY biotech projects raise a lot of safety concerns.
As the price of lab equipment falls, the elements of scientific experimentation become affordable to a wide variety of people. Of course, that includes criminals and terrorists, who might use such labs to create drugs, biomaterials for weapons, or harmful synthetic organisms.
The US Food and Drug Administration held a workshop in 2016 in order to better understand 3D printing and bioprinting and how these technologies might be used and abused. Similar conversations are currently taking place about CRISPR gene editing; however, these need to be accelerated and expanded to include the entire community of experts, researchers, and innovators.
Artificial intelligence is expanding at an amazing rate, and of course, the biggest fear isn’t that AI will take our jobs…it’s that it will take our lives.
The concern is that AI systems will become so sophisticated that they will work better than the human brain and, after a while, take control. In fact, Stephen Hawking even said that the development of full artificial intelligence could spell the end of the human race. Elon Musk had similar feelings and, in response, launched OpenAI, a non-profit research company that aims to carefully promote and develop AI that follows human ethics. The organization ultimately plans to make its patents and research open to the public.
By far the scariest scenario involves hacking the AI systems we will come to rely on. Imagine an autonomous car that is no longer under your control. At all. Or a military drone that is no longer controlled by the military.
That is surely a world we must avoid, and so we must take action now to prevent it.
See more from Dr. Bertalan Mesko at The Medical Futurist.