2022 was a banner year for artificial intelligence, and with the launch of OpenAI's incredibly impressive ChatGPT in particular, the industry is showing no sign of slowing down.
But for some industry leaders, chatbots and image generators are far from the final robotic frontier. Next up? Consciousness.
"This topic was taboo," Hod Lipson, the mechanical engineer in charge of the Creative Machines Lab at Columbia University, told The New York Times. "We were almost forbidden from talking about it — 'Don't talk about the c-word; you won't get tenure' — so in the beginning I had to disguise it, like it was something else."
Consciousness is one of the longest-standing, and most divisive, questions in the field of artificial intelligence. And while to some it's science fiction — and indeed has been the plot of countless sci-fi books, comics, and films — to others, like Lipson, it's a goal, one that would undoubtedly change human life as we know it for good.
"This is not just another research question that we’re working on — this is the question," the researcher continued. "This is bigger than curing cancer."
"If we can create a machine that will have consciousness on par with a human, this will eclipse everything else we've done," he added. "That machine itself can cure cancer."
Of course, the biggest issue that the industry runs into with the question of consciousness — you know, other than the technological challenge that it would undoubtedly be — is the fact that, well, the concept itself doesn't really have a firm definition, in the field or beyond it. Philosophically, consciousness is vague and debatable. And scientifically, as the NYT notes, efforts to tidily pin consciousness down to specific brain functions or other signifiers tend to fall flat. There are also a number of deep ethical questions that arise with just the concept of machine consciousness, particularly related to machine labor.
For his part, Lipson has his own definition of consciousness: the capacity to "imagine yourself in the future," as explained by the NYT. Thus, the engineer has focused a great deal of his career on working to build adaptable machines — generalized intelligence that can learn and evolve through a machine-driven form of natural selection, responding in kind to changing environments and to errors or injuries within its own mechanical body.
In other words: a machine with the ability not only to learn and correct itself responsively, as machines do now, but to imagine how it might be better and evolve to suit that vision. It's a slight distinction, but an important one.
Even so, considering that consciousness has no set definition, it's hard to cosign any particular one.
It's also impossible to ignore the fact that humans really, really like to anthropomorphize just about anything we can, from toasters to pets to vegetables and more. That tendency is especially pronounced in the fields of robotics and artificial intelligence, where those building machines constantly project human features, both physical and intellectual, onto the devices they create.
And to that end, it's always worth asking whether those machines actually possess the qualities that researchers like Lipson imagine they one day will, or whether scientists, as a result of their own very human urges, are projecting humanity — or nature, or consciousness, or whatever you want to call it — onto very much not conscious machines, reflecting back what they hope to see, rather than what is.
"There's the hubris of wanting to create life," Lipson told the NYT. "It's the ultimate challenge, like going to the moon."
READ MORE: 'Consciousness' in Robots Was Once Taboo. Now It’s the Last Word. [The New York Times]
More on AI: OpenAI Was Founded to Counter Bad AI, Now Worth Billions as It Does the Opposite