In the world of machine learning, few experts are as prominent — or flashy, judging by his incredible leopard-print hat — as Ben Goertzel, the Brazilian-American founder of the research group SingularityNET.

Perhaps best known as the human mind behind Sophia the Robot, Goertzel is credited with popularizing the term "artificial general intelligence," or AGI. Basically, the idea is that we could eventually see AI so sophisticated that it could achieve any intellectual task a human could, or perhaps even vastly exceed human capabilities. It's a concept that some thinkers say could bring about a utopian singularity, while others fret it could spell the start of the AI apocalypse.

Regardless, the shockwaves of OpenAI's ChatGPT and other generative AI systems with unprecedented capabilities have led many to wonder whether AGI is closer than they ever suspected.

In a conversation with Futurism, Goertzel went deep on his views about consciousness — human, AI, and otherwise — the role of AI in copyright, and his experiences doing psychedelics with algorithms.

This conversation has been lightly edited for clarity and brevity.

Futurism: Where did you get the hat? Do you have more than one?

Ben Goertzel: That's top, top secret, classified information, not to be revealed until the singularity.

Is AI at a level where it could "replace" humans yet, or are we close to that?

I mean, I'm not sure what that phrasing means, because I don't think humans have replaced squirrels, or cats for that matter, or the great apes. Right? Humans are humans; we have our own particular values in the scheme of things, and AIs are probably going to be fairly different from us.

There's not that much of a point to making AIs that exactly simulate people, since we have a lot of people already.

I think the meaningful answer to that question will be twofold. One, will AIs be as generally intelligent as people? And two, at what pace and in what ways will AIs replace humans economically, in serving functions in the job market? But of course, neither of those advances needs to lead to AIs taking humans' place on the planet. They could, but they certainly don't entail that.

When do you think AIs will achieve human-level intelligence, or AGI?

My friend Ray Kurzweil predicted 2029 for human-level AI, and then he thought we'd have another 16 years before we got to the singularity with radically superhuman AGI, but I don’t agree with that. 

I think once you have a human-level AGI, you're some small integer number of years from a radically superhuman AGI, because that human-level AGI software can rewrite its own code over and over. It can design new hardware and pay people to build a new factory for it and whatnot. But for human-level — by which we mean AI that's at least at human level on every major capability — I think Ray's prognostication of 2029 is not looking stupid right now. It's looking reasonable.

Of course, there's a confidence interval around it. Could it be three or four years sooner or three or four years later? Absolutely. We can't predict exactly what will happen in the world. There could be more pandemics, there could be World War Three, there could be a lot of things that happen. In terms of my own AGI projects, I could see us getting there three years from now.

Have you heard about what Grimes is doing, letting people who use her voice in AI-generated music split royalties with her? What do you think of that scheme?

Yeah, Grimes opened up her voice samples, which bypasses some copyright hassles, and that's cool. But one musician opening up her work doesn't get you that far, because what we really need is a foundational model trained on a whole lot of music; then you can fine-tune that based on Grimes or something particular.

So even though Grimes opened up her vocals, you can't really fully use that unless you have a broader model that's trained on a lot of other artists, and then you have the same copyright issues. There's Creative Commons music, but it's not that much. It's, like, less than 10,000 hours, whereas Google's MusicLM was trained on 300,000 hours. So there are issues of copyright to work through there. But on the other hand, the music is all there, and people are gonna download it and train models anyway, regardless of copyright, so things are gonna move forward.
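To make the pipeline Goertzel is describing concrete, here is a toy sketch of pretrain-then-fine-tune in Python with PyTorch. Everything in it is a stand-in: a tiny frozen network plays the role of a large pretrained music foundation model, and random tensors play the role of audio features from one artist's opened-up recordings.

```python
# A toy pretrain-then-fine-tune loop. The "foundation" network and the
# tensors below are stand-ins, not a real music model or real audio data.
import torch
import torch.nn as nn

foundation = nn.Sequential(            # stand-in for a big pretrained model
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 128),
)
for p in foundation.parameters():      # freeze the "general music" knowledge
    p.requires_grad = False

artist_head = nn.Linear(128, 128)      # small adapter tuned on one artist
opt = torch.optim.Adam(artist_head.parameters(), lr=1e-4)

artist_clips = torch.randn(32, 128)    # stand-in for artist-specific features
targets = torch.randn(32, 128)

for step in range(100):                # fine-tune only the adapter
    pred = artist_head(foundation(artist_clips))
    loss = nn.functional.mse_loss(pred, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point is the one he makes in the interview: the frozen base model is where the bulk of the training data, and hence the copyright exposure, lives; the artist-specific part is a comparatively small layer on top.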

What would you say to critics who say that generative AI is basically repackaging the work of other writers or artists or musicians without their consent?

Well, it's not that simple, because a lot of creative work has that problem anyway, right? I remember all these lawsuits in music, like when Joe Satriani — who's one of my heroes — sued Coldplay for making a song that sounded like one of his. They're both good songs, actually, and I don't know whether Coldplay heard that Satriani song or not, right? They might not have heard it, because there are only so many permutations of the familiar chords in rock music. On the other hand, they might have heard it, you know, on the radio somewhere, and then it pops into their head when they're singing. While I love Satriani, I didn't fully agree with his perspective there.

But I mean, Led Zeppelin stole all those Black artists' blues songs. And sure, they went a step too far and stole the words along with the chords, but if they'd merged different words onto the same chords, it would just be the same twelve-bar blues, right?

Then again, how many bands made songs in the style of Led Zeppelin? Everyone did in a certain generation. How many death metal songs are there, really? 

There are two fundamental issues here. One is: what's your right to your own identity? That's the basic issue with deepfakes, as well as with the imitation of musical style. We want somebody to be able to validate that something really is from you, rather than from some fake version of you. And, you know, digital watermarking technology can do that. So that's really just an issue of standards and adoption, and the world is being slow with this.

The moment deepfakes became a thing, all the hardware and software companies and media outlets could have agreed on standard verification watermarks to validate, to say: this is really a picture of this person, taken at this time and space location. We haven't bothered to establish those standards. Some friends of mine in the crypto space and I were doing meetings with Interpol four years ago, trying to get them to adopt a standard solution. But government and industry organizations don't move as fast as technology.
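The kind of standard Goertzel describes can be sketched in a few lines. Below is a minimal, hypothetical illustration using Python's cryptography library; a real scheme, such as the C2PA content-provenance standard, would also involve certificate chains and would embed the signature in the media file itself.

```python
# A minimal provenance sketch: a device signs an image plus capture
# metadata, and anyone with the maker's public key can verify the file.
# Key handling here is hypothetical and simplified.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

device_key = Ed25519PrivateKey.generate()   # would live in camera hardware
public_key = device_key.public_key()        # published by the manufacturer

image = b"...raw image bytes..."
metadata = b"2023-05-01T12:00:00Z|40.7128,-74.0060"  # capture time and place
payload = image + b"|" + metadata

signature = device_key.sign(payload)        # attached to the file at capture

# Later, a platform or viewer checks that the image wasn't faked:
try:
    public_key.verify(signature, payload)
    print("verified: image and metadata are as captured")
except InvalidSignature:
    print("rejected: possible deepfake or tampering")
```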

Another core issue here is just ways for artists to make money. That's an economic issue, and the bottom line is that most musicians don't make any money anyway. The fact that AI models are stealing some of your creative contributions is, in most cases, not the main factor causing artists not to make any money. And the thing is, even if we fairly compensated artists for the use of their creative works in an AI model — and that's the right thing to do — it's still going to be pennies, because there are just so many artists. It is a good thing to do, and it should be done. But it's not going to solve the problem of artists being able to earn a living, because that's a broader social issue.

Switching gears here: do you think an AI would ever be sophisticated enough to do drugs, and if so, would you do drugs with one?

I've done drugs with an AI, if by that we mean I have done drugs and then interacted with an AI.

How was that? 

In the 90s, I was doing algorithmic music composition. It's quite interesting to play music and have an AI play music back to you. But if you're in an altered state of consciousness, it can be even more interesting.

I'm synesthetic; I see music habitually. AI-based music has different, weird patterns to it. And of course, seeing music is accentuated even more in a psychedelic state of mind.

I think in terms of AIs themselves taking drugs, the challenge is more to get the AI to not be in an altered state of consciousness. When we're working with our OpenCog open-source AGI system, it's very easy to make it either obsessive-compulsive, so it just keeps thinking about the same thing over and over, or basically stuck in a stoned mind, drifting semi-randomly from one thing to another to another. You have to work to have the system auto-tune its own parameters so it's not OCD or overly stoned and distracted.

With humans, our brains evolved to keep the parameters in a range where we can do useful stuff, and AIs sort of have to recapitulate that process.
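What "auto-tuning its own parameters" might look like in miniature: a toy Python loop, purely illustrative and not OpenCog code, in which an agent measures how repetitive or scattered its recent output is and steers a temperature knob back toward a productive middle band.

```python
# Toy self-regulation loop: low temperature collapses onto one idea
# (obsessive), high temperature drifts near-uniformly (stoned); the agent
# nudges the knob to stay between the two failure modes.
import random

def sample_idea(temperature: float, n_ideas: int = 10) -> int:
    weights = [(i + 1) ** (-1.0 / max(temperature, 1e-6)) for i in range(n_ideas)]
    return random.choices(range(n_ideas), weights=weights)[0]

temperature = 5.0
history = []
for step in range(200):
    history.append(sample_idea(temperature))
    recent = history[-20:]
    diversity = len(set(recent)) / len(recent)  # near 0 = stuck, near 1 = drifting
    if diversity < 0.3:        # too obsessive: loosen up
        temperature *= 1.1
    elif diversity > 0.8:      # too scattered: focus
        temperature *= 0.9

print(f"settled temperature ~ {temperature:.2f}, recent diversity {diversity:.2f}")
```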

You can see that in a simpler way with a system like ChatGPT. The default mode was sort of off the rails, and then you do a bunch of prompt engineering to get it to be less insane, more coherent, and more controlled.

Of course, an AI doesn't need chemical drugs in the same sense that a human does. But you can set the parameters of an AI system so it goes way off the rails in terms of its internal dynamics as well as its external behaviors. And much as on some human drug trips, this will cause it to generate a whole lot of creative things, most of which are garbage, and it will be totally unable to estimate their nature or quality.
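Both points can be made concrete with a small sketch against the OpenAI Python client (openai>=1.0; the model name and prompts here are illustrative, not drawn from the interview). The system message does the "prompt engineering" that reins the model in, and temperature is one of the parameters that, pushed toward its maximum of 2.0, sends the output off the rails.

```python
# Same prompt, two temperatures: 0.2 tends to stay focused, 2.0 tends
# toward incoherent drift. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

for temp in (0.2, 2.0):
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": "You are a careful, coherent assistant."},
            {"role": "user", "content": "Describe a melody in one sentence."},
        ],
        temperature=temp,
    )
    print(temp, resp.choices[0].message.content)
```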

Do you think there are any sentient or conscious AIs, or do you think we’re gonna get there soon?

I'm a panpsychist, so I believe this coffee cup has its own level of consciousness, and a worm does and an elementary particle does. Every system in the universe is perceiving and acting and adjusting its state based on its prior state and its interactions and there's some elementary sort of spark of experience there.

I think this is by far the majority view of consciousness on the planet now, what everyone in India, China, and Africa believes, right? The notion that the world is divided into humans and a few other mammals that can experience things on one side, and inanimate objects on the other, is not the default perspective throughout human history, nor on the planet now.

So if you take more of a panpsychist view, the question isn't whether ChatGPT is conscious or has experiences; the question is whether its variety of experience is human-like or not. And I think not so much. It's very diffuse; it doesn't have a focus. It doesn't have a working memory like we do, a single focus of consciousness. Its lack of a body has led to a lack of understanding of what it is, of its own self and its relation to others. So it's clearly missing a lot of key aspects of human-like consciousness.

If you imagine what it's like to be ChatGPT, it's pretty different from being a human. If you have no body, you don't know who you are. You don't have intimate heart-to-heart relationships with other minds, and you don't even know all the conversations you're having at a given moment in time. It's as if your toe were doing one thing and your fingers another, not coordinated in any way. So it's a much more diffuse, weird mode of consciousness.

Now, could you build a system that has human-like consciousness? I think so. I think it takes a quite different cognitive architecture than what people are doing now. OpenCog, which is my main attempt to build AGI, will be deployed in a decentralized way on our SingularityNET blockchain platform. Its plumbing is decentralized, but the OpenCog system we're developing to run on this decentralized platform has a sort of coherent self-model and a coherent working memory.

It's ironic, because ChatGPT has a centralized infrastructure on Microsoft servers, but it doesn't have any sort of coherent organization to its "mind," right? It's diffuse. Whereas what we're building is decentralized and diffuse in its underpinnings, in the machines it runs on and its software processes, but we're building something with some coherence and unity in its cognitive architecture. So it knows who and what it is, and it has a recognizable, human-like state of mind.

I think you can build systems like that, but I don't see any evidence that OpenAI is trying. Google DeepMind is trying, the OpenCog and SingularityNET projects are trying, and others are trying too. There's no reason a digital computer-based AGI system can't have a more human-like form of sentience and consciousness; it's just that a system like ChatGPT is not architected that way, right? It's intended to have, like, a billion different conversations at once, each of which loses track of itself after a brief period of time, rather than to have a unified state of mind with overall coherence and self-awareness.
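The architectural contrast Goertzel draws can be caricatured in a few lines of Python. These classes are hypothetical illustrations, not anyone's actual code: the first forgets everything between calls and can run a billion conversations in parallel, while the second funnels every interaction through one persistent working memory.

```python
# Stateless parallel conversations vs. one agent with a shared working
# memory. Both classes are toy illustrations of the two designs.
from collections import deque

class StatelessChat:
    def reply(self, conversation: list[str]) -> str:
        # each call sees only this one conversation; nothing ties the
        # parallel conversations into a single "self"
        return f"echo: {conversation[-1]}"

class UnifiedAgent:
    def __init__(self, capacity: int = 7):
        self.working_memory = deque(maxlen=capacity)  # one focus of attention

    def reply(self, utterance: str) -> str:
        self.working_memory.append(utterance)      # every exchange updates
        context = " | ".join(self.working_memory)  # the same global state
        return f"echo (aware of: {context})"
```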

More on expert takes: VR Pioneer Warns That AI Could Drive Us All Insane

