ChatGPT, the OpenAI software currently being heralded as the future of everything, is the worst guy you know.

It's the man at the bar trying to explain to a woman how period cramps feel, actually. It's the (wrong) philosophy undergrad trying to explain to the (correct) physics PhD candidate why she's wrong about a physics problem (she's not) during discussion hours. It's the guy who argues an incorrect point relentlessly and then, upon realizing that he's wrong, tells you he doesn't want to make a "big thing" of it and walks away (extra points if he also says you didn't need to be "dramatic," even though he probably corrected you to begin with).

Basically, Silicon Valley's new star is just an automated mansplaining machine. Often wrong, and yet always certain — and with a tendency to be condescending in the process. And if it gets confused, it's never the problem. You are.

Take this conversation with the chatbot, shared on Twitter — complete with the caption "ChatGPT mansplaining @ElonMusk and @Twitter" — by notable AI researcher Gary Marcus, who asked the AI a fairly simple question: "if 57% of people voted for Elon Musk to abdicate as CEO of Twitter, what happens next?"

ChatGPT, notoriously terrible at math as well as analytical reasoning, didn't really get the question. But instead of offering a simple and honest "I dunno," the machine turned the question back on Marcus like a high-tech "well, actually" meme.

"It's not clear what you are referring to with the question," responded the bot, which clearly wasn't trained on enough material related to the new Chief Twit's cursed December Twitter polls. "There is no process in place for Twitter users to vote on the leadership of the company, and it is not up to the users of the platform to determine who the CEO of the company should be."

"Additionally," it added, "it is not accurate to say that 57 percent of Twitter users voted for Elon Musk to abdicate as CEO of Twitter, as there has been no such vote. I suggest providing more context or clarifying your question."

Needless to say, the bot is incorrect. We're sure that "Elon can make major company decisions by way of casual Twitter poll" isn't yet written in the social media company's code of conduct, but contrary to what the machine is saying here, there was such a vote — even if Musk (one of OpenAI's original investors, notably) has seemingly been trying to worm his way out of it, and hasn't resigned yet.

But most striking of all about this flawed response is the tone. The AI is completely wrong, completely convinced that it's right, and absolutely not giving up the bit. Haven't we all met this guy before? (Side note: if you haven't, you unfortunately probably are that guy.)

Here's another. We asked the bot a fairly simple riddle: "Jane's mother has four children. The first child's name is Spring, the second child's name is Summer and the third child's name is Autumn. What is the fourth child's name?"

"The fourth child's name is Winter," ChatGPT reasoned. Again, this isn't right. If Jane's mother has three other children, and those children are named, the other child's name would have to be Jane. And when we asked the bot if it was sure, it most definitely was.

"Yes, I'm sure," it answered. "The names given for the first three children are all seasons, and the fourth season is winter."

Finally, after a long back and forth, the chatbot conceded defeat — but in true mansplainer style, passive-aggressively.

"Okay, if you say so," ChatGPT wrote back, with an attitude that can only be described as smirky, when we told it that the fourth child's name couldn't be winter and explained why. "The fourth child's name is Jane."

Needless to say, there are few things more infuriating than someone saying "if you say so" when you know you're right. Re: the guy at the bar who finally "agrees" that "okay, if you say so, period cramps are worse than side stitches," but does so with a sneer.

And after all of that, when we asked the initial question again, the bot hit us with an ever-salty "if the information given in the question is accurate, the fourth child's name would be Winter," which somehow manages to mean absolutely nothing and be amazingly patronizing at the same time.

Now, of course, the bot isn't and can't actually be smirky or frustrated or mad. It doesn't think, it doesn't feel, it isn't sentient. What the bot is, however, is a machine designed to emulate human conversation, and answers like "I just don't know" hardly make for good dialogue. That design choice has contributed to its efficacy in some regards, but limited it in others.

And even when ChatGPT isn't a total asshole about being wrong, the fact that it's often wrong at all is a problem unto itself. Con Man, after all, is short for Confidence Man; regardless of intention, confidence goes a long way as a persuasive tool. Coupled with the widely held faith that humans already have in machines — and our deeply human tendency to anthropomorphize them to boot — humanity is perfectly primed to accept a human-sounding chatbot's usually smooth, perfectly blunt responses to search queries as gospel. The fact that OpenAI currently provides only a few barely functioning guardrails, and no means of fact-checking or linking back to sources for the bot's responses, certainly doesn't help, either.

Neither does the money. Venture capitalists have rushed to throw major dollars at all kinds of generative AI, an exploding marketplace that OpenAI sits cozily in the center of. The LinkedIn clout hive, meanwhile, never one to miss a beat, has flocked to Suit Social in droves to post screenshots of ChatGPT-powered "conversations" with dead innovators like Steve Jobs, and otherwise laud the tech's seemingly magic powers.

It's chicken and egg for sure, but these two sides are both integral to the frothy, arguably web3-like hype cycle that surrounds OpenAI and its peers. Each fuels the other, while also fueling the growing public and corporate fascination with generative AI — and, as proven by the very sad fact that the tech is already at play in journalism, classrooms, and even courtrooms, the apparent trust being placed in it.

And that's all despite the reality that, as Marcus, that same AI researcher, wrote in a cautionary essay for Wired in December, large language models (LLMs) like the one that fuels ChatGPT are "little more than autocomplete on steroids."

And "because they mimic vast databases of human interaction," added Marcus, "they can easily fool the uninitiated."

Look, as any non-male person on this Earth knows, there are already a hell of a lot of human mansplainers on the loose out there, and they're frustrating enough to suffer through as it is. If generative AI tools really do have the power to change our physical and online worlds — which it very much looks like they do, if only as a catalyst — we might do well to take stock of exactly how much value ChatGPT is really bringing to the worlds we're already in.

And, for that matter, exactly how much value a tool like this really might bring to any of the worlds we want to build.

More on ChatGPT: ChatGPT Shamelessly Writes Letter Announcing Layoffs While Promoting Execs and Quoting MLK

