It's an absolutely terrible idea.

Writing Hack

You can sure count on AI-enamored humans to continue to find new and dubious ways of using OpenAI's ChatGPT, the premier chatbot du jour.

While we all admittedly could use a second pair of eyes to look over our work, young adult fiction writer Lauren Kay takes it a little too far by recommending that up-and-comers use ChatGPT to critique their own writing.

"Did you know you can use ChatGPT to get a legitimately objective assessment of your writing?" Kay asks in a since-deleted TikTok video that went viral after being shared and torn to shreds on Twitter.

"Our writer friends love us too much to be as brutally honest as we need them to be," she added. "But you know who can be brutally honest? A robot."

Needless to say, you should probably not use a tool that regularly makes up facts to "objectively" critique your writing.

Meaningless Critiques

Kay demonstrates the chatbot's supposed usefulness by feeding it the first page of her upcoming novel. First, though, she "calibrates" the bot by having it critique one of the "best books of all time." Her choice for this — we kid you not — is "The Fault In Our Stars."

Kay's criteria are arguably even more suspect: she asks ChatGPT to score prose on a scale of one to ten across six different factors, including "world-building."

Unsurprisingly, ChatGPT returned some utterly meaningless critiques that read like they were written by a high schooler forced to provide feedback on a classmate's paper for full credit.

"The writing style and voice are engaging and draw the reader in," the chatbot opined flatly. "The use of humor and irony is effective and adds depth to the character's voice."

Biased Bot

As you may well know, ChatGPT is far from an "objective" arbiter or critic. It's a large language model designed to do little more than predict the most plausible way of stringing together a legible sentence.

It's also trained largely on text scraped from the internet, which makes it an inherently biased tool.

There's another thorny aspect of feeding a chatbot your writing: anything you give it can eventually be subsumed into its training data to be learned from and imitated. In other words, once you've handed it over to the bot, your writing may no longer be entirely your own. Just ask Amazon, which worried that its confidential secrets were being leaked by employees using ChatGPT.

To Kay's credit, though, she eventually apologized for the advice after a "rough 24 hours" of overwhelming backlash.

"I didn't know the extent ChatGPT uses the data you give it when I posted that video, which was incredibly irresponsible of me," she said somberly in a TikTok update.

More on ChatGPT: CEO of OpenAI Says Making Models Bigger Is Already Played Out

