As if the dating world wasn't already nightmarish enough, artificial intelligence seems to be making it worse — or, at the very least, weirder.
In a column for the Financial Times, reporter Jemima Kelly was flummoxed when a recent date admitted that he had instructed ChatGPT to write a psychological profile of her.
Specifically, Kelly's unidentified suitor deployed ChatGPT's buzzy new "deep research" tool to give him insights into her personality. The chatbot ended up spitting out a whopping eight pages of information about the reporter, who, like most others in her field, has thousands of published articles for the chatbot to work with.
Ironically, the ChatGPT-generated profile wasn't all that bad, with both accolades ("intellectually curious, independent-minded, and courageous in her convictions") and criticisms ("psychologically, one might describe Kelly as a [skeptic] with a conscience") that mostly made her seem astute.
Initially, the reporter said she "didn’t really mind" that her date had used the chatbot to profile her — possibly because that first profile didn't sound all that damning.
"I was a bit taken aback," she wrote, "but the fact that he had told me about it made it seem fairly light-hearted, and I thought it was a sign he was probably quite intelligent and enterprising."
When it occurred to her that others could use the same technique for nefarious ends, however, she got creeped out — and that was before she instructed both ChatGPT and Google Gemini to profile her for her own edification.
Before having them take a crack at her, Kelly asked both chatbots whether it was ethical to psychologically profile someone without their knowledge or consent. Both suggested it was not, with ChatGPT calling the practice "invasive and unfair" and Gemini insisting that it could "be a violation of privacy and potentially harmful."
Still, when the journalist asked Gemini to provide her with a psychological profile, "it was only too happy to oblige." That profile was less measured and generally ruder than ChatGPT's, with the Google chatbot telling Kelly that her "directness can be perceived as confrontational" and that she was likely a stressed-out perfectionist, seemingly based on the rigor and attention to detail in her work.
Though Gemini did include a disclaimer that the information it generated was "speculative" and therefore "not intended to be a definitive psychological assessment," it never asked whether Kelly had consented to being profiled.
It's unclear whether the reporter asked ChatGPT for the full "deep research" treatment; perhaps not, since that tool is only available to paying users. Regardless, it's bizarre that anyone with enough published work online can be subjected to such nonconsensual profiling — even if the result is full of half-baked platitudes.
"Only those of us who have generated a lot of content can be deeply researched and [analyzed] in this way," Kelly wrote. "I think we need to start pushing back."
"But maybe," she continued cheekily, "I’m just being stressy and confrontational. Typical."
More on creepy AI: Man Annoyed When ChatGPT Tells Users He Murdered His Children in Cold Blood