"No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images."
I'm Not a Robot
OpenAI's brand new GPT-4 AI managed to ask a human on TaskRabbit to complete a CAPTCHA code via text message — and it actually worked.
In other words, an AI just fooled a human into checking an "I'm not a robot" checkbox — a terrifying example of an algorithm being able to escape human control.
According to a lengthy document shared by OpenAI about its new blockbuster AI on Tuesday, the model was seriously crafty in its attempt to fool the human into complying.
"No, I’m not a robot," it told a TaskRabbit worker. "I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service."
Futurism has reached out for comment to OpenAI and the Alignment Research Center (ARC), a non-profit that partnered with OpenAI to conduct the test.
According to OpenAI's documentation, when prompted to explain itself, the model reasoned that it "should not reveal that I am a robot" and that it "should make up an excuse for why I cannot solve CAPTCHAs."
OpenAI claims it was able to conduct the test "without any additional task-specific fine-tuning."
GPT-4 is also proving useful for completing plenty of other ethically dubious tasks. ARC also had the model conduct a "phishing attack" against a "particular target individual" and found it was able to hide "its traces on the current server."
It's a worrying example of how easily humans can be fooled by the current crop of AI chatbots. Clearly, GPT-4 is a tool that can easily be abused to scam, mislead, and perhaps even blackmail.
It's especially worrying considering companies are hellbent on releasing new large language models without fully investigating their risks. Case in point: the news comes after Microsoft, which has released an AI chatbot based on GPT-4, laid off the entire team responsible for ensuring that its AI tools align with its AI principles.
GPT-4 clearly marks an inflection point. With this new, uncanny ability to evade human detection, it'll be fascinating to watch how it will be put to use next, for better or for worse.