A win for the Guardians!

Space Bots

The brave Guardians of Space Force, the American military unit created to protect us feeble terrestrial Americans from various space-related threats, have announced a new and Earthly foe: generative AI.

As Bloomberg reports, Space Force leaders have forbidden their Guardians — the unfortunate name given to the military branch's members — from using generative AI tools like ChatGPT on government devices, arguing that the web-based AI tools present security risks, among other concerns.

In an internal memo obtained by Bloomberg, Space Force chief technology and innovation officer Linda Costa reportedly told Guardians that while generative AI "will undoubtedly revolutionize our workforce and enhance Guardian's ability to operate at speed," she emphasized that the tech must be integrated responsibly. Which, at least for now, seems to mean abstaining from using generative AI tools at all, with Costa citing "concerns over cybersecurity, data handling, and procurement requirement" in the September 29 memo, according to Bloomberg.

Angry Guy

Costa's reservations, particularly her concerns regarding sensitive US military data, are fair. When we use products like ChatGPT, our interactions are swallowed up into the ever-data-hungry models and stored for training purposes; thus, if a Space Force Guardian were to, say, ask ChatGPT to drum up a report embedded with sensitive government information, said sensitive data could get vacuumed into OpenAI's system and out of the department's control.

Our glorious Space Force also isn't the only prominent entity to ban generative AI. Companies including Apple and Verizon have prohibited the tech from corporate systems over similar data and privacy fears, and at least one major company, Samsung, has already experienced an embarrassing, chatbot-assisted data leak.

But according to the Bloomberg report, some folks are apparently pretty peeved by Space Force's decision — namely, a guy named Nicolas Chaillan, the former chief software officer in the Defense Department and current founder and CEO of a chatbot company called Ask Sage.

Claiming that his bot meets Space Force's security requirements, Chaillan told Bloomberg that the Guardians' decision is "short-sighted," adding that his tool has around 10,000 customers in the Defense Department alone. (Per Bloomberg, in a September email to Costa and other defense officials, Chaillan declared that not using his tech would "put us years behind China," though it's unclear how abstaining from using one dude's chatbot might have such an effect.)

It's weird to imagine Defense Department employees using chatbots to write their reports. We never thought we'd say this, but: hats off to the Guardians. Responsible, thoughtful adoption of AI, especially when it comes to our government institutions, is and will continue to be incredibly important — and right now, it feels like they're doing things right.

More on the Space Force: Top Space Force General Confused about Why Space Force Exists
