Book Smart

New Wikipedia Clone Made Entirely of AI Hallucinations

An entire "universe" of nonsensical information that somehow all still fits together.
Frank Landymore
Illustration by Tag Hartman-Simkins / Futurism

A new Wikipedia-style site is purportedly made entirely of AI hallucinations, treating visitors to preposterous insights beamed from a nonexistent reality.

Called “Halupedia,” its creators say that the “infinite” encyclopedia invents everything it contains on the fly, with each search term — or link click — becoming a prompt for an AI model on the backend, which relates information “in the deadpan register of a 19th-century scholarly press.”
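Halupedia's actual backend code isn't quoted in the article, but the flow it describes — every search term or clicked link becomes a prompt, so a page only comes into existence once someone asks for it — can be sketched in a few lines. Everything below is invented for illustration: the function names, the prompt wording, and the `llm` callable (a stand-in for whatever model call the site really makes).

```python
# Hypothetical sketch of the described flow: a URL slug becomes a prompt,
# and the article is generated on demand. All names here are invented.

def build_prompt(slug: str) -> str:
    """Turn a link slug into a prompt for the model."""
    title = slug.replace("-", " ").title()
    return (
        "You are an infinite encyclopedia written in the deadpan register "
        "of a 19th-century scholarly press. The content is hallucinated "
        "and absurd, but presented with full academic seriousness.\n\n"
        f"Write the entry for: {title}"
    )

def serve_article(slug: str, llm) -> str:
    # llm is any callable mapping a prompt string to generated text;
    # in the real site this would be a call to an AI model on the backend.
    return llm(build_prompt(slug))
```

The point of the sketch is just that there is no database of articles — the "encyclopedia" is whatever the model emits for the prompt at request time.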

“Every link leads to an entry that does not exist yet — until you click it,” reads the description on GitHub.

The site’s homepage is upfront that it’s an exercise in AI fabulation, but once you dive into one of its countless entries, it begins to feel like a real knowledge database, at least if you suspend your disbelief for the many absurdities hurled your way. There are links, citations, and quotes from academic journals. Some entries even have footnotes — which, of course, are also made up.

One of the top articles is about “The Great Pigeon Census of 1887,” which it claims was “an ambitious, if ultimately misguided, undertaking by the Royal Society for Avian Enumeration (RSFE) to meticulously count every gold-crested rock dove within the administrative boundaries of the United Kingdom of Great Britain and Ireland.” The census was supposedly conceived by a “Sir Reginald Featherton,” who we’re told believed an accurate pigeon count was “crucial for understanding the nation’s urban resource allocation and for the fair distribution of Parliamentary Crumbs.”

Much like a genuine Wikipedia article, proper nouns often refer to another article, so if you wish, you can read more about the Royal Society for Avian Enumeration, or our knighted ornithologist Sir Featherton.

You can also invent new entries through the search box, and the site will provide a series of fabricated article titles related to your query. (Searching “bullsh*t” returned as one of the possibilities “The Gnomish Mandate of Circular Reasoning,” for example.) Click on one, and the site says it’s “resolving a minor scholarly dispute,” before landing on the newly hallucinated, faux-authoritative article.

Despite interweaving layers of nonsense with further layers of nonsense, the developers at least wanted to keep the hallucinations lore-consistent, not unlike how media fandoms are obsessed with canon. To do that, they created a “write-forward” feature: links to articles that haven’t been generated yet carry hidden metadata, read by the AI, that lays out “canonical” facts like important dates. “The LLM is instructed that the encyclopedia is hallucinated and absurd, but it must not contradict itself,” per the GitHub page.
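The write-forward idea described above might look something like the following — a purely speculative sketch, since the GitHub page is quoted but its code isn't. The assumption here is that each outbound link bundles the facts the linking article has already asserted, and those facts get injected into the prompt when the linked entry is finally generated.

```python
import json

# Invented illustration of "write-forward" consistency: links to
# not-yet-generated articles carry canonical facts as hidden metadata,
# which are folded into the prompt for that future entry.

def make_link(slug: str, canon: dict) -> dict:
    """Attach already-asserted facts to a link, e.g. {"dissolved": 1891}."""
    return {"slug": slug, "canon": canon}

def prompt_for(link: dict) -> str:
    """Build a generation prompt that forbids contradicting the canon."""
    facts = json.dumps(link["canon"])
    title = link["slug"].replace("-", " ").title()
    return (
        "The encyclopedia is hallucinated and absurd, but it must not "
        f"contradict itself. Canonical facts for this entry: {facts}\n\n"
        f"Write the entry for: {title}"
    )
```

Under this scheme, the Royal Society entry's 1927/1891 slip-up discussed below would mean either the metadata wasn't attached to that link or the model ignored its instruction — the mechanism constrains the hallucination but, evidently, doesn't guarantee it.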

The system isn’t perfect; the article on the Royal Society for Avian Enumeration, for instance, committed the grave academic error of saying it disbanded in 1927, when the original “The Great Pigeon Census of 1887” article that linked to it says it formally dissolved in 1891.

Unfortunately, like many unregulated internet experiments, Halupedia has been beset with edgelords, with some of the top articles making outright racist references in their titles. But in the Halupedia AI’s defense, it basically ignores the racist nature of the prompts and invents something innocently unrelated to the meaning they bear in this reality, in its trademark grandiloquent style. And we suppose this is still preferable to the sins of another Wikipedia riff, Elon Musk’s anti-woke “Grokipedia,” which cites actual neo-Nazi sites as sources of information.

Your mileage may vary with senseless AI text as a source of amusement, but there’s something to be said for foregrounding AI’s often absurd flaws and its potential for misuse, rather than dressing it up as some impartial arbiter of factuality. The project also has its own subreddit, so we await the heated debates over which AI hallucinations are lore-accurate.

More on AI: New AI Trained Only on Pre-1930 Data Speaks Like the Most Old-Timey Guy Imaginable


Frank Landymore

Contributing Writer

I’m a tech and science correspondent for Futurism, where I’m particularly interested in astrophysics, the business and ethics of artificial intelligence and automation, and the environment.