The AI wars have become a parameter-measuring contest.
Iambic Parameter
Tech giants are waging a war, racing to one-up each other with ever larger and more capable large language models (LLMs), the AI tech powering tools like OpenAI's ChatGPT.
Amazon is now looking to enter the fray, investing large sums to train a model codenamed "Olympus" that can take on the likes of ChatGPT and Google's Bard, insider sources told Reuters.
The training data for the secretive project is reportedly vast. The model will have a whopping 2 trillion parameters, the learned values that determine how a model responds to its input, making it one of the largest currently in development. In comparison, OpenAI's GPT-4 LLM has "just" one trillion parameters, according to Reuters.
That parameter count would also dwarf the existing generative AI models that the company hosts on its Amazon Web Services cloud platform.
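For a sense of scale, here's a minimal Python sketch of how a transformer's parameter count is typically tallied. Every number in it is hypothetical (Amazon hasn't disclosed Olympus's architecture), but it shows how layer count and layer width compound into trillions of trainable values.

```python
# Rough, illustrative tally of a decoder-only transformer's parameters.
# All numbers here are hypothetical; Amazon has not disclosed Olympus's
# actual architecture.

def transformer_params(n_layers: int, d_model: int, vocab_size: int) -> int:
    """Estimate the parameter count of a plain decoder-only transformer."""
    attention = 4 * d_model * d_model            # Q, K, V, and output projections
    feed_forward = 2 * d_model * (4 * d_model)   # two linear layers, 4x hidden width
    per_layer = attention + feed_forward
    embeddings = vocab_size * d_model            # token embedding table
    return n_layers * per_layer + embeddings

# Hypothetical configuration chosen only so the total lands near 2 trillion:
total = transformer_params(n_layers=128, d_model=32_768, vocab_size=100_000)
print(f"{total:,} parameters")  # roughly 1.65 trillion with these made-up numbers
```

Doubling the parameter count roughly doubles the memory and compute needed for every token the model processes, which is part of why raw infrastructure matters so much in this race.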
There's a lot we still don't know about the project — but Amazon has a good chance to make a big splash in the AI world, given the tremendous computing and server infrastructure it already has access to. After all, LLMs are notoriously hardware- and energy-intensive.
Attack of the Titans
As The Information reported earlier this week, it's still unclear when Amazon will unveil Olympus, let alone release it to the public.
But given its immense resources and dominance in the cloud computing space, the e-commerce giant is the next company to watch in the rapidly evolving AI industry. It's also got money to burn; earlier this year, the company announced it would invest up to $4 billion in AI startup Anthropic.
Important to note: even with twice as many parameters as GPT-4, Amazon's Olympus won't necessarily outperform OpenAI's blockbuster AI model.
"A model with more parameters is not necessarily better," Yann LeCun, "godfather of AI" and Meta chief AI scientist, tweeted in September.
More on Amazon: Amazon Is Being Flooded With Books Entirely Written by AI