12-05-2025
Pushing The Limits Of Modern LLMs With A Dinner Plate?
Cerebras Systems
If you've been reading this blog, you probably know a bit about how AI works – large language models break enormous amounts of text into tiny, manageable pieces, and then use those pieces to respond to user prompts and other inputs.
In general, we know that AI is big, and getting bigger. It's fast, and getting faster.
Specifically, though, not everyone's familiar with some of the newest hardware and software approaches in the industry, or how they produce better results. People are working hard to unlock more of the inherent power in LLM technologies. And things are moving at a rapid clip.
One of these is the Cerebras WSE, or Wafer-Scale Engine – a massive processor that can power previously unimaginable AI capabilities.
First of all, you wouldn't call it a microprocessor anymore. It's famously the size of a dinner plate – 8.5 x 8.5 inches. It has hundreds of thousands of cores, and a context capability that's staggering.
But let me start with some basic terminology, drawn from a presentation by Morgan Rockett, a former MIT student, on the fundamentals of evaluating LLM output.
LLMs are neural networks. They use a process called tokenization, where a token is one small piece of text – roughly a word or word fragment – that gets fed into the model as part of the overall context for a question.
Then there is context – the extent to which the program can look back at the previous tokens, and tie them into the greater picture.
There's also inference – the real-time "thinking" the computer does when you give it a prompt and it produces a response.
Another term that Rockett goes over is rate limits: if you don't own the model, you're going to have to put up with request and token thresholds imposed by that model's operators.
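To make "token" and "context" a little more concrete, here's a quick check using the open-source tiktoken tokenizer – my choice of tool for illustration, not anything from Rockett's talk – with the file name and context-window size as stand-in values:

```python
import tiktoken  # pip install tiktoken

CONTEXT_WINDOW = 128_000  # illustrative context limit, in tokens; real models vary

# cl100k_base is one of tiktoken's standard encodings
enc = tiktoken.get_encoding("cl100k_base")

text = open("big_document.txt").read()  # hypothetical input file
tokens = enc.encode(text)               # list of integer token IDs

print(f"{len(tokens):,} tokens")
if len(tokens) <= CONTEXT_WINDOW:
    print("Fits in the context window in one pass.")
else:
    print("Too big for one pass; this is where chunking and sampling come in.")
```

The count you get back is what the model actually "sees," and comparing it to the context window tells you whether a file can be handled whole or has to be broken up.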
As he goes along explaining the hardware behind these systems, Rockett reveals that he is a fellow with Cerebras, the company pioneering that massive chip.
Looking at common hardware setups, he goes over four systems – Nvidia GPUs, Google TPUs, Groq LPUs (language processing units), and the Cerebras WSE.
'There's really nothing like it on the market,' he says of the WSE, noting that you can get big context and fast inference if you have the right hardware and the right technique. 'In terms of speed benchmarks, Cerebras is an up-and-coming chip company. They have 2,500 tokens per second, which is extremely fast. It's almost instant response. The entire page of text will get generated, and it's too fast to read.'
He noted that Groq is currently in second place, at around 1,600 tokens per second.
The approach showcased in the presentation was essentially to select a sample of chunks from a large file, and then summarize the file's contents from that sample.
Noting that really big files are too large for an LLM to handle in one pass, Rockett presents three sampling approaches – log2, square root, and double square root – all of which take a sample of chunks to get a cohesive result without overloading your model, using a 'funnel' design.
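Rockett's exact code isn't shown, but the sampling idea is easy to sketch. In the Python below (my own rough reconstruction; the chunk size, function names, and evenly spaced sampling positions are assumptions, not his implementation), `llm_summarize` stands in for whatever model call you're using:

```python
import math

def sample_chunks(chunks, strategy="sqrt"):
    """Pick a subset of chunks so the whole file never has to fit in context.

    "log2"        -> keep about log2(N) chunks
    "sqrt"        -> keep about sqrt(N) chunks
    "double_sqrt" -> keep about N**0.25 chunks (the square root of the square root)
    """
    n = len(chunks)
    if n == 0:
        return []
    k = {
        "log2": int(math.log2(n)),
        "sqrt": int(math.sqrt(n)),
        "double_sqrt": int(n ** 0.25),
    }[strategy]
    k = max(1, k)
    # Spread the sample evenly across the file so the summary covers the
    # beginning, middle, and end rather than a single region.
    step = n / k
    return [chunks[int(i * step)] for i in range(k)]

def funnel_summarize(text, llm_summarize, chunk_chars=8000, strategy="sqrt"):
    """'Funnel' design: summarize each sampled chunk, then summarize the summaries."""
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partial = [llm_summarize(c) for c in sample_chunks(chunks, strategy)]
    return llm_summarize("\n\n".join(partial))
```

The point of the funnel is that a file with thousands of chunks only costs you a few dozen model calls, at the price of reading a sample rather than every page.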
In a demo, he showed a 4-to-5-second inference run on a data set of 4 GB – the equivalent, he said, of a 10-foot-tall stack of paper, or around 4 million tokens.
The data he chose was the full archive of publicly available information around a transformational event: the assassination of JFK in the 1960s.
Rockett showed the model summarizing the archive with his approach, working with virtually unlimited RAM; tokenization was the major time burden.
With slotted input techniques, he said, you can work around rate limits, and the tokenization bottleneck can conceivably be worked out as well.
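The "slotted input" mechanics aren't spelled out in detail, but the usual way to live with a rate limit is to spread calls across timed slots. Here's a minimal sketch of that general pattern; the function names, the 30-requests-per-minute figure, and `call_llm` are all placeholders of mine, not anything from the talk:

```python
import time

def send_in_slots(prompts, call_llm, max_per_minute=30):
    """Spread requests across one-minute 'slots' so a provider's
    requests-per-minute limit is never exceeded. max_per_minute is
    an illustrative number; substitute your provider's real quota."""
    results = []
    for i, prompt in enumerate(prompts):
        if i > 0 and i % max_per_minute == 0:
            time.sleep(60)  # wait for the next slot to open before continuing
        results.append(call_llm(prompt))
    return results
```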
Check out the video for a summary of the archive, which goes over a lot of the CIA's clandestine activities in that era and ties in the Bay of Pigs and more.
Going back to practical uses for the Cerebras processor, Rockett mentioned the legal, government, and trading worlds, where quick access to information is paramount.
I wanted more concrete examples, so I asked ChatGPT. It returned numerous interesting use cases for this hardware, including G42, an AI and cloud company in the United Arab Emirates, as well as the Mayo Clinic, various pharmaceutical companies, and the Lawrence Livermore National Laboratory (here's a story I did including Lawrence Livermore's nuclear project).
Then I asked a different question:
'Can you eat dinner off of a Cerebras WSE?'
'Physically?' ChatGPT replied. 'Yes, but you'd be committing both a financial and technological atrocity … the Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built.'
It gave me these three prime reasons not to attempt something so foolhardy (I attached verbatim):
'In short: You could eat dinner off of it,' ChatGPT said, 'Once. Then you'd have no chip, no dinner, and no job. Using it as a plate would be like eating spaghetti off the Rosetta Stone, technically possible, but deeply absurd.'
Touché, ChatGPT. Touché.
That's a little about one of the most fascinating pieces of hardware out there, and where it fits into the equation of context + inference. When we supercharge these systems, we see what used to take a long time happening pretty much in real time. That's a real eye-opener.