Colby Cosh: The lifelike nature of artificial intelligence
On Tuesday a Harvard artificial-intelligence researcher, Keyon Vafa, published a short tweet-thread about a new paper looking at how some high-performing artificial-intelligence algorithms actually behave under the hood. If you're interested in the implications of AI progress, this paper is instructive even if you don't fully understand it, and, yes, that is tantamount to a confession on my part. (And, as the old joke goes, if you're not interested in the implications of AI progress, rest assured that AI progress is interested in you.)
For academics like Vafa and his colleagues, AI has a pervasive 'black box' issue that is part of why it inspires fear and confusion. We have learned how to make computers mimic intelligence quite convincingly, and sooner than almost anyone imagined, by applying previously unfathomable amounts of brute computing power. But our ability to understand how these thinking objects are thinking is often limited.
If you don't understand anything at all about the Vafa paper, the thing to notice is that it is fundamentally experimental. The research approach is oddly like a biologist's: somebody who studies wombats by rounding up a bunch of wombats and observing wombat behaviour. The team 'teaches' an AI algorithm to perform intellectual task X to near-perfection by giving it instructions and data in plain English, and then has to figure out, by statistical inference, how the algorithm actually did it.
It's the choice of task X that makes this paper most intriguing. Anybody educated enough to still be reading a newspaper probably knows the basics of how the human understanding of planetary orbits evolved. In classical antiquity, the prevailing assumption was that the planets orbited the Earth in circular paths. Well before the birth of Jesus, astronomers were already good at predicting the movements of the planets on the basis of this false model. The planets sometimes appear to move backwards in the sky, so an unmodified 'fixed Earth' + 'perfectly circular paths' model couldn't do the job on its own: to make accurate predictions, astronomers had to bolt on extra circular motions, having each planet ride a small circle (an 'epicycle') whose centre travels along a larger Earth-centred circle (the 'deferent').
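To make the geometry concrete, here is a minimal sketch, in Python, of how a deferent-plus-epicycle model reproduces retrograde motion; the radii and angular speeds below are illustrative numbers of my own choosing, not historical parameters.

import numpy as np

def geocentric_position(t, R=1.0, omega=1.0, r=0.3, Omega=8.0):
    # The centre of the small circle (the epicycle, radius r, angular
    # speed Omega) travels along a large Earth-centred circle (the
    # deferent, radius R, angular speed omega).
    cx, cy = R * np.cos(omega * t), R * np.sin(omega * t)
    return cx + r * np.cos(Omega * t), cy + r * np.sin(Omega * t)

# When the epicycle's motion opposes the deferent's strongly enough, the
# planet's apparent longitude decreases for a while: retrograde motion,
# reproduced without ever moving the Earth.
t = np.linspace(0.0, 2.0 * np.pi, 4000)
x, y = geocentric_position(t)
longitude = np.unwrap(np.arctan2(y, x))
print(f"fraction of the cycle spent retrograde: {(np.diff(longitude) < 0).mean():.2f}")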
Well, fast-forward a dozen centuries, and along come Copernicus asking 'What if Earth isn't at the centre after all?'; Kepler asking 'What if the orbits aren't circular, but elliptical?'; and Newton, who got to the bottom of the whole thing by introducing the higher-level abstraction of gravitational force. Bye-bye epicycles.
None of these intellectual steps, mind you, immediately added much to anyone's practical ability to predict planetary motions. Copernicus's model took generations to be accepted for this reason (along with the theological/metaphysical objections to the Earth not being at the centre of the universe): it wasn't obviously more accurate or more powerful than the old reliable geocentric model. But you can't get to Newton, who found that the planets and earthbound objects are governed by the same elegant and universal laws of motion, without Copernicus and Kepler.
Which, in 2025, raises the question: could a computer do what Newton did? Vafa's research group fed orbital data to AIs and found that they could behave like accomplished ancient astronomers: make dependable extrapolations about the future movements of real planets, including the Earth. The question is whether those algorithms generate their successful orbital forecasts by somehow inferring the existence of Newtonian force abstractions. We know that 'false,' overfitted models and heuristics can work for practical purposes, but we would like AIs to be automated Newtons if we are going to live with them. We would like AIs to discover new laws and scientific principles of very high generality and robustness that we filthy meatbags haven't noticed yet.
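The paper's actual probing machinery is more elaborate than a newspaper column can carry, but the experimental logic can be sketched under stated assumptions: simulate Newtonian orbits, train a black-box predictor on them, then check whether the predictor's forecasts are consistent with an inverse-square force. Everything below (the toy simulator, the small neural network, the probe itself) is my own illustrative stand-in, not the authors' code.

import numpy as np
from sklearn.neural_network import MLPRegressor

def simulate_orbit(n_steps=20000, dt=1e-3):
    # Two-body Newtonian orbit with G*M = 1, integrated by symplectic Euler.
    pos, vel = np.array([1.0, 0.0]), np.array([0.0, 1.1])
    states = []
    for _ in range(n_steps):
        acc = -pos / np.linalg.norm(pos) ** 3   # inverse-square gravity
        vel = vel + dt * acc
        pos = pos + dt * vel
        states.append(np.concatenate([pos, vel]))
    return np.array(states)

dt = 1e-3
states = simulate_orbit(dt=dt)
X, Y = states[:-1], states[1:]   # the task: predict the next state

# 'Teach' a black-box model the forecasting task.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                     random_state=0).fit(X, Y)
print("one-step forecast R^2:", round(model.score(X, Y), 4))

# Probe: does the acceleration implied by the model's forecasts follow
# Newton's law? A model can forecast superbly while this agreement stays
# mediocre; good predictions alone do not guarantee Newtonian internals.
implied_acc = (model.predict(X)[:, 2:] - X[:, 2:]) / dt
radius = np.linalg.norm(X[:, :2], axis=1, keepdims=True)
newtonian_acc = -X[:, :2] / radius ** 3
resid = implied_acc - newtonian_acc
fit = 1.0 - (resid ** 2).sum() / ((implied_acc - implied_acc.mean(axis=0)) ** 2).sum()
print("agreement with inverse-square law (R^2):", round(fit, 4))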