
Latest news with #JaronLanier

Elon Musk sees humanity's purpose as a facilitator of superintelligent AI. That should worry us

Irish Times

26-04-2025



During an interview published earlier this month on the website Vox, the American technologist and writer Jaron Lanier came out with one of the more unsettling remarks I've encountered recently. Lanier is one of the more interesting of Silicon Valley's in-house intellectuals. He was an early developer of virtual reality technologies in the 1980s, and is often credited (and credits himself) with coining the term to describe that technology. He's a genuine believer in the human possibilities and social benefits offered by information technology, but also a trenchant and vehement critic of the anti-human tendencies within the culture of Silicon Valley. Lanier's value as a public intellectual has always struck me as being somewhat limited by his tendency, common among Silicon Valley thinkers, to see social and political issues as primarily engineering problems. But he is generally worth reading and listening to as a tech industry insider whose critiques are motivated by both a love of technology and a deep liberal humanism – a love, that is, for technology as a human art form that is inseparable from a love of humanity itself.

The remark in the Vox interview that I found especially unsettling was one in which Lanier addressed the ascendancy of precisely the opposite tendency within the elite of Silicon Valley. He is, he says, constantly talking to people who believe that we need to put everything into developing superhuman artificial intelligence, recognise its status as a higher form of intelligence and being, and simply get out of its way.

'Just the other day,' he says, 'I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a "bio baby" because as soon as you have a "bio baby," you get the "mind virus" of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it's much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.'

In one sense, I should not find this anecdote especially unsettling. I have written fairly extensively about this particular constellation of ideas, and the tech industry milieu within which it exists. In writing my first book, which explored this anti-human veneration of machines, I met many people who advanced some or other variation of the ideas Lanier sketches here. But artificial intelligence, though it was certainly an article of faith among the people I spoke to, was at that point largely an abstract concept for society at large. There was no real sense, in 2014 and 2015, that we would so quickly come to live in a world so defined and deformed by this technology.

The ideas Lanier speaks about – that humanity is in its last decadent days, and that the role of technologists is to bring about the advent of the superintelligent machines that will succeed us – were at the time niche, considered fairly eccentric within the context of radical Silicon Valley techno-optimism. As Lanier points out, this attitude is now a very common one in the circles in which he moves. The idea, or perhaps more accurately ideology, is known as 'effective accelerationism'; its adherents advocate for the rapid advancement of machine learning technology, unfettered by any kind of regulation or guardrails, in an effort to hasten the advent of AI superintelligence.
The more utopian versions of this idea see such a superintelligence as a means toward the solution of all human problems – economic abundance for all, the curing of every known disease, solutions to climate change, and so on. In its darker inflections, it envisions such an intelligence as vastly superior to humanity, and destined to overthrow and replace us entirely.

As an idea, it's most associated with the English philosopher Nick Land. As a professor at the University of Warwick in the 1990s, Land was a central figure of an influential circle of rogue academics who embraced the utopian ideas around early internet culture; he went AWOL around the turn of the century and resurfaced, in Shanghai, as an enigmatic and fugitive thinker whose increasingly anti-human and anti-democratic writing advocated a kind of techno-fascism in which all human life would be subordinate to, and ultimately obliterated by, the supremacy of AI. 'Nothing human,' as he memorably and chillingly put it, 'makes it out of the near future.'

In a sense there is nothing very new about this kind of thinking: the fascists of the interwar years combined a great enthusiasm for new technology with a contempt for Enlightenment values of liberal democracy and humanism. The Italian Futurists fetishised machinery, speed and violence, glorifying war as 'the only hygiene of the world'.

Land remains a niche figure, but he has his constituency, and his ideas have been influential in Silicon Valley. The disturbing ideology Lanier encountered at that Palo Alto lunch is clearly derived from his writing. And then there's Elon Musk, who is himself unquestionably among the most influential people on the planet. Just a few weeks ago, Musk made the following statement on X, the social network that he owns and on which he is the most followed account: 'It increasingly appears that humanity is a biological bootloader for digital superintelligence.'

A bootloader, to be clear, is a piece of code that initiates the start-up of a computer's operating system when it's powered on. In other words, humanity, in the view of the world's richest and arguably most influential man, is important only as a necessary facilitator of superintelligent AI. I wouldn't want to bet that Musk has read Land – my sense of it is that the extent of his reading is the cry-laughing emojis posted by sycophants under his bad jokes on X – but his invocation of the biological bootloader notion suggests the extent to which Land's ideas have filtered through. And it's hard to think of a more damning indictment of our time than the fact that people with such an openly anti-human worldview, who hold humanity itself in such contempt, have been granted such unprecedented wealth, power and cultural influence.

Will AI become God? That's the wrong question.

Vox

07-04-2025



It's hard to know what to think about AI. It's easy to imagine a future in which chatbots and research assistants make almost everything we do faster and smarter. It's equally easy to imagine a world in which those same tools take our jobs and upend society. Which is why, depending on who you ask, AI is either going to save the world or destroy it. What are we to make of that uncertainty?

Jaron Lanier is a digital philosopher and the author of several bestselling books on technology. Among the many voices in this space, Lanier stands out. He's been writing about AI for decades and he's argued, somewhat controversially, that the way we talk about AI is both wrong and intentionally misleading.

[Photo: Jaron Lanier at the Music + Health Summit in 2023, in West Hollywood, California. Michael Buckner/Billboard via Getty Images]

I invited him onto The Gray Area for a series on AI because he's uniquely positioned to speak both to the technological side of AI and to the human side. Lanier is a computer scientist who loves technology. But at his core, he's a humanist who's always thinking about what technologies are doing to us and how our understanding of these tools will inevitably determine how they're used.

We talk about the questions we ought to be asking about AI at this moment, why we need a new business model for the internet, and how descriptive language can change how we think about these technologies — especially when that language treats AI as some kind of god-like entity.

As always, there's much more in the full podcast, so listen and follow The Gray Area on Apple Podcasts, Spotify, Pandora, or wherever you find podcasts. New episodes drop every Monday.

This interview has been edited for length and clarity.

What do you mean when you say that the whole technical field of AI is 'defined by an almost metaphysical assertion'?

The metaphysical assertion is that we are creating intelligence. Well, what is intelligence? Something human. The whole field was founded by Alan Turing's thought experiment called the Turing test, where if you can fool a human into thinking you've made a human, then you might as well have made a human, because what other test could there be? Which is fair enough. On the other hand, what other scientific field — other than maybe supporting stage magicians — is entirely based on being able to fool people? I mean, it's stupid. Fooling people in itself accomplishes nothing. There's no productivity, there's no insight, unless you're studying the cognition of being fooled, of course.

There's an alternative way to think about what we do with what we call AI, which is that there's no new entity, there's nothing intelligent there. What there is, is a new and, in my opinion, sometimes quite useful form of collaboration between people.

What's the harm if we do?

That's a fair question. Who cares if somebody wants to think of it as a new type of person or even a new type of God or whatever? What's wrong with that? Potentially nothing. People believe all kinds of things all the time. But in the case of our technology, let me put it this way: if you are a mathematician or a scientist, you can do what you do in a kind of an abstract way. You can say, 'I'm furthering math. And in a way that'll be true even if nobody else ever even perceives that I've done it. I've written down this proof.' But that's not true for technologists. Technologists only make sense if there's a designated beneficiary.
You have to make technology for someone, and as soon as you say the technology itself is a new someone, you stop making sense as a technologist.

If we make the mistake, which is now common, and insist that AI is in fact some kind of god or creature or entity or oracle, instead of a tool, as you define it, the implication is that would be a very consequential mistake, right?

That's right. When you treat the technology as its own beneficiary, you miss a lot of opportunities to make it better. I see this in AI all the time. I see people saying, 'Well, if we did this, it would pass the Turing test better, and if we did that, it would seem more like it was an independent mind.' But those are all goals that are different from it being economically useful. They're different from it being useful to any particular user. They're just these weird, almost religious, ritual goals. So every time you're devoting yourself to that, it means you're not devoting yourself to making it better.

One example is that we've deliberately designed large-model AI to obscure the original human sources of the data that the AI is trained on, to help create this illusion of the new entity. But when we do that, we make it harder to do quality control. We make it harder to do authentication and to detect malicious uses of the model, because we can't tell what the intent is, what data it's drawing upon. We're sort of willfully making ourselves blind in a way that we probably don't really need to.

I really want to emphasize, from a metaphysical point of view, I can't prove, and neither can anyone else, that a computer is alive or not, or conscious or not, or whatever. All that stuff is always going to be a matter of faith. That's just the way it is. But what I can say is that this emphasis on trying to make the models seem like they're freestanding new entities does blind us to some ways we could make them better.

So does all the anxiety, including from serious people in the world of AI, about human extinction feel like religious hysteria to you?

What drives me crazy about this is that this is my world. I talk to the people who believe that stuff all the time, and increasingly, a lot of them believe that it would be good to wipe out people and that the AI future would be a better one, and that we are a disposable, temporary container for the birth of AI. I hear that opinion quite a lot.

Wait, that's a real opinion held by real people?

Many, many people. Just the other day I was at a lunch in Palo Alto and there were some young AI scientists there who were saying that they would never have a 'bio baby' because as soon as you have a 'bio baby,' you get the 'mind virus' of the [biological] world. And when you have the mind virus, you become committed to your human baby. But it's much more important to be committed to the AI of the future. And so to have human babies is fundamentally unethical.

Now, in this particular case, this was a young man with a female partner who wanted a kid. And what I'm thinking is this is just another variation of the very, very old story of young men attempting to put off the baby thing with their sexual partner as long as possible. So in a way I think it's not anything new and it's just the old thing. But it's a very common attitude, not the dominant one. I would say the dominant one is that the super AI will turn into this God thing that'll save us and will either upload us to be immortal or solve all our problems and create superabundance at the very least.
I have to say there's a bit of an inverse proportion here between the people who directly work in making AI systems and the people who are adjacent to them who have these various beliefs. My own opinion is that the people who are able to be skeptical and a little bored and dismissive of the technology they're working on tend to improve it more than the people who worship it too much. I've seen that a lot in a lot of different things, not just computer science.

One thing I worry about is AI accelerating a trend that digital tech in general — and social media in particular — has already started, which is to pull us away from the physical world and encourage us to constantly perform versions of ourselves in the virtual world. And because of how it's designed, it has this habit of reducing other people to crude avatars, which is why it's so easy to be cruel and vicious online and why people who are on social media too much start to become mutually unintelligible to each other. Do you worry about AI supercharging this stuff? Am I right to be thinking of AI as a potential accelerant of these trends?

It's arguable, and actually consistent with the way the [AI] community speaks internally, to say that the algorithms that have been driving social media up to now are a form of AI, if that's the term you wish to use. And what the algorithms do is they attempt to predict human behavior based on the stimulus given to the human. By putting that in an adaptive loop, they hope to drive attention and an obsessive attachment to a platform. These algorithms can't tell whether something's being driven by things that we might think are positive or things that we might think are negative. I call this the life of the parity, this notion that it doesn't matter whether a bit is a one or a zero, because it's an arbitrary designation in a digital system. So if somebody's getting attention by being a dick, that works just as well as if they're offering lifesaving information or helping people improve themselves.

But then the peaks that are good are really good, and I don't want to deny that. I love dance culture on TikTok. Science bloggers on YouTube have achieved a level that's astonishingly good, and so on. There are all these really, really positive good spots. But then overall, there's this loss of truth and political paranoia and unnecessary confrontation between arbitrarily created cultural groups and so on, and that's really doing damage.

So yeah, could better AI algorithms make that worse? Plausibly. It's possible that it's already bottomed out, and if the algorithms themselves get more sophisticated, it won't really push it that much further. But I actually think it can, and I'm worried about it, because we so much want to pass the Turing test and make people think our programs are people. We're moving to this so-called agentic era where it's not just that you have a chat interface with the thing, but the chat interface gets to know you over years at a time and gets a so-called personality and all this. And then the idea is that people then fall in love with these. And we're already seeing examples of this here and there, and this notion of a whole generation of young people falling in love with fake avatars. I mean, people talk about AI as if it's just like this yeast in the air. It's like, oh, AI will appear and people will fall in love with AI avatars. But it's not: AI is always run by companies, so they're going to be falling in love with something from Google or Meta or whatever.
The advertising model was sort of the original sin of the internet in lots of ways. I'm wondering how we avoid repeating those mistakes with AI. How do we get it right this time? What's a better model?

This question is the central question of our time, in my view. The central question of our time isn't, how are we able to scale AI more? That's an important question and I get that, and most people are focused on that. And dealing with the climate is an important question. But in terms of our own survival, coming up with a business model for civilization that isn't self-destructive is, in a way, our most primary problem and challenge right now.

Because the way we're doing it, we went through this thing in the earlier phase of the internet of 'information should be free,' and then the only business model that's left is paying for influence. And so then all of the platforms look free or very cheap to the user, but then actually the real customer is trying to influence the user. And you end up with what's essentially a stealthy form of manipulation being the central project of civilization. We can only get away with that for so long. At some point, that bites us and we become too crazy to survive. So we must change the business model of civilization. How to get from here to there is a bit of a mystery, but I continue to work on it.

I think we should incentivize people to put great data into the AI programs of the future. And I'd like people to be paid for data used by AI models, and also to be celebrated and made visible and known. I think it's just a big collaboration, and our collaborators should be valued.

How easy would it be to do that? Do you think we can or will?

There are still some unsolved technical questions about how to do it. I'm very actively working on those and I believe it's doable. There's a whole research community devoted to exactly that, distributed around the world. And I think it'll make better models. Better data makes better models, and there are a lot of people who dispute that and say, 'No, it's just better algorithms. We already have enough data for the rest of all time.' But I disagree with that. I don't think we're the smartest people who will ever live, and there might be new creative things that happen in the future that we don't foresee, and the models we've currently built might not extend into those things. Having some open system where people can contribute to new models in new ways is a more expansive, and just kind of a spiritually optimistic, way of thinking about the deep future.

Is there a fear of yours, something you think we could get terribly wrong, that's not currently something we hear much about?

God, I don't even know where to start. One of the things I worry about is we're gradually moving education into an AI model, and the motivations for that are often very good, because in a lot of places on earth it's just been impossible to come up with an economics of supporting and training enough human teachers. And a lot of cultural issues in changing societies make it very, very hard to make schools that work, and so on. There are a lot of issues, and in theory a self-adapting AI tutor could solve a lot of problems at a low cost. But then the issue with that is, once again, creativity. How do you keep people who learn in a system like that, how do you train them, so that they're able to step outside of what the system was trained on?
There's this funny way that you're always retreading and recombining the training data in any AI system, and you can address that to a degree with constant fresh input and this and that. But I am a little worried about people being trained in a closed system that makes them a little less than they might otherwise have been, and leaves them with a little less faith in themselves.
