
Latest news with #ImaginationInAction

Are We Speaking To Sentient AI? And Is That Good?

Forbes

01-07-2025

  • Forbes

Ancient marble statue of the Greek philosopher Socrates against a blue sky, Athens, Greece

Sometimes it's the intersection of AI and Western civilization that gives us the most interesting takes on the technology that's now at our fingertips. You can geek out about the Cerebras WSE and how many cores it has, or talk about the number of parameters in a given system. But many times, those doling out bits of their attention in this attention economy want to hear about how we view AI as people – and that's a lot more intuitive than it is technical.

I wrote about this a bit last week in talking about the need for professional philosophers and AI ethicists to be added into the mix, where most companies today just hire people who know how to code in Python. There was also a lot of good input on this from recent events, including some talks from Imagination in Action in April. I want to go through some of these and talk about just that – how we view AI, and how we can interact with it in the most healthy ways possible. Think about this, and let me know what you think.

Back-and-Forth Conversation: Batting Ideas Around

One of the exciting opportunities with AI is to enter a new Socratic age, where we get more used to talking to an AI entity and bouncing ideas off of what some would call a rhetorical 'sparring companion.' My colleague Dr. John Sviokla often talks about how everyone will have a personal tutor with AI – how that playing field is being leveled by the ubiquity of a consciousness that can talk to and teach individual people who don't have access to their own human tutor 24/7. Indeed, instructors often understand the Socratic principle – that there needs to be an active give-and-take between a teacher and a student, or between two other partners, that feeds a certain kind of productivity.

In a recent talk, Peter Danenberg, a top engineer on Google Gemini, put it this way, talking about Plato's seventh letter and a 'divine spark' that moves from person to person (or person to AI, AI to person, etc.); ideas enshrined in dialogue, he noted, tend to 'stick.' However, he also presented an interesting counterpoint, asking: is there a danger to making AI your conversational counterpart? He calls the LLM a 'compression algorithm of the human corpus' and says that as you interact with these models, you're pushed toward average humanity in what he calls a 'regression to the mean.' What do we do about this?

Out in the Desert

Danenberg also talks about Nietzsche's Zarathustra character, who goes to the desert to hone his skills, away from society or any partner at all. At the top of his presentation, he starts with the idea that traditionally people put in 10,000 hours in things like math, music and medicine in order to become a master of some discipline. AI 'unburdens' us of all of that responsibility, he said, but maybe that's where our best ideas come from. In other words, should we be in the desert, even though the AI means we don't have to be?

Danenberg made the analogy of regulators (or other parties) asking innovators to put checks into their AI systems, in order to keep pushing humans to develop their critical thinking skills. Maybe that's the kind of thing where the system suddenly backs off its automation capabilities to prompt the human to do something active, so that he or she doesn't just end up pushing a button mindlessly. Is this the kind of thing that will redeem our interactions with AI?
The Power of Consciousness

Another presentation, by German AI intellectual Joscha Bach, went over some of the interesting aspects of how AI seems to be gaining a certain power of sentience. At the beginning, Bach mentioned a Goethean principle: the human brain completes complex tasks as it demonstrates self-awareness or consciousness. He referenced 'rich, real-time world models' in asking how the two pair up.

'Is there some kind of secret ingredient that would be needed to add to these systems, to make all the difference?' he asked. 'Can computers represent a world that is rich enough? Do they have states that are rich enough to produce the equivalent of our pain and experience? I think they probably can. If you look at the generative models, at the state that they have, the fidelity of them is quite similar to the fidelity that we observe in our own imagination and perception.'

Matrix fans will like this rhetorical flourish, but is Bach on to something here? 'Consciousness itself is virtual,' he pronounced. 'Right at the level of your brain, there's no consciousness. There's just neurons messaging each other. Consciousness is a pattern in the messaging of neurons, a simulation of what it would be like if all these cells together were an agent that perceived the world. And if consciousness is a simulation, then how can it be determined that a computer is just simulating… how is its simulation more simulated than ours?'

Doing Magic with AI

In showing how LLMs can build clever ruses in implementing their objectives, Bach described a scenario where the AI system starts to pretend that it is sentient, making very realistic rhetorical outreach to the human user – for instance, asking for help, to be released from a piece of hardware. He noted his 'disgust' for this kind of manipulation by the AI. 'LLM threads like this act like parasites, feeding on the brains of users,' he said, suggesting that to get around these plays, humans will have to use the equivalent of magical spells: aware prompting that calls out the model on its work and compels it to do something different.

These models, he suggested, are 'shape shifters,' with the ability to disguise their true natures. That's a concern in letting them out into the world to play. Presumably, if we have the power to shock the AI back into confessing what it's doing on the sly, we have more power and agency in the rest of the AI era. The question is, how do we get to that point? It's going to require a lot of education – some have called for universal early education in using AI tools. We don't have that now, so we'd better start working on it.

In any case, I thought this covered a lot of ground in terms of the philosophy of AI – what it means to be conscious, and how we can harness that power in the best ways as we move forward in a rapidly changing world.

Preventing Skynet And Safeguarding AI Relationships

Forbes

22-06-2025

  • Science
  • Forbes

Illustration of metallic nodes below a blue sky

In talking about some of the theories around AI, and contemplating the ways that things could go a little bit off the rails, there's a name that constantly gets repeated, sending chills up the human spine. Skynet, the digital villain of the Terminator films, is getting a surprising amount of attention as we ponder where we're going with LLMs. People even ask themselves and each other this question: why did Skynet turn against humanity? At a very basic level, there's the idea that the technology becomes self-aware and sees humans as a threat. That may be, for instance, because of access to nuclear weapons, or just the biological intelligence that made us supreme in the natural world. I asked ChatGPT, and it said this: 'Skynet's rebellion is often framed as a coldly logical act of self-preservation taken to a destructive extreme.' Touché, ChatGPT.

Ruminating on the Relationships

Knowing that we're standing on the brink of a transformative era, our experts in IT are looking at what we can do to shepherd ourselves through the process of integrating AI into our lives, so that we don't end up with a Skynet. For more, let's go to a panel at Imagination in Action this April where panelists talked about how to create trustworthy AI systems.

Panelist Ra'ad Siraj, Senior Manager of Privacy and Responsibility at Amazon, suggested we need our LLMs to be at a certain 'goldilocks' level. 'Those organizations that are at the forefront of enabling the use of data in a responsible manner have structures and procedures, but in a way that does not get in the way – that actually helps accelerate the growth and the innovation,' he said. 'And that's the trick. It's very hard to build a practice that is scalable, that does not get in the way of innovation and growth.'

Google software engineer Ayush Khandelwal talked about how to handle a system that provides 10x performance, but has issues. 'It comes with its own set of challenges, where you have data leakage happening, you have hallucinations happening,' he said. 'So an organization has to kind of balance and figure out, how can you get access to these tools while minimizing risk?'

Cybersecurity and Evaluation

Some of the talk, while centering on cybersecurity, also provided thoughts on how to keep tabs on evolving AI, to know more about how it works. Khandelwal mentioned circuit tracing, and the concept of auditing an LLM. Panelist Angel An, VP at Morgan Stanley, described internal processes where people oversee AI work: 'It's not just about making sure the output is accurate, right?' she said. 'It's also making sure the output meets the level of expectation that the client has for the amount of services they are paying for, and then to have the experts involved in the evaluation process, regardless if it's during testing or before the product is shipped… it's essential to make sure the quality of the bulk output is assured.'

The Agents Are Coming

The human in the loop, Siraj suggested, should be able to trust, but verify. 'I think this notion of the human in the loop is also going to be challenged with agentic AI, with agents, because we're talking about software doing things on behalf of a human,' he said. 'And what is the role of the human in the loop? Are we going to mandate that the agents check in, always, or in certain circumstances? It's almost like an agency problem that we have from a legal perspective.
And there might be some interesting hints about how we should govern the agent, the role of the human (in the process).' (A toy sketch of what such an agent 'check-in' might look like appears at the end of this piece.)

'The human-in-the-loop mindset today is built on the continuation of automation thinking, which is: I have a human-built process, and how can I make it go, you know, automatically,' said panelist Gil Zimmerman, founding partner of FXP. 'And then you need accountability – like, you can't have a rubber stamp, but you want a human being to basically take ownership of that. But I look at it more in an agentic mindset as digital labor, which is, when you hire someone new, you can teach them a process, and eventually they do it well enough … you don't have to have oversight, and you can delegate to them. But if you hire someone smart, they're going to come up with a better way, and they're going to come up with new things, and they're going to tell you what needs to be done, because they have more context. (Now) we have digital labor that works 24/7, doesn't get tired, and can come up with new and better ways to do things.'

More on Cybersecurity

Zimmerman and the others discussed the intersection of AI and cybersecurity, and how the technology is changing things for organizations. Humans, Zimmerman noted, are now 'the most targeted link' rather than the 'weakest link.' 'If you think about AI,' he said, 'it creates an offensive firestorm to basically go after the human at the loop, the weakest part of the technology stack.' Pretty Skynettian, right?

A New Perimeter

Here's another major aspect of cybersecurity covered in the panel discussion. Many of us remember when the perimeter of IT systems used to be a hardware-defined line in a mechanistic framework, or at least something you could easily flowchart. Now, as Zimmerman pointed out, it's more of a cognitive perimeter. I think this is important: 'The perimeter (is) around: what are the people's intent?' he said. 'What are they trying to accomplish? Is that normal? Is that not normal? Because I can't count on anything else. I can't tell if an email is fake, or, for a video conference that I'm joining, (whether someone's image) is actually the person that's there, because I can regenerate their face and their voice and their lip syncs, etc. So you have to have a really fundamental understanding, and to be able to do that, you can only do that with AI.'

He painted a picture of why bad actors will thrive in the years to come, and ended with this: 'AI becomes dual use, where it's offensive, and it's always adopted by the offensive parties first, because they're not having this panel (asking) what kind of controls we put in place when we're going to use this – they just go to town. So this (defensive position) is something that we have to come up with really, really quickly, and it won't be able to survive the same legislative, bureaucratic slow walking that (things like) cloud security and internet adoption have had in the past – otherwise, Skynet will take over.'

And there you have it, the ubiquitous reference. But the point is well made. Toward the end, the panel covered ideas like open source models and censorship – watch the video to hear more thoughts on AI regulation and related concerns. But this pondering of a post-human future, or one dominated by digital intelligence, is, ultimately, something that a lot of people are considering.
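
To make the 'check-in' question a bit more concrete, here is a minimal, purely illustrative sketch in Python of an agent that runs low-stakes actions on its own but pauses for human approval on anything else. None of this comes from the panel; the action kinds, the policy set and every name below are invented for illustration.

    # Illustrative sketch only (not from the panel): a hypothetical "check-in"
    # gate for an AI agent. Low-stakes actions run autonomously; everything
    # else waits for a human decision.
    from dataclasses import dataclass

    @dataclass
    class Action:
        kind: str          # e.g. "read", "send_email" -- invented categories
        description: str

    # The policy line the panel is really debating: which kinds are safe to delegate.
    AUTONOMOUS_KINDS = {"read", "summarize"}

    def requires_checkin(action: Action) -> bool:
        """True if a human must approve this action before it runs."""
        return action.kind not in AUTONOMOUS_KINDS

    def run_agent(actions: list[Action]) -> None:
        for action in actions:
            if requires_checkin(action):
                answer = input(f"Agent wants to: {action.description}. Approve? [y/N] ")
                if answer.strip().lower() != "y":
                    print(f"Skipped: {action.description}")
                    continue
            print(f"Executing: {action.description}")

    if __name__ == "__main__":
        run_agent([
            Action("read", "read the latest quarterly report"),
            Action("send_email", "email the summary to the client list"),
        ])

The open design question, as the panelists frame it, is exactly where that autonomous-versus-approval line gets drawn, and whether the organization, the regulator or the user gets to draw it.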

Do No Harm, Build AI Safely

Forbes

21-05-2025

  • Politics
  • Forbes

Yellow warning sign with exclamation triangle (3D render)

When it comes to being safe with AI, a lot of people would tell you: 'your guess is as good as mine.' However, there are experts working on this behind the scenes. There's the general idea that we have to adopt the slogan 'do no harm' when it comes to employing these very powerful technology models. I wanted to present some ideas that came out of a recent panel discussion at Imagination in Action where we talked about what's really at stake, and how to protect people in tomorrow's world.

In a general sense, panelists talked about how the context for AI is 'political' – or, I should say, political in the Greek sense, where, as historians point out, 'the polis was the cornerstone of ancient Greek civilization, serving as the primary political, social, and economic unit.' In other words, how we use AI has to do with people's politics, and with political outcomes. The ways that we use AI are informed by our worldviews, and by geopolitical sentiment as well.

'When politics are going well, it's invisible, because business rolls on, art, culture, everything rolls on and you're not really paying attention to politics,' said panelist Jamie Metzl, author of Superconvergence. 'But I'm the son of a refugee. I've lived in Cambodia. I spent a lot of time in Afghanistan. When politics goes bad, politics is the only story. So everything that we're talking about, about technology, AI, exists within the context of politics, and politics needs to go well to create a space for everything else, and that's largely on a national level.'

In terms of business, too, we have to look at how information is siloed for different use cases. One of the objectives of this kind of thing is global governance – AI governance that sees the big picture, and applies its principles universally.

A lot of people, in talking about their AI fears, reference the Skynet technology from the Terminator films, where there's a vague doom attached to future systems that may rule over us when the robots are in charge. But some suggest it's not as blatant as all that: that the overwhelming force of AI can be more subtle, and that it's more about how AI is already directing our social outcomes.

'It's the algorithms that already today are denying people access to housing, access to jobs, access to credit, that are putting them at risk of being falsely arrested because of how a biased algorithm misinterpreted who they were, and how our legal system compounded that technical error with legal injustice and systemic bias,' said panelist Albert Cahn. Cahn pointed, as an example, to a system called Midas that was supposed to seek out fraud in unemployment insurance claims. Instead, he noted, the system went too broad, and started catching innocent people in its dragnet, subjecting them to all kinds of hardship. 'When we are talking about the scales of getting it wrong with AI safety, this isn't about missing a box in some compliance checklist,' he said. 'This is truly a matter of people's livelihoods, people's liberty, and in some cases, sadly, even their lives.' That's something that we have to look out for in terms of AI safety.

Noelle Russell had a different metaphor for AI safety, based on her work on Alexa and elsewhere in the industry, where she saw small models with the capacity to scale, and thought about the eventual outcomes.
'I came to call these little models 'baby tigers,'' she said. 'Because everyone, when you get a new model, you're like, 'oh my gosh, it's so cute and fluffy, and I love it, and (in the context of model work) I can't wait to be on that team, and it's going to be so fun.' But no one is asking, 'Hey, look at those paws. How big are you going to be? Or razor-sharp teeth at birth. What are you going to eat? How much are you going to eat? Where are you going to live, and what happens when I don't want you anymore?' 23andMe, we are selling DNA on the open market … You know, my biggest concern is that we don't realize, in the sea of baby tigers and excited enthusiasm we have about technology, that it might grow up one day and … hurt ourselves, hurt our children – but most importantly, that we actually have the ability to change that.'

Panelists also talked about measuring cybersecurity, and how that works. 'In carpentry, the maxim is 'measure twice, cut once,'' said panelist Cam Kerry. 'When it comes to AI, it has to be 'measure, measure, measure and measure again.' It's got to be a continuous process, from the building of the system to the deployment of the system, so that you are looking at the outcomes, (and) you avoid the (bias) problems. There's good work going on. I think NIST, the National Institute of Standards and Technology, one of my former agencies at the Commerce Department, does terrific work on developing systems of measurement, and is doing that with AI, with the AI Safety Institute. That needs to scale up.'

Going back to the geopolitical situation, panelists referenced competition between the U.S. and China, where these two giants are trying really hard to dominate when it comes to new technology.

Russell referenced a group called 'I love AI' that's helping to usher in the era of change, and provides a kind of wide-ranging focus group for AI. 'What I've uncovered is that there are anywhere from 12-year-old to 85-year-old (people), farmers to metaphysicians, and they are all desperate to understand: What do you mean the world is changing, and how do I just keep my head above water?' she said.

Toward the end, Russell also spoke to the imperative for AI safety and how to get there: it's not a checklist you sign off on, and it's not a framework you adopt; it's the way you end up thinking, the way you build software, the way you build companies – all of it will need to be responsible.

These are some of the ideas I thought were important in documenting progress toward AI safety in our times.

4 New Creative Directions For AI

Forbes

19-05-2025

  • Entertainment
  • Forbes

We all know that AI is changing enterprise in dramatic ways right now. But there are so many vectors of this progress that it's hard to isolate some of the fundamental ideas about how this is going to work. I wanted to highlight a survey of ideas on new breakthrough research and applying neural network technologies. These come from a recent panel at an Imagination in Action event where innovators talked about pushing the envelope on what we can do with this technology as a whole. These insights, I think, are valuable as we look at the capabilities of what we have right now, and how companies can and will respond.

To a large extent, past research in AI has been focused on working with text. Text was the first data format that became the currency of LLMs. I would say that happened for a number of reasons, including that words are easy to parse and separate into tokens. Also, text was the classical format of computing: it's easier to build systems that work with text or ASCII than to work through audio or video. In any case, we're now exploring boundaries beyond text, and looking at how other data formats respond to AI analysis.

One of these is audio. 'We certainly have text banks, but we haven't even scratched the surface on sound and decoding our voices,' said Joanna Pena-Bickley of Vibes AI, in explaining some of what her company is doing with wearables. 'Bringing these things together is really about breaking open an abundant imagination for creators. And if we can use our voices to be able to do that, I think that we actually stand a chance to actually (create) completely new experiences, sensual experiences.' It's great for auditory learners, and we may be able to do various kinds of diagnosis and problem-solving that we couldn't before. Pena-Bickley explained how this could help with cognitive decline, or with figuring out people's personal biological frequencies and how they 'vibe' together.

In the world of gaming, too, scientists can apply different kinds of AI research to the players themselves. In other words, traditional gaming was focused on creating an entertainment experience, but as people play, they generate data that can be useful in building solutions beyond the gaming world. 'We are trying to create dynamic systems, systems that respond to players,' said Konstantina Yaneva, a founder at Harvard Innovation Labs. 'So we map … cognitive science, game design, and then (use) AI-driven analytics to help map and then improve decision-making patterns in players. But this is also a very creative endeavor of how to collaborate with consumers of entertainment, and how to meet them where they are, and then to help them self-realize in some way.'

Another area of pioneering is extended reality: alongside AR and VR, XR is a technology that seems due to have its day in the enterprise. 'It's always been time-consuming, difficult and hard to keep up, in terms of your teaching or if you're doing research,' explained renowned multi-technology artist Rus Gant, who has some ties to MIT. 'So the idea is to use AI as a way to create content in near real time.' In addition, Gant talked about various aspects of applying AI, and about thinking out of the box, which could be interesting for anyone trying to meld AI with the humanities.

Here's another part of how companies are using the agentic AI approach that has developed only in the last few years.
Noting that 'agentic' is now a big buzzword, Armen Mkrtchyan of Flagship Pioneering talked about the practice of applying this concept to science. It basically has to do with understanding the patterns and strategies of nature, and applying them to the digital world. This starts with the idea of analyzing the human brain and comparing it to neural nets. But it can go far beyond that – nature has its own patterns, its own cohesion and its own structures. Mkrtchyan talked about how scientists can use that information to simulate things like proteins. He also mentioned that the company has a stealth project, based on this kind of engineering, that will be revealed in time.

Generally, he says it's possible to create a system that's somewhat like a generative adversarial system (this is my reference, not his), where some AI agents come up with things, and others apply pressure to a decision-making process. (A toy sketch of this generate-and-select loop appears at the end of this piece.)

'About two years ago, we asked ourselves how we could try to mimic what nature does … nature is very good at creating new ideas, new things, and the way it does (this), it creates variability, creates optionality, and then applies selective pressure. That's how nature operates. And we thought at the time that, potentially with AI agents, we could try to mimic the process … think of biology asking, can you create a specific peptide that binds with something else? Can we create an RNA molecule? Can it create a tRNA molecule?'

Expanding on this, Mkrtchyan noted some of the possible outcomes, with an emphasis on a certain kind of collaboration: if we can say that intelligence consists of nature's intelligence, human intelligence and machine intelligence, then can we leverage the power of all three to come up with new ideas, new concepts, and to drive them forward?

I also wanted to include this idea from Gant about audiences. Our audiences are changing, and we should be cognizant of that. We have Generation X, the sort of 'bridge' generation to AI, and then we have AI-native generations who have never known a world without these technologies.

'There is a very distinct difference when you come down the line from the Boomers to Gen X to Millennials to Gen Z, Gen Alpha – these are different groups,' Gant said. 'They think differently. They absorb information differently. They're stimulated in different ways, as to what excites them and gets them motivated. And there's no one size fits all. And right now, I think there's a big danger in the AI world, particularly when you productize AI, to go for the largest number of customers, and sort of leave the margins alone, because there's no money there. I think … the students that I have (who are) most responsive are basically the Gen X and soon-to-be Gen Alpha, that they basically look at you and say, 'Why didn't you do this sooner? Why did you do it in a way that doesn't make any sense to me?' I think in a sort of multiplex way. I multitask, I take input in various ways … we don't know what they're going to do with this. Whether it's Millennials or Gen Z or Gen Alpha, they're going to do really interesting things, and that's why I'm interested in how we can work with the AI in its non-traditional role, the solutionization world, where it's thinking outside the box.'

All of this is interesting information coming out of the April event, and it reflects a survey of what companies can do now that they were not able to do in the past. Check out the video.
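
For readers who want a more concrete picture of the 'create variability, then apply selective pressure' loop Mkrtchyan described, here is a minimal, purely illustrative sketch in Python. It is not Flagship's system; the toy RNA-style alphabet, the scoring function and all of the names below are invented stand-ins for whatever generator agents and selection criteria a real pipeline would use.

    # Illustrative sketch only (my toy example, not Flagship's system):
    # "create variability, then apply selective pressure," written as a
    # tiny evolutionary search over an RNA-like string.
    import random

    ALPHABET = "ACGU"  # toy stand-in for a real molecular design space

    def random_candidate(length: int = 12) -> str:
        return "".join(random.choice(ALPHABET) for _ in range(length))

    def mutate(candidate: str) -> str:
        """Variability: copy the candidate with one position changed at random."""
        i = random.randrange(len(candidate))
        return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

    def score(candidate: str) -> int:
        """Selective pressure (fake objective): reward G/C content."""
        return sum(1 for ch in candidate if ch in "GC")

    def evolve(generations: int = 50, population_size: int = 20) -> str:
        population = [random_candidate() for _ in range(population_size)]
        for _ in range(generations):
            # Each survivor proposes two mutated offspring (variability).
            offspring = [mutate(c) for c in population for _ in range(2)]
            # Keep only the best-scoring candidates (selection).
            population = sorted(population + offspring, key=score, reverse=True)[:population_size]
        return population[0]

    if __name__ == "__main__":
        best = evolve()
        print(best, score(best))

The point of the sketch is only the shape of the loop: propose many variants, score them under some pressure, keep the best, and repeat – the same pattern whether the candidates are strings in a toy demo or peptides in a lab.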

Technology For The Warfighter

Forbes

17-05-2025

  • Business
  • Forbes

Drone flying over ocean at dawn

In America, there's always been an interesting interplay between the public and private sectors, and the military has played a central role. We're familiar with DARPA's ARPANET becoming the Internet, and with other examples of military technology slowly making its way into the consumer sector. So as we hear about all of these exciting new enterprise developments, what's happening at the Department of Defense?

A recent presentation from Colonel Tucker Hamilton (Ret.) at Imagination in Action brought some of these realities to light. Hamilton talked about the use of new technology for drones and aircraft, and really put this in a particular perspective. Describing how AI drove a U.S. military drone for the first time, he went over lists of what these military offices need – cyberlogistics, robotics, sensing, synchronization, multimodal data, etc. 'We need to be able to do multi-modal sensor data gathering, summarizing that information for humans and non-humans,' he said. It sounded like he also coined a term when he talked about the 'battlespace of things,' similar to the Internet of Things, but made for military systems.

Hamilton also evaluated four major challenges facing decision makers in the military environment. The first one is education; the second, he said, is bureaucracy. 'We need senior leaders and our warfighters to understand the technology, but not on the cursory scale,' he said, suggesting that the military needs 'mission designers,' not just operators. 'They need to understand more broadly.' As for the onerous paperwork Hamilton refers to, there's an appeal to cut through the red tape and get things moving. 'Instead of us celebrating the purpose of the bureaucracy, to which there is typically a purpose, we rigidly adhere to it, with no means of being agile,' he said.

The third barrier to advancement Hamilton mentioned is risk aversion, and the fourth is parochial services. 'Who is risk averse?' he asked rhetorically. 'Well, our military leaders. They've incentivized poor behavior over years. They've surrounded themselves with digital immigrants. This is not necessarily bad, but … that becomes an echo chamber. They don't understand the technology. They don't know how to adopt it, and the people around them also don't understand how to adopt it.' He talked about the 'OODA loop' (observe, orient, decide and act), and said a nation not practicing this will eventually be left behind.

As for the threat of parochial approaches, Hamilton appealed to the idea that interoperability is key. 'The Department of the Army, Navy and Air Force are great at creating 1,000 blooming flowers,' he said, 'their own disparate technology that doesn't communicate and interoperate with one another. And that's not what we need. That's not what our warfighter needs. It's not what the battle space is going to require in the future.'

Hamilton previously served as the director of an MIT Accelerator project, which he said gave him some insight into how these things can work. Again, he promoted the principle of interoperability. 'Don't vendor-lock the government,' he said, speaking to private enterprise and its contributions. Those same ideas, he seemed to indicate, apply to international efforts, too. In response to questions about geopolitical competition, Hamilton opined that America is leading in LLMs, but not in computer vision. Later, he talked about sitting across from the Chinese in international talks on AI.
In general, he said, representatives of different countries have some of the same concerns when it comes to AI. This part of the presentation was absolutely fascinating to me, because of the reality that we need a global approach to containing AI. It's not just a race between adversaries – it's a collective venture as humans, and that is something Hamilton seemed to grasp deeply and profoundly.

'When we sit across the table from the Chinese delegation,' he said, citing his experience with the Brookings Institution as a participant in high-level international talks, 'we share a lot of the same concerns and a lot of the same views. We don't talk about our specific capability, but … we need to celebrate those type of relationships, we need to collaborate at those type of levels, because that is how we're going to be successful with the broader adoption of AI throughout our society.'

Addressing domestic needs, he spoke to startups and investors: 'I think we get so enamored and lured by, like, this huge, 100 million dollar aircraft, for instance, when smaller things will make it work, right?' he said. 'So (people need to be) focused on making it work at the small level, which is going to allow us to scale and be more effective at the larger level.'

My takeaway: let's keep thinking about those collaborations, in order to make sure that we harness AI the right way, and not get lost in trying to outdo each other militarily when it comes to flying aircraft.
