
Latest news with #ImaginationInAction

Do No Harm, Build AI Safely

Forbes

21-05-2025

  • Politics
  • Forbes


When it comes to being safe with AI, a lot of people would tell you: 'your guess is as good as mine.' However, there are experts working on this behind the scenes, and there's a general idea that we have to adopt the slogan 'do no harm' when it comes to employing these very powerful technologies. I wanted to present some ideas that came out of a recent panel discussion at Imagination in Action, where we talked about what's really at stake, and how to protect people in tomorrow's world.

In a general sense, panelists talked about how the context for AI is 'political' – political in the Greek sense, where, as historians point out, 'the polis was the cornerstone of ancient Greek civilization, serving as the primary political, social, and economic unit.' In other words, how we use AI has to do with people's politics, and with political outcomes. The ways that we use AI are informed by our worldviews, and by geopolitical sentiment as well.

'When politics are going well, it's invisible, because business rolls on, art, culture, everything rolls on and you're not really paying attention to politics,' said panelist Jamie Metzl, author of Superconvergence. 'But I'm the son of a refugee. I've lived in Cambodia. I spent a lot of time in Afghanistan. When politics goes bad, politics is the only story. So everything that we're talking about, about technology, AI, exists within the context of politics, and politics needs to go well to create a space for everything else, and that's largely on a national level.'

In terms of business, too, we have to look at how information is siloed for different use cases. One of the objectives of this kind of work is global governance – AI governance that sees the big picture, and applies its principles universally.
A lot of people, in talking about their AI fears, reference the Skynet technology from the Terminator films, where a vague doom is attached to future systems in a world where the robots are in charge. But some suggest it's not as blatant as all that: the overwhelming force of AI can be more subtle, and the real issue is how AI is already directing our social outcomes.

'It's the algorithms that already today are denying people access to housing, access to jobs, access to credit, that are putting them at risk of being falsely arrested because of how a biased algorithm misinterpreted who they were, and how our legal system compounded that technical error with legal injustice and systemic bias,' said panelist Albert Cahn.

Cahn pointed, as an example, to a system called Midas that was supposed to seek out fraud in unemployment insurance systems. Instead, he noted, the system went too broad, and started catching innocent people in its dragnet, submitting them to all kinds of hardship. 'When we are talking about the scales of getting it wrong with AI safety, this isn't about missing a box in some compliance checklist,' he said. 'This is truly a matter of people's livelihoods, people's liberty, and in some cases, sadly, even their lives.' That's something that we have to look out for in terms of AI safety.

Noelle Russell had a different metaphor for AI safety, based on her work on Alexa and elsewhere in the industry, where she saw small models with the capacity to scale, and thought about the eventual outcomes. 'I came to call these little models 'baby tigers,'' she said. 'Because everyone, when you get a new model, you're like, 'oh my gosh, it's so cute and fluffy, and I love it, and (in the context of model work) I can't wait to be on that team, and it's going to be so fun.' But no one is asking, 'Hey, look at those paws. How big are you going to be? Or razor-sharp teeth at birth. What are you going to eat? How much are you going to eat? Where are you going to live, and what happens when I don't want you anymore?' 23andme, we are selling DNA on the open market … You know, my biggest concern is that we don't realize, in the sea of baby tigers and excited enthusiasm we have about technology, that it might grow up one day and … hurt ourselves, hurt our children, but most importantly, that we actually have the ability to change that.'

Panelists also talked about measuring AI safety, and how that works. 'In carpentry, the maxim is 'measure twice, cut once',' said panelist Cam Kerry. 'When it comes to AI, it has to be 'measure, measure, measure and measure again'. It's got to be a continuous process, from the building of the system to the deployment of the system, so that you are looking at the outcomes, (and) you avoid the (bias) problems. There's good work going on. I think NIST, the National Institute of Standards and Technology, one of my former agencies at the Commerce Department, does terrific work on developing systems of measurement, and is doing that with AI, with the AI Safety Institute. That needs to scale up.'

Going back to the geopolitical situation, panelists referenced competition between the U.S. and China, where these two giants are trying very hard to dominate when it comes to new technology. Russell referenced a group called 'I love AI' that's helping to usher in the era of change, and provides a kind of wide-ranging focus group for AI. 'What I've uncovered is that there are anywhere from 12-year-old to 85-year-old (people,) farmers to metaphysicians, and they are all desperate to understand: 'What do you mean the world is changing, and how do I just keep my head above water?'' she said.

Then too, Russell mentioned, toward the end, the imperative for AI safety and how to get there: it's not a checklist you sign off on, and it's not a framework you adopt; it's the way you think, the way you build software and the way you build companies that will need to be responsible. These are some of the thoughts I found important in documenting progress toward AI safety in our times.

4 New Creative Directions For AI

Forbes

19-05-2025

  • Entertainment
  • Forbes


We all know that AI is changing enterprise in dramatic ways right now. But there are so many vectors of this progress that it's hard to isolate some of the fundamental ideas about how it's going to work. I wanted to highlight a survey of ideas on new breakthrough research and applications of neural network technologies. These come from a recent panel at an Imagination in Action event, where innovators talked about pushing the envelope on what we can do with this technology as a whole. These insights, I think, are valuable as we look at the capabilities we have right now, and at how companies can and will respond.

To a large extent, past research in AI has focused on working with text. Text was the first data format to become the currency of LLMs. I would say that happened for a number of reasons, including that words are easy to parse and separate into tokens. Also, text was the classical format of computing: it's easier to build systems that work with text or ASCII than ones that work through audio or video. In any case, we're now exploring boundaries beyond text, and looking at how other data formats respond to AI analysis.

One of these is audio. 'We certainly have text banks, but we haven't even scratched the surface on sound and decoding our voices,' said Joanna Pena-Bickley of Vibes AI, in explaining some of what her company is doing with wearables. 'Bringing these things together is really about breaking open an abundant imagination for creators. And if we can use our voices to be able to do that, I think that we actually stand a chance to actually (create) completely new experiences, sensual experiences.' It's great for auditory learners, and we may be able to do various kinds of diagnosis and problem-solving that we couldn't before. Pena-Bickley explained how this could help with cognitive decline, or with figuring out people's personal biological frequencies and how they 'vibe' together.
In the world of gaming, too, scientists can apply different kinds of AI research to the players themselves. In other words, traditional gaming was focused on creating an entertainment experience, but as people play, they generate data that can be useful in building solutions beyond the gaming world.

'We are trying to create dynamic systems, systems that respond to players,' said Konstantina Yaneva, a founder at Harvard Innovation Labs. 'So we map … cognitive science, game design, and then (use) AI-driven analytics to help map and then improve decision-making patterns in players. But this is also a very creative endeavor of how to collaborate with consumers of entertainment, and how to meet them where they are, and then to help them self-realize in some way.'

Another area of pioneering is extended reality: alongside AR and VR, XR is a technology that seems due to have its day in enterprise. 'It's always been time-consuming, difficult and hard to keep up, in terms of your teaching or if you're doing research,' explained renowned multi-technology artist Rus Gant, who has ties to MIT. 'So the idea is to use AI as a way to create content in near real time.' In addition, Gant talked about various aspects of applying AI, and about thinking out of the box, which could be interesting for anyone trying to meld AI with the humanities.

Here's another part of how companies are using the agentic AI approach that has developed only in the last few years. Noting that 'agentic' is now a big buzzword, Armen Mkrtchyan of Flagship Pioneering talked about the practice of applying this concept to science. It basically has to do with understanding the patterns and strategies of nature, and applying them to the digital world. This starts with the idea of analyzing the human brain and comparing it to neural nets. But it can go far beyond that – nature has its own patterns, its own cohesion and its own structures.
Mkrtchyan talked about how scientists can use that information to simulate things like proteins. He also mentioned that the company has a stealth project, to be revealed in time, that's based on this kind of engineering. Generally, he says, it's possible to create a system that works something like a generative adversarial system (this is my reference, not his), where some AI agents come up with things, and others apply pressure to a decision-making process.

'About two years ago, we asked ourselves how we could try to mimic what nature does … nature is very good at creating new ideas, new things, and the way it does (this), it creates variability, creates optionality, and then applies selective pressure. That's how nature operates. And we thought at the time that, potentially with AI agents, we could try to mimic the process … think of biology asking, can you create a specific peptide that binds with something else? Can we create an RNA molecule? Can it create a tRNA molecule?'

Expanding on this, Mkrtchyan noted some of the possible outcomes, with an emphasis on a certain kind of collaboration: if we can say that intelligence consists of nature's intelligence, human intelligence and machine intelligence, then can we leverage the power of all three to come up with new ideas and new concepts, and to drive them forward?

I also wanted to include this idea from Gant about audiences. Our audiences are changing, and we should be cognizant of that. We have Generation X, the sort of 'bridge' generation to AI, and then we have AI-native generations who have never known a world without these technologies.

'There is a very distinct difference when you come down the line from the Boomers to Gen X to Millennials to Gen Z and Gen Alpha; these are different groups,' Gant said. 'They think differently. They absorb information differently. They're stimulated in different ways, as to what excites them and gets them motivated. And there's no one size fits all.
And right now, I think there's a big danger in the AI world, particularly when you productize AI, to go for the largest number of customers and sort of leave the margins alone, because there's no money there. I think … the students that I have (who are) most responsive are basically the Gen X and soon-to-be Gen Alpha, (in) that they basically look at you and say, 'Why didn't you do this sooner? Why did you do it in a way that doesn't make any sense to me? I think in a sort of multiplex way. I multitask, I take input in various ways.' … We don't know what they're going to do with this. Whether it's Millennials or Gen Z or Gen Alpha, they're going to do really interesting things, and that's why I'm interested in how we can work with the AI in its non-traditional role, the solutionization world, where it's thinking outside the box.'

All of this is interesting information coming out of the April event, and it reflects a survey of what companies can do now that they were not able to do in the past. Check out the video.
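Mkrtchyan's description of 'variability plus selective pressure' maps neatly onto a classic generate-and-select loop. Here is a minimal sketch of that idea in Python; it is purely illustrative (the numeric target, mutation scale and pool size are invented for the example, and this is not Flagship Pioneering's actual system):

```python
import random

random.seed(0)  # deterministic for reproducibility

def mutate(candidate: list[float]) -> list[float]:
    """Create variability: a perturbed copy of a candidate (the 'generator' step)."""
    return [x + random.gauss(0, 0.1) for x in candidate]

def fitness(candidate: list[float]) -> float:
    """Selective pressure: score candidates; here, closeness to a fixed target."""
    target = [1.0, 2.0, 3.0]
    return -sum((x - t) ** 2 for x, t in zip(candidate, target))

def evolve(generations: int = 200, pool_size: int = 20) -> list[float]:
    # Start from a random pool of candidates.
    pool = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(pool_size)]
    for _ in range(generations):
        # Create variability and optionality...
        pool.extend(mutate(c) for c in pool)
        # ...then apply selective pressure, keeping only the fittest.
        pool.sort(key=fitness, reverse=True)
        pool = pool[:pool_size]
    return pool[0]

best = evolve()
```

In a real agentic setup, `mutate` would be a generator agent proposing candidate peptides or RNA designs, and `fitness` would be the agents (or lab assays) applying the selective pressure he describes.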

Technology For The Warfighter

Forbes

17-05-2025

  • Business
  • Forbes


In America, there's always been an interesting interplay between the public and private sectors, and the military has played a central role in it. We're familiar with DARPA's ARPANET becoming the Internet, and with other examples of military technology slowly making its way into the consumer sector. So as we hear about all of these exciting new enterprise developments, what's happening at the Department of Defense?

A recent presentation from Colonel Tucker Hamilton (Ret.) at Imagination in Action brought some of these realities to light. Hamilton talked about the use of new technology for drones and aircraft, and really put it in a particular perspective. Describing how AI drove a U.S. military drone for the first time, he went over lists of what these military offices need – cyber logistics, robotics, sensing, synchronization, multimodal data and so on. 'We need to be able to do multi-modal sensor data gathering, summarizing that information for humans and non-humans,' he said. It sounded like he also coined a term when he talked about the 'battlespace of things,' similar to the Internet of Things, but made for military systems.

Hamilton also evaluated four major challenges facing decision makers in the military environment. The first one is education; the second, he said, is bureaucracy. 'We need senior leaders and our warfighters to understand the technology, but not on a cursory scale,' he said, suggesting that the military needs 'mission designers,' not just operators. 'They need to understand more broadly.' As for the onerous paperwork Hamilton referred to, there's an appeal to cut through the red tape and get things moving. 'Instead of us celebrating the purpose of the bureaucracy, to which there is typically a purpose, we rigidly adhere to it, with no means of being agile,' he said. The third barrier to advancement Hamilton mentioned is risk aversion, and the fourth is parochial services. 'Who is risk averse?' he asked rhetorically.
'Well, our military leaders; they've incentivized poor behavior over the years. They've surrounded themselves with digital immigrants. This is not necessarily bad, but … that becomes an echo chamber. They don't understand the technology. They don't know how to adopt it, and the people around them also don't understand how to adopt it.'

He talked about the 'OODA loop' of observe, orient, decide and act, and said a nation not practicing it will eventually be left behind. As for the threat of parochial approaches, Hamilton appealed to the idea that interoperability is key. 'The Department of the Army, Navy and Air Force are great at creating 1000 blooming flowers,' he said, 'their own disparate technology that doesn't communicate and interoperate with one another. And that's not what we need. That's not what our warfighter needs. It's not what the battle space is going to require in the future.'

Hamilton previously served as the director of an MIT Accelerator project, which he said gave him some insight into how these things can work. Again, he promoted the principle of interoperability. 'Don't vendor-lock the government,' he said, speaking to private enterprise and its contributions. Those same ideas, he seemed to indicate, apply to international efforts, too.

In response to questions about geopolitical competition, Hamilton opined that America is leading in LLMs, but not in computer vision. Later, he talked about sitting across from the Chinese in international talks on AI. In general, he said, representatives of different countries have some of the same concerns when it comes to AI. This part of the presentation was absolutely fascinating to me because of the reality that we need a global approach to containing AI. It's not just a race for AI between adversaries – it's a collective venture among humans, and this is something Hamilton seemed to grasp deeply and profoundly.
'When we sit across the table from the Chinese delegation,' he said, citing his experience with the Brookings Institution as a participant in high-level international talks, 'we share a lot of the same concerns and a lot of the same views. We don't talk about our specific capability, but … we need to celebrate those types of relationships, we need to collaborate at those types of levels, because that is how we're going to be successful with the broader adoption of AI throughout our society.'

Addressing domestic needs, he spoke to startups and investors: 'I think we get so enamored and lured by, like, this huge, 100 million dollar aircraft, for instance, when smaller things will make it work, right?' he said. 'So (people need to be) focused on making it work at the small level, which is going to allow us to scale and be more effective at the larger level.'

My takeaway – let's keep thinking about those collaborations, in order to make sure that we harness AI the right way, and not get lost in trying to outdo each other militarily when it comes to flying aircraft.

6 Investor Predictions For AI In 2025

Forbes

14-05-2025

  • Business
  • Forbes


What are we likely to see as we continue to integrate the results of LLMs into the business world? There are certain overarching trends and patterns that experts are pointing to in gaming out the next few years of the AI revolution. For example, there's the tremendous need for data (which we'll talk about a little later). There's the need for energy, which is prompting new interest in nuclear power and other means of sourcing the juice needed for new data centers and systems. Then there are other sorts of changes getting recognized as investors prepare to take on the rest of 2025, and build strategy for successive years.

In a recent presentation at Imagination in Action, Ulrike Hoffmann-Burchardi, the CIO for Global Equities at the UBS Chief Investment Office, pointed to some interesting predictions for the AI market.

The first of her predictions was that AI data center spending would hit $4 billion by 2027, and exceed spending on other general-purpose data centers. She listed three reasons. One relates to the Jevons paradox, where efficiency innovations mean more of the new technology gets used. Another contemplates training and scaling laws in the industry. The third centers around inference. 'We're not just doing one-shot inferencing,' she said. 'We are now simulating different decision paths, and trying to find out what the best solution is going to be - that takes a lot more compute.'

Another of Hoffmann-Burchardi's predictions is that, in the race to feed AI engines, synthetic data will help. That's going to help address the problem of needing original data to give systems something to train on. It also alleviates some issues around intellectual property and data ownership that come with using human data for AI ends.
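Hoffmann-Burchardi's point about simulating different decision paths is easy to quantify in back-of-envelope terms: if a model explores N candidate reasoning paths instead of producing one answer, inference compute scales roughly linearly with N. A hedged sketch, with all numbers invented for illustration:

```python
def inference_cost(tokens_per_path: int, flops_per_token: float, num_paths: int = 1) -> float:
    """Rough inference compute: paths explored * tokens generated * FLOPs per token."""
    return num_paths * tokens_per_path * flops_per_token

# Illustrative numbers only: 500 generated tokens, 2 GFLOPs per token.
one_shot = inference_cost(500, 2e9)                  # a single answer
multi_path = inference_cost(500, 2e9, num_paths=16)  # best-of-16 style search

ratio = multi_path / one_shot  # compute multiplier from multi-path inference
```

This linear multiplier is why search-style inference ('trying to find out what the best solution is going to be') pushes data center demand far harder than one-shot chat does.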
Hoffmann-Burchardi hinted at continuing gains for AI hardware, while cautioning that some forms of hardware have incredible staying power, like the GPU, which has remained a popular format for over a decade. This corresponds to some of Hoffmann-Burchardi's writing in a report in February, where she talked about hardware trends. 'The release of Grok 3 by xAI on 17 February marked another proof point of hardware scaling laws,' she wrote. 'Grok 3 was trained on 100,000 NVIDIA H100 GPUs and, based on xAI's claims, it surpasses leading models from OpenAI and DeepSeek in areas such as mathematics, science, and coding, while it already achieved top rankings on the HuggingFace chatbot leaderboard. The capex spent by xAI for the development of its models confirms our view that the AI race will keep pushing the frontier of hardware spending.'

In other analysis, Hoffmann-Burchardi suggested that AI margins will actually go up. 'The reason for that is that these companies are their own best customers,' Hoffmann-Burchardi said. 'They use AI in order to lower costs and increase efficiency.' As support, she pointed to news from a recent earnings call where a big company saved $260 million, or 2.6% of net income, by using GenAI to upgrade 30,000 job applications. Hoffmann-Burchardi also cited Google estimating that 25% of its codebase is now written by AI. That share will increase, she suggested. 'You'll see more and more of that,' she said.

Going back to Hoffmann-Burchardi's February report on global equity strategy, we have the assertion that China will gain a significant amount of market share for Internet stocks: 'China's internet companies—from Alibaba, Baidu, ByteDance, to Tencent—have shown an impressive cadence of algorithmic innovation in developing efficient large language models since 2023. These models can be used to enhance the user experience, improve ad targeting, and personalize content, potentially leading to higher revenue.
Further automation through AI agents can drive operational costs lower, setting up for an inflection in operating margins.'

Responding to questions from the audience, Hoffmann-Burchardi talked about two options – decreasing costs by laying people off, and improving revenue. If companies can find ways to improve revenue, she suggested, they won't just use AI for productivity – they'll find completely new ways of doing business, and that may allow people to keep their jobs. Among her other observations:

'GPUs are ideally suited for parallel processing, especially if the algorithm, the transformer algorithm, is starting to evolve. Yes, there will be some competition, (but) we believe (that) five years from now, GPUs will still be the dominant AI chip.'

'We actually think any industry, any company can re-invent itself for AI. You need just two things. You need an AI-first mindset. You need some proprietary data … any company will have a chance to become a leader in its space.'

'AI is reinventing the way business is done at a speed that is accelerating, and that may be even difficult for us to fully understand.'

Those are some of the investing predictions around the AI world this year. Look for more as we continue to cover what's coming out of MIT conferences and events, and other sources like professional blogs by AI experts, as well as the biggest news stories of the year.
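As a quick sanity check on the savings figure above – $260 million described as 2.6% of net income – the implied net income works out to about $10 billion:

```python
savings = 260e6              # reported GenAI-driven savings, in dollars
share_of_net_income = 0.026  # the stated 2.6%

# Implied net income of the company in question: savings / share.
implied_net_income = savings / share_of_net_income  # roughly $10 billion
```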

AI HR Is Going To Rock Our Worlds As AI Adoption Soars

Forbes

09-05-2025

  • Business
  • Forbes


How do you deal with a human resources department that's run by robots? It's a big question, and one that many of us will have to consider as the weeks and months go on. AI automation is now a fact of life. It's not science fiction. It's not theory. It's happening. And that is being felt in the field of human resources – by the people who run these departments, and by the people who trust them to manage their affairs as employees.

How does AI assist in HR? Or to put it another way, what does AI do in HR departments? Well, obviously, there's talent and recruitment. There's onboarding and offboarding. There's, for lack of a better term, employee decommissioning, and the entire employee life cycle. It's more than just computers calculating your payroll. Increasingly, AI agents are going to be doing the kinds of things that Nancy or Susie from HR used to do, back when HR people had human bodies and human backgrounds. This resource from the Academy to Innovate HR will give you a flavor of the various instances of AI that can add to HR departments.

Vinay Gidwaney has experience at the MIT Media Lab and with his company, OneDigital, in figuring out how HR for AI works. In a recent talk at Imagination in Action, he talked about a 'lowest common denominator' in HR, and challenges coming down the pike. 'HR departments across the country are unprepared and woefully underskilled in the impact that AI is going to have on the American workforce,' he said. He quoted Jensen Huang of Nvidia saying 'IT will become the HR of AI.' I'm not sure exactly what that means, but in any case, he brought his own analysis, which is a bit different. 'We're in the advice-giving business,' he said of his work at OneDigital.
'We sit down with small business owners, and we help them figure out their benefits and retirement plans for their employees, insurance and so on. And we sit down with American families and help them deal with their financial concerns and find success.' AI, he said, gives 'mediocre advice,' which is one reason that people have to be involved. 'People need to drive it,' he said.

Here's a really interesting part of Gidwaney's presentation. He revealed that his company actually has AI coworkers – deep personas built for the AI parts of the HR department. 'We have a variety of AI co-workers that we deployed across the organization, and each coworker actually has a biography, they have a resume, they have experiences, they have skill sets, they have qualifications,' he said. 'They even go through regular assessments. So we test them. They have continuing education plans. They even have human supervisors.'

He introduced us to Ben, Piper, Artie, Oliver, and a whole slew of AI entities that have their own human avatars, human back stories and other characteristics, and explained that the human coworkers actually treat them as if they were human in many ways. There's even an AI hiring pipeline, where the company either discards AI interns or promotes them to be apprentices. It's all part of a human-based approach, he said, that has its advantages. 'We want our AI to act and behave like humans within our organization,' he said, 'and it's incredibly important to design and model them in a way that replicates how humans act and behave in the company.'

OneDigital advises 100,000 HR departments around the country. Gidwaney talked about what an ideal system looks like for maintaining HR as a discipline. 'How do you have a merit-based process of hiring and promoting people when people use AI and you don't even know about it?' he asked. 'These are the kinds of questions that we want to start to ask when AI comes into the company.
It does fundamentally change the culture of your organization. All of a sudden, the work in your organization is being done by something that you as an HR department don't have any control over.'

As a call to action, he appealed to the concept of proactive interpretation of how AI will help, and a real reckoning within company culture. 'HR needs to do what they do best, which is help humans succeed at work,' he said, adding that the AI questions shouldn't just be left up to technologists, but asked all the way through the enterprise. 'And if they're going to do that, we need to own the AI conversation in the company.'

You can view the rest of the video from IIA, where Gidwaney gets asked about bias and other areas of IT. In answering these questions, he continues to explore how we're going to deal with AI in our ranks – not just on the computer, but in our Slack conversations, in meetings, and as digital coworkers whom we need to acknowledge and recognize. Presumably, not everyone will be happy about sharing the break room and the conference room with various AI 'people,' but again, it's going to be a fact of life rather soon.

A while ago I wrote about Tobi Lütke's memo to Shopify employees telling them that AI use is now mandatory. That is likely to become an across-the-board reality pretty soon. You can also look back at recent blogs I've done on approaches like Alvin Graylin's manifesto, which talks about replacing work with dignity as a human identity. We're going to need to do that, too. But we can't stick our heads in the sand and pretend that AI isn't going to be next to us at the table.
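Gidwaney describes AI coworkers that carry biographies, skills, assessments and human supervisors, with interns either discarded or promoted to apprentices. Here is a minimal sketch of how such a record might be modeled; the field names, passing threshold and review logic are all my own invented assumptions, not OneDigital's actual system:

```python
from dataclasses import dataclass, field

@dataclass
class AICoworker:
    name: str
    role: str
    biography: str
    skills: list[str] = field(default_factory=list)
    level: str = "intern"          # interns are either discarded or promoted
    assessment_scores: list[float] = field(default_factory=list)
    supervisor: str = ""           # every AI coworker gets a human supervisor

    def record_assessment(self, score: float) -> None:
        """Log a regular assessment result, as Gidwaney describes."""
        self.assessment_scores.append(score)

    def review(self, passing: float = 0.8) -> str:
        """Promote a passing intern to apprentice, or flag it for removal."""
        if not self.assessment_scores:
            return "pending"
        avg = sum(self.assessment_scores) / len(self.assessment_scores)
        if avg >= passing:
            if self.level == "intern":
                self.level = "apprentice"
            return "promoted"
        return "discard"

# Hypothetical example loosely modeled on the 'Ben' persona mentioned above.
ben = AICoworker(name="Ben", role="benefits advisor",
                 biography="Handles routine benefits questions.",
                 skills=["benefits", "retirement plans"],
                 supervisor="HR lead")
ben.record_assessment(0.9)
ben.record_assessment(0.85)
status = ben.review()
```

The design point, per Gidwaney, is that modeling AI workers this way lets HR apply its existing merit-based machinery (assessments, supervision, promotion) to non-human staff.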
