Latest news with #humanintelligence


Forbes
5 days ago
- Business
- Forbes
AI And Humanity: Designing A Future Of Mutual Empowerment
Trushant Mehta is a philomath and a sedulous and inquisitive tech evangelist. He cofounded and serves as CTO of OpenEyes Technologies Inc.

In 2025, artificial intelligence (AI) isn't the antagonist of a dystopian tale. It's a dynamic partner in shaping a future where human intelligence and machine intelligence work in concert to amplify creativity, accelerate innovation and solve the world's most complex challenges. For years, AI has been portrayed as a force that might replace us: our jobs, our thinking, even our autonomy. But those fears, often fueled by science fiction and headlines, miss a fundamental truth: AI isn't here to substitute human ingenuity. It's here to elevate it. Every major tech leap, from the printing press to the internet, sparked fears of obsolescence. Yet, humans adapted, industries evolved and new opportunities emerged. AI is on the same path. The sooner we stop seeing it as humans versus AI, the sooner we can build a future shaped by humans with AI.

From Fear To Collaboration: A Paradigm Shift

AI's early headlines revolved around disruption: mass layoffs, job displacement and dystopian automation. But disruption, history shows, is often a prelude to reinvention. During the Industrial Revolution, the electrification era and the rise of the internet, every technological leap introduced fear and, eventually, growth. The same arc is unfolding with AI. According to the World Economic Forum's Future of Jobs Report 2025, although 92 million jobs could be displaced by 2030 due to drivers that include technological change like automation, 170 million new roles are projected to emerge, particularly in data science, sustainability, healthcare and AI development. The result? A net gain of 78 million jobs. The most successful organizations won't just adopt AI quickly—they'll integrate it wisely, blending machine power with human empathy, creativity and ethics.

Enhancing Human Productivity And Innovation

AI is transitioning from a back-end tool to a front-line collaborator, transforming sectors beyond tech.

1. Catalyzing AI Productivity: McKinsey forecasts as much as $4.4 trillion in productivity potential from corporate use cases of generative AI, indicating a sweeping, cross-functional impact while emphasizing that organizations must move beyond the pilot stage to fully harness these gains by embedding AI into workflows.

2. Revolutionizing Drug Discovery And Healthcare: AI can speed up drug discovery and streamline clinical trials. The FDA has reported a sharp rise in AI-enhanced drug applications. Tech leaders like Amazon and Nvidia are investing heavily in AI for healthcare, signifying a larger commitment to improving patient care and operational efficiency.

3. Advancing Space Exploration: NASA's ExoMiner has already used AI to discover hundreds of new exoplanets by analyzing complex datasets from telescopes—a task that would take humans years.

4. Optimizing Renewable Energy: In the energy sector, AI is playing a pivotal role in optimizing renewable energy production. Google's collaboration with U.S. power grid operators has led to the implementation of AI tools that significantly reduce grid connection times for renewable projects.

Preparing For An AI-Integrated Future

As AI becomes foundational, continuous training is no longer a perk—it's a prerequisite. According to the World Economic Forum, 50% of the workforce has already undergone upskilling or reskilling, up from 41% in 2023.
Forward-looking companies are already investing in talent transformation:

• Vodafone plans to fill 40% of its future software roles internally through AI training programs.
• Infosys is reskilling over 2,000 cybersecurity professionals to address growing digital threats.

At OpenEyes, we don't solely provide tools; we also foster adaptive talent and treat self-reskilling as a constant endeavor. Traditional one-size-fits-all workshops lacked impact and engagement, so we pivoted to a hands-on approach. We ran workshops on prompt engineering, data tagging and UX testing frameworks that involved co-creating features. Moreover, we taught ethical AI and survey logic design to nontechnical staff, particularly those working on our learning management system and credential management products. With these programs, we're focused on enabling deeper fluency in AI, customer insight synthesis, adaptive and iterative agile thinking, and holistic, inclusive frameworks.

Building A Future Of Mutual Empowerment

The promise of AI will only be fulfilled if its deployment is ethical, equitable and inclusive. The EU's AI Act, set to take full effect by August 2026, is a landmark framework that classifies AI systems by risk and regulates high-risk use cases in employment, law enforcement and health. Meanwhile, U.S. states like California and Massachusetts are leveraging existing laws such as consumer protection and anti-discrimination statutes to rein in unethical AI use.

OpenEyes is dedicated to ethical AI practices and is governed by frameworks that emphasize transparency and accountability. Governance frameworks allocate responsibility to specific individuals or teams, helping streamline operations within predefined boundaries. We implement bias identification and mitigation frameworks for ethical AI use, including data imbalance audits, AI fairness evaluation with IBM's AI Fairness 360 and Google's What-If Tool, counterfactual evaluation for outcome assessment and outcome bias assessment. Models exhibiting bias are corrected with countermeasures including re-weighting, re-sampling and adversarial debiasing (a brief sketch of this workflow follows at the end of this article). We ensure accountability by soliciting feedback from a range of impacted groups and providing documented procedures.

Conclusion: Embracing AI As Humanity's Greatest Ally

AI isn't a threat; it's an opportunity—a chance to elevate the human experience, solve problems at scale and unlock new realms of innovation. Like every major technology before it, AI will reflect our intentions. Used wisely, it can help us build a more inclusive, creative and resilient world. This isn't a story of man versus machine. It's a story of humanity with AI working together to create a future where intelligence, both human and artificial, is used not just to optimize but to empower. The challenge before us isn't to contain AI but to lead it with wisdom, ethics and vision.
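To make the bias-audit step described above concrete, here is a minimal sketch of a data-imbalance audit followed by a re-weighting countermeasure, using IBM's open-source AI Fairness 360 toolkit (one of the tools the article names). The tiny dataset, column names and group definitions are synthetic placeholders invented for illustration; this is a sketch of the general technique, not OpenEyes' actual pipeline.

```python
# Sketch only: synthetic data; assumes `pip install aif360 pandas`.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Hypothetical dataset: 'sex' is the protected attribute (1 = privileged
# group) and 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.3, 0.8, 0.2],
    "label": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Data-imbalance audit. Disparate impact is
# P(favorable | unprivileged) / P(favorable | privileged); 1.0 means parity.
audit = BinaryLabelDatasetMetric(
    dataset, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact before:", audit.disparate_impact())
print("Statistical parity difference before:",
      audit.statistical_parity_difference())

# Re-weighting countermeasure: adjusts per-instance weights so each
# (group, label) combination is fairly represented before model training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
repaired = rw.fit_transform(dataset)

audit_after = BinaryLabelDatasetMetric(
    repaired, unprivileged_groups=unprivileged, privileged_groups=privileged
)
print("Disparate impact after:", audit_after.disparate_impact())
```

On this toy data, the weighted disparate impact moves back toward 1.0 after re-weighting; in practice the adjusted instance weights would then feed into model training, with re-sampling or adversarial debiasing as alternatives when re-weighting alone is insufficient.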


Forbes
07-07-2025
- Business
- Forbes
Sometimes It's OK To Say ‘No Way' To AI
Let's think about some framework for deciding when and where it's appropriate to apply AI.

Although it pains me to say this as a longtime technology disciple and fintech booster, companies need to create policies about when not to use technology—specifically artificial intelligence. I make my living selling technology solutions to financial companies. So it's in my interest to convince professionals in wealth management, asset management, capital markets, and, indeed, all areas of financial services, to adopt new technology solutions whenever and wherever possible. This mentality has given rise to billions of dollars of value creation and enabled the democratization of finance. However, as artificial intelligence proliferates across the business world and society, I find myself frequently encountering examples of situations where companies would have been better served spurning AI in favor of old-fashioned 'HI,' or human intelligence.

Some of these examples are all too obvious and all too common. By now, we've probably all been forced to interact with an AI-powered chatbot when what we really wanted was to talk to a human client service rep who could quickly answer our question or fix our problem. Recently, I've also encountered communications from companies that were obviously (and poorly) written by AI. Those experiences have mainly been on social media, where apparently some companies are content to let their brands be defined by unedited, AI-generated text. My first thought is: Are you underestimating the intelligence of your clients?

These examples of annoying or excessive AI use might turn off some consumers to a specific company or brand. But misapplications of AI can also have much more serious consequences. Artificial intelligence is still very much a work in progress. AI models hallucinate, they make mistakes and they can be influenced by dangerous biases introduced from training data, algorithmic formulations and human designers. Given these shortcomings, the consequences of an AI-generated mishap could be catastrophic for financial services companies that hold consumers' wealth, must earn their trust every day and operate under tight regulations.

TALK: A Decision-Making Framework for AI

So despite my enthusiasm for technology in general—and my strong recommendations in past columns for individuals to start experimenting with AI as a time-saver and personal assistant—I think it's important to pause for a moment. Let's think about some framework for deciding when and where it's appropriate for companies to apply AI, and when it might be better to stick with human intelligence and more traditional methodologies. This framework should be applicable to both external-facing functions that directly impact customers and internal applications involving a company's operational systems.

Of course, the first element of any business decision-making framework is a cost-benefit analysis. In a growing number of cases, companies will find that AI solutions are simply more economically attractive than human alternatives, at least in the long run. Next, companies must look at the AI solution from a risk-management perspective. Are there certain core functions in the business that are simply too important (or too susceptible to error or compliance issues) to fully entrust to AI? Finally, companies must institutionalize some guardrails that prevent cost-savings incentives from driving these decisions entirely, ensuring that senior management takes non-financial criteria into account.
Companies need some system that requires decision-makers to gather information and opinions from a broad range of employees about the wisdom and implications of using AI in specific applications. This system can be laid out with the acronym 'TALK':

T — Think about all the ways switching to an AI solution will affect workflows, customers, culture and the brand.

A — Ask the business team most directly involved if they can think of any detrimental effects the switch to AI could have in those same categories (workflows, customers, culture and brand).

L — Leverage colleagues from across the organization, including business teams, technology experts, senior management, sales and client service, and other areas to gather opinions about applying the AI solution to this particular business issue.

K — Kick AI to the curb when the team's HI is better than AI. Your clients deserve the respect of human intelligence.

Humans Still Bring Something Important

That last element is most important. At a time when AI is tackling problems, cutting costs and unlocking new opportunities, it takes a bold decision-maker to assess a situation and determine that, no, in this instance, it makes sense to hit pause on the AI solution and stick with our human intelligence. In that case, having results from a formal, institutionalized decision-making process will allow business leaders and other decision-makers to more easily justify their determination to say no to AI. That ammunition will be particularly valuable when there is a significant cost savings associated with the AI solution. As a technology salesperson, I hope companies don't decide to spurn the tech-based solutions too often. But sometimes it's worth remembering that we humans do bring something important to the table, too.
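Purely as an illustration, the TALK checklist above could be institutionalized as a lightweight review record that decision-makers fill in before any AI rollout. The class name, fields and sample inputs below are hypothetical choices for this sketch, not part of the author's framework.

```python
# Hypothetical sketch of a TALK review record; all names are illustrative.
from dataclasses import dataclass, field

@dataclass
class TalkReview:
    proposal: str                                            # AI use case under review
    think_impacts: list[str] = field(default_factory=list)   # T: effects on workflows, customers, culture, brand
    team_concerns: list[str] = field(default_factory=list)   # A: detrimental effects raised by the business team
    org_opinions: list[str] = field(default_factory=list)    # L: cross-organization input
    keep_human: bool = False                                 # K: True = stick with human intelligence

    def decision(self) -> str:
        # Block a verdict until every TALK step has produced input.
        if not (self.think_impacts and self.team_concerns and self.org_opinions):
            return "incomplete: gather more input before deciding"
        return "retain human intelligence" if self.keep_human else "adopt the AI solution"

review = TalkReview(
    proposal="Replace tier-1 client service with an AI chatbot",
    think_impacts=["faster routine answers", "risk to brand voice"],
    team_concerns=["complex cases escalate poorly"],
    org_opinions=["compliance: acceptable only with a human fallback"],
    keep_human=True,
)
print(review.decision())  # -> retain human intelligence
```

The point of the structure is the audit trail: a completed record gives leaders the documented justification the author describes when they choose to say no to AI.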


Forbes
03-07-2025
- Science
- Forbes
AGI And AI Superintelligence Are Going To Sharply Hit The Human Ceiling Assumption Barrier
Is there a limit or ceiling to human intelligence and how will that impact AI?

In today's column, I examine an unresolved question about the nature of human intelligence, which in turn has a great deal to do with AI, especially regarding achieving artificial general intelligence (AGI) and potentially even reaching artificial superintelligence (ASI). The thorny question is often referred to as the human ceiling assumption. It goes like this. Is there a ceiling or ending point that confines how far human intellect can go? Or does human intellect extend indefinitely, with nearly infinite possibilities? Let's talk about it.

This analysis is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities.

Heading Toward AGI And ASI

First, some fundamentals are required to set the stage for this weighty discussion. There is a great deal of research going on to further advance AI. The general goal is to either reach artificial general intelligence (AGI) or maybe even the outstretched possibility of achieving artificial superintelligence (ASI). AGI is AI that is considered on par with human intellect and can seemingly match our intelligence. ASI is AI that has gone beyond human intellect and would be superior in many if not all feasible ways. The idea is that ASI would be able to run circles around humans by outthinking us at every turn.

We have not yet attained AGI. In fact, it is unknown whether we will reach AGI, or whether AGI will be achievable in decades or perhaps centuries from now. The AGI attainment dates that are floating around are wildly varying and wildly unsubstantiated by any credible evidence or ironclad logic. ASI is even more beyond the pale when it comes to where we are currently with conventional AI.

Human Intellect As A Measuring Stick

Have you ever pondered the classic riddle that asks how high is up? I'm sure that you have. Children ask this vexing question of their parents. The usual answer is that up goes to the outer edge of Earth's atmosphere. After hitting that threshold, up continues onward into outer space. Up is either a bounded concept based on our atmosphere or a nearly infinite notion that goes as far as the edge of our expanding universe.

I bring this riddle to your attention because it mirrors a similar question about the nature of human intelligence: how high can human intellect go? In other words, the intelligence we exhibit currently is presumably not our upper bound. If you compare our intelligence with that of past generations, it seems relatively apparent that we keep increasing in intelligence on a generational basis. Will those born in the year 2100 be more intelligent than we are now? What about those born in 2200? All in all, most people would speculate that yes, the intelligence of those future generations will be greater than the prevailing intelligence at this time.

If you buy into that logic, the up-related aspect rears its thorny head. Think of it this way. The capability of human intelligence is going to keep increasing generationally. At some point, will a generation exist that has capped out? That future generation would represent the highest that human intellect can ever go. Subsequent generations will either be of equal human intellect, or less so, and not more so.
The reason we want an answer to that question is that there is a pressing present-time need to know whether there is a limit or not. As I pointed out earlier, AGI will be on par with human intellect, while ASI will be superhuman intelligence. Where does AGI top out, such that we can then draw a line and say that's it? Anything above that line is going to be construed as superhuman or superintelligence. Right now, using human intellect as a measuring stick is hazy because we do not know how long that line is. Perhaps the line ends at some given point, or maybe it keeps going infinitely. Give that weighty thought some mindful pondering.

The Line In The Sand

You might be tempted to assume that there must be an upper bound to human intelligence. This intuitively feels right. We aren't at that limit just yet (so it seems!). One hopes that humankind will someday live long enough to reach that outer atmosphere.

Since we will go with the assumption that human intelligence has a topping point, doing so for the sake of discussion, we can now declare that AGI must also have a topping point. The basis for that claim is certainly defensible. If AGI consists of mimicking or somehow exhibiting human intelligence, and if human intelligence meets a maximum, AGI will also inevitably meet that same maximum. That's a definitional supposition. Admittedly, we don't necessarily know yet what the maximum point is. No worries; at least we've landed on a stable belief that there is a maximum. We can then draw our attention toward figuring out where that maximum resides. No need to be stressed by the infinite aspects anymore.

Twists And Turns Galore

AI gets mired in a controversy associated with the unresolved conundrum underlying a ceiling to human intelligence. Let's explore three notable possibilities.

First, if there is a ceiling to human intelligence, maybe that implies that there cannot be superhuman intelligence. Say what? It goes like this. Once we hit the top of human intelligence, bam, that's it, no more room to proceed further upward. Anything up until that point has been conventional human intelligence. We might have falsely thought that there was superhuman intelligence, but it was really just intelligence slightly ahead of conventional intelligence. There isn't any superhuman intelligence per se. Everything is confined to being within conventional intelligence. Thus, any AI that we make will ultimately be no greater than human intelligence. Mull that over.

Second, if there is a ceiling to human intelligence, perhaps via AI we can go beyond that ceiling and devise superhuman intelligence. That seems more straightforward. The essence is that humans top out, but that doesn't mean that AI must also top out. Via AI, we might be able to surpass human intelligence, i.e., go past the maximum limit of human intelligence. Nice.

Third, if there isn't any ceiling to human intelligence, we would presumably have to say that superhuman intelligence is included in that infinite possibility. Therefore, the distinction between AGI and ASI is a falsehood. It is an arbitrarily drawn line.

Yikes, it is quite a mind-bending dilemma. Without some fixed landing on whether there is a human intelligence cap, the chances of nailing down AGI and ASI remain elusive. We don't know the answer to this ceiling proposition; thus, AI research must make varying base assumptions about the unresolved topic.
AI Research Taking Stances

AI researchers often take the stance that there must be a maximum level associated with human intellect. They generally accept that there is a maximum even if we cannot prove it. The altogether unknown, but plausibly existent, limit becomes the dividing line between AGI and ASI. Once AI exceeds the human intellectual limit, we find ourselves in superhuman territory.

In a recently posted paper entitled 'An Approach to Technical AGI Safety and Security' by Google DeepMind researchers Rohin Shah, Alex Irpan, Alexander Matt Turner, Anna Wang, Arthur Conmy, David Lindner, Jonah Brown-Cohen, Lewis Ho, Neel Nanda, Raluca Ada Popa, Rishub Jain, Rory Greig, Samuel Albanie, Scott Emmons, Sebastian Farquhar, Sébastien Krier, Senthooran Rajamanoharan, Sophie Bridgers, Tobi Ijitoye, Tom Everitt, Victoria Krakovna, Vikrant Varma, Vladimir Mikulik, Zachary Kenton, Dave Orr, Shane Legg, Noah Goodman, Allan Dafoe, Four Flynn, and Anca Dragan (arXiv, April 2, 2025), the authors made several salient points on this question. The researchers try to make a compelling case that there is such a thing as superhuman intellect. The superhuman consists of that which goes beyond the human ceiling. Furthermore, AI won't get stuck at the human intellect ceiling. AI will surpass the human ceiling and proceed into the superhuman intellect realm.

Mystery Of Superhuman Intelligence

Suppose that there is a ceiling to human intelligence. If that's true, would superhuman intelligence be something entirely different from the nature of human intelligence? In other words, we are saying that human intelligence cannot reach superhuman intelligence. But the AI we are devising seems to be generally shaped around the overall nature of human intelligence. How then can AI that is shaped around human intelligence attain superintelligence when human intelligence cannot apparently do so? Two of the most frequently voiced answers are these possibilities.

The usual first response to the exasperating enigma is that size might make the difference. The human brain weighs approximately three pounds and is entirely confined to the size of our skulls, roughly about 5.5 inches by 6.5 inches by 3.6 inches. The human brain consists of around 86 billion neurons and perhaps 1,000 trillion synapses. Human intelligence is seemingly stuck with whatever can happen within those sizing constraints. AI, by contrast, is software and data that runs across perhaps thousands or millions of computer servers and processing units. We can always add more. The size limit is not as constraining as a brain housed inside our heads. The bottom line is that AI might exhibit superhuman intelligence by exceeding the physical size limitations that human brains have. Advances in hardware would allow us to substitute faster processors and add more processors to keep pushing AI onward into superhuman intelligence.

The second response is that AI doesn't necessarily need to conform to the biochemical compositions that give rise to human intelligence. Superhuman intelligence might not be feasible for humans because the brain is biochemically precast. AI can readily be devised and revised to exploit all manner of new kinds of algorithms and hardware that differentiate AI capabilities from human capabilities.

Heading Into The Unknown

Those two considerations of size and differentiation could also work in concert.
It could be that AI becomes superhuman intellectually because of both the scaling aspects and the differentiation in how AI mimics or represents intelligence.

Hogwash, some exhort. AI is devised by humans. Therefore, AI cannot do better than humans can do. AI will someday reach the maximum of human intellect and go no further. Period, end of story.

Whoa, comes the retort. Think about humankind figuring out how to fly. We don't flap our arms like birds do. Instead, we devised planes. Planes fly. Humans make planes. Ergo, humans can decidedly exceed their own limitations. The same will apply to AI. Humans will make AI. AI will exhibit human intelligence and at some point reach the upper limits of human intelligence. AI will then be further advanced into superhuman intelligence, going beyond the limits of human intelligence. You might say that humans can make AI that flies even though humans cannot do so.

A final thought for now on this beguiling topic. Albert Einstein famously said this: 'Only two things are infinite, the universe and human stupidity, and I'm not sure about the former.' Quite a cheeky comment. Go ahead and give the matter of AI becoming AGI and possibly ASI some serious deliberation, but remain soberly thoughtful, since all of humanity might depend on what the answer is.


Yahoo
08-06-2025
- Politics
- Yahoo
Voices: Government needs the can-do mindset I experienced in the Army to push change through fast
Government moves too slowly. That's not just the fault of the current government or the last one – it's the system. Slowed down by bureaucracy. Paralysed by 'can't-do' figures. Obsessed with process over progress.

I come from a background of delivery. In the Army, working in a 'human intelligence unit' – liaising with agents and special forces – we had to move from first gear to fifth in an instant. Lives depended on it. Getting ahead of the enemy, protecting our people and achieving results was the mission – not talking it to death. Confirming the location of a high-value target, whilst also ensuring they were alone and targetable, or identifying the precise site of an improvised explosive device factory, required creativity and a determined mindset – a willingness to take calculated risks to save lives and win.

When I worked in counterterrorism at the Ministry of Defence, delivery wasn't optional. We built a culture of 'can-do': creative, risk-aware and focused on action. It wasn't about perfection. It was about progress. Government could learn a lot from the mindset of the finest military in the world and the departments that work every day to protect the public from the threat of terrorism. An unstoppable political will must go hand in hand with a mindset of delivery.

I think back to our counterterror planning meetings. The mission? To stop terrorists attacking our great country. No timewasters. Just serious professionals putting ideas on the table, pulling them apart, war-gaming every outcome, then locking in a plan and going all-out to deliver. That mindset – challenge, rigour and rapid execution – is what the system of government has desperately lacked for decades.

Too often, it's delay by design. Endless consultations. Five-year strategies that take ten. Pet projects blocked by internal turf wars. Take the Lower Thames Crossing: more than £1.2 billion spent before a single spade in the ground – all because of drawn-out decision-making and red tape. Or the A9 dualling project in Scotland – promised by 2025, now pushed back to 2035. Ten years of drift. These delays are not acts of God. They are failures of will.

The truth is, Whitehall needs reform. There are dedicated, brilliant people across the civil service – but too many are trapped in a system built to say 'no'. Risk aversion is often rewarded, not challenged. Delivery is too often deprioritised in favour of process, and meaningful reform is blocked by a sprawling web of arm's-length bodies and quangos that diffuse responsibility and stifle urgency.

We need a leaner, more focused state – one that empowers departments to move at pace and is held accountable for outcomes, not paperwork. That means streamlining quangos where appropriate, ending duplication, and changing the mindset within government itself. Ministers must be prepared to challenge officials – not to attack, but to sharpen decision-making and force clarity on delivery.

Wes Streeting's approach to the NHS offers a blueprint. He's made clear that, as health secretary, he expects faster delivery, more accountability, and a culture that doesn't settle for 'this is just how things are done'. Abolishing NHS England shows a steely commitment to the change he expects. But reforming structures is only half the battle – changing the culture is the real prize. Government must operate with a sense of mission, not maintenance. The British public doesn't care whether a successful policy comes from Bevan or Thatcher. They care that it works. That it's delivered.
We need to strip out the ideology and face complex problems with a solutions-based mindset. Let the evidence lead. Move fast. Be willing to make mistakes in the name of making progress. And above all, get things done. Because there's serious work to do.

