
Latest news with #JohnMcCarthy

AI's Magic Cycle

Forbes

18-05-2025

  • Science
  • Forbes

AI's Magic Cycle

Here's some of what innovators are thinking about with AI research today.

When people talk about the timeline of artificial intelligence, many of them start in the 21st century. That's forgivable if you don't know a lot about the history of how this technology evolved. It's only in this new millennium that most people around the world got a glimpse of what the future holds with these powerful LLM systems and neural networks.

But for people who have been paying attention and understand the history of AI, it really goes back to the 1950s. In 1956, a number of notable computer scientists and mathematicians met at Dartmouth to discuss the evolution of intelligent computation systems. And you could argue that the idea of artificial intelligence goes back much further than that. When Charles Babbage built his analytical engine, even rote computation wasn't something that machines could do. But when the mechanical became digital, and data became more portable in computation systems, we started to get those kinds of calculations done in an automated way.

Now there's the question of why artificial intelligence didn't come along in the 1950s, or the 1960s, or the 1970s. 'The term "Artificial Intelligence" itself was introduced by John McCarthy as the main vision and ambition driving research moving forward,' writes Alex Mitchell at Expert Beacon. '65 years later, that pursuit remains ongoing.'

What it comes down to, I think most experts would agree, is that we didn't have the hardware. In other words, you can't build human-like systems when your input/output medium is magnetic tape. But in the 1990s the era of big data arrived, and the cloud revolution followed. Once those pieces were in place, we had all of the systems we needed to host LLM intelligence.

Just to clarify what we're talking about here: most of the LLMs that we use work on next-word or next-token prediction – they're not sentient, per se, but they use elegant and complex data sets to mimic intelligence. And to do that, they need big systems. That's why colossal data centers are being built right now, and why they require so much energy, so much cooling, and so on.
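To make that next-token idea concrete, here is a minimal, illustrative sketch in Python. It is a toy stand-in for what a real LLM does at vastly larger scale; the four-word vocabulary and the scores are invented for the example.

```python
import math
import random

# A real LLM emits a score (logit) for every token in its vocabulary,
# converts the scores to probabilities with a softmax, and then samples
# the next token. The vocabulary and logits below are made up.

def softmax(logits):
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["cycle", "magic", "model", "data"]
logits = [2.1, 0.3, 1.4, 0.9]  # scores a trained model might produce

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```

Repeated over and over, with each chosen token appended to the context, this sampling loop is what produces fluent text.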
At an Imagination in Action event this April, I talked to Yossi Matias, the head of research at Google and a 19-year veteran of the company, about research there and how it works. He talked about a cycle of research motivation that involves publishing, vetting and applying results back for impact. But he also spoke to the idea that AI goes back farther than most people think. 'It was always there,' he said, invoking the Dartmouth conference and what it represented. 'Over the years, the definition of AI has shifted and changed. Some aspects are kind of steady. Some of them are kind of evolving.'

Then he characterized the work of a researcher, comparing motives for groundbreaking work. 'We're curious as scientists who are looking into research questions,' he said, 'but quite often, it's great to have the right motivation to do that, which is to really solve an important problem.'

'Healthcare, education, climate crisis,' he continued. 'These are areas where making that progress, scientific progress … actually leads into impact, that is really impacting society and the climate. So each of those I find extremely rewarding, not only in the intellectual curiosity of actually addressing them, but then taking that and applying it back to actually get into the impact that they'd like to get.'

Ownership of a process, he suggested, is important, too. 'An important aspect of talking about the nature of research at Google is that we are not seeing ourselves as a place where we're looking into research results, and then throwing them off the fence for somebody else to pick up,' he said. 'The beauty is that this magic cycle is really part of what we're doing.'

He talked about teams looking at things like flood prediction, where he noted the potential for future advances. We also briefly went over quantum computing, where Matias suggested there's an important milestone ahead. 'We can actually reduce the quantum error, which is one of the hurdles, technological hurdles,' he said. 'So we see good progress, obviously, on our team.'

One thing Matias noted was the work of Peter Shor, whose algorithm, he suggested, demonstrated some of the capabilities that quantum research could usher in. 'My personal prediction is that as we're going to get even closer to quantum computers that work, we're going to see many more use cases that we're not even envisioning today,' he noted.

Later, Matias spoke about his notion that AI should be assistive to humans, and not a replacement for human involvement. 'The fun part is really to come together, to brainstorm, to come up with ideas on things that we never anticipated coming up with, and to try out various stuff,' he said. Explaining how AI can fill certain gaps in the scientific process, he described a quick cycle by which, by the time a paper is published on a new concept, that concept can already be in place in, say, a medical office. 'The one area that I expect actually AI to do much more (in) is really (in) helping our doctors and nurses and healthcare workers,' Matias said.

I was impressed by the scope of what people have done, at Google and elsewhere. So whether it's education or healthcare or anything else, we're likely to see quick innovation, and applications of these technologies to our lives. And that's what the magic cycle is all about.

Lincoln-Way East baseball star Jack Bauer is something to behold on the mound

CBS News

12-05-2025

  • Sport
  • CBS News

Lincoln-Way East baseball star Jack Bauer is something to behold on the mound

The Lincoln-Way East High School baseball team in southwest suburban Frankfort, Illinois, is having a very good season led by a hard-throwing lefty. His name is Jack Bauer. No, he is not a TV action hero, and he is a bit young to be played by Kiefer Sutherland. But this Jack Bauer can throw a baseball amazingly fast, and when he takes the mound, it's something to behold.

"It's just amazing. One of a kind. One of our coaches said, 'The ball kind of sizzles out of his hand,'" said Lincoln-Way East baseball coach John McCarthy. "It's amazing to watch, right? Like that type of talent."

Bauer, the senior leader of a talented Lincoln-Way Griffins pitching staff, had a viral moment earlier this season at a tournament in Georgia — and for good reason. He hit 102 miles per hour on the radar gun.

"You know, it doesn't feel that fast when you're on the mound. You don't get to see it like from the side like some of these other guys do, or from the plate view," said Bauer. "I'd love to hit against myself and just have no chance."

Bauer was already garnering attention before breaking triple digits. He committed to Virginia as only a sophomore. But he has taken his game to another level this season — adding better control to go with his increased velocity.

"You know, 3-2 counts, 1-1 counts, counts that we've got to win — he's done a good job throwing strikes, and you know, forcing them to hit his, you know, really good stuff," said McCarthy.

"I've increased my workload in the weight room a ton — especially within my legs and within my core," said Bauer. "Putting on the extra weight really got my body under myself, and my legs especially. The motion is a lot easier to repeat now."

And even though he is a little young for the show, Bauer leans into the fact that he shares his name with a pretty iconic television character. In fact, he wears the number 24 because of that other Jack Bauer.

"I used to wear 8 just because of my birthday. I was like, you know what? I'll play into it a little bit, because, like, everyone always asks me, you know, like, '24, like after the show?' I said, 'I mean, yeah,'" he said, "and my parents got some jokes in them for that."

Bauer, who was not born yet when "24" went on the air, said he has not yet watched it. But while Jack Bauer the baseball player may not be saving the world, he is helping lead Lincoln-Way East to a great season — as the Griffins chase a conference title and maybe even more in the playoffs.

"You know, he's just been great as a leader. That's what I've been telling a lot of Major League scouts. Obviously, the stuff on the field is great and it speaks for itself," McCarthy said. "But for me, I think the biggest thing is what he's doing for the program and for the younger guys, and really setting the right tone, and for the future of this program."

Bauer is excited for his future, and sounds ready to meet his lofty goals of pitching at the highest level.

"I just want to pitch on the biggest stage. I want all the pressure on me," Bauer said. "I feel like I can compete on that level, and so that's really what I'm working towards."

With a fastball that's already MLB-ready, that sure seems like a reachable goal. Young Jack Bauer is on a mission. He has been climbing up MLB Draft boards and was ranked the No. 32 overall prospect in rankings released in late April.

Already set up to struggle, MLS Cup champion LA Galaxy are snakebitten, too

New York Times

08-05-2025

  • Sport
  • New York Times

Already set up to struggle, MLS Cup champion LA Galaxy are snakebitten, too

When the LA Galaxy take the field against the New York Red Bulls on Saturday evening, it will have been 154 days since the two MLS originals squared off in last year's MLS Cup final. The Galaxy won that game, 2-1, returning to their place as the most decorated club in league history.

That remains the last MLS game LA has won.

The Galaxy made MLS history last weekend when they lost to Sporting Kansas City 1-0 without giving up a shot. Not a shot on goal. Any sort of shot. The lone tally in the game came off an own goal from defender Maya Yoshida. It sent LA spiraling to an 11th consecutive game without a win to start the season (0-8-3, three points), another MLS record.

'When it rains, it f—ing pours, man,' LA goalkeeper John McCarthy said. 'You've got to figure out how to get out of it, and no one's going to help you besides the 28 guys in the locker room. You can't start looking around and going, 'Who's going to do it for us?' It's truly got to be an 'us' thing.'

There were warning signs that this would be a difficult season for the Galaxy. They lost star player Riqui Puig to a torn ACL in last season's Western Conference championship and will be without their talisman for most of this season. That loss was especially difficult because so much of how the Galaxy played was built around Puig. His 13 goals and 15 assists didn't tell the full story of his influence. No one in the league touched the ball more than he did last season; Puig led the category by nearly 500 touches. He also had the most passes and most pass attempts in the league. The Galaxy haven't found a way to fully adjust their style of play without him.

Puig's absence was compounded by salary cap issues going into the season. The Galaxy were so tight up against the cap that they moved several players to get compliant. That included midfielder Mark Delgado, who has started nine games for LAFC this season; Gastón Brugman, who won MLS Cup MVP; and forward Dejan Joveljić, who had 15 goals and six assists last season. The loss of three veteran players, as well as injury issues for stars Joseph Paintsil, Gabriel Pec and Marco Reus, who was expected to step up in Puig's absence, has made matters worse.

'With each player we lost, we lost something,' Galaxy coach Greg Vanney said. 'Even when you talk about transition defending and cutting off the series of errors that maybe happen at the top part of the field and transition all the way down to the bottom part of the field. We have two very experienced midfielders who aren't with us anymore that cut off those types of errors, that make reads and understand their priorities. So we have younger midfielders in those situations who are learning these moments and that are going there.

'Each guy that we lost, there's a percentage of who we were that went out the door with those guys. And now we are trying to add new guys, build them up, get them to the same level.'

Getting just 322 minutes out of Reus, the German legend, this season has been especially difficult. To be without a designated player and a highly paid star like Reus, whose budget charge is at max TAM levels, is deadly in MLS. MLS roster rules are designed to top-load the roster. When high-paid players are injured or not producing, it can often spell disaster. It has this season for the Galaxy.

Galaxy general manager Will Kuntz, who built the roster that won MLS Cup but now must figure out a way to strengthen a group that is floundering, said the injuries and absences can't be seen as an excuse.
'We knew what we were doing, we pushed our chips into the middle of the table last year to try to make it happen,' Kuntz said. 'We took a little bit of an aggressive stance because we thought we had a chance to win. The league rules stuff is a crutch. It's a reality, but it's not unique to us.'

MLS rules are essentially set up so that it's difficult for any one team to build a dynasty. There have been exceptions with teams that have had a level of sustained success. Most recently, the Seattle Sounders and Toronto FC stand out.

Vanney coached the Toronto teams that went to MLS Cup three times in four years from 2016-19, winning once. He pointed out this week that TFC also missed the playoffs in 2018, then rebuilt and got back to MLS Cup in 2019 and challenged for a Supporters' Shield in 2020.

'Every single year that you're successful, you've got to be able to sell one or two players to try to generate money back into your cap so you can pay off the things that help you to become a champion,' Vanney said. 'And so we're going through those stresses ourselves.'

The Galaxy, one of the more dynamic attacking teams last season, have sputtered on the offensive end. Paintsil missed the first seven games of the season, but his return didn't lift LA as one would have expected. The Ghanaian winger had 10 goals and 10 assists last season, but he has no goals in five starts this year. Pec had 16 goals and 14 assists in 2024, but he has just one goal and two assists in 10 starts.

The Galaxy have also been poor defensively, especially at the start of the season. They are tied for a league-worst 21 goals conceded and have a league-worst minus-13 goal differential.

But Vanney sees improvement on both ends of the pitch. He noted that the team has had improved chance creation and better movement on the attacking end, while also limiting some of the defensive mistakes that have plagued them this season. There are some indications that's true. The Galaxy have allowed one or fewer goals in three of the last four games — a 4-2 loss to Portland is the exception — after giving up two goals or more in six of their first seven games. And while the goals haven't come, Vanney said he clipped together 14 'highly positive' attacking moments against Sporting KC to show the team that the goals will start to come.

According to data pulled from TruMedia via StatsPerform (Opta), 19.5 percent of the Galaxy's chances over the last five games were 'big' chances, a slight uptick from 15.8 percent in the first six games of the season. The Galaxy still just aren't generating enough chances. Last year, they ranked fourth in MLS with 11.2 chances created per game. That has dropped this year to 8.9, tied with the Chicago Fire for 15th.

The focus, Vanney said, has to be on the process rather than simply on the results – especially when those results have been so poor. He pointed to the fact that the Galaxy didn't give up a shot against Kansas City as evidence of the process, even if the result was a historic loss.

'There's positive things inside of a sh—y result that we try to stay focused on so that we can utilize those things that are advantages going into the next game and not just sitting back and going, 'S—, we lost, and let's all feel terrible.' Because that doesn't help us in the next game either to try to create the margin we need for winning,' Vanney said.

The key now is to find a way to create some momentum, because the season is quickly slipping away from LA.
'I do think that the results build confidence, right, it builds that positivity and that energy that you want to use to build momentum,' Vanney said. 'So I think those are key. Nobody has played us to a way that we feel like we were overwhelmed or we were really behind it in a game. And that's why I think the margins are thin for us to turn this thing into positive results.

'It's not big things, it's little things.'

Two ways to look at corporate tax take

Irish Times

08-05-2025

  • Business
  • Irish Times

Two ways to look at corporate tax take

There are perhaps two ways of looking at the Department of Finance's latest warning over corporation tax: good and bad.

While publishing the annual progress report (formerly the stability programme update), department officials noted, in passing, that they were expecting approximately €2 billion less in business tax receipts this year. But they insisted the downward revision was not related to tariffs. The department's chief economist, John McCarthy, said they had been informed by Revenue that several firms will be paying a lower-than-expected amount of tax this year 'on a one-off basis'. This had been incorporated into the department's forecasts without knowing anything about 'the specifics of the companies' involved, he said.

Now back to the two ways of looking at this. Losing €2 billion off a forecast, when the total still promises to be in the region of €29 billion, a multiple of what it was only a few years ago, is not a bad result – particularly when the State is still likely to run a big budget surplus anyway, and when many peer countries are looking at budgetary cutbacks. We've been warned all along that these revenues are volatile.

The more negative interpretation is that the €2 billion revision has seemingly arrived before the impact of US president Donald Trump's tariffs. We might see a bigger downward revision later in the year or next year when the impact of these measures is felt. The impact of tariffs on global trade is negative; how negative depends on how far Trump pushes his trade agenda. All this will take time to play out and to appear on company balance sheets.

The department's progress report is strewn with caveats and cautions because of the high level of uncertainty posed by Washington's trade policy. 'It is important to stress that the confidence intervals around these estimates are particularly large in the current environment,' it says. Instead of forecasting, a dangerous endeavour in this climate, the department says it is scenario planning. And even these scenarios may soon be replaced by new ones.

The science of AI and the AI of science

The Hindu

01-05-2025

  • Science
  • The Hindu

The science of AI and the AI of science

The fundamental idea behind artificial intelligence (AI) stems from the British mathematician Alan Turing, who in the 1950s defined the idea of intelligence in a machine. During World War II, while attempting to break the encryption codes the Nazis used to transmit secret messages, he wondered whether machines would be able to find patterns in large amounts of data that humans couldn't. He speculated machines could learn from experience instead of being taught to work from first principles.

Computer scientist John McCarthy coined the term 'artificial intelligence' in the 1955 proposal for a summer workshop that many leading computer scientists attended. While the idea enjoyed enough attention in the subsequent decade — enough for the first chatbot, ELIZA, to be created in 1966 — funding dipped in the 1970s before rebounding after. By 1997, IBM's Deep Blue defeated chess champion Garry Kasparov, around the same time researchers were starting to build sophisticated artificial neural networks that supercharged machine learning. Soon, the idea emerged that these neural networks, which are computers that process information like networks of neurons in animal brains, could solve most scientific problems.

From ANNs to GPUs

Artificial neural networks, or ANNs, could learn to solve each problem by digesting large amounts of data, mapping the relationships between various problem statements and their respective solutions, and finally recreating these relationships for unsolved problems. This paradigm is called unsupervised learning. In supervised learning, humans label the data with tags the machine picks up on. For example, humans can create a database of images of cats and dogs and label them accordingly. The ANN that 'trains' with the database then 'learns' what 'cat' and 'dog' stand for. In another type of learning, called reinforcement learning, humans go through the results of what a machine has learnt and provide feedback for the machine to improve.

Every ANN consists of nodes, small computers that accept input signals and provide an output. The nodes are divided into groups called layers. The layers are connected to each other like neurons in the brain: each node in one layer connects to nodes in the next layer. It's a sandwich: the layers are the slices of bread and in between are all the connections between the nodes. Not all connections are equal; some are more important than others. These relationships are adjusted by giving each connection a weight. The greater the weight, the more important the signal passing along that connection. By adjusting the weights, the arrangement of nodes, and the number of layers, the ANN can be tuned to learn and process data in different ways.

Machine-learning models built on such ANN architectures can process in a few hours databases that might take humans several months — as long as they have the requisite computing power. This power comes from graphics processing units (GPUs), processors that, unlike the general-purpose central processing units (CPUs) that power home computers, are specialised to solve many mathematical problems simultaneously, speeding up the ANN's learning process.
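To ground that description, here is a minimal sketch of a forward pass through such a layered network, written in Python with NumPy. The layer sizes, weights, and inputs are arbitrary stand-ins, not anything from a real trained model.

```python
import numpy as np

# Input signals flow through weighted connections from layer to layer.
# A larger weight makes the signal on that connection count for more.

rng = np.random.default_rng(0)

def layer(x, weights, biases):
    # Each node sums its weighted inputs, then applies a nonlinearity.
    return np.tanh(weights @ x + biases)

x = rng.normal(size=4)                                 # 4 input signals
w1, b1 = rng.normal(size=(3, 4)), rng.normal(size=3)   # input -> hidden
w2, b2 = rng.normal(size=(2, 3)), rng.normal(size=2)   # hidden -> output

output = layer(layer(x, w1, b1), w2, b2)
print(output)  # training would adjust w1, b1, w2, b2 to reduce error
```

Training consists of nudging those weight matrices until the outputs match known answers; the GPUs mentioned above make the millions of small matrix operations involved fast.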
Machine learning v. artificial intelligence

Recognising patterns in any form of data is the domain of machine learning (ML). It has applications in many fields. For example, ML models installed on self-driving cars are trained to check the condition of the cars' various components and, if possible, perform repairs. In the clinical realm, ML models can learn to find patterns in disorders that lead to new forms of treatment, or read test reports to identify the risk of specific diseases.

AI, on the other hand, is broader. It is based on more recent advances in ML that mimic human intelligence in problem-solving — like completing an unfinished sentence the way Arthur C. Clarke might or creating an image in the style of Vincent van Gogh. Such AI models are being rapidly adapted for various applications.

For example, researchers can build ML algorithms that digest the average behaviour of a user's financial accounts, like transaction frequency, spending limits, login times, and device use, according to Jia Zhai, senior associate professor in the Department of Finance at Xi'an Jiaotong-Liverpool University in Suzhou, China. 'If a fraudster gains valid credentials but logs in via an unrecognised device at 3 am and initiates rapid microtransactions, clustering algorithms detect this as an outlier compared to the user's historical behaviour,' she said. Then, more specialised networks called convolutional neural networks look for complex patterns in transactions; recurrent neural networks identify deviations from average spending behaviour; and graph neural networks examine the connections between accounts, merchants, and IP addresses to uncover hidden money-laundering networks, said Shimeng Shi, assistant professor in the same department.
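A toy version of that behavioural outlier check can be sketched in a few lines of Python. Real systems use clustering over many features; here a simple z-score test on spending, plus a check for unusual login hours, stands in for the idea, and all the data and thresholds are invented:

```python
import numpy as np

# Compare a new transaction against the user's history and flag it if it
# deviates too far from their established behaviour. Illustrative only.

history_amounts = np.array([42.0, 55.0, 38.0, 61.0, 47.0, 52.0])  # past spend
usual_login_hours = {7, 8, 12, 19, 21}                            # past logins

def is_suspicious(amount, login_hour, z_threshold=3.0):
    z = abs(amount - history_amounts.mean()) / history_amounts.std()
    odd_hour = login_hour not in usual_login_hours
    return z > z_threshold or odd_hour

print(is_suspicious(49.0, 19))  # typical amount, typical hour -> False
print(is_suspicious(900.0, 3))  # huge amount at 3 am -> True
```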
The capabilities of AI surged from around 2017, when researchers began using ML to process large amounts of data simultaneously using multiple GPUs. A major advance that resulted was the large language model. As private-sector enterprises figured out how to apply this and other models to solve different but specific problems, manufacturers and vendors rushed to meet the demand for the underlying hardware. This in turn led to more computing power and faster chips entering the market. Another equally important and roughly simultaneous development was the availability of large datasets on which the new batch of AI/ML models could be trained.

Taken together, these developments enabled the next major advance: generative AI, where an AI model didn't just analyse what was in front of it but also put existing information together in new ways, e.g. creating an image based on a user's text instructions. Perhaps the most well-known products that make such capabilities available to users are ChatGPT and DALL-E, both made by US-based company OpenAI. Shimeng Shi also said financial firms have been trying to 'help their clients to generate real-time trade ideas' using 'AI-empowered tools' that operate out of view.

The technology isn't a silver bullet, of course. Completely autonomous AI agents are not yet a reality because of their tendency to 'hallucinate', i.e. invent information that doesn't exist in the real world. This happens when an AI model is confronted with a kind of data it hasn't been trained on, causing it to mix it up with data it is familiar with.

Precision, speed, structure

'Your model is as good as your data,' said Aditi Shanmugam, a research associate in analytics and databases at Bengaluru-based startup Ultrahuman, who uses AI models to draw inferences from health data. 'For any good model, you need lots of data with good diversity,' added Debnath Pal, professor in the Department of Computational and Data Sciences at the Indian Institute of Science (IISc), Bengaluru.

The next thing a good model needs after training data is hardware resources. 'Each data centre — especially a large one with AI GPUs — can consume as much power as a whole nuclear power plant will produce,' said Akash Pradhan, a member of the technical staff at chip-maker AMD. The machines also generate a large amount of heat, which means they need to be cooled, which requires even more power. If the machines are performing a particularly complex task, the data they are manipulating needs to be stored on high-speed drives. Given all these requirements, most of the better AI research today — especially of the cutting-edge variety — is led by big corporations with deep pockets.

But it may not always be this way. Many computer scientists are working on techniques to lower the power and hardware requirements of specific models without compromising their problem-solving ability. For example, Rakesh Sengupta, director of the Centre for Creative Cognition at S.R. University in Warangal, is working on a technique called pruning. In a recent paper, he proposed a method in which some connections in a neural network are cut while the most important ones are preserved, after which the model is retrained to work with the smaller set of connections. He believes we can 'trim' existing models without sacrificing their reliability, and added that he feels customising small language models for specialised tasks in healthcare or robotics will improve the most. The faster and more precise AI models become, the more precise applications they will find — 'whether it's neural prosthetics or brain-computer interfaces or some [other] technologies that can interface seamlessly with the brain,' Sengupta said.
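A minimal sketch of one common variant of this idea, magnitude pruning, in Python with NumPy. The weight matrix and the fraction kept are arbitrary, and the retraining step Sengupta describes is not shown:

```python
import numpy as np

# Magnitude pruning: zero out the weakest connections and keep the
# strongest. Retraining the pruned network would then recover accuracy.

def prune(weights, keep_fraction=0.3):
    cutoff = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    mask = np.abs(weights) >= cutoff   # True = connection survives
    return weights * mask, mask

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 4))            # a toy weight matrix
pruned, mask = prune(w)
print(f"kept {mask.sum()} of {mask.size} connections")
```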
Most AI researchers use the most accessible models and data to achieve specific goals. In their absence, researchers draw up datasets from first principles and mix them with available ones to create more complete, yet also more reliable, datasets. For example, Pal said, materials science researchers integrate experimental data on the properties of materials with synthetic data on the presence of other materials to create synthetic datasets that are complete and contain more information for the models to search through. 'After doing all these experiments, you may be able to figure out that, "oh, if I dope with this material, then I would get that property". Such experiments are being done and then it is kind of reducing the time to realise those compositions,' Pal said.

But defining the problems and arriving at solutions is not always straightforward, and often depends on factors that require researchers to dig deep into the specific peculiarities of the data and the models. For example, Adway Mitra, an assistant professor in the Centre of Excellence in Artificial Intelligence at IIT-Kharagpur, believes there is considerable scope to use AI models to improve weather and seasonal predictions, especially of Indian monsoons. This is what he does. Often, weather data exists as a combination of textual, visual, and numerical data.

'We first condense the space of all weather patterns to a small number (about 10) of 'typical' patterns, and our claim is that every day's weather pattern is an approximate or noisy version of any one of these 'typical' patterns,' Mitra explained. Generative AI models train on these datasets and create new data from them that are easier to analyse and represent as mathematical structures.

But real-world weather data is often noisy and difficult to interpret, and weather is a complex system with lots of parameters across various locations and times. 'The key technical challenge is the availability of weather data,' Mitra said. Weather data has structures that an ML model must be able to work with, and Mitra's research focuses on what kinds of algorithms or models scientists can use to best utilise that structure.

Thus, researchers like Mitra are turning the idea of AI back to where it started: while machines are good at understanding patterns, at the end of the day the patterns must be supported by physics, because weather patterns are created by physical processes. The question researchers are asking is: how can we constrain machine learning so that it provides values consistent with the laws of physics? This exercise, Mitra said, will bring down the number of computations the AI models need to perform to make accurate weather predictions, and thus demand less power and data storage infrastructure.
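One simple way to impose such a constraint, sketched in Python with NumPy, is to project a model's raw output onto values that satisfy a known physical law. The 'law' here, that rainfall cannot be negative and that a regional total must match a physically derived water budget, is an invented example; research systems typically build such constraints into the model or its training loss instead:

```python
import numpy as np

# Adjust raw ML predictions so they obey a physical constraint:
# no negative rainfall, and the regional total must equal a budget
# supplied by a physical model. All values here are made up.

def constrain(raw_predictions, physical_total):
    rainfall = np.clip(raw_predictions, 0.0, None)    # no negative rain
    if rainfall.sum() > 0:
        rainfall *= physical_total / rainfall.sum()   # match the budget
    return rainfall

raw = np.array([3.2, -0.4, 5.1, 1.8])  # raw output (mm); violates physics
print(constrain(raw, physical_total=12.0))
```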
Towards AI agents

Sandeep Juneja, a professor of computer science and director of the Safexpress Centre for Data, Learning and Decision Sciences at Ashoka University, said corporations like Google have large data-driven AI models that are already doing this at scale — but that they may be running out of data to train with. On the other hand, he added, academics in India and worldwide don't have the computational capacity to develop such large models for nuanced weather predictions. He said models like DeepSeek provide hope, as they have been able to use 'clever' tricks to train models efficiently on small amounts of data. But Chiranjib Bhattacharyya, a professor in the Department of Computer Science and Automation at IISc, said that even DeepSeek's model is large compared to what academics can presently access.

Lixian Qian, an associate dean for research and professor in the Department of Intelligent Operations and Marketing at Xi'an Jiaotong-Liverpool University, works on autonomous vehicles that use AI algorithms to model their complex environment, predict the movement of objects on the road, and decide how the vehicle moves to avoid accidents. While there has been significant integration of AI into autonomous vehicles, he said practical challenges remain — and AI has the ability to address them. 'AI algorithms can increase the number of tests on autonomous driving systems in diverse driving environments, so that the potential problems could be uncovered and diagnosed in advance.'

In a sense, then, we are slowly transitioning from a world of generative AI to agentic AI. AI agents are more powerful than the present versions of AI, which still specialise in particular tasks. They integrate the power of different functionalities into an ecosystem that can be empowered to make particular decisions. For example, AI assistants may one day be able to parse data about a person's life, including their hobbies, expenses, health conditions, work, and life priorities, and help them with tasks like booking appointments or filling out forms. However, how much of such a technology will be accessible and usable to people at large will depend on data privacy protections and technological literacy. Bhattacharyya said social scientists and law scholars will play an important role in shaping how such systems fit into our lives.

Sohini Majumdar, a software engineering manager at Salesforce, agreed the time for agentic AI was near. Many business platforms are increasingly using agentic AI instead of simple chatbots to integrate their business and increase their impact. However, she added, fundamental challenges remain in using generative AI models too. The principal challenge is understanding why an AI model outputs one specific business decision rather than another — especially if the output deviates from a human understanding of the business. So she and her colleagues use yet other AI models to validate the decisions suggested by generative AI. Their aim is to understand what a model is doing and how to tweak various inputs so that the model does what they want it to. In this way, her team will be able to make automated decisions and trust them as well.

According to Bhattacharyya, the fundamental problem boils down to AI models currently lacking the ability to reason. Pal agreed: 'What is the path that [a model] follows? Is it following the same path that, as a human, I would want it to follow to do this inference? That we don't know.' Mathematicians, computer scientists, and physicists are currently trying to untangle this Gordian knot.

Pradhan of AMD said these challenges are fundamental: despite neural networks being based on the human brain, the way machines learn and the way the human brain functions are different. One fundamental difference is that the computational blocks of an AI model — the GPUs — are separate from where the model's parameters are stored, whereas in the brain the two sit in the same place. Second, chemical reactions run the brain, whereas electricity runs digital machines.

These challenges, Pradhan said, can be mitigated with neuromorphic computing, where the hardware more closely mimics how the neural networks in our brains operate. 'Instead of you writing code to emulate a neural network, your hardware is the neural network,' he said. Functional neuromorphic computers of the future are expected to require less power and to update their models automatically when they encounter new data — just like our brains. But there are multiple hardware and software challenges to be surmounted before this can be realised, Pradhan said.

Sengupta is sceptical of how much AI will truly mimic us. While each generation of humans has grown more comfortable with the increasing presence of smarter gadgets and software, and the tools have changed us too, there might be a natural barrier to how much AI can affect us. But it has also made us think deeply about our technologies. Just as we constantly grapple with understanding our own existence, we might have to do the same when we invite AI into every aspect of our lives.

Debdutta Paul is a freelance science journalist.
