
Latest news: Association for the Advancement of Artificial Intelligence

Pakistan joins global AI community

Express Tribune

26-07-2025


Pakistan has officially joined the global AI landscape with the launch of the Pakistan Chapter of the Association for the Advancement of Artificial Intelligence (AAAI), founded in 1979. The launch event was held at the Pearl Continental Hotel, with over 200 C-level executives, professors, researchers, and dignitaries in attendance.

The event was chaired by Muhammad Tahir, the newly appointed Chair of AAAI Pakistan, who introduced the core vision and benefits of the global affiliation. "This is not just a chapter," he said. "It's Pakistan's official entry into the global AI ecosystem." Through AAAI, he said, Pakistanis will gain access to over 15,000 international AI research papers, global university collaborations, funding opportunities for startups, international hackathons, and a chance to host the official AAAI Regional Conference in Pakistan.

The event was attended by key figures from the government of Pakistan and the technology leadership, including Dr Yasar Ayaz, Chairman of the National Centre of Artificial Intelligence (NCAI) and head of the national AI Task Force. He praised the AAAI initiative, stating: "This is a game-changing moment. With AAAI's presence, Pakistan's AI capabilities will now be recognised globally. It opens doors to academic, economic, and technological growth at a scale we've never seen before."

The event opened with an AI-generated video appearance of John McCarthy, the late pioneer of artificial intelligence and early AAAI leader. The symbolic gesture set the tone for what is being called Pakistan's most prestigious AI affiliation to date.

Jibran Jamshad, Regional Director at Microsoft, also shared insights on how AAAI's global influence contributed to breakthroughs such as ChatGPT and other AI tools: "Without AAAI, the AI revolution we see today wouldn't exist. This chapter will allow Pakistani talent to join the global narrative."

The official AAAI website for Pakistan will launch on August 14, 2025, providing memberships, research access, and community updates. With military officials, academic leaders, and corporate sponsors in attendance, the launch marked a new era in Pakistan's tech ecosystem, with the potential to generate over $200 million in AI-driven economic impact and to boost international collaboration in education, innovation, and entrepreneurship.

New report casts doubt on long-standing belief about AI's direction: 'Always seemed to me to be misplaced'

Yahoo

05-04-2025


Like it or not, AI has become an important tool in many industries across the world. From summarizing reports to compiling complex data into more digestible abstracts, AI is no longer the science fiction dream it once was. According to a recent report, however, the inability to reason effectively will prevent AI from achieving the lofty goals set by prospective developers.

In a survey conducted by the Association for the Advancement of Artificial Intelligence, 475 AI researchers were asked about their outlook on future AI advancements. According to the report, this reasoning ability is key to the pursuit of facts: "Reasoning is used to derive new information from given base knowledge; this new information is guaranteed correct when sound formal reasoning is used, otherwise it is merely plausible," the report reads.

Artificial general intelligence, or AGI, is a hypothetical future in which AI possesses human-level intelligence, meaning that machines would be capable of performing any intellectual task that a human can, including effective reasoning. When asked whether "scaling up current AI approaches" could achieve AGI, 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed.

Stuart Russell, a computer scientist at the University of California, Berkeley, and a member of the report's research team, was apparently not surprised by the results of the survey. Speaking to NewScientist, Russell offered his take on the advancement of AI. "The vast investments in scaling, unaccompanied by any comparable efforts to understand what was going on, always seemed to me to be misplaced," Russell said. "I think that, about a year ago, it started to become obvious to everyone that the benefits of scaling in the conventional sense had plateaued."

As reported by a Goldman Sachs newsletter, some of the largest players in tech will invest around $1 trillion over the next few years in AI data centers and microchips. According to the U.N. Environment Programme, data centers that house AI servers can produce electronic waste. These servers can also consume a significant amount of electricity, producing heat-trapping pollution that contributes to a warming climate.

Despite AI's apparent ceiling and potential environmental pitfalls, the AAAI research team expressed optimism that AI's more limited applications could be harnessed in ways that benefit society as the technology becomes more efficient and as adjustments are made to grid infrastructure, among other things, writing: "AI can be a powerful enabler of climate and sustainability goals."

"Beyond improving efficiency and reducing carbon emissions across industries, AI is accelerating breakthroughs in areas such as advanced battery materials, carbon removal technologies, and high-precision climate modeling," according to the report.

Gen AI's Accuracy Problems Aren't Going Away Anytime Soon, Researchers Say

Yahoo

21-03-2025


Generative AI chatbots are known to make a lot of mistakes. Let's hope you didn't follow Google's AI suggestion to add glue to your pizza recipe or to eat a rock or two a day for your health. These errors are known as hallucinations: essentially, things the model makes up.

Will this technology get better? Even researchers who study AI aren't optimistic that it will happen soon. That's one of the findings of a report released this month by the Association for the Advancement of Artificial Intelligence, based on a panel of two dozen artificial intelligence experts. The group also surveyed more than 400 of the association's members.

In contrast to the hype you may see about developers being just years (or months, depending on who you ask) away from improving AI, this panel of academics and industry experts seems more guarded about how quickly these tools will advance. That goes beyond just getting facts right and avoiding bizarre mistakes. The reliability of AI tools needs to increase dramatically if developers are going to produce a model that can meet or surpass human intelligence, commonly known as artificial general intelligence, and researchers seem to believe improvements at that scale are unlikely to happen soon. "We tend to be a little bit cautious and not believe something until it actually works," Vincent Conitzer, a professor of computer science at Carnegie Mellon University and one of the panelists, told me.

The report's goal, AAAI president Francesca Rossi wrote in its introduction, is to support research in artificial intelligence that produces technology that helps people. Issues of trust and reliability are serious, not just in providing accurate information but in avoiding bias and ensuring that a future AI doesn't cause severe unintended consequences. "We all need to work together to advance AI in a responsible way, to make sure that technological progress supports the progress of humanity and is aligned to human values," she wrote.

The acceleration of AI, especially since OpenAI launched ChatGPT in 2022, has been remarkable, Conitzer said. "In some ways that's been stunning, and many of these techniques work much better than most of us ever thought that they would," he said.

There are some areas of AI research where "the hype does have merit," John Thickstun, assistant professor of computer science at Cornell University, told me. That's especially true in math or science, where users can check a model's results. "This technology is amazing," Thickstun said. "I've been working in this field for over a decade, and it's shocked me how good it's become and how fast it's become good."

Despite those improvements, there are still significant issues that merit research and consideration, experts said. Despite some progress in improving the trustworthiness of the information that comes from generative AI models, much more work needs to be done. A recent report from the Columbia Journalism Review found chatbots were unlikely to decline to answer questions they couldn't answer accurately, were confident about the wrong information they provided, and made up (and provided fabricated links to) sources to back up those wrong assertions. Improving reliability and accuracy "is arguably the biggest area of AI research today," the AAAI report said.
Researchers noted three main ways to boost the accuracy of AI systems: fine-tuning, such as reinforcement learning from human feedback; retrieval-augmented generation, in which the system gathers specific documents and pulls its answer from those; and chain-of-thought prompting, in which prompts break the question down into smaller steps that the AI model can check for hallucinations.

Will those things make your chatbot responses more accurate soon? Not likely: "Factuality is far from solved," the report said. About 60% of those surveyed indicated doubts that factuality or trustworthiness concerns would be solved soon. In the generative AI industry, there has been optimism that scaling up existing models will make them more accurate and reduce hallucinations. "I think that hope was always a little bit overly optimistic," Thickstun said. "Over the last couple of years, I haven't seen any evidence that really accurate, highly factual language models are around the corner."
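To make the second of those techniques concrete, here is a minimal, illustrative sketch of retrieval-augmented generation. It is not from the AAAI report: the tiny corpus, the word-overlap scoring, and the helper names (score, retrieve, build_prompt) are hypothetical stand-ins, and a real system would use a vector store and an actual language-model call rather than printing the prompt.

```python
from collections import Counter

# Toy document store; a real deployment would index far more text.
tiny_corpus = [
    "Retrieval-augmented generation grounds a model's answer in retrieved documents.",
    "Chain-of-thought prompting asks a model to reason step by step before answering.",
    "Fine-tuning adjusts a pretrained model's weights using feedback or labeled examples.",
]

def score(query: str, doc: str) -> int:
    """Toy relevance score: count words shared by the query and the document."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest overlap score."""
    return sorted(corpus, key=lambda doc: score(query, doc), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a prompt that tells the model to answer only from the retrieved passages."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say you don't know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    question = "What does retrieval-augmented generation do?"
    passages = retrieve(question, tiny_corpus)
    # In a real pipeline this prompt would be sent to a language model;
    # here we just print it to show how retrieved text constrains the answer.
    print(build_prompt(question, passages))
```

The point, as the report describes it, is that the model is asked to pull its answer from documents the system has gathered, which makes the output easier to check against sources.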
Despite the fallibility of large language models such as Anthropic's Claude or Meta's Llama, users can mistakenly assume they're more accurate because they present answers with confidence, Conitzer said. "If we see somebody responding confidently or words that sound confident, we take it that the person really knows what they're talking about," he said. "An AI system, it might just claim to be very confident about something that's completely nonsense."

Awareness of generative AI's limitations is vital to using it properly. Thickstun's advice for users of models such as ChatGPT and Google's Gemini is simple: "You have to check the results." General-purpose large language models do a poor job of consistently retrieving factual information, he said. If you ask one for something, you should probably follow up by looking up the answer in a search engine (and not relying on the AI summary of the search results), though by the time you do that, you might have been better off searching in the first place.

Thickstun said the way he uses AI models most is to automate tasks that he could do anyway and whose accuracy he can check, such as formatting tables of information or writing code. "The broader principle is that I find these models are most useful for automating work that you already know how to do," he said.

One priority of the AI development industry is an apparent race to create what's often called artificial general intelligence, or AGI: a model that is generally capable of a human level of thought or better. The report's survey found strong opinions on the race for AGI. Notably, more than three-quarters (76%) of respondents said scaling up current AI techniques such as large language models was unlikely to produce AGI; a significant majority of researchers doubt the current march toward AGI will work. A similarly large majority (82%) believe systems capable of artificial general intelligence should be publicly owned if they're developed by private entities, which aligns with concerns about the ethics and potential downsides of creating a system that can outthink humans. Most researchers (70%) said they oppose stopping AGI research until safety and control systems are developed. "These answers seem to suggest a preference for continued exploration of the topic, within some safeguards," the report said.

The conversation around AGI is complicated, Thickstun said. In some sense, we've already created systems that have a form of general intelligence: large language models such as OpenAI's ChatGPT are capable of doing a variety of human activities, in contrast to older AI models that could only do one thing, such as play chess. The question is whether they can do many things consistently at a human level. "I think we're very far away from this," Thickstun said. He said these models lack a built-in concept of truth and the ability to handle truly open-ended creative tasks. "I don't see the path to making them operate robustly in a human environment using the current technology," he said. "I think there are many research advances in the way of getting there."

Conitzer said the definition of what exactly constitutes AGI is tricky: often, people mean something that can do most tasks better than a human, but some say it's just something capable of doing a range of tasks. "A stricter definition is something that would really make us completely redundant," he said.

While researchers are skeptical that AGI is around the corner, Conitzer cautioned that AI researchers didn't necessarily expect the dramatic technological improvement we've all seen in the past few years. "We did not see coming how quickly things have changed recently," he said, "and so you might wonder whether we're going to see it coming if it continues to go faster."
