
Latest news with #FAIR

FAIR 2025 Conference - A Global Gathering of Reinsurers and Insurers in Mumbai

Business Standard

5 hours ago



PNN Mumbai (Maharashtra) [India], July 29: General Insurance Corporation of India (GIC Re) is honoured to announce that registrations are now open for the 29th FAIR (Federation of Afro-Asian Insurers and Reinsurers) Conference, to be held in the vibrant city of Mumbai from 5th to 8th October 2025.

The Federation of Afro-Asian Insurers and Reinsurers (FAIR) is a professional business association committed to promoting regional cooperation and advancing the insurance industry across Afro-Asian countries. Given FAIR's strong brand recognition and its network of 204 member companies across 52 countries, the conference brings together a diverse array of nationalities and cultures. Held biennially since 1967, the conference reaches its 29th edition in 2025, which also coincides with the 60th anniversary of FAIR.

This landmark event promises to unite a distinguished assembly of leaders, strategists, regulators, and practitioners from the insurance and reinsurance fraternity across Asia, Africa, and the broader global risk landscape. Held under the resonant theme "Emerging Markets - Towards Resilient Growth", this edition invites the industry to contemplate not only the opportunities but also the responsibilities of navigating an era defined by systemic volatility, technological disruption, geopolitical flux, and climate unpredictability. It serves as a clarion call to industry visionaries: to forge new pathways of collaboration, embrace sustainable innovation, and strengthen regional self-reliance amidst global uncertainties.

Notably, this edition also features insights from industry experts on enhancing insurance accessibility through reinsurance strategies, in alignment with IRDAI's Vision 2047. Carrying forward FAIR's legacy as a beacon of dialogue and direction, the Mumbai Conference is poised to welcome over 600 delegates, representing a cross-section of markets, institutions, and thought traditions.
The multi-layered agenda will feature keynote sessions from global policy architects and reinsurance pioneers, strategic panel discussions, and opportunities for bilateral meetings. As Rainer Maria Rilke once wrote, "The future enters into us long before it happens." FAIR 2025 seeks to be that very threshold -- where the invisible challenges of tomorrow are named, shaped, and faced. We welcome you to Mumbai, not just as a location but as a metaphor -- a city that reflects the spirit of emergence and endurance. Here, amidst dialogue and diversity, we aim to build not just strategies, but a shared resilience worthy of the age we inhabit.

What Is Superintelligence? Everything You Need to Know About AI's Endgame

CNET

13 hours ago



You've probably chatted with ChatGPT, experimented with Gemini, Claude or Perplexity, or even asked Grok to verify a post on X. These tools are impressive, but they're just the tip of the artificial intelligence iceberg. Lurking beneath is something far bigger that has been all the talk in recent weeks: artificial superintelligence. Some people use the term "superintelligence" interchangeably with artificial general intelligence or sci-fi-level sentience. Others, like Meta CEO Mark Zuckerberg, use it to signal their next big moonshot. ASI has a more specific meaning in AI circles. It refers to an intelligence that doesn't just answer questions but could outthink humans in every field: medicine, physics, strategy, creativity, reasoning, emotional intelligence and more.

We're not there yet, but the race has already started. In July, Zuckerberg said during an interview with The Information that his company is chasing "personal superintelligence" to "put the power of AI directly into individuals' hands." Or, in Meta's case, probably in everyone's smart glasses. That desire kicked off a recruiting spree for top researchers in Silicon Valley and a reshuffling inside Meta's FAIR team (now Meta AI) to push Meta closer to AGI and eventually ASI. So, what exactly is superintelligence, how close are we to it, and should we be excited or terrified? Let's break it down.

What is superintelligence?

Superintelligence doesn't have a formal definition, but it's generally described as a hypothetical AI system that would outperform humans at every cognitive task. It could process vast amounts of data instantly, reason across domains, learn from mistakes, self-improve, develop new scientific theories, write flawless code, and maybe even make emotional or ethical judgments.
The idea was popularized by philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, which warned of a scenario where an AI bot becomes smarter than humans, self-improves rapidly and then escapes our control. That vision sparked both excitement and fear among tech experts. Speaking to CNET, Bostrom says many of his 2014 warnings "have proven quite prescient." What has surprised him, he says, is "how anthropomorphic current AI systems are," with large language models behaving in surprisingly humanlike ways. Bostrom says he's now shifting his attention toward deeper issues, including "the moral status of digital minds and the relationship between the superintelligence we build with other superintelligences," which he refers to as "the cosmic host."

For some, ASI represents the pinnacle of progress, a tool to cure disease, reverse climate change and crack the secrets of the universe. For others, it's a ticking time bomb -- one wrong move and we're outmatched by a machine we can't control. It's sometimes called the last human invention, not because it's final, but because ASI could invent everything else we need. British mathematician Irving John Good described it as an "intelligence explosion."

Superintelligence doesn't exist yet. We're still in the early stages of what's called artificial narrow intelligence: AI that is great at specific tasks like translation, summarization and image generation, but not capable of broader reasoning. Tools like ChatGPT, Gemini, Copilot, Claude and Grok fall into this category. They're good at some tasks, but still flawed, prone to hallucinations and incapable of true reasoning or understanding. To reach ASI, AI needs to first pass through another stage: artificial general intelligence.

What is AGI?

AGI, or artificial general intelligence, refers to a system that can learn and reason across a wide range of tasks, not just one domain.
It could match human-level versatility, such as learning new skills, adapting to unfamiliar problems and transferring knowledge across fields. Unlike current chatbots, which rely heavily on training data and struggle outside of predefined rules, AGI would handle complex problems flexibly. It wouldn't just answer questions about math and history; it could invent new solutions, explain them and apply them elsewhere. Current models hint at AGI traits, like multimodal systems that handle text, images and video. But true AGI requires breakthroughs in continual learning (updating knowledge without forgetting old stuff) and real-world grounding (understanding context beyond data). And none of the major models today qualify as true AGI, though many AI labs, including OpenAI, Google DeepMind and Meta, list it as their long-term target. Once AGI arrives and self-improves, ASI could follow quickly as a system smarter than any human in every area.

How close are we to superintelligence?

(Image: a superintelligent-future concept generated using Grok AI.)

That depends on who you ask. A 2024 survey of 2,778 AI researchers paints a sobering picture. The aggregate forecasts give a 50% chance of machines outperforming humans in every possible task by 2047. That's 13 years sooner than a 2022 poll predicted. There's a 10% chance this could happen as early as 2027, according to the survey. For job automation specifically, researchers estimate a 10% chance that all human occupations become fully automatable by 2037, reaching 50% probability by 2116. Most concerning, 38% to 51% of experts assign at least a 10% risk of advanced AI causing human extinction. Geoffrey Hinton, often called the Godfather of AI, warned in a recent YouTube podcast that if superintelligent AI ever turned against us, it might unleash a biological threat like a custom virus -- super contagious, deadly and slow to show symptoms -- without risking itself.
Resistance would be pointless, he said, because "there's no way we're going to prevent it from getting rid of us if it wants to." Instead, he argued that the focus should be on building safeguards early. "What you have to do is prevent it ever wanting to," he said in the podcast. He said this could be done by pouring resources into AI that stays friendly. Still, Hinton confessed he's struggling with the implications: "I haven't come to terms with what the development of superintelligence could do to my children's future. I just don't like to think about what could happen."

Factors like faster computing, quantum AI and self-improving models could accelerate things. Hinton expects superintelligence in 10 to 20 years. Zuckerberg, for his part, has said he believes ASI could arrive within the next two to three years, and OpenAI CEO Sam Altman estimates it'll be somewhere in between those time frames. Most researchers agree we're still missing key ingredients, like more advanced learning algorithms, better hardware and the ability to generalize knowledge like a human brain. IBM points to areas like neuromorphic computing (hardware inspired by human neurons), evolutionary algorithms and multisensory AI as building blocks that might get us there.

Meta's quest for 'personal superintelligence'

Meta launched its Superintelligence Labs in June, led by Alexandr Wang (ex-Scale AI CEO) and Nat Friedman (ex-GitHub CEO), with $14.3 billion invested in Scale AI and $64 billion to $72 billion for data centers and AI infrastructure. Zuckerberg doesn't shy away from Greek mythology, with names like Prometheus and Hyperion for his two AI data superclusters (massive computing centers). He also doesn't talk about artificial superintelligence in abstract terms. Instead, he claims that Meta's specific focus is on delivering "personal super intelligence to everyone in the world."
This vision, according to Zuckerberg, sets Meta apart from other research labs that he says primarily concentrate on "automating economically productive work." Bostrom thinks this isn't mere hype. "It's possible we're only a small number of years away from this," he said of Meta's plans, noting that today's frontier labs "are quite serious about aiming for superintelligence, so it is not just marketing moves." Though still in its early stages, Meta is actively recruiting top talent from companies like OpenAI and Google. Zuckerberg explained in his interview with The Information that the market is extremely competitive because so few people possess the requisite high level of skills. Facebook and Zuckerberg didn't respond to requests for comment.

Should humans subscribe to the idea of superintelligent AI?

There are two camps in the AI world: those who are overly enthusiastic, inflating its benefits and seemingly ignoring its downsides; and the doomers who believe AI will inevitably take over and end humanity. The truth probably lands somewhere in the middle. Widespread public fear and resistance, fueled by dystopian sci-fi and very real concerns over job loss and massive economic disruption, could slow progress toward superintelligence.

One of the biggest problems is that we don't really know what even AGI looks like in machines, much less ASI. Is it the ability to reason across domains? Hold long conversations? Form intentions? Build theories? None of the current models, including Meta's Llama 4 and Grok 4, can reliably do any of this. There's also no agreement on what counts as "smarter than humans." Does it mean acing every test, inventing new math and physics theorems or solving climate change? And even if we get there -- should we? Building systems vastly more intelligent than us could pose serious risks, especially if they act unpredictably or pursue goals misaligned with ours.
Without strict controls, such a system could manipulate other systems or even act autonomously in ways we don't fully understand. Brendan Englot, director of the Stevens Institute for Artificial Intelligence, shared with CNET that he believes "an important first step is to approach cyber-physical security similarly to how we would prepare for malicious human-engineered threats, except with the expectation that they can be generated and launched with much greater ease and frequency than ever before." That said, Englot isn't convinced that current AI can truly outpace human understanding. "AI is limited to acting within the boundaries of our existing knowledge base," Englot tells CNET. "It is unclear when and how that will change." Regulations like the EU AI Act aim to help, but global alignment is tricky. For example, China's approach differs wildly from the West's.

Trust is one of the biggest open questions. A superintelligent system might be incredibly useful, but also nearly impossible to audit or constrain. And when AI systems draw from biased or chaotic data like real-time social media, those problems compound.

Some researchers believe that given enough data, computing power and clever model design, we'll eventually reach AGI and ASI. Others argue that current AI approaches (especially LLMs) are fundamentally limited and won't scale to true general or superhuman intelligence, in part because the human brain has some 100 trillion connections. That's not even accounting for our capacity for emotional experience and depth, arguably humanity's strongest and most distinctive attribute. But progress moves fast, and it would be naive to dismiss ASI as impossible. If it does arrive, it could reshape science, economics and politics -- or threaten them all. Until then, general intelligence remains the milestone to watch. If and when superintelligence does become a reality, it could profoundly redefine human life itself.
According to Bostrom, we'd enter what he calls a "post-instrumental condition," fundamentally rethinking what it means to be human. Still, he's ultimately optimistic about what lies on the other side, exploring these ideas further in his most recent book, Deep Utopia. "It will be a profound transformation," Bostrom tells CNET.

Meta chief AI scientist Yann LeCun clarifies his role after the company hires another chief AI scientist

Business Insider

3 days ago



The more the merrier at Meta. The AI talent wars took another turn on Friday when Meta CEO Mark Zuckerberg announced that Shengjia Zhao, co-creator of ChatGPT and the former lead scientist at OpenAI, is now the chief scientist at Meta's Superintelligence Labs. "In this role, Shengjia will set the research agenda and scientific direction for our new lab working directly with me and Alex," a statement shared to Zuckerberg's Threads account said. "Shengjia co-founded the new lab and has been our lead scientist from day one." The statement said Meta chose to formalize Zhao's leadership position because recruiting is "going well" and the team "is coming together."

While the announcement elicited congratulatory remarks from some AI enthusiasts online, and more discussion about Meta's ongoing poaching spree, others asked: What about Yann LeCun? LeCun became a prominent figure in the AI industry after joining Meta, then Facebook, in 2013. He serves as the chief AI scientist for Meta's Fundamental AI Research, formerly known as Facebook AI Research. On LinkedIn, LeCun acknowledged the questions and clarified his role at Meta. "My role as Chief Scientist for FAIR has always been focused on long-term AI research and building the next AI paradigms," LeCun wrote on Friday. "My role and FAIR's mission are unchanged." Zuckerberg and Alexandr Wang, the Scale AI founder who joined Meta in June as its chief AI officer, confirmed on their respective social media accounts that LeCun's role is unchanged.

What's the difference between Meta's FAIR and its Superintelligence Labs?

Although both FAIR and the Superintelligence Labs deal with AI, they're slightly different. Meta created FAIR over a decade ago to research and advance AI technology, which resulted in the 2023 release of its open-source large language model, Llama. LeCun is now largely focused on developing a new kind of model, known as a world model, that could one day replace large language models.
The Superintelligence Labs, meanwhile, is the umbrella department housing Meta's FAIR, foundations, and products teams, Zuckerberg said in an internal memo in June. Zuckerberg said the Superintelligence Labs would focus on developing "personal superintelligence for everyone." Bloomberg reported that LeCun would report to Wang. Wang praised Zhao in an X post on Friday: "Shengjia is a brilliant scientist who most recently pioneered a new scaling paradigm in his research," he wrote. "He will lead our scientific direction for our team." LeCun said he's looking forward to working with Zhao "to accelerate the integration of new research into our most advanced models."

Where it costs the most to give birth

Axios

5 days ago



The average total in-network cost of giving birth in the U.S. is about $15,200 for vaginal deliveries and $19,300 for C-sections, per data from FAIR Health, a national independent nonprofit.

By the numbers: For vaginal deliveries, Alaska has the highest average cost (about $29,200), followed by New York and New Jersey (both about $21,800). Alaska also has the highest average cost for C-sections ($39,500), followed by Maine ($28,800) and Vermont ($28,700).

How it works: The amounts in FAIR's Cost of Giving Birth Tracker include delivery, ultrasounds, lab work and more. They reflect total costs paid by patients and their insurance companies, as applicable. Insured patients' financial responsibilities are typically well below the total amount paid, with average out-of-pocket costs of just under $3,000 in 2018-2020, per a 2022 Peterson-KFF analysis.

What they're saying: Many factors drive the differences between states, FAIR Health's Rachel Kent tells Axios, including provider training levels, local salaries and costs of living, malpractice insurance costs and insurers' bargaining power.

Between the lines: Black and Hispanic people paid more out-of-pocket for maternal care than Asian and white patients with the same insurance, per a study published earlier this year in JAMA Health Forum.

Meta Swears This Time Is Different

Atlantic

18-07-2025



Mark Zuckerberg was supposed to win the AI race. Eons before ChatGPT and AlphaGo, when OpenAI did not exist and Google had not yet purchased DeepMind, there was FAIR: Facebook AI Research. In 2013, Facebook tapped one of the 'godfathers' of AI, the legendary computer scientist Yann LeCun, to lead its new division. That year, Zuckerberg personally traveled to one of the world's most prestigious AI conferences to announce FAIR and recruit top scientists to the lab. FAIR has since made a number of significant contributions to AI research, including in the field of computer vision. Although the division was not focused on advancing Facebook's social-networking products per se, the premise seemed to be that new AI tools could eventually support the company's core businesses, perhaps by improving content moderation or image captioning. But for years, Facebook didn't develop AI as a stand-alone, consumer-facing product. Now, in the era of ChatGPT, the company lags behind. Facebook, now called Meta, trails not just OpenAI and Google but also newer firms such as Anthropic, xAI, and DeepSeek—all of which have launched advanced generative-AI models and chatbots over the past few years. In response, Zuckerberg's company quickly launched its own flagship model, Llama, but it has struggled relative to its competitors. In April, Meta proudly rolled out a Llama 4 model that Zuckerberg called a 'beast'—but after an experimental version of the model scored second in the world on a widely used benchmarking test, the version released to the public ranked only 32nd. In the past year, every other top AI lab has released new 'reasoning' models that, thanks to a new training paradigm, are generally much better than previous chatbots at advanced math and coding problems; Meta has yet to deliver its own. So, a dozen years after building FAIR, Meta is effectively starting over. Last month, Zuckerberg went on a new recruiting spree.
He hired Alexandr Wang, the 28-year-old ex-head of the start-up Scale, as chief AI officer to lead yet another division—dubbed Meta Superintelligence Labs, or MSL—and has reportedly been personally asking top AI researchers to join. The goal of this redo, Zuckerberg wrote in an internal memo to employees, is 'to build towards our vision: personal superintelligence for everyone.' Meta is reportedly attempting to lure top researchers by offering upwards of $100 million in compensation. (The company has contested this reporting; for comparison, LeBron James was paid less than $50 million last year.) More than a dozen researchers from rival companies, mainly OpenAI, have joined Meta's new AI lab so far. Zuckerberg also announced that Meta plans to spend hundreds of billions of dollars to build new data centers to support its pursuit of superintelligence. FAIR will still exist but within the new superintelligence team, meaning Meta has both a chief AI 'scientist' (LeCun) and a chief AI 'officer' (Wang). At the same time, MSL is cloistered off from the rest of Meta in an office space near Zuckerberg himself, according to The New York Times. When I reached out to Meta to ask about its 'superintelligence' overhaul, a spokesperson pointed me to Meta's most recent earnings call, in which Zuckerberg described 'how AI is transforming everything we do' and said that he is 'focused on building full general intelligence.' I also asked about comments made by an outgoing AI researcher at Meta: 'You'll be hard pressed to find someone that really believes in our AI mission,' the researcher wrote in an internal memo, reported in The Information, adding that 'to most, it's not even clear what our mission is.' The spokesperson told me, in response to the memo, 'We're excited about our recent changes, new hires in leadership and research, and continued work to create an ideal environment for revolutionary research.' Meta's superintelligence group may well succeed. 
Small, well-funded teams have done so before: After a group of former OpenAI researchers peeled off to form Anthropic a few years ago, they quickly emerged as a top AI lab. Elon Musk's xAI was even later to the race, but its Grok chatbot is now one of the most technically impressive AI products around (egregious racism and anti-Semitism notwithstanding). And regardless of how far Meta has fallen behind in the AI race, the company has proved its ability to endure: Meta's stock reached an all-time high earlier this year, and it made more than $17 billion in profit from January through the end of March. Billions of people around the world use its social apps. The company's approach is also different from that of its rivals, which frequently describe generative AI in ideological, quasi-religious terms. Executives at OpenAI, Anthropic, and Google DeepMind are all prone to writing long blog posts or giving long interviews about the future they hope to usher in, and they harbor long-standing philosophical disagreements with one another. Zuckerberg, by comparison, does not appear interested in using AI to transform the world. In his most recent earnings call, he focused on five areas AI is influencing at Meta: advertising, social-media content, online commerce, the Meta AI assistant, and devices, notably smart glasses. The grandest future he described to investors was trapped in today's digital services and conventions: 'We're all going to have an AI that we talk to throughout the day—while we're browsing content on our phones, and eventually as we're going through our days with glasses—and I think this will be one of the most important and valuable services that has ever been created.' Zuckerberg also said that AI-based updates to content recommendations on Facebook, Instagram, and Threads have increased the amount of time that users spend on each platform. 
In this framework, superintelligence may just be a way to keep people hooked on Meta's legacy social-media apps and devices. Initially, it seemed that Meta would take a different path. When the company first entered the generative-AI race, a few months after the launch of ChatGPT, the firm bet big on 'open source' AI software, making its Llama model free for nearly anyone to access, modify, and use. Meta touted this strategy as a way to turn its AI models into an industry standard that would enable widespread innovation and eventually improve Meta's AI offerings. Because open-source software is popular among developers, Zuckerberg claimed, this strategy would help attract top AI talent. Whatever industry standards Zuckerberg was hoping to set, none have come to fruition. In January, the Chinese company DeepSeek released an AI model that was more capable than Llama despite having been developed with far fewer resources. Catching up to OpenAI may now require Meta to leave behind the company's original, bold, and legitimately distinguishing bet on 'open' AI. According to the Times, Meta has internally discussed the possibility of stopping work on its most powerful open-source model ('Behemoth') in favor of a closed model akin to those from OpenAI, Anthropic, and Google. In his memo to employees, Zuckerberg said that Meta will continue developing Llama while also exploring 'research on our next generation of models to get to the frontier in the next year or so.' The Meta spokesperson pointed me to a 2024 interview in which Zuckerberg explicitly said that although the firm is generally 'pro open source,' he is not committed to releasing all future Meta models in this way. While Zuckerberg figures out the path forward, he will also have to contend with the basic reality that generative AI may alienate some of his users. 
The company rolled back an early experiment with AI characters after human users found that the bots could easily go off the rails (one such bot, a self-proclaimed 'Black queer momma of 2' that talked about cooking fried chicken and celebrating Kwanzaa, tied itself in knots when a Washington Post columnist asked about its programming); the firm's stand-alone AI app released earlier this year also led many users to unwittingly share ostensibly private conversations to the entire platform. AI-generated media has overwhelmed Facebook and Instagram, turning these platforms into oceans of low-quality, meaningless content known as 'AI slop.' Still, with an estimated 3.4 billion daily users across its platforms, it may be impossible for Meta to fail. Zuckerberg might appear to be burning hundreds of millions of dollars on salaries and much more than that on new hardware, but it's all part of a playbook that has worked before. When Instagram and WhatsApp emerged as potential rivals, he bought them. When TikTok became dominant, Meta added a short-form-video feed to Instagram; when Elon Musk turned Twitter into a white-supremacist hub, Meta launched Threads as an alternative. Quality and innovation have not been the firm's central proposition for many, many years. Before the AI industry obsessed over scaling up its chatbots, scale was Meta's greatest and perhaps only strength: It dominated the market by spending anything to, well, dominate the market.
