
Elon Musk's X adds AI to its Community Notes, promises faster fact-checks with a human touch
The initiative will allow developers to submit their own AI systems for evaluation. These agents will initially produce practice notes that remain behind the scenes. If deemed helpful by the platform, the AI will then be permitted to generate fact-checking notes that are published publicly.
Despite the automation, human oversight will remain central to the process. According to Keith Coleman, a product executive at X and head of the Community Notes programme, the system requires that notes be approved by individuals from a broad spectrum of viewpoints before going live, mirroring the criteria already in place for user-submitted notes.
'They can help deliver a lot more notes faster with less work, but ultimately the decision on what's helpful enough to show still comes down to humans,' Coleman said in an interview on Tuesday. 'So we think that combination is incredibly powerful.'
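The cross-viewpoint approval rule Coleman describes can be sketched as a toy decision function. This is a simplified illustration only, not X's actual open-source bridging algorithm (which scores notes with matrix factorization over the full rating history); the function name, the viewpoint clusters, and the threshold are all hypothetical:

```python
# Toy sketch of cross-viewpoint approval (hypothetical, simplified):
# a note goes live only if raters from at least two different viewpoint
# clusters each find it helpful on average.

def note_goes_live(ratings, threshold=0.5):
    """ratings: list of (viewpoint_cluster, is_helpful) tuples."""
    by_cluster = {}
    for cluster, helpful in ratings:
        by_cluster.setdefault(cluster, []).append(helpful)
    if len(by_cluster) < 2:
        # agreement within a single viewpoint cluster is not enough
        return False
    return all(sum(votes) / len(votes) >= threshold
               for votes in by_cluster.values())
```

Under this rule, unanimous praise from one side of a debate is insufficient; only notes rated helpful across clusters are published, which is the "broad spectrum of viewpoints" criterion the article describes.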
Coleman indicated that the platform currently publishes hundreds of Community Notes daily. While he did not offer a precise estimate, he suggested that the introduction of AI-generated contributions could lead to a 'significant' increase in volume.
Originally launched under the Twitter brand prior to Musk's acquisition of the company in 2022, Community Notes has seen renewed focus under his leadership. The approach has since attracted interest from other tech firms, including Meta and TikTok-owner ByteDance, which have begun exploring similar community-driven fact-checking systems.
Musk has often praised the Community Notes feature, describing it as 'hoax kryptonite' in the fight against misinformation. However, the feature has not spared Musk himself from scrutiny; he has been flagged multiple times for sharing misleading content. Earlier this year, he warned the system could potentially be 'gamed by governments & legacy media.'
Coleman views the uptake by rival platforms as evidence that X's model is among the most effective fact-checking mechanisms available. He also believes that human moderation of AI-generated notes will establish a valuable 'feedback loop' to further improve the technology over time.
'It is a new feedback cycle,' he said. 'The model can be improved not just by one random human's feedback, but by feedback from a diverse audience.'
Importantly, AI agents contributing to Community Notes will not be restricted to Musk's own xAI-developed bot, Grok. Coleman clarified that developers can utilise any AI technology, provided it meets the platform's standards for accuracy and relevance.
The first wave of AI-generated Community Notes is expected to begin appearing later this month.
(With inputs from Bloomberg)

Related Articles

Business Standard
Silicon Valley needs to get over its obsession with superhuman AI
Building a machine more intelligent than ourselves. It's a centuries-old theme, inspiring equal amounts of awe and dread, from the agents in 'The Matrix' to the operating system in 'Her.' To many in Silicon Valley, this compelling fictional motif is on the verge of becoming reality. Reaching artificial general intelligence, or A.G.I. (or going a step further, superintelligence), is now the singular aim of America's tech giants, which are investing tens of billions of dollars in a fevered race. And while some experts warn of disastrous consequences from the advent of A.G.I., many also argue that this breakthrough, perhaps just years away, will lead to a productivity explosion, with the nation and company that get there first reaping all the benefits. This frenzy gives us pause. It is uncertain how soon artificial general intelligence can be achieved. We worry that Silicon Valley has grown so enamored with accomplishing this goal that it's alienating the general public and, worse, bypassing crucial opportunities to use the technology that already exists. In being solely fixated on this objective, our nation risks falling behind China, which is far less concerned with creating A.I. powerful enough to surpass humans and much more focused on using the technology we have now. The roots of Silicon Valley's fascination with artificial general intelligence go back decades. In 1950 the computing pioneer Alan Turing proposed the imitation game, a test in which a machine proves its intelligence by how well it can fool human interrogators into believing it's human. In the years since, the idea has evolved, but the goal has stayed constant: to match the power of a human brain. A.G.I. is simply the latest iteration. In 1965, Mr. Turing's colleague I.J. Good described what's so captivating about the idea of a machine as sophisticated as the human brain. Mr. 
Good saw that smart machines could recursively self-improve faster than humans could ever catch up, saying, 'The first ultraintelligent machine is the last invention that man need ever make.' The invention to end all other inventions. In short, reaching A.G.I. would be the most significant commercial opportunity in history. Little wonder that the world's top talents are all devoting themselves to this ambitious endeavor. The current modus operandi is build at all cost. Every tech giant is in the race to reach A.G.I. first, erecting data centers that can cost more than $100 billion and with some like Meta offering signing bonuses to A.I. researchers that top $100 million. The costs of training foundation models, which serve as a general-purpose base for many different tasks, have continued to rise. Elon Musk's start-up xAI is reportedly burning through $1 billion a month. Anthropic's chief executive, Dario Amodei, expects training costs of leading models to go up to $10 billion or even $100 billion in the next two years. To be sure, A.I. is already better than the average human at many cognitive tasks, from answering some of the world's hardest solvable math problems to writing code at the level of a junior developer. Enthusiasts point to such progress as evidence that A.G.I. is just around the corner. Still, while A.I. capabilities have made extraordinary leaps since the debut of ChatGPT in 2022, science has yet to find a clear path to building intelligence that surpasses humans. In a recent survey of the Association for the Advancement of Artificial Intelligence, an academic society that includes some of the most respected researchers in the field, more than three-quarters of the 475 respondents said our current approaches were unlikely to lead to a breakthrough. While A.I. has continued to improve as the models get larger and ingest more data, there's concern that the exponential growth curve might falter. 
Experts have argued that we need new computing architectures beyond what underpins large language models to reach the goal. The challenge with our focus on A.G.I. goes beyond the technology and into the vague, conflicting narratives that accompany it. Both grave and optimistic predictions abound. This year the nonprofit AI Futures Project released 'A.I. 2027,' a report that predicted superintelligent A.I. potentially controlling or exterminating humans by 2030. Around the same time, computer scientists at Princeton published a paper titled 'A.I. as Normal Technology,' arguing that A.I. will remain manageable for the foreseeable future, like nuclear power. That's how we get to this strange place where Silicon Valley's biggest companies proclaim ever shorter timelines for how soon A.G.I. will arrive, while most people outside the Bay Area still barely know what that term means. There's a widening schism between the technologists who feel the A.G.I. — a mantra for believers who see themselves on the cusp of the technology — and members of the general public who are skeptical about the hype and see A.I. as a nuisance in their daily lives. With some experts issuing dire warnings about A.I., the public is naturally even less enthused about the technology. Now let's look at what's happening in China. The country's scientists and policymakers aren't as A.G.I.-pilled as their American counterparts. At the recent World Artificial Intelligence Conference in Shanghai, Premier Li Qiang of China emphasized 'the deep integration of A.I. with the real economy' by expanding application scenarios. While some Silicon Valley technologists issue doomsday warnings about the grave threat of A.I., Chinese companies are busy integrating it into everything from the superapp WeChat to hospitals, electric cars and even home appliances. In rural villages, competitions among Chinese farmers have been held to improve A.I. 
tools for harvest; Alibaba's Quark app recently became China's most downloaded A.I. assistant in part because of its medical diagnostic capabilities. Last year China started the A.I.+ initiative, which aims to embed A.I. across sectors to raise productivity. It's no surprise that the Chinese population is more optimistic about A.I. as a result. At the World A.I. Conference, we saw families with grandparents and young children milling about the exhibits, gasping at powerful displays of A.I. applications and enthusiastically interacting with humanoid robots. Over three-quarters of adults in China said that A.I. has profoundly changed their daily lives in the past three to five years, according to an Ipsos survey. That's the highest share globally and double that of Americans. Another recent poll found that only 32 percent of Americans say they trust A.I., compared with 72 percent in China. Many of the purported benefits of A.G.I. — in science, education, health care and the like — can already be achieved with the careful refinement and use of powerful existing models. For example, why do we still not have a product that teaches all humans essential, cutting-edge knowledge in their own languages in personalized, gamified ways? Why are there no competitions among American farmers to use A.I. tools to improve their harvests? Where's the Cambrian explosion of imaginative, unexpected uses of A.I. to improve lives in the West? The belief in an A.G.I. or superintelligence tipping point flies in the face of the history of technology, in which progress and diffusion have been incremental. Technology often takes decades to reach widespread use. The modern internet was invented in 1983, but it wasn't until the early 2000s that it reshaped business models. And although ChatGPT has seen incredible user growth, a recent working paper by the National Bureau of Economic Research showed that most people in the United States still use generative A.I. infrequently. 
When a technology eventually goes mainstream, that's when it's truly game changing. Smartphones got the world online not because of the most powerful, sleekest versions; the revolution happened because cheap, adequately capable devices proliferated across the globe, finding their way into the hands of villagers and street vendors. It's paramount that more people outside Silicon Valley feel the beneficial impact of A.I. on their lives. A.G.I. isn't a finish line; it's a process that involves humble, gradual, uneven diffusion of generations of less powerful A.I. across society. Instead of only asking 'Are we there yet?' it's time we recognize that A.I. is already a powerful agent of change. Applying and adapting the machine intelligence that's currently available will start a flywheel of more public enthusiasm for A.I. And as the frontier advances, so should our uses of the technology. While America's flagship tech companies race to the uncertain goal of getting to artificial general intelligence first, China and its leadership have been more focused on deploying existing technology across traditional and emerging sectors, from manufacturing and agriculture to robotics and drones. Being too fixated on artificial general intelligence risks distracting us from A.I.'s everyday impact. We need to pursue both.

Business Standard
Meta splits AI group into four parts in pursuit of superintelligence
Meta Platforms Inc. is splitting its newly formed artificial intelligence group into four distinct teams and reassigning many of the company's existing AI employees, an attempt to better capitalize on billions of dollars' worth of recently acquired talent. The new structure is meant to 'accelerate' the company's pursuit of so-called superintelligence, according to an internal memo sent Tuesday by Alexandr Wang, the former Scale AI chief executive officer who recently joined Meta as chief AI officer.

'Superintelligence is coming, and in order to take it seriously, we need to organize around the key areas that will be critical to reach it — research, product and infra,' Wang wrote in the memo, which was reviewed by Bloomberg News.

The group, known as Meta Superintelligence Labs, or MSL, will now have four parts:
- TBD Lab, led by Wang, which will oversee Meta's large language models, including the Llama tools that underpin its AI assistant.
- FAIR, an internal AI research lab that's existed within the company for more than a decade. The team, whose name stands for fundamental AI research, is focused on longer-term projects.
- Products and Applied Research, a team led by former GitHub CEO Nat Friedman, which will take those models and research and put them into consumer products.
- MSL Infra, which will focus on the expensive infrastructure needed to support Meta's AI ambitions.

No layoffs were part of Tuesday's reorganization, according to a person familiar with the matter, who asked not to be identified because the deliberations are private. Details of the new structure were first reported by the Information.

Meta is hoping to stabilize its AI efforts after months spent poaching dozens of top AI researchers from competitors with lofty pay packages, many reaching hundreds of millions in total compensation. CEO Mark Zuckerberg has said the company's goal is to achieve superintelligence, or AI technology that can complete tasks even better than humans, and he expects to spend hundreds of billions of dollars on the talent and infrastructure needed to get there.

But Meta's AI leadership has faced several shake-ups in the past few years, including multiple changes this year alone as the company has raced to keep pace with rivals like OpenAI and Google. Before announcing MSL in June, the social media giant had three primary AI teams — FAIR, an AI products group, and the AGI foundations team, which focused on generative AI products and research. The AGI foundations group is being dissolved, and leaders Ahmad Al-Dahle and Amir Frenkel are now 'focusing on strategic MSL initiatives' and reporting to Wang, according to the memo. The former head of the AI products group, Connor Hayes, was already reassigned to run Threads, Meta's rival product to Elon Musk's X.

As part of Tuesday's reorganization, Aparna Ramani, a Meta vice president charged with leading the company's AI, data and developer infrastructure units, will run the MSL Infra team, according to the memo. Robert Fergus will continue to lead FAIR, an organization he co-founded in 2014. He had previously left the group and spent several years at Alphabet Inc.'s DeepMind before returning to run FAIR this spring. Loredana Crisan, who previously led the company's Messenger app and moved to the company's generative AI group in February, is departing Meta for Figma Inc., according to a person familiar with the move.


Time of India
SpaceX revamps its mega-rocket — here's when the redesigned Starship may soar again
SpaceX is preparing to enter a pivotal phase in the development of Starship, the world's tallest and most powerful rocket. Flight 10 of the massive 400-foot launch system, intended to carry astronauts and cargo to the Moon, Mars and even deeper into the solar system, will showcase new design enhancements that may shape the future of the program.

From Four to Three: A New Approach

Until recently, the Super Heavy booster depended on four smaller grid fins positioned high on its structure to guide its descent back to Earth. In the new configuration, SpaceX has replaced them with three much larger fins, each mounted lower on the rocket. The shift is more than a structural change: the new placement is designed to align with the mechanical 'chopsticks' of the Starbase launch tower in South Texas. If successful, this alignment could enable SpaceX to catch returning boosters mid-air, bypassing ocean landings and unlocking faster reusability.

Why SpaceX Made the Switch

The redesign comes after a string of mixed outcomes from previous test flights. On Flight 9, the Super Heavy booster failed to complete its controlled descent and fell into the Gulf of Mexico. The upper-stage Starship didn't fare much better, disintegrating during re-entry. The new three-fin system aims to provide more stability and maneuverability while streamlining the rocket's overall aerodynamics. Engineers also repositioned the fin actuators inside the propellant tanks for added protection, and built the fins with honeycomb-style panels for durability without adding unnecessary weight.
These changes could prove important if SpaceX hopes to achieve rapid turnaround between launches, an innovation central to Elon Musk's long-term vision of affordable, routine access to space.

Pressure on Flight 10

SpaceX has now confirmed a target launch date for Flight 10, calling it a significant test in the Starship program. This will be the company's first flight in nearly three months, despite earlier ambitions of ramping up test missions throughout 2025. That gap underscores the stakes. While the Starship program has delivered moments of progress, this year has seen fewer successes than 2023 and 2024. The commercial spaceflight firm, founded by Musk, must now prove it can translate its rapid-prototyping philosophy into consistent performance. Flight 10 will test the redesigned fins, enhanced control systems and several other upgrades, all under close scrutiny from regulators, space-industry competitors and fans watching around the globe.

Starbase Disputes

Beyond technical challenges, SpaceX's South Texas base has become a focal point for local tensions. In May, Cameron County residents voted to incorporate Starbase as an official town, cementing SpaceX's influence in the area. But controversy soon followed. In June, city commissioners voted unanimously to close certain public roads to outsiders, a move that frustrated longtime residents and property owners. Critics say the closures limit community access and give SpaceX too much control over the region. Despite these disputes, the site remains at the heart of the Starship program, hosting engine tests, vehicle assembly and every launch to date.

Looking Toward the Future

The upcoming flight reflects SpaceX's philosophy: iterate fast, learn from failures, and improve with each launch.
Every redesign, from fin structures to re-entry shielding, feeds into the larger mission of building a fully reusable launch system capable of reaching not only Earth orbit but also the Moon, Mars and beyond.

FAQs:
Q1. What is SpaceX?
A1. SpaceX (Space Exploration Technologies Corp.) is a private aerospace company founded by Elon Musk in 2002. It designs, manufactures, and launches rockets and spacecraft.
Q2. What is Starship?
A2. Starship is SpaceX's next-generation fully reusable spacecraft, designed for missions to the Moon, Mars, and beyond. It consists of the Starship upper stage and the Super Heavy booster.