
Inside multi-billion dollar plan to build new tunnel under the Swiss Alps
& Tessa Charles
Scientists have unveiled a 70-year plan to build a massive 91 km tunnel under the Swiss Alps to house the Future Circular Collider (FCC).
The FCC, more than three times larger than the current Large Hadron Collider, will collide electrons and positrons in its first stage for precise Higgs boson measurements.
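For a quick sense of the size comparison, here is a minimal sketch; it assumes the LHC's circumference of roughly 27 km, a figure not stated in this article:

# Rough size comparison between the proposed FCC tunnel and the existing LHC ring.
fcc_circumference_km = 91   # figure given in the article
lhc_circumference_km = 27   # assumed approximate LHC circumference (about 26.7 km)
ratio = fcc_circumference_km / lhc_circumference_km
print(f"The FCC ring would be about {ratio:.1f} times the LHC's circumference.")  # ~3.4x, i.e. more than three times larger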
The second stage involves colliding protons at much higher energies than the LHC, potentially uncovering new particles and addressing fundamental physics questions like dark matter and the matter-antimatter imbalance.
The project's estimated cost is around $18 billion for the first stage, with a significant portion allocated for tunnel construction.
While costly, the FCC is expected to yield scientific advancements and technological spin-offs, similar to the LHC's contributions to medical technology and software.

Related Articles


Reuters
4 days ago
Trump-Musk spat: Does it pose a regulatory threat for the billionaire's businesses?
June 5 (Reuters) - The risk U.S. transportation, environmental, communications and other regulators will take aim at Elon Musk's many businesses became a real threat after the billionaire's deep political ties with President Donald Trump disintegrated on Thursday. Below is a list of U.S. regulators who oversee Musk's companies, including automaker Tesla (TSLA.O), rocket and satellite company SpaceX, brain implant company Neuralink, social media platform X and construction firm The Boring Company.

Federal Communications Commission
The Federal Communications Commission in April voted to open a review of the decades-old spectrum sharing regime between satellite systems sought by SpaceX. SpaceX wants access to new spectrum from the FCC in the coming years to speed deployment of satellite-based internet service. The review by the U.S. telecom regulator aims to allow greater and more intensive use of spectrum for space activities. Existing restrictions approved in the 1990s limit power usage, preventing better coverage from SpaceX's Starlink and other systems.

Food and Drug Administration
The FDA oversees the clinical trials for Neuralink, Musk's brain implant company, deciding whether such trials can take place and whether Neuralink can eventually sell its device to consumers. The agency has already approved such trials in the U.S. Neuralink has also been pursuing clinical trials outside the U.S., including in Canada. The FDA had initially rejected Neuralink's request to start clinical trials, citing safety risks, Reuters reported in 2023. The agency has since given the startup approval to do clinical trials, which are ongoing.

Environmental Protection Agency
SpaceX faces environmental regulations from the EPA, which oversees the company's wastewater discharges at its operating site in Texas. The company's operations are also subject to environmental reviews under the National Environmental Policy Act, with several agencies, including the Fish and Wildlife Service, required to analyze the impact of the company's rocket launches and landings on land, water and wildlife.

National Highway Traffic Safety Administration
Tesla faces ongoing oversight from U.S. auto safety investigators over the safety of its vehicles, especially when using advanced driver assistance systems. Last month, the National Highway Traffic Safety Administration asked Tesla to answer questions on its plans to launch a paid robotaxi service in Austin, Texas, in June, in order to assess how the electric vehicle maker's cars with full self-driving technology will perform in poor weather. NHTSA has been investigating Tesla's full self-driving collisions in reduced roadway visibility conditions since October. The agency said it is seeking additional information about Tesla's development of robotaxis "to assess the ability of Tesla's system to react appropriately to reduced roadway visibility conditions," as well as details on robotaxi deployment plans and the technology being used.

Federal Aviation Administration
In September, the FAA proposed a $633,000 fine against SpaceX for failing to follow license requirements in 2023 before two launches. That investigation remains open. The FAA could also impose new restrictions or additional scrutiny after a series of explosions during SpaceX launches.

Securities and Exchange Commission
Musk is embroiled in litigation with the SEC over his 2022 takeover of Twitter, with Musk due to respond to those allegations next month.
The regulator had also opened an investigation into his company Neuralink, according to a December letter from Musk's lawyer that Musk posted on social media platform X.

Federal Trade Commission
The FTC is a consumer protection agency that ensures social media companies like Musk's platform X protect children's privacy and safeguard Americans' data. The FTC, which also enforces antitrust law, recently opened a probe into coordination between media watchdog groups, some of which Musk has accused of orchestrating an illegal group advertiser boycott against his social media site.


Reuters
22-05-2025
US agency votes to bar Chinese labs deemed security risks from testing US electronics
May 22 (Reuters) - The Federal Communications Commission voted on Thursday to finalize rules barring Chinese labs deemed risks to U.S. national security from testing electronic devices like smartphones, cameras and computers for use in the United States. All electronics used in the United States must go through the FCC's equipment authorization process before they can be imported. The FCC says about 75% of all electronics are tested in labs located inside China.


Reuters
19-05-2025
State AGs fill the AI regulatory void
May 19, 2025 - As generative artificial intelligence (AI) technologies rapidly proliferate and permeate society, state attorneys general (AGs) have warned of potential AI misuse and legal violations. And while only California, Colorado, and Utah have enacted laws governing AI, the dearth of AI-specific laws has not prevented states from advising and taking AI-related action under existing law. Indeed, state AGs have indicated that they will utilize privacy, consumer protection, and anti-discrimination laws to regulate AI. AGs are focused on how AI systems utilize personal identifying information, potentially facilitate fraud using deepfakes, operate relative to company representations, and could perpetuate bias and discrimination in decision-making processes.

In January 2024, a bipartisan group of AGs sent a letter to the Federal Communications Commission (FCC) warning of potential fraud where AI is used to imitate human voices in telemarketing campaigns. See Implications of Artificial Intelligence Technologies on Protecting Consumers from Unwanted Robocalls and Robotexts, Federal Communications Commission, CG Docket No. 23-362 (January 17, 2024). While California, Colorado, and Utah will ramp up enforcement under their AI laws, businesses must be aware of regulatory risks from other states under traditional laws and ensure robust AI compliance measures. Since the FCC letter, AGs from California, Massachusetts, Oregon, New Jersey, and Texas have issued AI-specific guidance or taken AI-related enforcement actions that address common themes, despite not having AI-specific laws in their states.

California AG Rob Bonta has been among the most vocal state AGs on the use of AI. Earlier this year, AG Bonta issued two legal advisories warning that companies may be liable under California's Unfair Competition Law and Civil Rights Act if AI tools mislead consumers, deny access to services, or cause other discriminatory outcomes. In particular, AG Bonta has highlighted the risks associated with using AI in the health care industry and hiring, where replacing trained professionals with opaque and unproven AI systems could result in actionable harm.

Massachusetts AG Joy Campbell was the first to issue formal guidance warning that AI systems could potentially run afoul of existing law. Specifically, companies risk violating the Massachusetts Consumer Protection Act by misrepresenting the reliability of an AI system or falsely advertising its quality, such as touting functionality that does not in fact exist. AG Campbell also warns of fraud by misrepresenting "audio or video content of a person for the purpose of deceiving another to engage in a business transaction or supply personal information as if to a trusted business partner as in the case of deepfakes, voice cloning, or chatbots." Attorney General Advisory on the Application of the Commonwealth's Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence, Mass. Atty. Gen. (April 16, 2024). There are also privacy risks under the "Standards for the Protection of Personal Information of Residents of the Commonwealth," requiring AI developers and deployers to take appropriate steps to safeguard personal information used within AI systems. Finally, the advisory warns that Massachusetts' Anti-Discrimination Law may be implicated if AI makes decisions based on "legally protected characteristics," such as race or gender.
Following suit, former Oregon AG Ellen Rosenblum issued AI guidance highlighting concerns with the unpredictability of AI outputs, noting that they can compromise privacy, affect fairness, obscure accountability, and impact trustworthiness. See What You Should Know About How Oregon's Laws May Affect Your Company's Use of Artificial Intelligence, Or. Atty. Gen. (December 24, 2024). The lack of transparency in AI decision-making processes can make it difficult for humans to identify, understand, and correct such decisions. With these risks in mind, the guidance notes that the state will regulate AI under several laws. The Oregon Unlawful Trade Practices Act prohibits misrepresentations in consumer transactions, including misrepresenting the characteristics, uses, benefits, or qualities of AI products, or using AI to falsely claim nonexistent sponsorships, approvals, or connections (such as artificial celebrity endorsements). The Oregon Consumer Privacy Act guarantees a consumer's right to control the distribution of their personal data, which is particularly relevant for generative AI systems trained with such data. AI developers should obtain express consumer consent before using data to train AI systems. Consumers must also be able to opt out of AI profiling when AI is used to make significant decisions, such as those pertaining to housing, education, or lending. The Oregon Consumer Information Protection Act also requires AI developers to safeguard personal information through reasonable cybersecurity measures. Finally, like Massachusetts, Oregon warns of possible bias and anti-discrimination violations. The Oregon Equality Act prohibits discrimination based on protected classes and bars discrimination resulting from AI use, particularly in housing and public accommodations.

New Jersey AG Matthew Platkin launched a "Civil Rights and Technology Initiative" aimed at addressing discrimination and bias-based harassment risks associated with AI. As part of that effort, he published guidance to explain how the New Jersey Law Against Discrimination (LAD) may apply to "algorithmic discrimination" inherent in some AI systems. Guidance on Algorithmic Discrimination and the New Jersey Law Against Discrimination, N.J. Atty. Gen. (January 2025). Through the LAD, New Jersey residents are protected from AI algorithmic discrimination by "covered entities," which include employers, housing providers, places of public accommodation, credit providers, and contractors, among others. Misuse of AI tools may violate the LAD in these areas. For example, in the employment context, automated decision-making tools have been used to determine whether an employee should be retained or terminated. While these tools streamline and simplify processes, their use may result in racial and gender discrimination based on the tools' design, training, or deployment. Covered entities can violate the LAD even absent an intent to discriminate, such as by simply deploying an AI system developed and controlled by a third party. Specifically, deploying such tools may violate the LAD if their use results in either disparate treatment or disparate impact based on protected characteristics. The law also provides recourse for affected individuals when AI decisions result in the preclusion or impediment of reasonable accommodations.

Texas AG Ken Paxton has taken the lead with enforcement action under traditional law.
In September 2024, AG Paxton reached a settlement with health care technology company Pieces Technology (Pieces) under the Texas Deceptive Trade Practices-Consumer Protection Act (DTPA). See State of Texas v. Pieces Technologies Inc., Cause No. DC-24-13476 (2025). This represents the first AG settlement under a state consumer protection act involving generative AI. Pieces uses AI to assist medical facilities by summarizing, charting, and drafting clinical notes. To measure accuracy, Pieces developed several metrics and benchmarks. The company advertised the accuracy of its AI product on its website, claiming a "critical hallucination rate" and "severe hallucination rate" of less than .001% and less than 1 per 100,000, respectively. AI hallucinations are instances where the output is false or misleading, and Pieces' metrics represent an extremely low incidence. According to the AG's claims, these representations "may" have violated the DTPA because they were allegedly "false, misleading, or deceptive." Under the settlement terms, Pieces denied any violation of the DTPA.

The settlement requires that, should Pieces advertise the accuracy of AI products using metrics, it must disclose "the meaning or definition of such metric, benchmark, or similar measurement," and "the method, procedure, or any other process used by Pieces to calculate the metric, benchmark, or similar measurement used in … [its] marketing or advertising of its products and services." Further, the company is prohibited from making false or misleading statements concerning AI products and must disclose any harmful or "potentially harmful" uses or misuses of its products. The settlement does not contain a monetary penalty; however, Pieces is required to demonstrate its compliance with the settlement upon the state's request for an indefinite period.

State AGs will scrutinize generative AI development and deployment within consumer protection, privacy, and anti-discrimination statutory frameworks. Companies using AI and conducting business with consumers in multiple states should ensure they are engaging in defensible privacy and cybersecurity practices in accordance with state laws. They need a firm grasp on the AI system's foundational model and capabilities and should perform a thorough risk assessment before employing AI products. Businesses advertising AI products must be aware of state consumer protection acts and avoid false or misleading claims. The Federal Trade Commission has developed guidance for companies employing AI products and advertising their capabilities. See Keep Your AI Claims in Check, Federal Trade Commission (February 27, 2023).

Companies cannot assume they are insulated from liability simply because they are using AI without discriminatory intent. Several state AGs have signaled that enforcement actions could result from an AI system's disparate impact alone. Therefore, companies must carefully examine AI inputs and outputs and ensure the system is producing fair and unbiased results, particularly if legally protected characteristics such as age, gender, or race are implicated. Companies considering implementing AI systems must prepare for potential exposure under a patchwork of state laws and should consult competent outside counsel accordingly. To mitigate ever-expanding risk, all levels of decision-makers, including executives, IT staff, and in-house counsel, should be aware of AI risks and capabilities and involved in implementation.