Congress Isn't Stepping Up to Regulate AI. Where Does That Leave Us Now?

CNET
When you turn on the faucet, you expect the water that comes out to be clean. When you go to the bank, you expect your money will still be there. When you go to the doctor, you expect they will keep your medical information private. Those expectations exist because there are rules to protect you. But when a technology arises almost overnight, the problems come first. The rules, you'd hope, would follow.
Right now, there's no technology with more hype and attention than artificial intelligence. Since ChatGPT burst onto the scene in 2022, generative AI has crept into nearly every corner of our lives. AI boosters say it's transformative, comparing it to the birth of the internet or the industrial revolution in its potential to reshape society. The nature of work itself will be transformed. Scientific discovery will accelerate beyond our wildest dreams. All this from a technology that, right now, is mostly just kind of good at writing a paragraph.
The concerns about AI? They're legion. There are questions of privacy and security. There are concerns about how AI impacts the climate and the environment. There's the problem of hallucination -- that AI will completely make stuff up, with tremendous potential for misinformation. There are liability concerns: Who is responsible for the actions of an AI, or an autonomous system running off of one? Then there are the already numerous lawsuits around copyright infringement related to training data. (Disclosure: Ziff Davis, CNET's parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Those are just today's worries. Some argue that an artificial intelligence smarter than humans could pose a massive, existential threat to humanity.
What to do about AI is an international debate. In Europe, the EU AI Act, which is currently being phased in, imposes guidelines on AI-based systems based on their risk to individual privacy and safety. In the US, meanwhile, Congress recently proposed barring states from enforcing their own AI rules for a decade, without any national framework in place, before backing off during last-minute negotiations over the big tax and spending bill.
"I think in the end, there is a balance here between enjoying the innovation of AI and mitigating the risks that come with AI," Alon Yamin, CEO of Copyleaks, which runs an AI-powered system for detecting AI-generated writing, told me. "If you're going too far in one end, you will lose something. The situation now is that we're very far to the direction of no regulation at all."
Here's a look at some of the issues raised around AI, how regulations might or might not address them and what it all means for you.
Different approaches, with an ocean in between
Listen to the debates in Congress about how to regulate artificial intelligence, and a refrain quickly becomes apparent: AI companies and many US politicians don't want anything like the rules that exist in Europe.
The EU AI Act has become shorthand for a strict regulatory structure around AI. In brief, it requires companies to ensure their technology is safe, transparent and responsible. It sorts AI technologies into categories based on the level of risk. The highest-risk categories are either prohibited entirely (things like social scoring or manipulative technologies) or heavily restricted (things like biometrics and tools for hiring and law enforcement). Lower-risk technologies, like most of the work done by large language models we're familiar with (ChatGPT, etc.), are subject to less scrutiny but still must meet certain transparency and privacy requirements.
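To make that tiered structure a little more concrete, here's a minimal sketch in Python of how the Act's risk-based sorting might be modeled. The tier names and example use cases are paraphrased from the broad strokes described above; the mapping is purely illustrative, not a legal reference.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative tiers loosely mirroring the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited outright"    # e.g., social scoring, manipulative tech
    HIGH = "heavily restricted"             # e.g., biometrics, hiring, law enforcement
    LIMITED = "transparency obligations"    # e.g., chatbots must disclose they're AI
    MINIMAL = "little to no scrutiny"       # e.g., spam filters

# Hypothetical examples for illustration only; the Act defines these
# categories in far more detail than a toy mapping can capture.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "resume_screening": RiskTier.HIGH,
    "general_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def describe(use_case: str) -> str:
    """Return the sketched regulatory posture for a given use case."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.LIMITED)
    return f"{use_case}: {tier.name} -> {tier.value}"

for case in USE_CASE_TIERS:
    print(describe(case))
```

The point of the tiering is that obligations scale with potential harm rather than applying uniformly to every AI system.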
A key feature of the EU's standards and those in other places, like the United Kingdom, is transparency about the use of AI.
"What these things are fundamentally saying is, we're not trying to block the use of AI but giving consumers the right to opt into it or not or even to know it's even there," said Ben Colman, CEO of the identity verification company Reality Defender.
During a May hearing on AI regulation in the US Senate Commerce, Science and Transportation Committee, Sen. Ted Cruz referred to the EU's standards as "stifling" and "heavy-handed." Cruz, a Texas Republican, specifically objected to any kind of prior approval for AI technologies. He asked OpenAI CEO Sam Altman what effect similar rules would have on the industry in the US, and Altman said it would be "disastrous."
Earlier this month, Meta said it wouldn't sign the EU's Code of Practice for general-purpose AI, which is intended to provide a framework to help AI companies follow the regulations of the EU AI Act. In a post on LinkedIn, Joel Kaplan, Meta's chief global affairs officer, called it an "over-reach" that "will throttle the development and deployment of frontier AI models in Europe."
"Europe is heading down the wrong path on AI," Kaplan said.
But regulations focused on high-risk systems like those used in hiring, health care and law enforcement might miss some of the more subtle ways AI can affect our lives. Think of the spread of AI-generated slop on social media, or realistic-looking videos created for political misinformation. Those are also social media problems, and the battle over regulating that technology to minimize its harms may illuminate what could happen with AI.
Lessons from social media
After a South by Southwest panel in March on regulating AI, I asked Harvard Law School professor Lawrence Lessig, long a vocal observer of tech's problems, what worried him most about AI. His response: "AI totally screwing up in the context of social media and making it so we have no coherence in our understanding of national politics."
Social media has long been fraught with harmful social implications. The spread of misinformation and erosion of trust in the last decade or so are largely results of the growth of these networks. Generative AI, which can reinforce biases and produce believable but false content with ease, now poses those same problems. On top of those parallels, some of the companies and key figures in AI come straight from the world of social media technology, like Meta and Elon Musk's X.
"We're seeing a lot of the same repeats of social media fights, of privacy fights where companies do whatever they want and do a sort of vague gesture of doing something about it," said Ben Winters, director of AI and privacy at the Consumer Federation of America.
There are some key differences between those fights and the ones around AI, Winters said. One is that lawmakers and regulators are familiar with the mistakes associated with social media and want to avoid repeating them. "I think we're ahead of the curve in terms of response, but one thing that I really hope we can see at the federal level is a willingness to put some basic requirements on these companies," he said.
At the May Senate committee hearing, OpenAI's Altman said he's also wary of repeating past mistakes. "We're trying to learn the lessons of the previous generation," he said. "That's kind of the way it goes. People make mistakes and you do it better next time."
What kinds of AI regulations are we talking about?
In my conversations with artificial intelligence experts and observers, some themes have emerged regarding the rules and regulations that could be implemented. They boil down, in the short term, to questions about the role of AI in impactful decision-making, misinformation, copyright and accountability. Other concerns, like the threat of "superintelligence" or the loss of jobs, also exist, although those are far more complicated.
High-risk systems
This is where the EU AI Act and many other international laws around artificial intelligence focus. In the US, it's also at the center of Colorado's AI law, which passed in 2024 and takes effect in 2026. The idea is that when AI tools are used to make important decisions about things like employment, health care or insurance, they should be used in a way that minimizes discrimination and errors and maximizes transparency and accountability.
AI and other predictive technologies can be used in a lot of different ways, whether by governments for programs like child protective services or by private entities for advertising and tracking, Anjana Susarla, a professor at Michigan State University, told me recently.
"The question becomes, is this something where we need to monitor the risks of privacy, the risks of consumer profiling, should we monitor any kind of consumer harms or liabilities?" she said.
Misinformation
Gen AI has a well-documented history of making stuff up. And that's if you're using it in good faith. It can also be used to produce deepfakes -- realistic-looking images and video intended to manipulate people into believing something untrue, changing the behavior of voters and undermining democracy.
"Social media is the main instrument now for disinformation and hate speech," said Shalom Lappin, a professor of computational linguistics at Queen Mary University of London and author of the new book Understanding the Artificial Intelligence Revolution: Between Catastrophe and Utopia. "AI is a major factor because much of this content is coming from artificial agents."
Lies and rumors have spread since the dawn of communication, but generative AI tools like video and image generators can produce fabricated evidence more convincing than any past counterfeit, at tremendous speed and very little cost. On the internet today, too often you cannot, and should not, believe your own eyes.
It can be hard for people to see just how easy it is to fake something -- and just how convincing those fakes can be. Colman, with Reality Defender, said seeing the possible problem is believing. "When we show somebody a good or a bad deepfake of them, they have that 'a-ha' moment of, 'wow, this is happening, it can happen to me,'" he said.
Sen. Josh Hawley, a Missouri Republican, points to a poster during a July 2025 hearing on artificial intelligence model training and copyright infringement.
Copyright
There are two copyright issues when it comes to generative AI. The first is the most well-documented: Did AI companies violate copyright laws by using vast amounts of information available on the internet and elsewhere without permission or compensation? That issue is working its way through the courts, with mixed results so far, and it will likely be a long time before anything all-encompassing comes out of it.
"They've essentially used everything that's available. It's not only text, it's images, photographs, charts, sound, audio files," Lappin said. "The copyright violations are huge."
But what about the copyright of content created by AI tools? Is it owned by the person who prompted it or by the company that produced the language model? What if the model produces content that copies or plagiarizes existing content without credit, or violates copyrights?
Accountability
The second copyright issue gets at the problem of accountability: What happens when an AI does something wrong, violates a law or hurts somebody?
On the content front, social media companies have long been shielded by a US legal standard, known as Section 230, that says they aren't responsible for what their users post. That's a harder test for AI companies to pass, because the user isn't the one creating the content; the company's language model is, Winters said.
Then there are the actual, material harms that can come from people's interactions with AI. A prominent example is mental health: People using AI characters and chatbots as therapists have received bad advice, the kind that could cost a human provider their license, and in some cases the kind that preceded self-harm or worse outcomes for the person involved. The issue is magnified when it comes to children, who likely have even less understanding of how to treat what an AI says.
Who should regulate AI?
The question of whose job it is to regulate AI was at the heart of the congressional debate over the moratorium on state laws and rules. In that discussion, the question was whether, in the US, companies should have to navigate one set of rules passed by Congress or 50 or more sets of regulations implemented by the states.
AI companies and business groups said the creation of a "patchwork" of laws would hinder development. In a June letter to Senate leaders, Consumer Technology Association CEO and Vice Chair Gary Shapiro pointed to more than 1,000 state bills that had been introduced regarding AI in 2025 so far.
"This isn't regulation -- it's chaos," he wrote.
But those bill introductions haven't turned into an avalanche of laws on the books. "Despite the amount of interest from policymakers at the state level, there haven't been a ton of AI-specific laws passed in the United States," said Cobun Zweifel-Keegan, managing director, DC, for the privacy trade group IAPP.
States can experiment with new approaches. California can try one thing, Colorado another and Texas something entirely different. An approach that works will spread to other states and could lead to rules that protect consumers without stifling businesses.
But other experts say that in the 21st century, companies with the size and scope of those pushing artificial intelligence can only truly be regulated at the international level. Lappin said he believes the appropriate venue is international trade agreements, which could keep companies from sheltering services in particular countries and keep customers from circumventing protections with VPNs.
"Because these are international rather than national concerns, it seems to me that without international constraints, the regulation will not be effective," Lappin said.
What about superintelligence?
So far, we've mostly focused on the impact of the tech that is available today. But the biggest boosters of AI are always talking about how much smarter the next model will be and how soon we'll get technology that exceeds human intelligence.
Yes, that worries some folks. And they think regulation is important to ensure a superintelligent AI doesn't treat The Matrix as an instruction manual for world domination. The Future of Life Institute has suggested a government agency with a view into the development of the most advanced AI models. And maybe an off switch, said Jason Van Beek, FLI's chief government affairs officer. "You theoretically would not be able to control them at some point, so just trying to make sure there's some technology that would allow these systems to be turned off if there's some evidence of a loss of control of the situation," he told me.
Other experts were more skeptical that "artificial general intelligence," superintelligence or anything like it is on the horizon. A survey of AI experts earlier this year found that three-quarters doubted current large language models would scale up to AGI.
"You're getting a lot of hype over general intelligence and stuff like that, superintelligent agents taking over, and I don't see a solid scientific or engineering basis for those fears," Lappin said.
The fact is, human beings don't need to wait for a genius-level robot to pose an existential threat. We're more than capable of that ourselves.
Should regulators worry about job losses?
One of those more immediate threats is the possibility that AI will cause mass layoffs as large numbers of jobs are replaced by AI or otherwise made redundant. That poses significant social challenges, especially in the United States, where many fundamentals of life, like health care, are still tied to having a job.
Van Beek said FLI has suggested the US Department of Labor start keeping track of AI-related job losses. "That's certainly a major concern about whether these frontier technologies are going to be taking over huge swaths of industries in terms of jobs or those kinds of things and affecting the economy in very, very deep ways," he said.
Major technological innovations have caused massive displacement or replacement of workers before. Think of the Industrial Revolution or the dawn of the computer age. But those shifts often played out over decades or generations; AI could throw the economy into chaos in a matter of years, Lappin said. The Industrial Revolution also hit industries at different times, but AI could hit every industry at once. "The direction is toward much, much more widespread automation across a very broad domain or range of professions," he said. "And the faster that happens, the much more disruptive that will become."
What matters most? Transparency and privacy
The first step, as with laws already passed in the EU, California and Colorado, is to provide some sort of visibility into how AI systems work and how they're being used. For you, the consumer, the citizen, the person just trying to exist in the world, that transparency means you have a sense of how AI is being used when you interact with it. This could be transparency into how models operate and what went into training them. It could be understanding how models are being used to do things like decide who a company hires and fires.
Right now, that doesn't really exist, and it definitely doesn't exist in a way that's easy for a person to understand. Winters suggested a system similar to that used by financial institutions to evaluate whether someone can get loans -- the credit report. You have the right to inspect your credit report, see what has been said about you and ensure it's right. "You have this number that is impactful about you; therefore, you have transparency and can seek corrections," he said.
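As a thought experiment, Winters' credit-report analogy might translate into a record a person can inspect and contest. Everything in this sketch is hypothetical; no such standard, format or API exists today.

```python
from dataclasses import dataclass, field

@dataclass
class AIDecisionRecord:
    """Hypothetical 'credit report for AI decisions': one entry per automated
    decision made about a person, inspectable and correctable by that person."""
    subject: str
    decision: str                 # e.g., "resume screened out"
    model_purpose: str            # what the system was used for
    inputs_used: list[str]        # data points the model considered
    disputes: list[str] = field(default_factory=list)

    def dispute(self, reason: str) -> None:
        """Record a correction request, as one might dispute a credit-report entry."""
        self.disputes.append(reason)

record = AIDecisionRecord(
    subject="applicant_123",
    decision="resume screened out",
    model_purpose="hiring triage",
    inputs_used=["employment history", "keyword match score"],
)
record.dispute("Employment history is out of date")
print(record)
```

The design mirrors the two rights Winters describes: visibility into what was recorded about you, and a channel to seek corrections.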
The other centerpiece of most proposals right now is privacy -- protecting people against unauthorized recreations of themselves in AI, guarding against exploitation of personal information and identity. While some existing, technology-neutral privacy laws should be able to protect consumers, policymakers need to keep an eye on the changing ways AI is used to ensure they're still doing the job.
"It has to be some kind of balance," Susarla said. "We don't want to stop innovation, but on the other hand we also need to recognize that there can be real consequences."