Last call for entries into Tioga County's 'I Voted' Sticker Contest

Yahoo · May 21, 2025

TIOGA COUNTY, N.Y. (WIVT/WBGH) – Time is running out for young Tioga County residents to submit their entries for the 'I Voted' Sticker Contest.
The Tioga County Board of Elections is reminding young artists that the deadline to submit their applications to the contest is May 30.
Any young person between the ages of 5 and 18 who resides in Tioga County can participate. Participants are asked to create a unique image or logo for a two-inch sticker that will be distributed at the general election in November, as well as at subsequent elections. Entries must contain the phrase 'I Voted' somewhere in the design.
The Tioga County Election Commissioners will select the top three designs from each of three age groups: 5-9, 10-13, and 14-18. The finalists will be presented to the Tioga County Legislature, which will choose one winner from each age group.
Winners will be recognized at the Regular Legislative Meeting on July 15. The custom stickers will be handed out to Tioga County voters beginning with the November 4 general election.
'This is a great chance for our kids and teens to share some of their amazing creativity with their community, and also to engage with our electoral process,' said James Wahls, Tioga County Election Commissioner.
Entries, along with the participant's Entry Form, are due no later than May 30.
For participation rules and to obtain an Entry Form, visit the Board of Elections department page at tiogacountyny.gov.
Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.


Related Articles

Trump-backed bill could block Tennessee's AI laws, threaten deepfake protections

Yahoo · 2 hours ago

NASHVILLE, Tenn. (WKRN) — President Donald Trump's 'big, beautiful bill' could change the way Tennessee regulates artificial intelligence. New language added to the legislation would block states from regulating AI or risk losing federal broadband funding. That means state laws like the ELVIS Act, which protects artists from having their voices or images stolen by AI, would be put on pause.

'We are just hoping that we're not taking a couple steps backward since there's no federal regulation currently in place,' said Kaley Bonett, a Nashville entertainment lawyer at Hall Booth Smith.

'The songwriters — all — are entitled to revenue generated from the performances and distribution of that particular sound recording of their song, so if there's a deepfake imposing and competing with the authentic, then it's going to affect where the revenue share goes,' explained Karl Braun, managing partner and shareholder at Hall Booth Smith's Nashville office. 'It's going to be very difficult for, let's say, a songwriter to protect themselves legally from this evolving kind of internet Wild, wild west.'

'The big beautiful bill that the president is advancing does not replace those regulations with anything, which is almost unprecedented,' said David Kieley, a plaintiffs' attorney at Spragens Law.

Also at risk is the 'Preventing Deep Fakes Images Act,' which makes it a felony to post fake AI images designed to harm someone's reputation. Nashville-based meteorologist Bree Smith was targeted by AI imagery herself; she explained that someone used AI to take her face and put it on someone else's semi-naked body.

'[This bill] just means that we are saying for 10 years we are going to let people potentially be victimized in this way, and that can't happen,' Smith said. 'It's not going to be any more okay for something like that to happen 10 years from now than it is for it to happen today.'

'Even looking prospectively at things we haven't really encountered yet, this would stop the state from being able to regulate things like driverless cars in your town, driverless 18-wheelers on Tennessee highways,' Kieley added. 'We don't really know how this technology could affect public safety things like 9-1-1 dispatch.'

A law signed just last week by Governor Lee banning AI-generated child sexual abuse material could also be blocked.

'It would put child sex predators and big tech over the safety of our families,' Sen. Heidi Campbell (D-Nashville) said. 'This is really antithetical to the values that people across the aisle in Tennessee have been communicating with our legislation.'

The president's bill has passed the House of Representatives and is now in the Senate, where supporters hope to finalize it by July. Sen. Marsha Blackburn has previously said that Tennessee needs AI safeguards: 'Until we pass something that is federally preemptive, we can't call for a moratorium on those things.'

Copyright 2025 Nexstar Media, Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.

The New York Times lobbies up

Politico · 5 hours ago

With Katherine Long and Daniel Lippman

FIRST IN PI — NYT HIRES BROWNSTEIN: The New York Times has registered to lobby for the first time in more than a decade. The news giant has retained K Street heavyweight Brownstein Hyatt Farber Schreck to represent it on a range of issues affecting the industry, from encroachments on press freedoms to the threats posed by artificial intelligence.

— The Times' parent company first hired Brownstein last year, but the firm's work didn't trigger lobbying registration requirements until April, according to a spokesperson for the company and a draft disclosure filing shared with PI. More than half a dozen lobbyists are listed as working on the account for Brownstein: Republicans Marc Lampkin, Will Moschella and Greta Joynes and Democrats Al Mottur, Alice Lugo, Rob Robillard and John Menges.

— One issue drawing the paper off the lobbying sidelines is the increasing ubiquity of artificial intelligence and the thorny issues it poses for news organizations. The Times, for example, introduced a suite of AI tools for internal use earlier this year. And many news outlets (including the Times, POLITICO and parent company Axel Springer) have partnered with AI companies on licensing deals and consumer-facing products.

— But NYT is also one of several publishers facing off in court against ChatGPT creator OpenAI and other AI companies accused of copyright infringement for using outlets' content to train their models without permission or compensation.

— In a congressional hearing on the issue last year, Sen. Richard Blumenthal (D-Conn.) warned that AI was 'literally eating away at the lifeblood of our democracy' and contributing to an 'existential crisis' among local publishers in particular. The last Congress also mulled legislation that would require online platforms like Google and Meta to negotiate with and pay news publishers in exchange for hosting their content.

— At the same time, Times publisher A.G. Sulzberger has repeatedly sounded the alarm about threats to press freedom both around the world and from leaders in the U.S.

— In a speech last month that was republished as an essay in the Times, Sulzberger took note of President Donald Trump's escalating pressure campaign against news outlets in his second term — from curtailing access for certain mainstream outlets in favor of friendlier partisan ones to suing, seeking to defund or opening federal probes of others and calling for the jailing of reporters.

— 'It takes significant time, effort, and resources to produce The New York Times's fact-based original journalism and other content. Because of that, we have for years taken steps to protect and defend our intellectual property rights and uphold its value,' a Times spokesperson told PI in a statement.

— 'In line with these efforts, we have engaged a Washington, DC-based government affairs firm to ensure our rights and legal protections are clearly and accurately represented among policymakers and regulators focused on publishing, media, copyright law and press freedoms,' the spokesperson added.

— The Times hasn't lobbied at the federal level since 2014, when the company paid Keightley & Ashner $30,000 for three quarters of work related to a pension issue.

TGIF and welcome to PI. This newsletter runs on tips, so let's hear 'em. You can add me on Signal, email me at coprysko@ and be sure to follow me on X: @caitlinoprysko.

ALL ABOUT AVA: A coalition of budget airlines this week launched the Association of Value Airlines, a new industry group aimed at boosting low-budget carriers in policy debates often dominated by Airlines for America, the trade association representing major carriers. AVA's founding members include Allegiant, Avelo, Frontier, Spirit and Sun Country.

— Chris Brown currently serves as the group's interim executive director. He previously was vice president of legislative and regulatory affairs at Airlines for America and lobbied for low-fare airlines at the National Air Carrier Association.

— In an interview with PI, Brown said the goal of AVA is to amplify the priorities of budget airlines, since larger organizations encompassing a range of air carriers often end up juggling competing priorities. Those priorities, Brown said, include a focus on rising costs due to the pilot shortage and the implementation of simulator training programs within the current 1,500-hour training requirement for pilots.

— While budget airlines' business model thrived during the pandemic, they're now at an inflection point, Brown said. He attributes these airlines' success to ancillary fees, a model that other, non-budget airlines have started to mirror. Southwest Airlines, for example, announced in March plans to do away with its free checked bag policy.

— 'It's a critical time for our carriers, because of where they are financially and wanting to take advantage of what should be a more favorable regulatory environment with the Trump administration,' Brown said.

PAUL WEISS LOSES ANOTHER BIG NAME: Another prominent lawyer has left Paul Weiss Rifkind Wharton & Garrison after the law firm struck a deal with the Trump administration to avoid being punished by an executive order.

— Damian Williams, the former U.S. attorney for the Southern District of New York, left the firm after just five months to join Jenner & Block as co-chair of its litigation department and its investigations, compliance and defense practice.

— Williams served as one of the nation's top federal prosecutors during the Biden administration, securing several high-profile convictions, including those of former crypto kingmaker Sam Bankman-Fried and former Sen. Bob Menendez (D-N.J.).

— Williams is the latest in a string of notable departures from Paul Weiss in the aftermath of its deal with the White House. Top Democratic attorney Karen Dunn and former top DOJ official Jeannie Rhee left the firm along with two other partners last month.

— But Williams' landing spot is almost more notable: Jenner & Block was one of the first white-shoe law firms targeted by Trump to take the president to court over his executive orders — and so far, it's been winning.

OFF TARGET: 'A top Democratic organization strongly encouraged state campaigns to do much of their digital ad-buying business with a company that one of its members is set to soon join as CEO — a development that has puzzled and concerned some party insiders,' POLITICO's Holly Otterbein and Daniel report.

— 'At a meeting in Little Rock, Arkansas last week, the Association of State Democratic Committees — an umbrella group for state parties — voted to recommend state races use one liberal firm, TargetSmart, for a major portion of digital ad buys, which could be worth millions.'

— 'TargetSmart announced on May 7 that Liz Walters, outgoing chair of the Ohio Democratic Party, is taking over as CEO this summer. Walters, who made her departure public in a post on X, said she would leave the state party role by June 30. And until the week before the group's meeting, she was part of the ASDC's leadership team as treasurer.'

— Though Walters recused herself from the TargetSmart vote, the incident is prompting fears within the party of even the appearance of a conflict of interest as Democrats struggle to find their footing.

BLAST RADIUS: 'The spectacular breakup between Elon Musk and President Donald Trump threatens to leave Tesla with few political friends,' POLITICO's David Ferris reports. 'Musk has spent the past few months alienating the electric automaker's base of climate-minded car buyers by moonlighting as Trump's government-slasher-in-chief. Now, the billionaire's fixation on the GOP megabill has opened a dangerous rift with the president, who threatened Thursday to end all subsidies to Musk's companies.'

— Musk's competitors in the space industry, meanwhile, are already moving to take advantage of the split, according to our Sam Skove. The SpaceX founder has long been an avid proponent of bringing humans to Mars next, rather than the moon.

— But in some of the first significant pushback against Musk and SpaceX, 'a number of major space companies … are launching an ad campaign going big on the moon, according to two industry officials granted anonymity to discuss the effort.'

— 'A separate letter addressed to the Senate Commerce Committee, and obtained by POLITICO, backs investments in the moon, and is signed by a lengthy slate of prominent space companies — but not SpaceX.'

RELATED READ: 'What do Musk and Tesla want from the Republican megabill?' by E&E News' Kelsey Brugger.

SHE MEANS BUSINESS: Casey Means, Trump's new nominee to be U.S. surgeon general, 'has repeatedly said the nation's medical, health and food systems are corrupted by special interests and people out to make a profit at the expense of Americans' health,' The Associated Press' Michelle R. Smith and Ali Swenson write.

— But as Means 'has criticized scientists, medical schools and regulators for taking money from the food and pharmaceutical industries, she has promoted dozens of health and wellness products — including specialty basil seed supplements, a blood testing service and a prepared meal delivery service — in ways that put money in her own pocket.'

— 'In her newsletter, on her social media accounts, on her website, in her book and during podcast appearances, the entrepreneur and influencer has at times failed to disclose that she could profit or benefit in other ways from sales of products she recommends. In some cases, she promoted companies in which she was an investor or adviser without consistently disclosing the connection, the AP found.'

AI GROUP STAFFS UP: AI advocacy group Americans for Responsible Innovation is adding four new staffers to its policy team, Morning Tech reports. Brandie Nonnecke, previously an associate research professor at the UC Berkeley Goldman School of Public Policy, is now a senior policy director, while Rachel Hovde is joining as director of policy. She previously served as a senior advisor at the Bureau of Industry and Security.

— Sarah Kessel, who worked under former Senate Energy and Natural Resources Chair Joe Manchin (D-W.Va.), is now ARI's manager of government affairs, and Evan Sarnor has joined as a legislative analyst from Akin Gump Strauss Hauer & Feld.

CORRECTION: Thursday's edition of this newsletter gave the incorrect state for former Gov. Jim Gilmore and misidentified the division at Seven Letter that has promoted three staffers. PI regrets the error.

Jobs report

— Chip Kahn will step down as president and CEO of the Federation of American Hospitals at the end of 2025, after 24 years with the group.

— Shelly O'Neill Stoneman has launched StonePoint Strategies, a strategic consulting firm. She previously was senior vice president of government affairs at Lockheed Martin.

— Chelsea Blink is now legislative director for Rep. Lauren Underwood (D-Ill.). She previously was director of farm animal legislation at the ASPCA.

— Gustavo Torres is retiring as executive director of CASA, after more than three decades in the role.

— Jerzy Piatkowski is now counsel at Fenwick. He most recently was vice president of contracts and associate general counsel at General Dynamics Mission Systems.

— Kevin Orellana will be a legislative assistant for Rep. Vince Fong (R-Calif.), handling his financial services portfolio. He previously was a legislative aide for Rep. Young Kim (R-Calif.).

New Joint Fundraisers

None.

New PACs

Earnin PAC Activehours Inc. Fund (Earnin PAC) (PAC)
Our Virginia PAC (Super PAC)
RIGHT TECH PAC (Hybrid PAC)
Taproot Collective Fund (PAC)
Vets Against Trump PAC (Super PAC)

New Lobbying Registrations

Akin Gump Strauss Hauer & Feld: Bkv Corporation
Arrow Group Consulting, LLC: Pelican Reef Of St. Augustine LLC
Becker & Poliakoff, P.A.: Chicanos Por La Causa
Becker & Poliakoff, P.A.: City Of St. Pete Beach, Fl
Becker & Poliakoff, P.A.: National American Indian Housing Coalition
Becker & Poliakoff, P.A.: Significance, Inc.
Becker & Poliakoff, P.A.: Tampa Port Authority
Becker & Poliakoff, P.A.: Treatment Alternatives For Safe Communities, Inc.
Bgr Government Affairs: Ameresco, Inc.
Bgr Government Affairs: Exelon Business Services Company LLC
Bgr Government Affairs: Galaxy Digital Holdings Lp
Bgr Government Affairs: Millennium Management LLC
Bgr Government Affairs: National Rural Electric Cooperative Association
Bgr Government Affairs: Phantom
Bgr Government Affairs: St Energy
Bgr Government Affairs: Welbehealth
Boundary Stone Partners: Carbonbuilt
Buchanan Ingersoll & Rooney Pc: Bright Path Labs
Capital City Consulting, LLC.: Miller Strategies, LLC On Behalf Of Merchants For America, Inc.
Continental Strategy, LLC: Stubhub Inc.
Greenberg Traurig, LLP: Archkey Solutions
Hb Strategies: Viamericas
Mercury Public Affairs, LLC: Estar Partners
Mindset Advocacy, LLC: Mark Foods LLC
Red Carr LLC: Team Hallahan (On Behalf Of City Of Cleveland)
Riley Executive Government Solutions: Okeefe Industries
Squire Patton Boggs: Bitcoin Policy Institute
The Jackson Group, LLC: 47G
The Jackson Group, LLC: Pelion Venture Partners
The Nuclear Company: The Nuclear Company
Todd Strategy Group: Glaxosmithkline LLC
Williams And Jensen, Pllc: Dupont Law Firm

New Lobbying Terminations

Bgr Government Affairs: Grail, LLC
Bgr Government Affairs: Sandvine
Kiley Capitol Solutions, LLC: Deckard Technologies
Kiley Capitol Solutions, LLC: Leading Builders Of America
Kiley Capitol Solutions, LLC: Lowe Syndrome Association
Walker Strategies: Alliance Of Health Care Sharing Ministries

AI Safety: Beyond AI Hype To Hybrid Intelligence

Forbes · 8 hours ago

Autonomous electric cars with artificial intelligence self-driving on a metropolis road (3D rendering).

The artificial intelligence revolution has reached a critical inflection point. While CEOs rush to deploy AI agents and boast about automation gains, a sobering reality check is emerging from boardrooms worldwide: ChatGPT-4o hallucinates on 61% of questions in SimpleQA, a factuality benchmark developed by OpenAI, and even the most advanced AI systems fail basic reliability tests with alarming frequency. In a recent op-ed, Anthropic CEO Dario Amodei called for regulating AI, arguing that voluntary safety measures are insufficient. Meanwhile, companies like Klarna — once poster children for AI-first customer service — are quietly reversing course on their AI-agent-only approach and rehiring human representatives. These aren't isolated incidents; they're the tip of the iceberg, signaling a fundamental misalignment between AI hype and AI reality.

Today's AI safety landscape resembles a high-stakes experiment conducted without a safety net. Three competing governance models have emerged: the EU's risk-based regulatory approach, the US's innovation-first decentralized framework, and China's state-led centralized model. Yet none adequately addresses the core challenge facing business leaders: how to harness AI's transformative potential while managing its probabilistic unpredictability.

The stakes couldn't be higher. Four out of five finance chiefs consider AI "mission-critical," while 71% of technology leaders don't trust their organizations to manage future AI risks effectively. This paradox — simultaneous dependence and distrust — creates a dangerous cognitive dissonance in corporate decision-making.

AI hallucinations remain a persistent and worsening challenge in 2025: artificial intelligence systems confidently generate false or misleading information that appears credible but lacks factual basis. Recent data reveals the scale of the problem. In just the first quarter of 2025, close to 13,000 AI-generated articles were removed from online platforms due to hallucinated content, while OpenAI's latest reasoning systems show hallucination rates reaching 33% for the o3 model and a staggering 48% for o4-mini when answering questions about public figures. The legal sector has been particularly affected, with more than 30 instances documented in May 2025 of lawyers submitting evidence that featured AI hallucinations. These fabrications span domains, from journalism — where ChatGPT falsely attributed 76% of quotes from popular journalism sites — to healthcare, where AI models might misdiagnose medical conditions. The phenomenon has become so problematic that 39% of AI-powered customer service bots were pulled back or reworked due to hallucination-related errors, highlighting the urgent need for better verification systems and user awareness when interacting with AI-generated content.

The future requires a more nuanced and holistic approach than the traditional either-or perspective. Forward-thinking organizations are abandoning the binary choice between human-only and AI-only approaches. Instead, they're embracing hybrid intelligence — deliberately designed human-machine collaboration that leverages each party's strengths while compensating for their respective weaknesses.

Mixus, which went public in June 2025, exemplifies this shift. Rather than replacing humans with autonomous agents, its platform creates "colleague-in-the-loop" systems in which AI handles routine processing while humans provide verification at critical decision points (a sketch of this pattern appears at the end of this article). This approach acknowledges a fundamental truth that the autonomous-AI evangelists ignore: AI without natural intelligence is like building a Porsche and giving it to people without a driver's license.

The autonomous vehicle industry learned this lesson the hard way. After years of promising fully self-driving cars, manufacturers now integrate human oversight into every system. The most successful deployments combine AI's computational power with human judgment, creating resilient systems that gracefully handle edge cases and unexpected scenarios.

LawZero is another initiative in this direction; it seeks to promote "scientist AI" as a safer, more secure alternative to many of the commercial AI systems being developed and released today. Scientist AI is non-agentic, meaning it doesn't have agency or work autonomously, but instead behaves in response to human input and goals. The underpinning belief is that AI should be cultivated as a global public good — developed and used safely toward human flourishing. It should be prosocial.

While media attention focuses on AI hallucinations, business leaders face more immediate threats. Agency decay — the gradual erosion of human decision-making capabilities — poses a systemic risk as employees become overly dependent on AI recommendations. Mass-persuasion capabilities enable sophisticated social engineering attacks. Market concentration in AI infrastructure creates single points of failure that could cripple entire industries.

47% of business leaders cite people using AI without proper oversight as one of their biggest fears in deploying AI in their organizations. This fear is well founded: organizations implementing AI without proper governance frameworks risk not just operational failures but legal liability, regulatory scrutiny, and reputational damage.

Double literacy — investing in both human literacy (a holistic understanding of self and society) and algorithmic literacy — emerges as our most practical defense against AI-related risks. While waiting for coherent regulatory frameworks, organizations must build internal capabilities that enable safe AI deployment. Human literacy encompasses emotional intelligence, critical thinking, and ethical reasoning — uniquely human capabilities that become more valuable, not less, in an AI-augmented world. Algorithmic literacy involves understanding how AI systems work, their limitations, and appropriate use cases. Together, these competencies create the foundation for responsible AI adoption.

In healthcare, hybrid systems have begun to revolutionize patient care by letting practitioners spend more time on direct patient care while AI handles routine tasks, improving outcomes and reducing burnout. Some business leaders are also embracing the hybrid paradigm, with companies that incorporate AI agents as coworkers gaining competitive advantages in productivity, innovation, and cost efficiency.

Practical Implementation: The A-Frame Approach

If you are a business reader and leader, you can start building AI safety capabilities in-house today using the A-Frame methodology — four interconnected practices that create accountability without stifling innovation:

Awareness requires mapping both AI capabilities and failure modes across technical, social, and legal dimensions. You cannot manage what you don't understand. This means conducting thorough risk assessments, stress-testing systems before deployment, and maintaining current knowledge of AI limitations.

Appreciation involves recognizing that AI accountability operates across multiple levels simultaneously. Individual users, organizational policies, regulatory requirements, and global standards all influence outcomes. Effective AI governance requires coordinated action across all these levels, not isolated interventions.

Acceptance means acknowledging that zero-failure AI systems are mythical. Instead of pursuing impossible perfection, organizations should design for resilience — systems that degrade gracefully under stress and recover quickly from failures. This includes maintaining human oversight capabilities, establishing clear escalation procedures, and planning for AI system downtime.

Accountability demands clear ownership structures defined before deployment, not after failure. This means assigning specific individuals responsibility for AI outcomes, establishing measurable performance indicators, and creating transparent decision-making processes that can withstand regulatory scrutiny.

The AI safety challenge isn't primarily technical — it's organizational and cultural. Companies that successfully navigate this transition will combine ambitious AI adoption with disciplined safety practices. They'll invest in double-literacy programs, design hybrid intelligence systems, and implement the A-Frame methodology as standard practice.

The alternative — rushing headlong into AI deployment without adequate safeguards — risks not just individual corporate failure but systemic damage to AI's long-term potential. As the autonomous vehicle industry learned, premature promises of full automation can trigger public backlash that delays beneficial innovation by years or decades.

Business leaders face a choice: they can wait for regulators to impose AI safety requirements from above, or they can proactively build safety capabilities that become competitive advantages. Organizations that choose the latter — investing in hybrid intelligence and double literacy today — will be best positioned to thrive in an AI-integrated future while avoiding the pitfalls that inevitably accompany revolutionary technology transitions.

The future belongs not to companies that achieve perfect AI automation, but to those that master the art of human-AI collaboration. In a world of probabilistic machines, our most valuable asset remains deterministic human judgment — enhanced, not replaced, by artificial intelligence.
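To make the "colleague-in-the-loop" pattern described above concrete, here is a minimal sketch of one way such a verification gate might be wired. It is illustrative only, not Mixus's actual platform: the generate_draft model call, the confidence score, and the review queue are hypothetical stand-ins for whatever model and escalation path an organization already uses.

from dataclasses import dataclass

# Hypothetical threshold: below this confidence, a human must verify.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Draft:
    text: str
    confidence: float  # estimated reliability of the AI output, 0.0-1.0

def generate_draft(task: str) -> Draft:
    """Stand-in for an AI model call that returns a draft plus a confidence estimate."""
    raise NotImplementedError("connect your model of choice here")

def request_human_review(task: str, draft: Draft) -> str:
    """Stand-in for an escalation path: queue the draft for a human colleague."""
    raise NotImplementedError("connect your review workflow here")

def colleague_in_the_loop(task: str, is_critical: bool) -> str:
    """AI handles routine processing; humans verify at critical decision points."""
    draft = generate_draft(task)
    if is_critical or draft.confidence < CONFIDENCE_THRESHOLD:
        # Critical or low-confidence output never ships without human sign-off.
        return request_human_review(task, draft)
    return draft.text

The design choice mirrors the article's argument: routine, high-confidence work flows straight through, while anything critical or uncertain degrades gracefully to human judgment rather than failing silently.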
