
Gorsuch warns Supreme Court decision gives IRS 'powerful new tool to avoid accountability'
Justice Neil Gorsuch dissented from the Supreme Court's decision limiting the U.S. Tax Court's authority in certain Internal Revenue Service (IRS) cases, asserting that the ruling could let the federal tax-collecting agency avoid accountability in the future.
The dissent came in the high court's opinion in Commissioner of Internal Revenue v. Zuch, a case centering on Jennifer Zuch's dispute with the IRS, which began in 2012 over the agency's handling of her late-filed 2010 federal tax return.
"Along the way, the Court's decision hands the IRS a powerful new tool to avoid accountability for its mistakes in future cases like this one," Gorsuch wrote in his dissent.
In this case, Zuch claimed the IRS made a mistake by crediting a $50,000 payment to her then-husband's account instead of her own. The IRS disagreed and sought to collect what it said were her unpaid taxes through a levy to seize and sell her property.
In the years after the dispute began, Zuch filed several annual tax returns showing overpayments. Rather than issue refunds, the IRS applied those overpayments to her outstanding 2010 tax liability.
Once those offsets brought her liability to zero, the IRS no longer had a reason to levy her property.
The IRS then moved to dismiss Zuch's case in Tax Court, arguing that the Tax Court lacked jurisdiction because there was no longer a proposed levy on her property. The Tax Court agreed.
The Supreme Court agreed, holding that the Tax Court had no jurisdiction once the levy was off the table.
"Because there was no longer a proposed levy, the Tax Court properly concluded that it lacked jurisdiction to resolve questions about Zuch's disputed tax liability," read the high court's opinion.
The decision not only prevents Zuch from recouping overpayments she believes the IRS has wrongly retained, but also gives the agency a way to avoid accountability, Gorsuch wrote in his dissent.
"The IRS seeks, and the Court endorses, a view of the law that gives that agency a roadmap for evading Tax Court review and never having to answer a taxpayer's complaint that it has made a mistake," the justice wrote.

Related Articles


Fast Company
Deepfake interviews: Navigating the growing AI threat in recruitment and organizational security
The breakneck pace of artificial intelligence (AI) development has fundamentally reshaped how businesses manage recruitment, communication, and information dissemination. Among these developments, deepfake technology has emerged as a significant threat, particularly through its use in fraudulent interviews. Deepfake interviews leverage advanced AI techniques, predominantly Generative Adversarial Networks (GANs), to generate hyper-realistic but entirely fabricated audio, video, or imagery. These synthetic media forms convincingly manipulate appearances, voices, and actions, making it exceedingly difficult for average users, and even experts, to discern authenticity.

IMPLICATIONS AND MOTIVATIONS FOR DEEPFAKE USE

The motivations behind deploying deepfake technology for scams and fraud are varied but consistently damaging. Criminals use deepfakes primarily for financial gain, identity theft, psychological manipulation, and disinformation. For instance, deepfakes can facilitate vishing (voice phishing), whereby scammers convincingly mimic a trusted individual's voice, deceiving victims into transferring funds or revealing sensitive information. These AI-generated falsifications also enable sophisticated blackmail, extortion, and reputation sabotage through the dissemination of maliciously altered content.

Deepfakes further disrupt corporate trust and operational integrity. Financial crimes involving deepfakes include unauthorized transactions orchestrated by impersonating company executives. A notable case occurred in Hong Kong, where cybercriminals successfully impersonated executives, causing multi-million-dollar losses and severe reputational harm. Beyond immediate financial damage, deepfake attacks can erode consumer trust, destabilize markets, and inflict lasting damage to brand reputation.

Malicious actors also exploit deepfake technology politically, disseminating misinformation designed to destabilize governments, provoke conflicts, and disrupt public order. Particularly during elections or other significant political events, deepfakes have the potential to manipulate public opinion and challenge the authenticity of democratic processes.

TECHNOLOGICAL MECHANISMS AND ACCESSIBILITY

The core technological mechanism behind deepfake interviews is the GAN, in which AI systems are trained to produce realistic synthetic media by learning from authentic audio and video datasets (a minimal code sketch of this adversarial setup follows this article). The recent democratization of this technology means anyone can produce deepfakes cheaply or freely using readily accessible online tools, exacerbating risks. The emergence of 'deepfake-as-a-service' models on dark web platforms further compounds these concerns, enabling sophisticated attacks without extensive technical expertise.

In recruiting scenarios, deepfake candidates use synthetic identities, falsified resumes, fabricated references, and convincingly altered real-time video interviews to infiltrate organizations. These fraudulent candidates pose acute threats, particularly within industries that rely heavily on remote hiring, such as IT, finance, healthcare, and cybersecurity. According to Gartner predictions, one in four job candidates globally will be fake by 2028, highlighting the scale and urgency of the issue.

ORGANIZATIONAL RISKS AND CONSEQUENCES

Organizations face numerous operational and strategic threats from deepfake attacks. Financially, companies victimized by deepfake fraud experience significant losses, averaging $450,000 per incident. Deepfake infiltration can also lead to data breaches, loss of intellectual property, and compromised cybersecurity infrastructure, all of which carry significant financial and regulatory repercussions.

Deepfake-driven scams also enable broader social engineering attacks. For instance, remote IT workers fraudulently hired through deepfakes have conducted espionage, extracting sensitive data or installing malware within corporate networks. Often linked to state-sponsored groups, such incidents underscore the geopolitical dimension of the threat.

PROACTIVE STRATEGIES FOR MITIGATION AND DEFENSE

Given the complexity and severity of deepfake threats, organizations must adopt comprehensive mitigation strategies. Technological solutions include deploying AI-powered detection tools designed explicitly for deepfake identification. Platforms such as GetReal Security (no relationship) offer integrated solutions providing proactive detection, advanced forensic analysis, and real-time authentication of digital content. Combining AI-driven tools with manual forensic analysis has proven particularly effective, as human expertise can spot contextual inconsistencies that AI alone might miss.

Businesses should also strengthen cybersecurity awareness and employee training programs. Regular training on recognizing visual, audio, and behavioral anomalies in deepfake content is crucial. Organizations can adopt robust authentication measures such as multi-factor authentication (MFA), biometric verification, and blockchain-based methods for verifying digital authenticity, although scalability remains a challenge. Continuous investment in adaptive threat intelligence platforms enables rapid responses to emerging threats. Adopting scalable deepfake detection technologies, integrated seamlessly into recruitment workflows and organizational infrastructure, is now a necessity. My team has encountered a few deepfake interviews ourselves, through contractors. Since then, we've required deeper vendor due diligence, adopted vendor technology to mitigate the risk, and trained recruiters to detect red flags.

COLLABORATIVE AND REGULATORY ACTIONS

Addressing deepfake threats effectively requires robust collaboration across tech companies, government agencies, and industry bodies. Regulatory frameworks, such as the European Union's AI Act and various U.S. federal and state initiatives, represent important steps toward transparency, accountability, and comprehensive protection against malicious AI misuse. Nevertheless, current regulations remain fragmented and incomplete, underscoring the urgent need for standardized legislation tailored to the risks posed by deepfakes.

Deepfake technology presents profound ethical, societal, and cybersecurity challenges. The increasing prevalence and sophistication of AI-driven fraud in recruitment and beyond require proactive, multi-layered defenses. Organizations must strengthen technical controls, raise employee awareness, and advocate for robust regulatory frameworks. By taking informed, collaborative, and proactive approaches, businesses can significantly mitigate the risks associated with deepfake technology while leveraging its beneficial applications responsibly.
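To make the GAN mechanism referenced above a little more concrete, here is a minimal, hypothetical sketch in PyTorch: a tiny generator and discriminator trained adversarially on toy two-dimensional data. It illustrates the idea only; real deepfake systems train far larger networks on authentic audio and video datasets, and every name and dimension below is illustrative rather than drawn from the article.

```python
# Minimal GAN sketch (PyTorch): a generator learns to produce samples the
# discriminator cannot tell apart from "real" data. Toy 2-D points stand in
# for the audio/video frames an actual deepfake pipeline would use.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.LeakyReLU(0.2),
    nn.Linear(64, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def real_batch(n=64):
    # Stand-in for a batch of genuine media features.
    return torch.randn(n, data_dim) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # Discriminator step: real samples labeled 1, generated fakes labeled 0.
    real = real_batch()
    fake = generator(torch.randn(real.size(0), latent_dim)).detach()
    d_loss = bce(discriminator(real), torch.ones(real.size(0), 1)) \
           + bce(discriminator(fake), torch.zeros(fake.size(0), 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator score fakes as real.
    fake = generator(torch.randn(64, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Detection tools of the kind the article mentions generally attack the other side of this contest, looking for residual artifacts the generator leaves behind, which is one reason the article recommends pairing automated detectors with human forensic review.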


Bloomberg
New Home Construction Falls to Five-Year Low
New US residential construction declined in May to the slowest pace since the onset of the pandemic, as an elevated inventory of homes for sale and high mortgage rates sapped the motivation to build. Housing starts decreased 9.8% to an annualized rate of 1.26 million homes last month, according to government figures released Wednesday. The median projection in a Bloomberg survey of economists was for a 1.35 million pace.


New York Post
Supreme Court Justice Ketanji Brown Jackson reports $2M payment for her memoir 'Lovely One' in 2024
Supreme Court Justice Ketanji Brown Jackson reported receiving over $2 million from Penguin Random House, which published her book, 'Lovely One: A Memoir,' in 2024. Jackson's financial disclosure report indicated that Penguin Random House paid her a $2,068,750 book advance in 2024. The company also provided reimbursements for transportation, food, and lodging to promote her book at events across the country.

'Lovely One,' whose title references the meaning of her West African birth name, was published in early September and is described by Amazon as 'tracing her family's ascent from segregation to her confirmation on America's highest court within the span of one generation.' Her book tour spanned the country, with stops in major cities including San Francisco, Seattle, Chicago, Miami and Atlanta.

This is not the first time Penguin Random House has sent her a large payment: a similar disclosure report revealed the company paid her an $893,750 book advance in 2023, bringing the two-year total to almost $3 million.

The Supreme Court recently adopted a formal ethics code covering free travel and other gifts. However, there is currently no cap on how much justices may earn from book deals.

'Last month, Barrett, Jackson, Gorsuch and Sotomayor recused themselves from a decision over whether to hear a case involving the parent company of the book publisher Penguin Random House,' the Washington Post reported. 'The justices did not explain their reasoning for sitting out the discussion, but an ethics expert said it was probably because the case involved the German company Bertelsmann, which owns the publishing house that has published or will be publishing their books.'