Instagram enshittification.

The Verge · 3 days ago
Posted Jul 22, 2025 at 11:20 AM UTC
Reportedly in testing since June, these unskippable 'Ad break' ads have now been served to me three times over the last two days. It's jarring, and it has accelerated my desire to quit a platform that's increasingly less fun and flooded with AI slop.

Related Articles

Massachusetts Bill Expands AI Rules For Hiring And Background Checks

Forbes · 29 minutes ago

Massachusetts is joining the growing list of states looking to regulate artificial intelligence in employment, but its latest proposal could reach further than most. State lawmakers have introduced Senate Docket 3007 (SD 3007), a bill that would impose new obligations on organizations using artificial intelligence or other automated tools to support decision-making in employment, housing, credit, education, and other critical areas. Titled An Act to Prevent Unlawful Algorithmic Discrimination, the bill aims to prevent automated systems from producing discriminatory outcomes, whether intentional or not.

If enacted, the bill would add Chapter 151G to the state's General Laws and apply to any private or public organization operating within Massachusetts. The legislation would regulate the use of automated decision systems (ADS), broadly defined to include tools that use artificial intelligence or statistical models to make or inform decisions, recommendations, classifications, or categorizations. The bill's stated goal is to ensure that such systems do not result in discrimination against individuals based on race, gender, disability, or other protected characteristics, including combinations of characteristics. It does so by imposing a range of new audit, transparency, and accountability requirements on covered entities that use ADS in connection with 'fundamental opportunities,' a term that includes employment decisions such as hiring, pay, and termination.

Key Requirements for Employers Using Automated Systems

Under SD 3007, employers that use artificial intelligence or automated tools in employment-related decision-making would be subject to a detailed compliance framework. One of the central obligations is a recurring audit requirement. Covered entities would need to evaluate their automated systems at least every 90 days, with additional reviews required within 30 days of any substantial modification. These audits must assess whether the system results in discriminatory outcomes and, if so, whether appropriate steps have been taken to mitigate those effects.

In addition to audit requirements, the bill would require employers to maintain thorough documentation about how each system was designed, the data used to train and evaluate it, and the outcomes of any completed audits. Employers would also be expected to publicly disclose summary information about the automated tools in use, including the sources of training data and the measures taken to manage known or foreseeable risks.

Another key provision centers on transparency to candidates. Employers would be required to notify individuals when an automated decision system is being used in a way that may influence hiring or employment terms. Importantly, the bill grants candidates the right to opt out of automation in favor of a human review process. That opt-out mechanism must be no more burdensome than the automated pathway and may not result in any form of penalty.

These requirements apply to any automated decision system used to determine eligibility for employment or to inform terms and conditions of employment, such as pay, promotion, or termination.
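SD 3007 does not prescribe how these recurring discrimination audits must be performed. As a purely hypothetical sketch of what one such outcome check could look like, the Python snippet below applies the EEOC's traditional four-fifths (80 percent) rule to a toy set of selection decisions; the choice of rule, the function names, and the sample data are illustrative assumptions, not requirements drawn from the bill.

from collections import Counter

def selection_rates(decisions):
    # decisions: iterable of (group, selected) pairs, e.g. ("A", True)
    totals, hits = Counter(), Counter()
    for group, selected in decisions:
        totals[group] += 1
        if selected:
            hits[group] += 1
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_audit(decisions, threshold=0.8):
    # Flag any group whose selection rate falls below `threshold` times
    # the most-favored group's rate (the EEOC four-fifths heuristic).
    rates = selection_rates(decisions)
    top = max(rates.values())
    return {g: r / top < threshold for g, r in rates.items()}

# Toy data: group A selected 2 of 3 times, group B 1 of 4 times.
outcomes = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False), ("B", False)]
print(four_fifths_audit(outcomes))  # {'A': False, 'B': True} -> group B flagged

A real audit under the bill would presumably cover every protected characteristic, and combinations of characteristics, rather than the single grouping shown here.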
Liability Provisions and Shared Responsibility

SD 3007 introduces a strict liability framework for employers and service providers. Under the bill, a covered entity may be held liable for discriminatory outcomes regardless of whether the discrimination was intentional, whether the employer was aware of the issue, or whether the system complied with federal laws such as Title VII of the Civil Rights Act or the Fair Credit Reporting Act (FCRA). The bill also includes a provision for joint and several liability, meaning that employers and any service providers involved in the development or deployment of the automated system could be held jointly responsible for violations. This would apply even in cases where service providers had no direct control over how the tool was implemented in practice.

Considerations for Background Screening Providers

For the background screening industry, SD 3007 raises important considerations. Consumer reporting agencies (CRAs) provide factual, legally reportable information, such as criminal history, employment, or education verification, that employers may use as part of a broader decision-making process. These reports are not designed to render hiring decisions or recommendations but to present objective data that informs employer evaluations.

Many CRAs have implemented automation to enhance process efficiency, improve consistency, and support compliance with disclosure and authorization requirements. These automated processes may include electronic data retrieval, standardization of report formatting, and system-driven notifications. While these tools streamline operations, they do not evaluate candidate suitability or drive employment outcomes.

However, SD 3007 defines an 'automated decision system' as any system that makes or informs eligibility determinations. Depending on interpretation, this may encompass tools used in background screening, even if those tools do not directly influence hiring decisions. The bill does not currently distinguish between process automation, which facilitates the delivery of factual information, and decision automation, which actively determines outcomes.

Under the proposed framework, a CRA could be required to conduct regular audits, disclose elements of its data sourcing and processing methods, and potentially provide human review options for systems that do not currently support decision-making functionality. Additionally, the inclusion of joint and several liability may extend legal responsibility to CRAs for how employers use screening reports, regardless of the CRA's level of involvement in the hiring process. Employers and vendors may wish to evaluate whether existing service agreements, product designs, and workflows would need to be adjusted to meet the bill's requirements if it is enacted.

Operational Impacts for Employers

Employers operating in Massachusetts, or those hiring Massachusetts-based candidates, would face a heightened compliance burden under SD 3007. In addition to regular audit and documentation requirements, employers would need to establish opt-out pathways, manage public disclosures, and coordinate with vendors to ensure compliance across multiple systems. Because the bill covers not only tools that make decisions but also those that inform them, employers may need to assess whether common technologies, such as applicant tracking systems with AI-driven filtering, fraud detection software, or automated adjudication tools, fall within the bill's scope.
Organizations that operate in multiple jurisdictions may also face challenges developing consistent compliance strategies, especially if other states adopt differing standards for AI use in employment.

Looking Ahead

Massachusetts SD 3007 reflects growing legislative interest in the regulation of artificial intelligence in employment. While its core objective, preventing discriminatory outcomes in automated decision-making, is consistent with emerging national and international AI governance frameworks, the bill introduces compliance obligations and liability standards that may significantly affect how employers and service providers use technology to support hiring. Employers and background screening providers should closely monitor this legislation and begin evaluating whether current practices would align with the bill's requirements. In particular, companies may benefit from clarifying the distinction between tools that support hiring through automation and those that make or influence employment decisions directly. As artificial intelligence becomes more deeply embedded in employment practices, clear policy guidance, particularly around accountability and role-specific obligations, will remain essential to balancing fairness, innovation, and operational integrity.

Measuring AI's Impact And Value: 20 Essential Factors To Consider

Forbes · 34 minutes ago

As AI systems become more embedded in core business functions, traditional metrics like precision and recall capture only part of the picture. Measuring ROI now requires a holistic lens, one that accounts for AI's impact on workflows, decision-making speed and long-term adaptability. Whether a business is assessing its internal AI tools or the AI-powered features included in its products, relying solely on technical benchmarks can result in missing or misinterpreting the broader value, or potential risk, that AI systems introduce. Below, members of Forbes Technology Council highlight key factors worth considering when assessing AI success and ROI, explaining why each one offers a more complete view of performance.

1. Hours Reclaimed
A practical metric I use to measure AI's ROI is hours reclaimed. I recently rebuilt our GTM messaging across three segments: what previously took 20 hours to do manually, I completed in two, and then in 45 minutes using AI. That time saved is measurable, repeatable and directly tied to productivity gains, reduced errors and faster execution across teams. - Farrukh Mahboob, PackageX

2. Decision Latency Reduction
Decision latency reduction is a powerful AI success metric. It measures how quickly AI enables smart, confident decisions, compressing the time between insight and action. Unlike cost savings, this reflects real strategic agility. When decisions speed up, it shows AI is truly embedded in how the business moves. - Jason Missildine, Intentional Intensity

3. CO2 Usage
A metric recently brought into the measurement equation is CO2 usage. Along with tracking more traditional efficiency metrics that showcase faster or cheaper results thanks to an AI system, calculating how much energy it uses provides an offset figure that can be incorporated into evaluations and influence longer-term strategy. - Mark Thirlwell, BSI Group

4. Ethical Outcomes
One powerful metric is how well AI systems translate human values into safe, bias-free outcomes that benefit society and stakeholders. More than delivering correct answers, AI systems need to model responsible behaviors, which in turn leads to growth, innovation and a better customer experience. - Vishal Talwar, Wipro Ltd.

5. Contextual Adaptation Quotient
Contextual adaptation quotient is a powerful new metric that measures how well AI systems sustain performance across varying domains, users or conditions without retraining. Unlike static accuracy scores, CAQ captures real-world adaptability, highlighting robustness, transferability and long-term ROI in dynamic environments. - Nikhil Jain, SmartThings, Inc.

6. 'Trust Delta'
One insightful metric is the 'trust delta,' or how much more (or less) people trust your system after you add AI. You can measure this through user feedback and behavior changes. The smartest AI is useless if people won't use it. If your AI makes people second-guess themselves or feel uneasy, it's actually slowing them down. The trust delta shows whether you're building something people want to work with or work around. - Kehinde Fawumi, Amazon

7. Time To Confidence
A genuinely insightful ROI metric for AI systems is time to confidence: how quickly a user reaches a decision they trust. In high-stakes fields like investing, speed alone isn't enough; decisions must also be defensible. - Mike Conover, Brightwave

8. Innovation Rate
In my view, the innovation rate metric stands out above all. This tracks the number of new products, services or process improvements directly enabled by AI-driven insights. While ROI focuses on optimizing the present, this metric reveals how effectively AI is building a company's future. A high innovation rate proves AI is not just a cost center, but a strategic engine for growth and market leadership. - Mohan Mannava, Texas Health

9. Autonomy-To-Intervention Ratio
A cutting-edge metric is the autonomy-to-intervention ratio, which tracks how long an AI system can operate before needing human correction. It moves beyond traditional KPIs like precision to reveal trust, scalability and operational ROI in real terms. A high AIR means AI isn't just working; it's learning, adapting and truly offloading cognitive burden. - Nicola Sfondrini, PWC

10. Time To Insight Reduction
One emerging and insightful metric is time to insight reduction, which is how much more quickly actionable intelligence is derived from data. It reflects the AI system's real-world impact on decision velocity, efficiency and responsiveness, making it a powerful indicator of true ROI beyond cost savings or accuracy alone. - Hrushikesh Deshmukh, Fannie Mae

11. Decision Outcome Improvement
The true measure of AI isn't just technical performance, but its real-world impact. Decision outcome improvement quantifies the tangible uplift in valuable results achieved when AI influences a decision, versus the baseline without it. This metric is crucial because it cuts through tech specs to show the practical, profitable difference AI makes, revealing its true ROI where it matters most. - Raghu Para, Ford Motor Company

12. Revenue Per AI Decision
Revenue per AI decision is a metric that I find myself looking at quite often. It shows how well an AI system drives actual business outcomes. At our company, if an AI model suggests a payment plan and it closes faster or with higher value, that's measurable success. It ties AI performance directly to bottom-line impact, which matters more than model accuracy or usage stats alone. - Ashish Srimal, Ratio

13. Time To Value Realization
One insightful metric is time to value realization, which measures how quickly a company can start deriving business value from an AI implementation. A shorter TTVR indicates efficient deployment, effective user adoption and that the AI is solving a real problem quickly, directly correlating to faster benefits and competitive advantage. - Ambika Saklani Bhardwaj, Walmart Inc.

14. Adaptive Learning Rate
One unique metric for measuring AI success is adaptive learning rate, which helps quantify the speed at which an AI system can learn from new data. For instance, in audio processing, a high ALR means an AI can quickly adapt to new accents or background noises, continuously improving without constant retraining. This shows an AI's true long-term value, beyond initial deployment. - Harshal Shah

15. Autonomous Resolution Rate
A powerful new metric is autonomous resolution rate, which is the percentage of tasks completed end-to-end by AI agents without human intervention. In ERP/CRM, ARR reflects true ROI by measuring how effectively AI agents handle processes like order creation, invoice matching or case resolution. High ARR signals reduced operational costs, improved efficiency and successful agent adoption at scale. - Giridhar Raj Singh Chowhan, Microsoft

16. Model Utilization Rate
One enlightening measure is the model utilization rate: the percentage of an AI model's output that gets used for decision-making or operations. It's instructive because accuracy is of no consequence if the outputs are not acted on. It's a measure of real-world application and trust in AI that demonstrates the relevance and value it has in business. - Saket Chaudhari, TriNet Inc.

17. Feature Abandonment Recovery
Feature abandonment recovery is the percentage of users who return to an AI feature after experiencing initial frustration. Most metrics show first-touch success, but this shows resilience. If users give your AI a second chance after it fails them, you've built something valuable. It indicates your AI provides enough value that users forgive mistakes, the ultimate sign of product-market fit. - Marc Fischer, Dogtown Media LLC

18. Resource Efficiency Index
The resource efficiency index measures how well AI saves time, effort and resources by reducing manual work and enhancing productivity. Unlike traditional ROI, REI captures indirect benefits such as innovation and agility, providing a holistic view of AI's impact on workforce efficiency and strategic value in modern business operations. - Dileep Rai, Hachette Book Group

19. Access Management Data
Access management data provides powerful, real-time metrics that analyze the impact and adoption of technologies and digital systems, such as those using AI. This data offers actionable insights into how tools are being used and their effect on productivity. By mapping usage trends to business outcomes, organizations can identify gaps, optimize training and prove ROI. - Fran Rosch, Imprivata

20. Return On Disruption
One novel metric is return on disruption, which measures how AI redefines workflows or business models, not just cost or revenue gains. ROD captures transformative impact, signaling true innovation and long-term competitive advantage rather than incremental efficiency. - Lori Schafer, Digital Wave Technology
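Several of these metrics are simple ratios. The contributors name them without defining their arithmetic, so the following Python sketch is only one plausible reading of three of them (items 9, 15 and 16); the function names, signatures and sample figures are illustrative assumptions, not formulas from the article.

def autonomy_to_intervention_ratio(autonomous_hours, interventions):
    # Item 9: hours of unattended AI operation per human correction.
    return autonomous_hours / max(interventions, 1)

def autonomous_resolution_rate(tasks_total, tasks_escalated):
    # Item 15: share of tasks completed end-to-end without a human.
    return (tasks_total - tasks_escalated) / tasks_total if tasks_total else 0.0

def model_utilization_rate(outputs_produced, outputs_acted_on):
    # Item 16: share of model outputs actually used in a decision.
    return outputs_acted_on / outputs_produced if outputs_produced else 0.0

print(autonomy_to_intervention_ratio(120.0, 6))        # 20.0 hours per correction
print(f"{autonomous_resolution_rate(1000, 130):.0%}")  # 87%
print(f"{model_utilization_rate(500, 410):.0%}")       # 82%

Whatever the exact formula, the common thread is pairing a raw activity count (hours, tasks, outputs) with a measure of whether humans had to step in or act on the result.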

Sam Altman says your ChatGPT therapy session might not stay private in a lawsuit

Business Insider · an hour ago

More people are turning to ChatGPT as a therapist, but it might not always be a safe space. Sam Altman, the CEO of OpenAI, said those therapy-style conversations don't have the same legal protections as conversations with a real therapist.

"So if you go talk to ChatGPT about your most sensitive stuff and then there's like a lawsuit or whatever, we could be required to produce that, and I think that's very screwed up," Altman told podcaster Theo Von in an episode that aired Wednesday.

"Right now, if you talk to a therapist or a lawyer or a doctor about those problems, there's like legal privilege for it — there's doctor-patient confidentiality, there's legal confidentiality," Altman added. "We haven't figured that out yet for when you talk to ChatGPT."

Altman said there should be the "same concept of privacy for your conversations with AI that we do with a therapist" and that it should be "addressed with some urgency."

More users, particularly young people, are using ChatGPT as a therapist or life coach, or consulting it for relationship advice, Altman said. "No one had to think about that even a year ago, and now I think it's this huge issue of like, 'How are we gonna treat the laws around this?'"

Unlike conversations on encrypted messaging services like WhatsApp or Signal, chats between users and ChatGPT can be read by OpenAI. This includes staff using conversations to fine-tune the AI model and monitoring for misuse. According to OpenAI's data retention policies, deleted chats on ChatGPT Free, Plus, and Pro are permanently deleted within 30 days unless the company is required to keep them for "legal or security reasons."

In June, The New York Times and other news plaintiffs obtained a court order requiring OpenAI to retain all ChatGPT user logs, including deleted chats, indefinitely. The order, which OpenAI is appealing, came as part of a wider copyright lawsuit. Business Insider could not immediately reach OpenAI for comment.

Elsewhere on the Theo Von podcast, Altman, who became a father in February, said he was also worried about the psychological impact addictive social media platforms could have on children.
