
Massachusetts Bill Expands AI Rules For Hiring And Background Checks
Massachusetts is joining the growing list of states looking to regulate artificial intelligence in employment, but its latest proposal could reach further than most. State lawmakers have introduced Senate Docket 3007 (SD 3007), a bill that would impose new obligations on organizations using artificial intelligence or other automated tools to support decision-making in employment, housing, credit, education, and other critical areas. Titled An Act to Prevent Unlawful Algorithmic Discrimination, the bill aims to prevent automated systems from producing discriminatory outcomes, whether intentional or not.
If enacted, the bill would add Chapter 151G to the state's General Laws and apply to any private or public organization operating within Massachusetts. The legislation would regulate the use of automated decision systems (ADS), broadly defined to include tools that use artificial intelligence or statistical models to make or inform decisions, recommendations, classifications, or categorizations.
The bill's stated goal is to ensure that such systems do not result in discrimination against individuals based on race, gender, disability, or other protected characteristics, including combinations of characteristics. It does so by imposing a range of new audit, transparency, and accountability requirements on covered entities that use ADS in connection with 'fundamental opportunities,' a term that includes employment decisions such as hiring, pay, and termination.
Key Requirements for Employers Using Automated Systems
Under SD 3007, employers that use artificial intelligence or automated tools in employment-related decision-making would be subject to a detailed compliance framework. One of the central obligations is a recurring audit requirement. Covered entities would need to evaluate their automated systems at least every 90 days, with additional reviews required within 30 days of any substantial modification. These audits must assess whether the system results in discriminatory outcomes and, if so, whether appropriate steps have been taken to mitigate those effects.
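The bill does not prescribe a particular fairness metric for these audits. As one illustration only, the sketch below applies the four-fifths rule familiar from federal adverse impact analysis to logged screening outcomes; the group labels, threshold, and data format are assumptions for the example, not requirements drawn from SD 3007.

```python
from collections import defaultdict

def adverse_impact_ratios(records, threshold=0.8):
    """Compute each group's selection rate and flag groups whose rate
    falls below `threshold` times the highest group's rate (the
    four-fifths rule). `records` is an iterable of (group, selected)
    pairs; group labels here are hypothetical."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in records:
        counts[group][0] += int(selected)
        counts[group][1] += 1

    rates = {g: sel / tot for g, (sel, tot) in counts.items() if tot > 0}
    best = max(rates.values())
    return {g: (rate / best, rate / best < threshold)
            for g, rate in rates.items()}

# Example: outcomes logged by a hypothetical screening tool.
sample = ([("group_a", True)] * 40 + [("group_a", False)] * 60
          + [("group_b", True)] * 25 + [("group_b", False)] * 75)
for group, (ratio, flagged) in adverse_impact_ratios(sample).items():
    print(group, round(ratio, 2), "REVIEW" if flagged else "ok")
```

In this hypothetical data, group_b is selected at 0.625 times group_a's rate, so a 90-day audit built on this rule would flag it for mitigation review.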
In addition to audit requirements, the bill would require employers to maintain thorough documentation about how each system was designed, the data used to train and evaluate it, and the outcomes of any completed audits. Employers would also be expected to publicly disclose summary information about the automated tools in use, including the sources of training data and the measures taken to manage known or foreseeable risks.
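As a rough sketch of what such a documentation record might capture, the structure below mirrors the categories the bill describes; every field name is a hypothetical illustration rather than language from the bill.

```python
from dataclasses import dataclass, field

@dataclass
class ADSRecord:
    """Hypothetical documentation record for one automated decision
    system, loosely mirroring the categories SD 3007 describes."""
    system_name: str
    purpose: str                       # e.g. "resume screening"
    design_summary: str                # how the system was designed
    training_data_sources: list[str]   # provenance of training data
    evaluation_data_sources: list[str]
    known_risks: list[str]             # known or foreseeable risks
    mitigations: list[str]             # measures taken to manage them
    audit_results: list[str] = field(default_factory=list)
```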
Another key provision centers on transparency to candidates. Employers would be required to notify individuals when an automated decision system is being used in a way that may influence hiring or employment terms. Importantly, the bill grants candidates the right to opt out of automation in favor of a human review process. That opt-out mechanism must be no more burdensome than the automated pathway and may not result in any form of penalty.
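At an implementation level, that right implies a branch early in the intake workflow. The sketch below shows one hypothetical way an applicant pipeline might honor the opt-out; the function names and data shape are placeholders, not anything specified in the bill.

```python
def run_automated_screening(application: dict) -> str:
    # Placeholder for the automated pipeline.
    return f"automated review queued for {application['id']}"

def queue_for_human_review(application: dict) -> str:
    # Placeholder for the equivalent human-review pathway, which per
    # the bill may be no more burdensome than the automated one.
    return f"human review queued for {application['id']}"

def route_application(application: dict, opted_out: bool) -> str:
    """Honor a candidate's opt-out by branching to human review,
    with no penalty attached to the choice."""
    if opted_out:
        return queue_for_human_review(application)
    return run_automated_screening(application)

print(route_application({"id": "A-123"}, opted_out=True))
```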
These requirements apply to any automated decision system used to determine eligibility for employment or to inform terms and conditions of employment, such as pay, promotion, or termination.
Liability Provisions and Shared Responsibility
SD 3007 introduces a strict liability framework for employers and service providers. Under the bill, a covered entity may be held liable for discriminatory outcomes regardless of whether the discrimination was intentional, whether the employer was aware of the issue, or whether the system complied with federal laws such as Title VII of the Civil Rights Act or the Fair Credit Reporting Act (FCRA).
The bill also includes a provision for joint and several liability, meaning that employers and any service providers involved in the development or deployment of the automated system could be held jointly responsible for violations. This would apply even in cases where service providers had no direct control over how the tool was implemented in practice.
Considerations for Background Screening Providers
For the background screening industry, SD 3007 raises important considerations. Consumer reporting agencies (CRAs) provide factual, legally reportable information, such as criminal history records and employment or education verifications, that employers may use as part of a broader decision-making process. These reports are not designed to render hiring decisions or recommendations but to present objective data that informs employer evaluations.
Many CRAs have implemented automation to enhance process efficiency, improve consistency, and support compliance with disclosure and authorization requirements. These automated processes may include electronic data retrieval, standardization of report formatting, and system-driven notifications. While these tools streamline operations, they do not evaluate candidate suitability or drive employment outcomes.
However, SD 3007 defines an 'automated decision system' as any system that makes or informs eligibility determinations. Depending on interpretation, this may encompass tools used in background screening, even if those tools do not directly influence hiring decisions. The bill does not currently distinguish between process automation, which facilitates the delivery of factual information, and decision automation, which actively determines outcomes.
Under the proposed framework, a CRA could be required to conduct regular audits, disclose elements of its data sourcing and processing methods, and potentially provide human review options even for systems that do not perform decision-making functions. Additionally, the inclusion of joint and several liability may extend legal responsibility to CRAs for how employers use screening reports, regardless of the CRA's level of involvement in the hiring process.
Employers and vendors may wish to evaluate whether existing service agreements, product designs, and workflows would need to be adjusted to meet the bill's requirements if it is enacted.
Operational Impacts for Employers
Employers operating in Massachusetts, or those hiring Massachusetts-based candidates, would face a heightened compliance burden under SD 3007. In addition to regular audit and documentation requirements, employers would need to establish opt-out pathways, manage public disclosures, and coordinate with vendors to ensure compliance across multiple systems.
Because the bill covers not only tools that make decisions but also those that inform them, employers may need to assess whether common technologies, such as applicant tracking systems with AI-driven filtering, fraud detection software, or automated adjudication tools, fall within the bill's scope.
Organizations that operate in multiple jurisdictions may also face challenges developing consistent compliance strategies, especially if other states adopt differing standards for AI use in employment.
Looking Ahead
Massachusetts SD 3007 reflects growing legislative interest in the regulation of artificial intelligence in employment. While its core objective, preventing discriminatory outcomes in automated decision-making, is consistent with emerging national and international AI governance frameworks, the bill introduces compliance obligations and liability standards that may significantly affect how employers and service providers use technology to support hiring.
Employers and background screening providers should closely monitor this legislation and begin evaluating whether current practices would align with the bill's requirements. In particular, companies may benefit from clarifying the distinction between tools that support hiring through automation and those that make or influence employment decisions directly.
As artificial intelligence becomes more deeply embedded in employment practices, clear policy guidance, particularly around accountability and role-specific obligations, will remain essential to balancing fairness, innovation, and operational integrity.