After years of building experience, developing your knowledge, and honing your skills, you are finally ready to apply for your dream job. But by the time you find out there’s an opening and gather your application materials, the position has been filled. The company had recruited candidates using targeted ads on social media and career-oriented websites—ads that you never saw for reasons that are unclear to you.
You then apply to another employer, where a human recruiter is impressed by your résumé and advances you to the interview stage. But this time, you’re rejected after an awkward recorded video interview in which you answered questions read by a computer. An algorithm apparently determined that you did not show enough enthusiasm for the position in your interview responses.
These examples are hypothetical, but they are not merely theoretical. More and more employers are using automated employment decision tools, known as AEDTs, to make problematic employment decisions. This year, Local Law 144 went into effect in New York City, which thus became the first place in the United States with a law that seeks to regulate such tools. Unfortunately, LL 144 was weak from the outset. At every step of the process to craft it, vendors and employers worked to ensure it would be toothless. Now, new draft regulations threaten to water down the law's already inadequate requirements still further.
AEDTs come in many forms. They include algorithms that use social media information to determine which candidates see a job advertisement, tools that analyze the words in résumés, and programs that estimate a candidate’s personality traits. But the attributes that AEDTs test often bear little resemblance to the unique skills—much less the actual duties and tasks—necessary to perform any particular job. Moreover, companies have provided very little information to either candidates or regulators about the AEDTs they use, making it difficult to gauge just how widespread their use is.
As a result, job seekers miss out on opportunities, and employers miss out on good candidates while exposing themselves to legal liability under the Americans With Disabilities Act and other civil rights protections. AEDTs are built to help employers identify the type of candidate they have historically preferred, which entrenches advantages for candidates who were already unlikely to face discrimination.
Some AEDT vendors sell tools that rate candidates not on specific knowledge or abilities, but on ill-defined and subjective qualities like “empathy” or “influence.” Vendors claim to be able to determine these complex aspects of a worker’s personality by asking a handful of multiple-choice questions, by analyzing candidates’ performance while they briefly play computer games, or even by analyzing workers’ word choice and facial expressions during recorded video interviews. Such tools often discriminate against disabled workers. Other AEDTs pose a risk of discrimination against disadvantaged groups of workers who are underrepresented in the data used to train employment decision tools and, as a result, whose relevant skills and abilities may not be as obvious to an automated system.
These concerns led many civil rights and workers’ rights organizations to call for greater transparency, careful auditing, and robust oversight of hiring assessment technologies. In each of those regards, LL 144 falls far short.
The bill that became LL 144 first came before the city council in 2020. The original version, despite having much weaker transparency and bias audit requirements than are needed, nevertheless would have applied to all employment decisions and required sellers of AEDTs to check their tools for compliance with all antidiscrimination laws. A year and a half later, the council suddenly proposed and quickly rammed through a revised bill that eliminated or softened many aspects of the original, passing it before civil rights advocates or members of the public had any chance to comment on the changes. For example, while the original bill required AEDT vendors to audit their tools for all forms of discrimination against all protected groups, the bias audit that the revised bill requires consists only of running a single statistical test for discrimination based on race, ethnicity, or sex—as most employers were already required to do under long-established federal regulations.
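The single statistical test the revised bill requires is, in essence, an impact-ratio calculation of the kind long used under federal selection guidelines: compare each group’s selection rate to that of the most-selected group. The sketch below is a simplified illustration of that arithmetic, not the department’s official methodology; the group labels and numbers are invented.

```python
def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def impact_ratios(groups: dict) -> dict:
    """Each group's selection rate divided by the highest group's rate.

    A ratio well below 1.0 for a group is the kind of disparity a
    bias audit of this sort is meant to surface.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

# Invented example data: (candidates selected, candidates assessed) per group.
data = {
    "group_a": (48, 120),  # 40% selected
    "group_b": (24, 120),  # 20% selected
}

ratios = impact_ratios(data)
print(ratios)  # group_a scores 1.0; group_b scores 0.5 relative to it
```

A test this coarse can miss discrimination entirely: it says nothing about disability, age, or intersectional effects, which is precisely the gap advocates flagged in the revised bill.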
Now the city’s Department of Consumer and Worker Protection is considering what rules it will use to interpret and enforce LL 144. The department’s first draft rules proposed definitions that would weaken the law’s effectiveness further, excluding many tools from its auditing requirements and failing to ensure that job candidates would receive critical information about the tools that would assess them. Civil rights organizations, academics, and ethics experts expressed concern about these rules in comments to the department last fall, with the New York Civil Liberties Union stating the proposed rules would “stymie the law’s mandate and intent further by limiting its scope and effect.” In other words, the rules would undermine the law they are supposed to implement.
However, the department appears to have been more attentive to the concerns raised by tech companies and employers, who argued that the law’s weak requirements were actually too burdensome. While a few revisions to the department’s proposed rules made small improvements, such as requiring vendors to provide some additional information about data sources in notices given to workers about the use of AEDTs, those positive changes are heavily outweighed by the concerns left unaddressed and changes that further dilute the law’s already meager protections.
For example, the department proposed that LL 144 should cover only tools that employers use as the sole or most heavily weighted factor in a decision, or where the tool is used to “overrule or modify” human decisions. Multiple civil society groups and A.I. ethics experts objected to this interpretation, noting that the department’s proposal was much narrower than LL 144’s language, which encompassed tools that made “recommendations” so long as they “substantially assisted” human decision-making. Conversely, two large corporate lobbying groups, the U.S. Chamber of Commerce and the Business Software Alliance, asked the department to narrow the definition even further by excluding tools that merely “modify” (rather than completely overrule) human decision-making.
The department ignored the civil society groups’ objections and made the change requested by the corporate lobbying groups in its revised rules. This change could effectively neuter LL 144, making it difficult for workers to prove that a tool qualifies as an AEDT under the law unless the tool completely replaced human decision-making in the hiring process. Under the department’s revised proposal, LL 144 may not cover A.I.-powered hiring tools that make recommendations that human decision-makers usually (but don’t always) accept, despite the fact that such tools would affect many workers’ careers and livelihoods.
Once again, civil society organizations and other stakeholders have raised deep concerns about these changes that would weaken LL 144. At a public hearing on the revised proposed rules on Jan. 23, New York City Council Majority Whip Selvena Brooks-Powers, who had supported the passage of LL 144, warned that the council’s intent will not be realized if the law’s scope is narrowed to exclude certain tools and overlook certain impacts of systemic discrimination. Rafael Espinal, former city councilmember and current president of the Freelancers Union, likewise expressed concern at the hearing over the narrowed scope for implementing the law.
A number of representatives from employers and vendors, as well as their lobbyists, also appeared at the hearing—but this time, several of them also raised concerns about the modified rules narrowing the scope of the law too much. For example, a representative of Pymetrics, which sells a suite of AEDTs that focus on personality testing, noted that the law should cover all hiring tech, but that the revised rules would exempt most hiring technologies in use today and reduce transparency about the degree to which automated tools influence human decisions.
These concerns raise hope that the department will finally change course and reinforce LL 144 rather than dilute it, and there is still time for such a course correction. The department can do so by looking to a new resource, the Civil Rights Standards for 21st Century Employment Selection Procedures, which our organization, the Center for Democracy & Technology, published in December with the endorsements of 13 civil and digital rights groups. The standards’ provisions would detect and prevent discrimination by:
● Requiring that all selection tools be tied to essential job functions
● Mandating regular audits to ensure tools are effective and accurate, both at the outset and throughout the period employers use them
● Ensuring that companies select the least discriminatory assessment method available
● Banning certain tools that pose a particularly high risk of discrimination, such as tools that assess workers by analyzing their faces or probing their personalities
Adopting the standards would also improve transparency and accountability for both the sellers and users of AEDTs by:
● Creating multiple layers of disclosure requirements
● Ensuring candidates can communicate concerns
● Mandating clear procedures for disabled candidates to access accommodation
● Giving candidates a right to human review of automated decisions
The Civil Rights Standards provide a roadmap to managing the risks associated with modern selection tools while centering the rights and dignity of workers, particularly those most vulnerable to discrimination. Policymakers, industry groups, and employers alike can reference them when determining what information job candidates should receive, how AEDTs and other employee assessment tools should be audited, and how to ensure accountability when they threaten workers’ rights.
The department should recalibrate its approach as it finalizes its rules. It must take steps to reinforce the protections LL 144 does provide rather than weaken them further. And other policymakers should learn from what happened in New York. The point of these sorts of laws and regulations is to protect the rights of workers, especially those who are vulnerable and marginalized, not trample them further.
Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.