Future Tense

A New Proposed Law Could Actually Hold Big Tech Accountable for Its Algorithms

Impact assessments can help open algorithms up for scrutiny. Kelli McClintock/Unsplash

We’ve seen again and again the harmful, unintended consequences of irresponsibly deployed algorithms: risk assessment tools in the criminal justice system amplifying racial discrimination, false arrests powered by facial recognition, massive environmental costs of server farms, unacknowledged psychological harm from social media interactions, and new, sometimes-insurmountable hurdles in accessing public services. These harms are egregious, but what makes the current regime hopeless is that companies are incentivized to remain ignorant (or at least to claim ignorance) of the harms they expose us to, lest they be found liable.

Many of the current ideas for regulating large tech companies won’t address this ignorance or the harms it causes. Proposed antitrust laws would reckon with harms that emerge from diminished competition in digital markets, but relatively small companies can also have disturbing, far-reaching power over our lives. And even if these regulatory tools were to push tech companies away from some harmful practices, researchers, advocates, and, critically, the communities affected by those practices would still not have a sufficient say in the ways these companies’ algorithms shape our lives. The newly updated Algorithmic Accountability Act from Sen. Ron Wyden, Sen. Cory Booker, and Rep. Yvette Clarke could change this dynamic and give us all an opportunity to reclaim some power over the algorithms that control critical parts of our lives.

Impact assessments are a historically effective form of governance, particularly well suited to complex systems that affect a wide range of people with sometimes-competing interests, and where many kinds of expertise are needed to study harms that may be novel or unanticipated. Since the National Environmental Policy Act began requiring environmental impact statements in 1970, their success has been demonstrated by massive improvements to air and water quality even amid booms in oil and gas pipeline construction.

Now, in a significant step forward, lawmakers are increasingly building impact assessments into draft legislation. The updated Algorithmic Accountability Act of 2022, which we learned about in a briefing from Wyden’s office in mid-January, would require impact assessments when companies are using automated systems to make critical decisions, providing consumers and regulators with much-needed clarity and structure around when and how these kinds of systems are being used.

The ongoing impact assessments required under the proposed Algorithmic Accountability Act would outline the features of any algorithmic system that affect the public interest. What might go into one? No one quite knows yet, because the requirements of impact assessments evolve over time in response to court cases, changes in scientific knowledge and scholarship, advances in technology, community pressure, and more. (The Ada Lovelace Institute recently released one of the first attempts at a thorough impact assessment of a health data system.) But we have a decent sense of where it would start: stating the purpose and limitations of the proposed system; naming the training data set and identifying its demographics; recording the performance of the machine learning models on a variety of algorithmic fairness metrics; identifying safety requirements and ruling out specific future repurposings of the models on safety grounds; proposing a schedule for refreshing data and retraining models; and recording which expected ethical and social consequences were considered and measured. Perhaps most important, the bill requires companies to state how they consulted with affected communities and what steps they took to eliminate or mitigate negative impacts. Recording and filing that baseline of information may not seem all that earth-shaking, but the dirty secret is that tech companies don’t systematically keep those records for the systems deployed today.
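
To make one of those requirements concrete, here is a minimal, hypothetical sketch of the kind of fairness reporting an assessment might record, comparing a model’s approval rates across two demographic groups. Everything here (the groups, the predictions, the choice of metrics) is invented for illustration; the bill itself does not prescribe any particular metric or tooling.

```python
# Hypothetical sketch of fairness reporting for an impact assessment.
# All data below is invented; a real assessment would use production records.

def selection_rate(preds):
    """Share of applicants the model approves (predicts 1)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Among applicants who actually qualified (label 1), share approved."""
    approved_of_qualified = [p for p, y in zip(preds, labels) if y == 1]
    return sum(approved_of_qualified) / len(approved_of_qualified)

# Invented model outputs (1 = approved) and ground truth (1 = qualified).
group_a = {"pred": [1, 1, 0, 1, 0, 1], "label": [1, 1, 0, 1, 1, 0]}
group_b = {"pred": [0, 1, 0, 0, 1, 0], "label": [1, 1, 0, 1, 1, 0]}

# Demographic parity gap: difference in overall approval rates.
dp_gap = selection_rate(group_a["pred"]) - selection_rate(group_b["pred"])

# Equal opportunity gap: difference in approval rates among the qualified.
eo_gap = (true_positive_rate(group_a["pred"], group_a["label"])
          - true_positive_rate(group_b["pred"], group_b["label"]))

print(f"Demographic parity gap: {dp_gap:+.2f}")  # +0.33 on this toy data
print(f"Equal opportunity gap:  {eo_gap:+.2f}")  # +0.25 on this toy data
```

An assessment would file numbers like these for each affected group and each model version, turning a vague promise of fairness into a record that regulators and researchers can check.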

These reports would initially be submitted privately to the Federal Trade Commission, which would then release key information via a publicly accessible repository. The FTC would also release an annual report with trends, lessons, and statistics, fostering transparency around when and how automated systems are used and holding the developers and purchasers of these systems accountable to the public.

Our research shows that impact assessments are a tried and tested way to ensure that companies study, explain, and report on how their proposals will affect society. Not only do impact assessments clarify the harms of specific activities or products, they also set an important standard for what companies must disclose about their work. Today, redesigning a highway means studying how nearby homeowners’ quality of life will be affected, and few major manufacturers would build an overseas factory without first assessing the human rights impacts on workers. In the current regulatory environment, however, no reporting at all is required for algorithms that make critical decisions about our lives, which means we know very little about whether preventable harm or discrimination is pervasive in the technology used to determine our eligibility for a mortgage or our success in a job application.

Even without a legal mandate, adversarial audits by independent researchers have already demonstrated the power of inspection and documentation, convincing some tech companies to change or withdraw their products in significant ways. Joy Buolamwini’s Gender Shades project drove major tech companies to drop facial recognition services by revealing persistent gender and racial biases in the technology. Impact assessments could have a similar effect, lifting the hood and making tech companies reckon with the unintended consequences their products have on society. As tech law expert Andrew Selbst has pointed out, such reporting may not guarantee that tech companies will do the right thing or comply perfectly with regulation, but it does dramatically alter what they see as the scope of their concerns.

However, federal agencies will need to set clear expectations for developers about what an effective impact assessment looks like: letting companies grade their own homework won’t disrupt their tendency to avoid or obscure their impacts. Mandatory reporting and structured guidelines make this version stronger than its 2019 predecessor. While the prior bill was a step forward in the tech regulation conversation, it would have let developers pick and choose what to include in their algorithmic impact assessments. The new bill makes far more explicit what companies’ responsibilities are in studying, recording, and reporting the impacts of their products and services.

This legislation opens up the possibility for even more concerted efforts across government, academia, and civil society. Chief among these efforts would be creating robust methods for evaluating and challenging AI systems through a public interest lens. The information provided by impact assessments would help sustain feedback loops between researchers, advocates, communities, and regulators, leading to practices in which communities most impacted by algorithmic systems have a meaningful say over how they are designed and deployed.

Given the challenging legislative landscape in Congress, it is hard to say how far this bill will proceed. However, it has more co-sponsors than the previous version, and it lands at a time when many members of Congress are more eager than ever to discuss significant changes to Big Tech’s largely unchecked power over how algorithmic systems determine important features of our lives.

This isn’t the first time we’ve had to claw back control over fundamental aspects of our lives. A century ago, the food we ate arrived on store shelves without being inspected for quality, factories in our backyards belched noxious fumes into the air, and sewage was allowed to run untreated into our waterways. Companies consistently shirked their responsibility to the public interest, but landmark regulations brought accountability for these harmful impacts. Today, we should exercise the same right to insist that tech companies uphold democratic values in the algorithms they build and send out into the world to make decisions about our lives. The Algorithmic Accountability Act could bring us closer to holding these companies accountable.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
