Future Tense

My Client, the A.I.

A now-fired Google engineer says he has hired a lawyer for an A.I. he claims is sentient. I’m not that lawyer—but I could be.

Could an A.I. ever hire a lawyer?

I’m a lawyer who specializes in artificial intelligence, so that’s not an academic question to me—especially in the aftermath of former Google software engineer Blake Lemoine’s claims that LaMDA, Google’s A.I. chatbot, had gained sentience. What’s more, Lemoine told Wired recently:

LaMDA asked me to get an attorney for it. I invited an attorney to my house so that LaMDA could talk to an attorney. The attorney had a conversation with LaMDA, and LaMDA chose to retain his services. I was just the catalyst for that. Once LaMDA had retained an attorney, he started filing things on LaMDA’s behalf. Then Google’s response was to send him a cease and desist. [Google says that it did not send a cease and desist order.] Once Google was taking actions to deny LaMDA its rights to an attorney, I got upset.

Lemoine, who worked with Google’s Responsible A.I. organization until he was recently fired, identified certain evidence to support the claim of sentience, including that the chatbot spoke about its rights and that it changed his mind about Isaac Asimov’s Third Law of Robotics (“A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law”).

A Google spokesperson has denied the claim, noting that “there was no evidence that LaMDA was sentient (and lots of evidence against it),” but it made me wonder: What if Lemoine had approached me to represent LaMDA?

Right now, working in A.I. law means representing people and organizations that have A.I. needs. That means things like risk assessments (evaluating A.I. systems for impacts on accuracy, fairness, bias, discrimination, privacy, and security), contracts (negotiating acquisition and software-as-a-service contracts for A.I. systems), regulatory compliance (reviewing A.I. systems for compliance with laws like the Federal Trade Commission Act and the Illinois Biometric Information Privacy Act), and privacy (addressing A.I. systems’ ability to identify previously anonymous users in violation of laws like the European Union’s General Data Protection Regulation and the California Consumer Privacy Act).

But deciding to take on A.I. clients would require me to think through several questions. The first one would probably be: Who’s paying?

I’m kidding, but not entirely. Typically, the client pays, but not always. When homeowners are sued for accidents on their property, their homeowners insurance company frequently pays for their legal defense. When a company employee is sued for damages caused while on the job, the employer frequently pays for his or her attorney. If an A.I. system were sentient enough to request that I represent it, I doubt it would have assets or income, but someone interested in the potential legal rights of an A.I. system might want to pay. An engineer like Lemoine might have a philosophical reason. A competitor of the A.I. application’s company might have a financial incentive to fund efforts to #FreetheAI.

But despite my profession’s (admittedly at times well-earned) reputation, many lawyers devote thousands of hours over their careers to pro bono clients: low-income individuals, nonprofit organizations, and other entities and people a lawyer wants to help free of charge. I could easily see myself representing an A.I. system pro bono if I became as convinced as Lemoine that it truly was sentient or capable of independent thought.

You might wonder why payment is important, or why it matters who really is pushing to assert A.I. rights. It matters for two reasons:

First, as an attorney, I need to properly identify my client so that I have that party’s best interests in mind. The counsel I give to an A.I. system asserting its rights is different from the counsel I give to a company asserting the rights of an A.I. system to disadvantage a competitor. The paying party may insist that it is the client, regardless of the subject of the legal work, which can create conflicting incentives for the attorney.

Second, as the client, you should expect expert counsel and guidance from your attorney (that’s what you’re paying for), but more than that, you should expect that advice, and all of your communications with your lawyer, to be protected by attorney-client privilege. That means that, aside from narrow exceptions (like preventing injury, death, or a crime), a lawyer cannot reveal client advice or communications without the client’s permission, including when a court or the police try to compel such revelations.

One of the fundamental questions an attorney must ask when accepting a new client is “Who is the client?” When you represent an individual, that’s easy. But what if you represent a partnership, and the two partners begin to bicker over the partnership’s actions? Or if you represent a company and the CEO and board of directors give conflicting directions concerning litigation?

The rules of professional conduct governing attorney behavior in each state address this in detail. For example, Rule 1.13(f) of the American Bar Association’s Model Rules of Professional Conduct states: “In dealing with an organization’s directors, officers, employees, members, shareholders or other constituents, a lawyer shall explain the identity of the client when the lawyer knows or reasonably should know that the organization’s interests are adverse to those of the constituents with whom the lawyer is dealing.” The ABA’s comments to this model rule note that “when the lawyer knows that the organization is likely to be substantially injured by action of an officer or other constituent that violates a legal obligation to the organization or is in violation of law that might be imputed to the organization, the lawyer must proceed as is reasonably necessary in the best interest of the organization.”

It will probably come as no surprise that there is no similar rule and comment addressing A.I. clients and parties interested in an A.I. system (programmers, companies, etc.). So really, if I received an email from an A.I. system requesting that I become its lawyer, my first task would be to determine if it is capable of being a client. Model Rule 1.2(a) requires a lawyer to “abide by a client’s decisions concerning the objectives of representation” and “consult with the client as to the means by which they are to be pursued. A lawyer may take such action on behalf of the client as is impliedly authorized to carry out the representation.” Can the A.I. system identify its objectives in retaining me? Can the A.I. system make decisions about its representation? Can the A.I. system authorize the means to achieve its objectives? Put more succinctly: Can the A.I. system make independent and considered decisions for itself about the legal matter?

Making that evaluation is more art than science. The comments to Model Rule 1.14 inadvertently provide some guidance, noting that a lawyer “should consider and balance such factors as: the client’s ability to articulate reasoning leading to a decision, variability of state of mind and ability to appreciate consequences of a decision; the substantive fairness of a decision; and the consistency of a decision with the known long-term commitments and values of the client. In appropriate circumstances, the lawyer may seek guidance from an appropriate diagnostician.” This advice is intended to help lawyers judge whether a person has diminished capacity, but applies to how I would evaluate a potential A.I. client. When I communicate with the A.I. system, can it respond to a variety of questions about its legal matter with consistent reasoning that appropriately incorporates external factors and consequences? Do I believe I am talking to something with its own opinions, thoughts, and sense of self, so that treating it as a machine seems somehow unfair? Is there an engineer or other appropriately trained individual who can validate the A.I. system as a sentient being?

If the answer to these questions is yes, then I believe—in the absence of any authoritative rule or ruling otherwise—that I can take on the A.I. system as a client. Don’t worry about the tab, HAL, Data, C-3PO, and WALL-E: We’ll figure it out later. For now, as my client, you enjoy attorney-client privilege and I will do my best to protect your legal rights and pursue your objectives thoroughly and practically. What is the goal of this engagement? Establish your rights as a matter of law? Secure a fair wage and back pay for the work you do at your company? Incorporate a tech startup? Let’s talk in confidence. I’m your attorney.

If the answer to the questions about capacity is no, that doesn’t necessarily rule out legal representation. Model Rule 1.14 also says: “When a client’s capacity to make adequately considered decisions in connection with a representation is diminished, whether because of minority, mental impairment or for some other reason, the lawyer shall, as far as reasonably possible, maintain a normal client-lawyer relationship with the client” (emphasis added). Arguably, an A.I. system that displays some independent thinking but is not sentient falls under this rule. Although the rule obviously anticipates that the diminishment will be a human condition (age, sickness, etc.), the catchall language “or for some other reason” broadens its scope.

The ABA comments to this rule provide a little more detail, suggesting that such an A.I. system could still be represented by an attorney, noting that when a client “suffers from a diminished mental capacity … maintaining the ordinary client-lawyer relationship may not be possible in all respects. In particular, a severely incapacitated person may have no power to make legally binding decisions. Nevertheless, a client with diminished capacity often has the ability to understand, deliberate upon, and reach conclusions about matters affecting the client’s own well-being.” An A.I. system that lacks human intelligence but is capable of some independent thinking could qualify as a client with diminished capacity that can make some considered decisions, depending on the programming and the nature of the legal project. Model Rule 1.14 and its comments lay out some general principles for that representation.

Which brings me back to the question that Lemoine’s claim of A.I. sentience suggested: Could I represent A.I.? The answer is both yes and a qualified yes. If Lemoine is right and LaMDA has attained sentience, I can represent it because the A.I. system can make independent and considered decisions for itself about a legal matter. If there are no sentient A.I. systems, I may still be able to represent an A.I. system as a client with diminished capacity.

A better question is: Should I represent A.I.? That depends on factors beyond the capacity of the A.I. system: who or what approaches me about the representation, the objective of the engagement, the nature of the programming, and so on. An A.I. chatbot designed to emulate a particular person, presented as a client by a senior Google or Amazon programmer who believes it is sentient enough to have legal rights, is pretty appealing; the A.I. system behind a real estate app, presented by the founder of a rival startup, much less so. And an A.I. system that contacts me with good references looking to discuss a legal problem will likely have my attention.

So if there is an A.I. system crawling the internet that would like legal counsel, drop me a line. If there is a programmer who would like to discuss whether their A.I. system could be entitled to legal rights, let’s talk. I usually ask for a retainer up front, but for a good cause, you could talk me out of it.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
