On March 6, 2016, a small drone belonging to the open-source software company Drone Employee lifted into the Russian sky, traveling across an open field of white snow. Drone flight is relatively unremarkable today, but this particular drone wasn’t controlled by anyone. Brought to life by a predetermined agreement, or “smart contract,” running on the Ethereum blockchain, the drone’s engines powered on and it lifted itself into the air, taking a flight path dictated—only and exclusively—by code. The smart contract controlled the drone’s trajectory, without the need for a middleman with a remote to manage the device. Once started, the code governing the drone could not be stopped. If the smart contract had directed the drone to fly into a building or to head straight for a person, there would be no way for anyone to change its direction or stop the flight without physically disabling the drone or modifying the blockchain.
By 2020, Gartner predicts that more than 20 billion devices will be connected to the internet, all contributing to the establishment of the “internet of things.” Our homes, cars, clothing, physical spaces, and entire societies could soon be stitched together, and physical property will be imbued with the autonomous ability to process and emit information critical to daily life. Although this particular drone flight was an experiment, it offered a glimpse into the rapidly emerging machine-connected world—one that our current legal infrastructure could be utterly unprepared to regulate.
Cars have already started the march toward autonomy, with autonomous vehicles capable of driving hundreds of miles at speeds up to 70 miles per hour without any human behind the wheel. Smart locks already click open after receiving signals from sensors found in mobile phones or small chips stitched into a piece of clothing, and internet-enabled thermostats can automatically adjust the temperature depending on past behavior.
But unlike Drone Employee’s Ethereum-animated drone, the majority of today’s connected devices remain tethered to an intermediary—they need to constantly communicate with centralized operators to relay messages and obtain information from the outside world.
Currently, no single technology platform is universally available for these devices, and there is no guarantee that centralized service providers will create devices that seamlessly communicate with one another. Because they lack a unified standard, internet-connected devices could become siloed—they’ll only work together if they originate from the same manufacturer. Without a universally accessible technology platform, billions of devices will have to communicate through a few isolated channels, resulting in a scenario where a handful of private actors control—and charge consumers a premium to use—the vast pools of data emitted by machines.
If the internet of things goes down this path, we’d also face a more serious problem: a world controlled by a few centralized operators presents security risks that, if exploited, could lead to disastrous results. One’s mind need not wander far to imagine what would happen if a malicious actor hacked a centralized service provider managing fleets of self-driving cars or gained control of millions of connected devices used to manage human health or even an entire city.
It’s no wonder data scientists are holding out hope that blockchain, which can serve as a common application layer both to execute smart contracts and to securely store messages and other information needed for devices to coordinate, is the answer to these problems. Using a blockchain, different manufacturers can control or interact with multiple devices—regardless of their maker—without the need to convey sensitive information to potential competitors. As the technology matures, it’s conceivable that one or more blockchains could power a next-generation internet of things, facilitating the emergence of new business models grounded on machine-to-machine transactions. IBM and Samsung have already begun making this future a reality with their partnership on the blockchain-powered platform “A.D.E.P.T.,” which enables Samsung’s WW9000 washing machine to automatically order and pay for new detergent from an online service whenever the inventory of detergent runs low.
In just a few years, a suburban home could be filled with hundreds of internet-connected devices that use blockchain to enhance their functionality. For instance, it may be impractical for a sprinkler system to incorporate a sensor that measures outside temperature because most of the system sits below ground. Likewise, an internet-connected window may not include a powerful computer to process information from the environment because of cost or design. If, however, the air conditioner had a sensor to measure inside and outside temperatures, the sprinkler system could pay a micropayment to the air conditioner every time it needed to cool down the front lawn. The sprinkler system also could provide a connected window with wind and humidity data, for a small fee, so that the window could automatically shut before an impending storm.
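The mechanics of such a machine-to-machine exchange can be sketched in a few lines. The following is a simplified in-process simulation, not an actual on-chain contract: the device names, balances, fee, and temperature threshold are all hypothetical, chosen only to illustrate how a sprinkler system might buy a temperature reading from an air conditioner with a micropayment.

```python
class DeviceWallet:
    """Holds a device's balance of digital currency (arbitrary micro-units)."""
    def __init__(self, balance: int):
        self.balance = balance

    def pay(self, other: "DeviceWallet", amount: int) -> None:
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        other.balance += amount


class AirConditioner:
    """Sells its outside-temperature readings to other devices for a fee."""
    FEE = 3  # hypothetical price per reading, in micro-units

    def __init__(self, wallet: DeviceWallet, outside_temp: float):
        self.wallet = wallet
        self.outside_temp = outside_temp

    def sell_reading(self, buyer: DeviceWallet) -> float:
        buyer.pay(self.wallet, self.FEE)  # micropayment settles first
        return self.outside_temp


class SprinklerSystem:
    """Buys temperature data; waters the lawn only when it is hot outside."""
    def __init__(self, wallet: DeviceWallet, threshold: float = 30.0):
        self.wallet = wallet
        self.threshold = threshold

    def maybe_water(self, ac: AirConditioner) -> bool:
        temp = ac.sell_reading(self.wallet)
        return temp >= self.threshold


sprinkler = SprinklerSystem(DeviceWallet(balance=100))
ac = AirConditioner(DeviceWallet(balance=0), outside_temp=34.5)
watered = sprinkler.maybe_water(ac)
```

In a real deployment, the payment and the data delivery would be bundled into a single atomic smart contract call, so that neither device could take the other's side of the bargain without delivering its own.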
If devices rely on smart contracts to memorialize transactions with other devices, such activity will not fit squarely into the classic model of contract law, which is grounded on a stylized picture of humans entering into binding relationships with each other through an “offer” and “acceptance.” At least in the United States, the question of whether a device-initiated contract is legally enforceable has largely been settled. Both the federal E-Sign Act and the Uniform Electronic Transactions Act (UETA) endorse a legal fiction that an electronic agent is nothing more than a passive conduit for a human actor. In other words, a device controlled by a human is no different from a telephone or fax machine—a mere communication tool—that contains instructions embodying the intent of a controlling party. Under these statutes, to the extent that the code of a smart contract embodied the parties’ intent, most blockchain-enabled devices could facilitate binding commercial transactions with other people or machines. Moreover, because of the transparency and traceability of a blockchain, the parties’ assent could be recorded to a blockchain and subsequently relied on in the event of a challenge.
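The evidentiary value of such a record comes from the hash-chained, tamper-evident structure of a blockchain. The sketch below illustrates the core idea with a minimal append-only log; the field names (`offer`, `acceptance`, `price`) and the use of SHA-256 are illustrative assumptions, not any particular blockchain’s actual format.

```python
import hashlib
import json

GENESIS_HASH = "0" * 64  # placeholder hash for the first entry's predecessor

def record_entry(chain: list, payload: dict) -> dict:
    """Append a payload, linking it to the hash of the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS_HASH
    body = {"payload": payload, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every hash; any after-the-fact edit breaks the chain."""
    prev_hash = GENESIS_HASH
    for entry in chain:
        if entry["prev_hash"] != prev_hash:
            return False
        body = {"payload": entry["payload"], "prev_hash": entry["prev_hash"]}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if digest != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
record_entry(log, {"offer": "thermostat-7", "acceptance": "window-2", "price": 5})
ok_before = verify(log)
log[0]["payload"]["price"] = 500  # a party tries to rewrite the agreed price
ok_after = verify(log)
```

Because each entry commits to the hash of the one before it, a court (or either party) can later detect whether the recorded assent was altered after the fact, which is what makes the record worth relying on in the event of a challenge.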
But peering further into the future, truly autonomous devices that rely entirely on a blockchain for their operation will create new risks and tensions with existing legal regimes. As the internet of things expands and as devices become increasingly reliant on emergent artificial intelligence, blockchains could support devices that are both autonomous and self-sufficient. Manufacturers of connected devices—or anyone capable of tinkering with an existing device—could rely on smart contracts to create machines that operate totally independently.
And it takes only a stretch of the imagination to explore some of the challenges that emancipated devices may bring. Consider, for instance, an autonomous A.I.-powered robot designed to operate as a personal assistant to the elderly. This robot offers its services and competes with other humans or machines (autonomous or not) on both the price and quality of the services it provides. The seniors who benefit from these services can pay the robot in digital currency, which is stored in the robot’s account. The robot can use the collected money in various ways: to purchase the energy needed to operate, to repair itself whenever something breaks, or to upgrade its software or hardware as necessary.
Such a robot could be characterized as its manufacturer’s agent, or perhaps a mechanical slave, and depending on the characterization, laws could define what rights or other considerations the owner has over the machine. And if this robot relied on more advanced artificial intelligence, whereby it neared or passed the Turing test, there might be increased interest from the public in emancipating it—and other robots like it—from centralized control. Indeed, humans tend to anthropomorphize their machines, especially machines and robots that interact with humans (“social robots”)—a tendency that may, over time, result in increasing calls to provide robots with legal rights.
Assuming that, at some point, such a movement comes to pass, and assuming that blockchain technology and smart contracts continue to advance in sophistication, blockchains could help facilitate device emancipation. A caregiving robot’s key operations, for instance, could theoretically become independent from its manufacturer and could continue to operate so long as the relevant blockchain kept running and the robot generated sufficient revenue to pay for its upkeep.
In 2015, a group of artists took the first steps to actualize this concept by embedding blockchain-based functionalities into a mechanical structure, or “plantoid”: a metal sculpture in the form of a plant that replicates itself via the Ethereum blockchain. In line with other types of self-promoting art—such as Caleb Larsen’s “A Tool to Deceive and Slaughter,” an opaque black box that repeatedly put itself up for sale on eBay—a plantoid finances itself via digital currency donations and subsequently hires people to help it reproduce. Each plantoid’s physical body consists of a metallic flower, with its spirit imbued in a smart contract. Combined, these two components give life to a device that is autonomous (in that it does not need or heed its creators), self-sufficient (in that it can sustain itself over time), and—most importantly—capable of reproducing.
If blockchain technology enables devices and robots to operate as a plantoid does—free from centralized control—it will create new challenges in terms of how these devices are regulated. Blockchain could not only serve as an animating backbone for these advanced autonomous machines, but also allow them to function within a private regulatory framework outside of ordinary jurisdiction. Because blockchain enables software developers to create tools and services guided by their own system of rules, these systems could create order without law, governed only by a set of resilient, tamper-resistant, and autonomously executed rules that define all of a device’s permissible and nonpermissible activities.
Currently, approaches to regulating autonomous systems would be insufficient to prevent such unrestricted development: They are rooted in notions of agency law, which presupposes that these software or hardware devices serve as tools for third-party operators that have the power to control these systems and to temper the dangers—both physical and economic—that they may engender. But if blockchain-enabled machines no longer qualify as mere “electronic agents,” can they still enter into valid commercial transactions, and on what terms? If an autonomous device does not act as the agent of any third party, who should be liable if the device hurts a person or another machine? And if a device’s actions are largely unpredictable, who should be responsible for a crime involving a machine that was not directly linked to any of the rules that the manufacturer or operator programmed into the device?
Similar questions have already emerged with the deployment of autonomous weapon systems that, according to the U.S. Department of Defense, “once activated, can select and engage targets without further intervention by a human operator.” Indeed, several organizations are advocating for the ban of autonomous devices that can “choose and fire at targets on their own,” among them the “Stop Killer Robots” campaign, launched in April 2013, which has attracted the support of organizations across the world.
But assuming that society does not ban all emancipated robots, the law may need to recognize the legal personhood of autonomous devices or machines so as to give them the ability to acquire specific rights and obligations that are enforceable under the law. Such an approach was contemplated by legal scholar Lawrence Solum as far back as 1991 and was suggested in 2017 by the European Parliament, whose committee on legal affairs proposed a new regulatory framework to govern the rights and responsibilities of A.I.-based machines. The proposed framework includes the introduction of an “electronic personhood” for autonomous devices that would enable them to participate in legal proceedings, either as plaintiffs or defendants, in much the same way as the law has provided corporations with legal personhood in the past.
However, even if autonomous devices are given legal personhood, the nature of blockchain’s private regulatory frameworks will introduce new complications that do not exist in the context of more centrally managed and controlled machines. For example, if an autonomous device were to be found liable for harming a third party—whether a contractual or a physical harm—courts could lack the ability to force a device to pay relevant damages. To the extent that the device relies on autonomous smart contracts to operate, only code can control access to the device’s funds. Unless a new functionality was introduced into the smart contract code to facilitate payment in case of a court order, no single party would have the authority to seize the device’s assets.
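One way such a payment functionality could be built into a device’s code is a hard-coded enforcement hook that releases funds only to a designated judicial authority. The sketch below is a hypothetical simulation, not a real contract: the `court_address` string comparison stands in for what, on an actual blockchain, would be cryptographic signature verification against a known public key.

```python
class AutonomousDeviceContract:
    """Controls an autonomous device's funds; no human holds the keys."""

    def __init__(self, balance: int, court_address: str):
        self.balance = balance
        # The only externally privileged party the code recognizes.
        self.court_address = court_address

    def pay_for_service(self, amount: int) -> None:
        """Ordinary spending path used by the device itself."""
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

    def enforce_judgment(self, caller: str, damages: int) -> int:
        """Release funds only to the hard-coded court address; no other
        party, the manufacturer included, can seize the device's assets."""
        if caller != self.court_address:
            raise PermissionError("caller is not the designated court")
        paid = min(damages, self.balance)
        self.balance -= paid
        return paid


contract = AutonomousDeviceContract(balance=1_000, court_address="COURT-01")
paid = contract.enforce_judgment("COURT-01", damages=250)
```

The design choice is the crux of the legal problem: unless the contract’s author anticipated court orders and wrote in a hook like `enforce_judgment` before deployment, the code recognizes no privileged party at all, and a judgment against the device would be unenforceable in practice.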
More troublingly, because of the open and disintermediated nature of blockchains and the autonomous nature of smart contracts, anyone, anywhere around the globe would have the ability to experiment with deploying and coordinating autonomous blockchain-enabled devices. The technology could grant everyday citizens the ability to create and deploy machines or devices powered by smart contracts, including automatic weapons that operate independently of any human intervention, a feature that could be leveraged to frustrate efforts to ban autonomous weapons—preventing anyone from stopping the operation of these devices once they have been released into the wild.
While these risks may not manifest today, if the technology develops into an increasingly scalable and widely used infrastructure, blockchains could serve as the underlying computational layer to manage autonomous weapons that could help support resistance movements or perhaps even terrorist attacks.
With the evolution of the internet of things, blockchain could lead us down a two-pronged path laden with both positive and terrifying possibilities. On the one hand, the technology may underpin new applications and protocol layers for a new generation of machine-to-machine interactions, helping devices work together and engage in peer-to-peer transactions with each other. But blockchain may also become a catalyst for the emergence of autonomous machines that do not rely on any central operator, and the lack of appropriate regulatory laws combined with the unrestricted nature of a blockchain could result in emancipated, A.I.-driven machines, capable of being employed for dangerous ends.
Excerpted from Blockchain and the Law: The Rule of Code by Primavera De Filippi and Aaron Wright. Out now from Harvard University Press.