Future Tense

Elon Musk Wants to Hack Your Brain

How will the FDA manage that?


In July, Elon Musk made a highly anticipated announcement about his secretive brain-computer interface company, Neuralink. The big reveal: a new set of thin electrodes in “threads” and a “sewing machine for the brain” designed to implant the electrodes through small holes in the skull. Musk envisions using brain-computer interfaces, or BCIs, to control smartphones—imagine using an app or typing a text message without moving your fingers—but technologies like Neuralink’s could also become groundbreaking tools for medicine and surgery.


The company laid out plans to test its tech in humans, with authorization from the Food and Drug Administration, as soon as next year. Musk is certainly known for pushing aggressive deadlines for his technologies, but this rushed timeline suggests he may not fully grasp the complexity of the FDA process. And it’s not just Musk who is getting into this space. Facebook just announced that it wants to create mind-reading technology to control virtual reality, while Braintree founder Bryan Johnson’s BCI company Kernel explicitly aims to advance human cognition. A sector trying to “move fast and break things” might not have the patience to comply with FDA regulations. Instead, the clash between the FDA’s regulatory lag and an overzealous BCI industry could lead to real patient harm and damage the agency’s ability to oversee cutting-edge technologies.


In the past, our limited understanding of the brain’s electrical system has made managing neurological disease and trauma difficult. But utilizing the power of artificial intelligence, BCIs could potentially diagnose and treat complex neurological conditions. If BCI technology works as hoped, it could grant patients with motor impairments the chance to walk, stimulate breakthroughs in treating mental illness, or reverse degenerative brain diseases such as Parkinson’s. “Revolutionary” would be an understatement.

On the other hand, using BCIs in patients or ostensibly healthy people raises serious concerns. Inserting objects into the brain, even via a harmless-sounding “sewing machine,” creates obvious safety issues, such as damaging neural tissue or causing an immune reaction or infection in the brain. BCIs also raise privacy and security issues because of their digital component. Who will get to collect and use the data generated by patients’ brains? How will BCIs be protected from cyberattacks? Will they be available only to the wealthy? Might patients become dependent on the companies that implant them?


Musk didn’t get into any of those questions, but they will certainly be on the minds of regulators. An invasive brain machine requiring surgery—again, even if you call it a “sewing machine”—would face the most stringent approval protocols at the FDA, requiring many studies to satisfy the rigorous premarket approval process before it could go to market. Before then, Neuralink will need an investigational device exemption to test its device, but the company’s secrecy about the state of the technology makes it difficult to know when it will be ready to apply for testing approval. Meanwhile, the FDA has been suffering from the pacing problem, whereby new technologies rocket past the rules governing them. With innovations from stem cells to artificial intelligence, the agency has struggled to figure out how to fit new technologies into existing regulations. For example, the FDA argued in 2017 that its regulations for drugs also cover stem cell treatments and has been locked in bitter legal battles with outraged stem cell clinics ever since.


Technology companies hoping to move quickly in this space are now pressuring the FDA to speed up the process. While Silicon Valley has long been interested in health care, it hasn’t always been a good fit. Tech titans have in the past seemed unprepared for the extensive regulatory oversight, approval schemes, and interdisciplinary collaboration that successful health care projects require. Google Health, for instance, was a personal health data centralization system that failed for a number of reasons, including data privacy concerns and poor collaboration with doctors and insurers. Artificial intelligence, however, seems to have sparked renewed interest in Silicon Valley’s health aspirations, especially in BCIs. A.I. allows medical devices like BCIs to learn from each patient they treat and analyze that information to better treat the next patient, improving with each passing day.


Unanswered questions about which FDA rules would apply to BCI software and whether developers are protected from lawsuits (as other implantable device developers are) could push BCI companies to develop their devices in places U.S. rules and lawsuits cannot reach—and where fewer protections might be available for test subjects. Medical tourism, safety and consent problems, weak data protection, and access limited to the wealthy could quickly follow.


With these concerns looming, the FDA issued a draft guidance on BCIs earlier this year, signaling its interest in managing the technology. The draft suggests that developers use rigorous laboratory testing to identify potential risks and avoid them in later clinical testing in patients. However, the digital component of BCIs that use artificial intelligence creates other regulatory challenges, because such software can adapt as it learns from patients, effectively becoming a new device with each update. In addition to the draft guidance, the FDA issued a white paper earlier this year acknowledging the need for more adaptive rules on constantly changing software to promote both patient safety and innovation, but it hasn’t yet settled on an official policy. The process of acknowledging a regulatory issue, gathering opinions, and formally making a policy decision is time-consuming in and of itself.


It will be paramount for industry and regulators to work together to properly manage a medical device as novel as a BCI. The FDA’s pilot Pre-Cert program may offer one pathway forward. It focuses regulatory attention not on particular products but on the companies and developers making them: Once the FDA deems that a company is responsible and uses safe practices to develop software, the company does not need approval for each individual product. One option for managing BCIs would be to put the physical device through clinical trials for safety while addressing the software through more flexible programs like Pre-Cert.

With Neuralink and other BCI companies pressing forward, the time has come for regulators to take the technology seriously and for industry to place patient protection at the center of innovation. Rather than accelerating innovation unchecked, the BCI industry will need to make a good faith effort to work with device regulators to protect patients. In turn, the FDA may need to pursue more flexible programs that allow speedy yet safe innovation. Failure to compromise could instead lead to patient harm or exploitation under the guise of progress.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.
