In recent years, there’s been increasing awareness of a problem across many scientific fields—the problem of reproducibility. Can experiments be repeated (or “reproduced”) to arrive at the same result? Evidence is piling up that the answer, all too often, is no. This makes it difficult to know which results we can confidently rely on, and which are spurious.
The data outlining the problem is compelling: In 2015, the Reproducibility Project, an undertaking led by the Center for Open Science, conducted a study with hundreds of psychologists and found that fewer than half of 100 studies from high-ranking psychology journals could be reproduced (the study itself later drew a critical comment and a reply). A study published by scientists at the biotechnology company Amgen in 2012 claimed that of 53 landmark preclinical research studies, only six could be successfully reproduced. Recently, in an editorial in Science Translational Medicine, the executive vice president and chief medical officer at the pharmaceutical company Merck detailed and lamented the huge costs of relying on such a “shaky platform.”
What’s to blame for this huge problem? Causes range from the difficulty of recreating experiments that were never described in close detail to a scientific system that incentivizes researchers to cherry-pick results for publication, says Elizabeth Iorns, co-founder of Science Exchange, a platform that enables researchers to order experiments at contract labs. Iorns is also one of several team members working on the Reproducibility Project in cancer biology.
Resolving all the issues around reproducibility will likely require us to grapple with fundamental challenges in how we conduct scientific research. But there are smaller steps we can take to chip away at the causes of irreproducibility—for instance, by capturing more precise information about how experiments are conducted so they can be more easily reproduced. And what better way to capture precise information than by using robots? Biomedical research labs have been adopting robots for decades, particularly for experiments that require the same steps to be carried out many times over with precision (one example is the polymerase chain reaction, a technique used to multiply a piece of DNA into many copies). Research labs can either purchase the robotic machines themselves, or they can outsource to contract research organizations that have them. More recently, “robotic cloud labs” have gone even further, offering a service in which researchers send their protocols directly to a robot that carries out the experiment. Beyond making lab work more precise and efficient, researchers and lab founders alike believe that the automation could make experiments more easily reproducible.
There are currently two robotic cloud labs of this kind: Transcriptic, founded in 2012, and Emerald Cloud Lab, which grew out of Emerald Therapeutics, founded in 2010; both have raised venture capital funding and are based in the San Francisco Bay Area. Here’s how it works: Researchers set up their protocols directly using online platforms. The experiments are then carried out by robots, which work around the clock, and the data are sent back to the researchers. Transcriptic’s focus is molecular biology, and its customers range from academics to startups to big pharmaceutical companies. ECL can run a greater range of protocols across the broader life sciences (it also currently has a wait list and isn’t taking on new customers). Both offer a cost-effective option for research groups that can’t afford the capital investment in robotic equipment themselves or would rather outsource the work.
So, what are the advantages of using a service like this for reproducibility? Transcriptic’s COO, Yvonne Linney, noted that when researchers conduct the lab work themselves, the materials and methods that they typically share about their studies “are not very detailed and … there’s a lot of potential human interpretation.” Many protocols are written with imprecise instructions like “incubate overnight,” says Conny Scheitz, Transcriptic’s lead staff scientist. But in a protocol run on robotic equipment, she said, you are forced to state exactly how long to incubate down to the second. Another side benefit of having to tell a robot exactly what to do? This creates a precise record of the experiment.
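The difference Scheitz describes can be sketched in a few lines of hypothetical code. Nothing below is Transcriptic’s actual interface—the function name and fields are invented for illustration—but it shows the core idea: a machine-readable protocol step has no room for an ambiguous instruction like “incubate overnight,” and each step doubles as an exact record of what was done.

```python
# Hypothetical sketch of a robotic protocol step (not Transcriptic's real API).
# Every parameter must be an exact value; "overnight" cannot be expressed.

def incubate(sample_id: str, temp_c: float, duration_s: int) -> dict:
    """Build one machine-readable protocol step; the dict doubles as a record."""
    if duration_s <= 0:
        raise ValueError("duration must be an exact, positive number of seconds")
    return {"op": "incubate", "sample": sample_id,
            "temp_c": temp_c, "duration_s": duration_s}

# A human protocol might say "incubate overnight at 37 C"; a robot needs this:
step = incubate("plate-1", temp_c=37.0, duration_s=16 * 3600)  # exactly 16 hours
```

The forced precision and the resulting record are two sides of the same coin: once every step is explicit data, it can be logged, audited, and replayed.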
Brian Frezza, one of two founders of ECL, explained that his experiences as a researcher convinced him of how beneficial these automated instructions could be. When doing his own work, he always strived to be detail-oriented, but when he tried to turn some of his experiments into a protocol for ECL, he found that there were a “staggering number of questions” about the method and materials that he had left unanswered. He attributed the gaps to the fact that he had a picture in his head that he hadn’t fully spelled out. “The robot doesn’t understand ambiguity at all,” he said. Another benefit—it will follow the instructions exactly the same way each time.
This precision, and the record it produces, make it easier for protocols to be shared so experiments can be retested. Iorns described the difficulty that researchers working on the Reproducibility Project in cancer biology have had in tracking down the details of materials and methods in original studies. Often academic researchers had moved on from the original lab and were difficult to contact. Even if they could be found, many kept records in nonelectronic lab books and weren’t able to provide details. By shifting to electronic systems such as a robotic cloud lab’s platform, researchers would have a way of tracking the fine-grained details and providing them later on. (Iorns also works part time for Y Combinator, which is an investor in Transcriptic.) The robots also capture details about the experimental environment, such as the temperature and humidity of the room where the experiment is conducted, which can influence results, says Scheitz. Plus, the robotic machines are calibrated and serviced regularly, which is also tracked and noted.
Switching a tricky experiment from the students in his lab to Transcriptic’s robots helped Justin Siegel, a professor at UC–Davis, save both time and money. His team used Transcriptic to create a large data set of mutant enzymes for its study, which was published in PLOS ONE. “The technique that we use to generate mutant enzymes is a little finicky, and sometimes it would take students two or three attempts to get it right,” he said. Once the team switched to using robots, the experiment worked the first time through, saving costly materials and time. By outsourcing some protocols to robots, he added, students can spend their time thinking about how to analyze data and design the next experiment, rather than just “turning the crank.” Siegel said that his team shared the protocol on Transcriptic’s website, where it will enable other researchers to reproduce the experiment and, if they choose, contribute their data to a database that’s in the works.
Robotic cloud labs are still in their early stages. While researchers can easily run a number of standard protocols through Transcriptic’s user interface, new experiments take more upfront investment because their protocols must be written in the programming language Python. And there’s a big reproducibility issue that robotic cloud labs don’t address: Running the same experiment in a different lab could still be very challenging because of differences in materials or machines. So the labs are far from a panacea. But they do help make experiments more precise and reproducible. And by shifting merely mechanical work to robots, they let researchers spend time doing what they really want to be doing—their research.
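To make the Python point concrete, here is a minimal, hypothetical sketch—not Transcriptic’s or ECL’s actual protocol format—of why expressing a protocol as code and plain data helps with sharing: the whole experiment can be serialized verbatim, handed to another researcher, and re-run without any human interpretation.

```python
import json

# Hypothetical protocol as plain data; step names and fields are illustrative.
protocol = [
    {"op": "transfer", "from": "reagent-A", "to": "well-B2", "volume_ul": 50},
    {"op": "incubate", "target": "well-B2", "temp_c": 37.0, "duration_s": 57600},
]

record = json.dumps(protocol, sort_keys=True)  # exact, shareable record
assert json.loads(record) == protocol          # round-trips losslessly
```

A text file like `record` is what gets posted alongside a paper or a shared protocol: unlike a prose methods section, it leaves nothing to fill in.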
This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.