The prefix “neuro” is attached to an increasing number of other terms these days. There are people conducting research in neuroeconomics, neuroethics, and of course neuroscience, the broad research field that covers everything from the study of chemical receptors on individual nerve cells to the workings of the entire human brain en masse. The neuro-neologism that has perhaps made the biggest impact outside of the academic world, though, is neuromarketing.
Marketers and advertising agencies have always conducted research, and for decades they have adopted methodology and concepts from academic psychology. Traditional marketing research typically involves questionnaires, focus groups, or in-depth interviews, and it is usually aimed at answering questions about product development, advertising, or potential new markets. This seems eminently reasonable; if you have a new product that you want to launch in a specific market, why not ask the people in that market what they think of it?
But in the 1990s, some people on the fringes of the marketing world got very excited about new technology. Functional magnetic resonance imaging and other tools suddenly made it possible to visualise the workings of the human brain in unprecedented detail and precision. With an objective window into the working of the brain (in theory, anyway), marketers could neatly bypass many of the problems associated with asking subjects overt questions. A number of fMRI studies were conducted that explicitly investigated particular products and advertising, with often quite interesting and informative results. Unfortunately, fMRI scanners are bulky, very expensive, and not remotely portable. In addition, the participant has to lie supine in a narrow tube, watching images on a screen while trying to ignore the God-awful banging and clanking noise the scanner makes. To really commercialize neuromarketing, something much more portable, user-friendly, and above all cheaper was needed. It was recently found in a much older brain-recording technology: electroencephalography, or EEG.
It’s been known since the late 19th century that the brain’s activity gives off electrical signals, and the first recording of that activity in humans was in 1924. EEG works by attaching recording electrodes to the scalp. It is used in clinical settings to diagnose epilepsy and monitor coma patients, and until recently (when better technologies like MRI came along) it was also used for diagnosing tumors and strokes. EEG is also a popular neuroscience research method and can provide very detailed information about the brain’s activity while the participant performs some kind of laboratory task. EEG is only good for sensing the activity of the surface of the brain; activity lower down is just too far away from the electrodes on the scalp to get reliable data. Recording neural activity through the skull is like listening to an argument in the apartment below yours by pressing your ear against the floor; you might be able to hear some muffled voices, and maybe even some of the louder details, but you’ve no hope at all of hearing what’s happening in an apartment five floors below.
Despite the long history of EEG, the marketing world was slow to embrace it as a research method. This is because for most of its history it suffered from the same problems as fMRI: reliance on expensive, bulky equipment that was difficult and time-consuming to set up and run properly. Modern research-grade EEG systems can use up to 256 separate electrodes, and fixing them to a subject’s head using conductive gel (in order to get the best possible connection) is a messy business that can take several hours. However, technical advances in the past 10 years have changed the game; in 2007 a company called NeuroSky released the first consumer EEG device that used dry-sensor technology, removing the need to smear conductive gel in your participant’s hair. Cheap and easy wireless technology, the high power densities of lithium-ion batteries, and advances in computer technology have driven the cost ever downward and the user-friendliness upward. There are now more than 10 cheap (around $100 to $200) consumer-grade EEG headsets available. Some of them use just a single sensor, while the more sophisticated ones use up to 14 and also incorporate gyroscopic head-motion detectors and additional sensors that record the muscle movements of the face. Finally, here was the brain-imaging technology marketers had been waiting for: cheap, easy to use, and so portable that it could even be used in environments like shopping malls.
The marketing world rapidly embraced this new technology. There are now around 100 companies worldwide that offer some form of neuromarketing services, many of them using these EEG devices, and their clients are some of the largest consumer businesses in the world. Perhaps the best-known neuromarketing company is NeuroFocus, founded in 2005 by A.K. Pradeep and now a subsidiary of the Nielsen Co., a behemoth in the market-research world. Pradeep has a background in engineering and business consulting and has recently released a book titled The Buying Brain: Secrets for Selling to the Subconscious Mind. Other current big hitters in the field are NeuroSpire (recently founded by 22-year-old self-proclaimed wunderkind Jake Stauch) and Emotiv (developer of the EPOC EEG hardware and software suites that claim to monitor emotional and cognitive states in real time). These companies all use off-the-shelf hardware; their intellectual capital is largely based on their (proprietary and closely guarded) analysis techniques that claim to derive useful measures from the collected EEG data: measures related to attention, engagement, frustration, and supposedly even buying potential.
The EEG neuromarketers make big claims based on their research techniques. NeuroFocus says it measures the “neurological iconic signature” and “deep subconscious response” related to a tested product. NeuroSpire asserts that its technology allows you to “peer into the subconscious mind of the consumer.” Can they deliver on these promises?
The short answer is: It’s doubtful. There are several reasons for this, some of them technical, with (at least) one deeper, conceptual issue. First, the data that can be acquired from the cheap, plastic EEG systems with dry contacts are often pretty poor. There’s a reason why academic researchers spend hours applying 128- or 256-electrode systems to their participants’ heads with conductive gel; it’s the only way to record really high-quality EEG data. The dry-contact systems with only one or a few sensors can record only the most basic kind of data, and they do so with much lower fidelity and reliability. Second, the analysis of EEG data is complex and only minimally amenable to a one-size-fits-all approach. The signal-to-noise ratio in EEG data is poor, so sophisticated filtering and analysis techniques are required to pull the weak signal out of the background noise. Because of this poor signal-to-noise ratio, an experiment requires large numbers of repeated trials, and large numbers of subjects, to demonstrate a reliable effect.
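Why repetition matters is easy to demonstrate: averaging n noisy trials shrinks the independent noise by a factor of √n, so the signal only emerges after many repeats. A minimal NumPy sketch (the "evoked response" shape and noise level here are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weak evoked response (peak amplitude 1) buried in
# background noise that is 10x stronger.
n_samples = 200
signal = np.sin(np.linspace(0, np.pi, n_samples))
noise_sd = 10.0

def simulate_trials(n_trials):
    """Each trial is the same underlying signal plus independent noise."""
    noise = rng.normal(0.0, noise_sd, size=(n_trials, n_samples))
    return signal + noise

def rms_error(estimate):
    """Root-mean-square distance from the true signal."""
    return np.sqrt(np.mean((estimate - signal) ** 2))

# One trial is dominated by noise; averaging 400 trials cuts the
# noise by a factor of sqrt(400) = 20.
single = simulate_trials(1).mean(axis=0)
averaged = simulate_trials(400).mean(axis=0)

print(f"RMS error, 1 trial:    {rms_error(single):.2f}")   # ~10
print(f"RMS error, 400 trials: {rms_error(averaged):.2f}")  # ~0.5
```

This is also why the boredom problem in the next paragraph bites: the statistics demand hundreds of repeats, but a viewer's reaction to the same commercial doesn't survive that many viewings.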
Because neuromarketing companies don’t provide the key details of the analysis techniques they use, it’s hard to evaluate them objectively. However, they seem to take a highly automated approach, essentially plugging the raw data into a black box of algorithms that spits out a neatly processed answer at the other end. Such an approach must involve a large number of assumptions and some fancy analysis footwork to make something coherent out of the poor-quality data. In general, the same applies to getting information out of a data set as to getting information out of a human: If you torture it long enough, it’ll tell you everything you want to know, but information extracted under torture is highly unreliable. In addition, marketing-related studies are not well-suited to the kind of repetition that’s required to boost the useful signal and reduce noise; the same product or TV commercial can be presented only a few times before the participant becomes very bored indeed and therefore ceases to have any kind of meaningful reaction.
These technical concerns can (in theory) be addressed by higher-quality equipment, analysis techniques, and experimental design, so even if current use of the technology falls short, these issues aren’t necessarily fatal to the whole enterprise. However, a third issue goes much deeper and is based on a subtle but important logical objection: the use of reverse inference.
The standard logical inference in experiments is based on the manipulation of a carefully selected factor across two conditions, one experimental and one control. A difference in the data between the two conditions suggests that the manipulated factor caused the difference. For instance, in one experimental condition, participants might be asked to pay close attention to one aspect of a stimulus (the direction a dot moves in or the precise orientation of a line, for example), and in another (control) condition be given no particular instructions about the stimulus. If a reliable difference between those two conditions is seen in the EEG signal, it can be inferred that the increased attention in the first condition (relative to the second) caused that difference. This hopefully tells us something useful about how the brain works, namely that attention causes some particular change in the EEG signal.
Reverse inference goes the other way, from the brain data to a cognitive or emotional process. An experimenter might observe a change in the EEG signal and infer that this means the participant is paying more attention. Unfortunately, logically this doesn’t work. Nothing has been systematically manipulated or tested, so this is not a safe assumption. The signal change might be because the participant just thought about their boyfriend, or felt an itch on their foot, or felt hungry, or any number of other possible things. There is no unique brain signature of any particular cognitive or emotional state that can be seen with current technology. Labelling a set of brain data as a signal of attention or anxiety based on previous experimental findings is similar to saying “tomatoes are red, this apple is red, therefore this apple is a tomato.” It’s plainly nonsense.
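The asymmetry between forward and reverse inference can be made concrete with Bayes’ rule. The numbers below are entirely hypothetical, chosen only to show the shape of the problem: even if attention almost always produces a given EEG change, the reverse inference is weak whenever other mental states produce the same change.

```python
# Hypothetical, illustrative probabilities.
p_signal_given_attention = 0.90  # forward inference looks strong
p_attention = 0.10               # attention is one state among many
p_signal_given_other = 0.30      # itches, hunger, daydreams also do it

# Bayes' rule: P(attention | signal)
p_signal = (p_signal_given_attention * p_attention
            + p_signal_given_other * (1 - p_attention))
p_attention_given_signal = p_signal_given_attention * p_attention / p_signal

print(f"P(attention | signal) = {p_attention_given_signal:.2f}")  # 0.25
```

Even with these generous assumptions, seeing the signal leaves only a 25 percent chance the participant was actually paying attention; the “red apple is a tomato” problem, in numbers.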
Neuromarketing studies are plagued by reverse inference. Do any of these doubts about the usefulness of this entire enterprise really matter? Judging by the current success of the marketing of EEG neuromarketing, it seems not. The clients who employ these researchers are rarely interested in the subtleties of what the results might mean; they care about loose concepts like “engagement” or “emotion” related to their products or TV commercials, and this is what the neuromarketers claim to deliver: science-y-looking graphs with wiggly lines that show (putatively) when people are pleased by a commercial and when they’re bored. Whether the data have any real value is a very debatable point, with some academic researchers questioning their usefulness and the practitioners staunchly defending their research techniques. There is currently very little data about whether EEG-derived measures actually have any effect in the real world (e.g., predict anything at all about buying decisions), and the tendency of these companies to keep their methods secret also hampers serious evaluation.
Despite the new user-friendly EEG technology, performing brain research is still a difficult endeavour. The challenge (as it has always been) is to perform well-designed experiments that are as unambiguous in their interpretation and conclusions as possible. This is not a trivial matter, and will be true no matter what new technologies are available for studying the brain. In the mad rush to commercialize the new EEG technology, the neuromarketing researchers are currently gleefully painting over the logical and technical cracks in their methods with glossy results graphs and 3-D pie charts. Those considering using these new research methods for their latest advertising campaign would do well to heed the classic commercial advice: caveat emptor.