In his critique of my recent **Slate** article about the problem of overusing technology in math education, Paul J. Karafiol begins by setting up two straw men: that I believe American math students do worse today than in the past, and that American students lag behind those elsewhere.

Let’s take the longitudinal question first. Exams and grading standards change over the years, so it’s difficult to make a meaningful comparison between today’s calculus exam and 1997’s. There are better and worse ways to make such comparisons, but to my mind the debate is not worth having. The question is how well we are doing compared to how well we could be doing, not how we compare to the past. Karafiol’s cheery gloss on this comparison is as beside the point as the numbers others use to augur doom.

Along those lines, comparing the United States with Singapore or Finland or South Korea also never struck me as consequential. They are all much smaller countries, with very different cultures. Nor are we competing with them, for reasons I’ve gone into elsewhere. I just don’t think you learn a whole lot by comparing our average test scores with theirs. Besides, parts of the United States like Massachusetts tend to do well on international comparisons, while others like Mississippi do badly. The reasons for this have little to do with curricular differences and everything to do with demographic fault lines well beyond the scope of the present debate.

As for Karafiol’s parable in which his calculus students’ calculators do the “heavy lifting”: it proves only that his students have good calculators, not that they learned anything. It reminds me of this Tom Toles cartoon.

Karafiol makes the point that technology is inherently neither good nor bad; how we use it determines its value. This is true enough, in the trivial sense that it isn’t guns but people who kill people. The question is the effect of technological tools as they are actually used in American classrooms today. And though different tools—graphing calculators, interactive whiteboards, a slew of software packages—are of course different, they share an underlying commonality I try to get at in the piece. Could any one of these in principle be used in a virtuous way? Sure. (Speaking of software generally, not of particular packages, some of which are asinine to the bone.) But the relevant question is how such technologies are in fact used, just as the relevant question with regard to guns is not how they might be used in principle but the violence that they, in the real world, enable.

Others criticized the piece for neglecting a body of empirical evidence that “proves” the efficacy of some technology or other.

There wasn’t room in the piece to make a comprehensive listing of the many empirical studies from the education literature that I read in the course of researching the article, nor will I do so now. What I will do is make an observation.

The standard design takes two sets of students, a control group and a treatment group. Use the technology with the treatment group but not the control group. Find a difference in performance larger than could plausibly have arisen by chance. Pronounce the technology effective.

But the fact that a result is statistically significant in this mechanistic sense does not mean it is meaningful. The number of confounding factors is large, and the potential for researcher bias is profound. I pointed out a couple of obvious examples of such bias in the piece; there are many more. This is why clinical trials in medicine are closely regulated by the FDA and, when possible, conducted double-blind. It’s nearly impossible to run a double-blind study of pedagogical techniques; the methodological poverty of education research is by no means confined to research on the use of technology.
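To make the worry concrete, here is a toy simulation of my own construction (not one of the studies under discussion): the “treatment” classrooms simply start with better-prepared students, the technology itself contributes nothing, and yet a naive comparison clears the conventional significance bar.

```python
# Toy illustration of a confound masquerading as an effect.
# Assumption (mine, for illustration): treatment classrooms happen to
# have better-prepared students, worth about +3 points on average.
# The technology adds exactly zero.
import random
import statistics

random.seed(0)
n = 800

# Scores out of 100: same spread, but a hidden +3 "prior preparation"
# bump in the treatment group. No technology effect anywhere.
control = [random.gauss(70, 10) for _ in range(n)]
treatment = [random.gauss(73, 10) for _ in range(n)]

def welch_t(a, b):
    """Welch's t statistic, computed by hand with the stdlib."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (mean_b - mean_a) / se

t = welch_t(control, treatment)
print(f"t = {t:.2f}")
```

With samples this size, |t| lands well above the rough 2.0 threshold for significance at the 5% level, so a researcher who never measured prior preparation would pronounce the technology effective. The arithmetic is mechanically correct; the conclusion is not.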

Karafiol argues that technology can let students access higher mathematics without getting bogged down in computation. As Evan Weinberg, a physics and math teacher at an international school in China, points out, a prominent proponent of this view is Conrad Wolfram, who runs a software company with his more famous brother Stephen. (Their best-known product is Mathematica.) Weinberg points to Wolfram’s TED talk, so let’s take that as a starting point. We’ll put to one side the fact that Wolfram is fundamentally a salesman and instead address the content of his sales pitch. Wolfram shows a video of polygons (squares, then pentagons, hexagons, dodecagons, etc.) making better and better approximations of a circle. This, he says, can be used to teach young kids calculus. But the essence of calculus is the proof of its fundamental theorem: that the procedures for determining the area under a curve and a curve’s rate of change are inverse operations, in a precisely defined sense.

Real mathematical understanding lies not in the sense that this is plausible, but in a rigorous proof of it. (In Wolfram’s example, the key step would be a proof that polygons with more and more sides give an arbitrarily good approximation of a circle, not a video showing that it looks right.) Computers are neither here nor there in that proof.

The idea that software like Wolfram’s can teach math well is about on par with the idea that raising a family in Second Life will help make you a better parent. It looks similar, sure, but it ain’t the real thing.

Karafiol and I agree that there is a shortfall of excellent math teachers. I say we should focus our efforts on increasing their number; I wish this discussion were about how best to do that, rather than about glitzy distractions from the difficult process of learning.