Congratulations on another outstanding book–original, thought-provoking, rich in data, audacious in aims but nuanced in argumentation. I was convinced by many of the main claims.
First, that both living things and human societies get more complex over time because agents compete better when they team up and specialize in pursuit of a common interest (“non-zero-sumness”), as long as they solve the technological problems of communicating and detecting cheaters. Second, that human nature–in particular, an ability to figure out how the world works and a desire to expand one’s circle of cooperators–put our species on an escalator of cultural and moral progress, culminating in today’s globalization. I see the book’s main achievement as explaining an obvious fact–that organisms and cultures get more complex and cooperative over time–in sober cause-and-effect terms, without mystical forces, Victorian sentimentality, or Western chauvinism.
But I’m not as convinced by your next suggestion–that the cosmos has, in some sense, the “goal,” “end,” “purpose,” or “destiny” of producing complex life, intelligent species, societies, and global cooperation. One attributes a “goal” to an entity only if it has a feedback mechanism that makes the entity approach the goal despite obstacles or perturbations. Granted, natural selection is a feedback process with a kind of “goal,” and so is human striving. But do the two have the same goal, and is that goal an increase of complexity in the service of cooperation?
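That criterion can be illustrated with a minimal sketch (my toy example, nothing from the book): a thermostat-style feedback loop earns the attribution of a "goal" because each step corrects part of the remaining error, so the state homes in on a setpoint despite repeated perturbations.

```python
import random

def feedback_step(state, setpoint, gain=0.5):
    # Correct a fraction of the current error each step:
    # the hallmark of a goal-directed mechanism.
    return state + gain * (setpoint - state)

random.seed(0)
state, setpoint = 0.0, 20.0
for _ in range(50):
    state = feedback_step(state, setpoint)
    state += random.uniform(-1.0, 1.0)  # perturbations the mechanism must overcome

# Despite fifty shocks, the state hovers near the setpoint.
print(round(state, 1))
```

Natural selection and human striving both pass this test; the question is whether they pass it with the same setpoint.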
Here is an alternative, in which nonzero-sum interactions are just one of many handy things to have, rather than the destiny of life on earth:
1. Natural selection has the “goal” of enhancing replication, period. An increase in complexity and cooperation is just one of many subgoals that help organisms attain that ultimate goal. Other subgoals include increases in size, speed, motor coordination, weaponry, energy efficiency, perceptual acuity, parental care, and so on. All have increased over evolutionary time, but none is the “natural end” of the evolutionary process. Would anyone single out lethal weaponry as “highly likely” or our “destiny,” just because weapons have become more lethal over organic and human history?
2. A species with humanlike intelligence was no more “in the cards” than a species with an elephantlike trunk–both are just handy biological gadgets. (Of course, given enough time, humanlike intelligence is near-certain to evolve; but given enough time, anything with nonzero probability is near-certain to evolve, including an elephantlike trunk.) A brain with the intelligence necessary for cooperation and specialization is metabolically expensive and biomechanically hazardous, and evolves only when the evolutionary precursor and current ecosystem make the benefits exceed the costs. Most lineages (e.g., of plants) never got smart, and all lineages of animals on earth except ours were stuck well beneath the subgenius level.
Perhaps (as I speculated in How the Mind Works) the outsize brain of Homo sapiens evolved because our ancestors lived in groups, hunted, had hands, and saw in color and stereo. Perhaps without that rare conjunction, big brains aren’t worth the cost, and don’t evolve.
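The parenthetical probability point in (2) can be made explicit (a standard calculation, in my notation rather than anything in the book): if an outcome has any fixed chance p > 0 of arising in each of n independent stretches of evolutionary time, then

```latex
P(\text{arises at least once in } n \text{ tries}) \;=\; 1 - (1 - p)^n \;\longrightarrow\; 1 \quad \text{as } n \to \infty, \text{ for any } p > 0.
```

This is why "given enough time" makes humanlike intelligence and elephantlike trunks alike near-certain, and why the interesting question is one of costs and timescales, not bare possibility.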
3. Humans do not directly seek wider cooperation and more complex societal organization; they care only about comfort, sex, family, friendship, knowledge, pride, being entertained, and so on. An increase in social complexity is just one way of getting more of these good things. So the similarity between the histories of organisms and societies reflects a coincidence of goals, not a single underlying process:
Complexification often helps organisms reproduce better, and it often helps humans become happier, so we see it in both histories. But other things make people happy and don’t help replicators reproduce–for example, music, humor, and dance–and we see them improve only in human history; plants and animals haven’t gotten funnier or more musical or better at dancing. Only by cherry-picking trends that are similar across organic history and human history can one claim that the two processes are fundamentally the same.
4. As a result of (3), global cooperation and moral progress will not increase toward some theoretical maximum or Teilhardesque Omega Point, but will level off at a point where the pleasures resulting from global cooperation (having more stuff than you had before) are balanced by the pleasures resulting from non-cooperation (having more stuff than your neighbors, or the warm glow of ethnic chauvinism).
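The leveling-off claim in (4) can be pictured with a toy model (entirely my own gloss, with made-up functional forms): let the pleasure of absolute wealth grow with diminishing returns in the level of cooperation, while the pleasure of outdoing one's neighbors shrinks as cooperation equalizes outcomes; total pleasure then peaks at an interior level of cooperation, short of the maximum.

```python
import math

def total_pleasure(c):
    # c in [0, 1]: hypothetical fraction of interactions that are cooperative
    absolute = math.log(1 + 9 * c)  # more stuff than before: diminishing returns
    relative = 1.0 - c              # more stuff than neighbors: eroded by cooperation
    return absolute + relative

# Scan for the maximum: pleasure peaks short of full cooperation.
levels = [i / 100 for i in range(101)]
best = max(levels, key=total_pleasure)
print(f"pleasure peaks at c = {best:.2f}, not at c = 1.00")
```

With these particular (invented) curves the peak falls near c = 0.89 rather than at c = 1.00: cooperation levels off where the marginal gain from having more no longer outweighs the marginal loss of having more than others.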
This picture is compatible with the modest proposal that organisms and societies get smarter and more cooperative over time, but denies the more ambitious proposal that those developments are somehow inherent to the nature of things. I’d be curious to see how you would defend the stronger position.