This essay is about a complicated question, and will likely seem a little afield for new readers, but it’s actually in continuity with a couple subjects that’ve long interested me: “the benefits of intentional self-disadvantage in competitive contexts” (what I conceptualized as “foolishness” in this essay), and “the dynamics of moral innovations (in Balaji Srinivasan’s sense) that animate new societies.”
Ultimately what we’ll be doing between this essay and the next is just connecting the dots between the moral innovations I explicated within Holy Fools and two other essays I think are really helpful: Scott Alexander’s The Early Christian Strategy and Alex Danco’s Innovation takes magic, and that magic is gift culture. The ultimate goal is to see if we can’t expand the degrees of freedom available for solving coordination problems by questioning whether the rational move in a competitive context is always the best move.
I’ve wanted to explore these ideas in more detail for probably ten years, so, three quick/early qualifiers:
First, even if you’re not into religious stuff, just squint and notice how similar a new religion looks to a new society (in Balaji’s sense), a new company, or a new movement. Hopefully that’s a way into the topic without the baggage (positive or negative) that frequently comes with talking about something like Christianity. It is, after all, the largest religion in the world (currently between 29% and 33% of the world’s population), yet it started as just another new thing in a competitive social environment—so, something went right for it, and it’s worth exploring what that “something” is (especially since the underlying ideas could potentially be incredibly generative and useful for your particular thing).
Second, this is essentially one long essay that I’ve broken into two parts (this one and another one coming very soon)—so, for expediency’s sake, I’ll hit the highlights of the three essays I referenced to make my points but I basically can’t rehash the totality of each. So if you want a deep-dive into these topics, it’s worth reading all three essays (1, 2, 3, in that order) for maximum context.
Third and lastly, in Holy Fools, I wrote this:
I suspect that self-disadvantaging meaningful sacrifices in competitive contexts unlock possibilities for the game that would normally be impossible by self-interested actors acting rationally (or as-rationally-as-possible, given what we know about behavioral economics). But I’ll revisit that in another essay.
This is that essay!
Anyway, enough ado.
1.
Once upon a time, Scott Alexander wrote a blog post called Meditations on Moloch. Others were inspired by the ideas and continued thinking about them. As a starting point, I’ll recap one explanation of the “Moloch” concept as explained by Liv Boeree on the Bankless podcast.
What is Moloch? Moloch is the name of a force, composed of game theoretic incentives, that can lead people within a competitive system to sacrifice more and more of their other values in order to win. It’s the god of unhealthy competition, and also the god of coordination failure—where if you had a God’s-eye-view and God’s-ability-to-coordinate, then X-[best]-situation would be ideal for everyone involved, but you aren’t God and can’t coordinate everyone, so Y-[worse]-situation is what you have to settle for.
Here are two simple examples to illustrate:
First: imagine you’re sitting down in the stands at a sports game. Everything is great: you can see the field perfectly fine, and all is well in the world. Then some people in the front row stand up to see above the railing partially blocking their view, but then, the crux: they don’t sit back down. Well, now the people sitting behind those people can’t see, so they stand up, which blocks the view of the people behind them, so they stand up. Eventually everyone’s standing and no one’s happy about it (sitting was so much more comfortable!). From a God’s-eye-view and God’s-ability-to-coordinate, then sitting down would be ideal for everyone involved, but you aren’t God and can’t coordinate everyone, so standing up is what you have to settle for.
Second: imagine you’re a woman using Instagram, and say, hypothetically, you want to look as attractive as possible to build an influencer following. You hear about an app called Facetune that, quite literally, works as auto-tune for your face: it makes you look even more attractive, and, the crux—you’ve heard all the other women trying to build influencer followings have already started using it. From a God’s-eye-view and God’s-ability-to-coordinate, then “not auto-tuning your face” would be ideal for everyone involved, but you aren’t God and can’t coordinate everyone, so “every would-be influencer on Instagram looks vaguely edited” is what everyone has to settle for.
Now wouldn’t it be great if everyone at the sports game could coordinate to sit down at the same time, or every would-be influencer on Instagram could agree to all stop using Facetune at the same time? It would mean everyone at the sports game could comfortably watch the game again, or unhealthy and edited beauty standards wouldn’t be quite-as-propagated from influencers to younger generations of women, say.
Unfortunately you can’t coordinate that, because everyone is distributed and centralized authority over everyone doesn’t exist—so everyone is worse off because you can’t coordinate simultaneously. Moreover, even if you could coordinate a one-time effective solve, it’s difficult to enforce continued coordination effectively: the people at the front might want to stand again to see above the railing, or some women may seek an advantage against the non-Facetuners by downloading Facetune again.
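To make the incentive gradient concrete, here’s a minimal sketch in Python (with made-up payoff numbers, purely illustrative) of why the stadium unravels: once the row in front of you stands, standing becomes your best response, even though the all-sitting outcome beats the all-standing one for everybody.

```python
# Toy model of the stadium cascade: each row chooses "sit" or "stand".
# Seeing the field is worth 10; sitting comfort is worth 3. You see the
# field if you stand, or if you sit and the row in front of you also sits.

SEE, COMFORT = 10, 3

def payoff(my_action, front_action):
    sees_field = (my_action == "stand") or (front_action == "sit")
    return SEE * sees_field + COMFORT * (my_action == "sit")

def best_response(front_action):
    return max(["sit", "stand"], key=lambda a: payoff(a, front_action))

# Everyone starts seated; then the front row stands to see over the
# railing, and each row behind best-responds to the row in front of it.
rows = ["stand"] + ["sit"] * 4
for i in range(1, len(rows)):
    rows[i] = best_response(rows[i - 1])

print(rows)  # every row ends up standing

# The all-sitting world everyone just lost was strictly better for all:
print(payoff("sit", "sit"), ">", payoff("stand", "stand"))
```

No row does anything irrational here, which is exactly Moloch’s signature: each individually sensible best response adds up to an outcome everyone likes less.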
So Moloch is the game theoretic incentive—experienced by individuals within the game as a “force,” or “temptation,” or “pull”—to keep doing whatever it takes to “win” that game. And here’s the kicker: saying “you should stop using Facetune because everyone using it creates a worse experience” doesn’t work when only one person stops using it—then that person’s just less hot online than everyone else, and they pay the cost rather than everyone paying the cost together.
The key thing to note is that, from a particular angle, Moloch is powered by two basic ingredients: what people want (i.e., desire), and what people are willing to do to get what they desire (i.e., ethics).
If you don’t care about being maximally hot on Instagram, you won’t use Facetune. You’ll still live in a world where everyone else seems marginally hotter than you online, but you don’t really care (i.e., you don’t desire that), so you’re not influenced to unhealthily compete to win the prize (i.e., your ethics don’t degrade to satisfy this desire).
But the underlying idea here is what I wrote at the beginning of Holy Fools:
“Winning comes with good things, losing comes with bad things. Why would we want to lose?”
So this is the groundwork for this whole conversation: the desire to win (however that’s defined) means we basically all look at the lay-of-the-land of incentives of whatever “game” we happen to be talking about in a particular and predictable way. That way is, “How do I win?” or at the very least, “How do I not suffer the consequences of losing?” Or, as I wrote in Holy Fools:
“What I mean is that most people, most of the time, have an impulse to filter whether something is a good idea or a bad idea through the qualifying step of, “Will this help me or not?” What’s in it for me? What’s the gain, the profit, the thing I’m getting by doing fill-in-the-blank.”
This is all a necessary prelude to understand where we’re going. So let’s look at a group of people that somehow evaded Moloch while, perhaps inextricably relatedly, looking certifiably insane in the process.
2.
Enter another Scott Alexander essay, titled The Early Christian Strategy:
In 1980, game theorist Robert Axelrod ran a famous Iterated Prisoner’s Dilemma Tournament.
He asked other game theorists to send in their best strategies in the form of “bots”, short pieces of code that took an opponent’s actions as input and returned one of the classic Prisoner’s Dilemma outputs of COOPERATE or DEFECT. For example, you might have a bot that COOPERATES a random 80% of the time, but DEFECTS against another bot that plays DEFECT more than 20% of the time, except on the last round, where it always DEFECTS, or if its opponent plays DEFECT in response to COOPERATE.
In the “tournament”, each bot “encountered” other bots at random for a hundred rounds of Prisoners’ Dilemma; after all the bots had finished their matches, the strategy with the highest total utility won.
To everyone’s surprise, the winner was a super-simple strategy called TIT-FOR-TAT:
Always COOPERATE on the first move.
Then do whatever your opponent did last round.
This was so boring that Axelrod sponsored a second tournament specifically for strategies that could displace TIT-FOR-TAT. When the dust cleared, TIT-FOR-TAT still won - although some strategies could beat it in head-to-head matches, they did worst against each other, and when all the points were added up TIT-FOR-TAT remained on top.
In certain situations, this strategy is dominated by a slight variant, TIT-FOR-TAT-WITH-FORGIVENESS. That is, in situations where a bot can “make mistakes” (eg “my finger slipped”), two copies of TIT-FOR-TAT can get stuck in an eternal DEFECT-DEFECT equilibrium against each other; the forgiveness-enabled version will try cooperating again after a while to see if its opponent follows. Otherwise, it’s still state-of-the-art.
The tournament became famous because - well, you can see how you can sort of round it off to morality. In a wide world of people trying every sort of con, the winning strategy is to be nice to people who help you out and punish people who hurt you. But in some situations, it’s also worth forgiving someone who harmed you once to see if they’ve become a better person. I find the occasional claims to have successfully grounded morality in self-interest to be facile, but you can at least see where they’re coming from here. And pragmatically, this is good, common-sense advice.
For example, compare it to one of the losers in Axelrod’s tournament. COOPERATE-BOT always cooperates. A world full of COOPERATE-BOTS would be near-utopian. But add a single instance of its evil twin, DEFECT-BOT, and it folds immediately. A smart human player, too, will easily defeat COOPERATE-BOT: the human will start by testing its boundaries, find that it has none, and play DEFECT thereafter (whereas a human playing against TIT-FOR-TAT would soon learn not to mess with it). Again, all of this seems natural and common-sensical. Infinitely-trusting people, who will always be nice to everyone no matter what, are easily exploited by the first sociopath to come around. You don’t want to be a sociopath yourself, but prudence dictates being less-than-infinitely nice, and reserving your good nature for people who deserve it.
Reality is more complicated than a game theory tournament. In Iterated Prisoners’ Dilemma, everyone can either benefit you or harm you an equal amount. In the real world, we have edge cases like poor people, who haven’t done anything evil but may not be able to reciprocate your generosity. Does TIT-FOR-TAT help the poor? Stand up for the downtrodden? Care for the sick? Domain error; the question never comes up.
Still, even if you can’t solve every moral problem, it’s at least suggestive that, in those domains where the question comes up, you should be TIT-FOR-TAT and not COOPERATE-BOT.
This is why I’m so fascinated by the early Christians. They played the doomed COOPERATE-BOT strategy and took over the world.
So, as a preface: the Iterated Prisoner’s Dilemma is apparently not a canonical coordination problem, but repeated play does introduce coordination-like elements that enable conditional cooperation—which is apparently why some sources discuss the Iterated Prisoner’s Dilemma under the broader umbrella of coordination and collective action problems.
My point is that the Iterated Prisoner’s Dilemma isn’t 100% the domain of Moloch—of pure coordination problems driving unhealthy competition—but it’s still interesting, and still useful for getting us back, eventually, to Moloch. Because the early Christians were in the peculiar position of facing tremendous Moloch-like pressure to at least play TIT-FOR-TAT (at a minimum), yet were taught to behave, and correspondingly did behave, exceedingly like COOPERATE-BOTS.
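To ground the tournament mechanics Scott describes, here’s a minimal sketch of an Axelrod-style round-robin. This is my own toy reconstruction, not Axelrod’s actual entrant pool: three TIT-FOR-TATs, one COOPERATE-BOT, and one DEFECT-BOT, with each pair playing 100 rounds under the standard Prisoner’s Dilemma payoffs.

```python
# Standard Prisoner's Dilemma payoffs: T=5, R=3, P=1, S=0,
# keyed by (my_move, opponent_move).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_hist, opp_hist):
    return opp_hist[-1] if opp_hist else "C"   # open nice, then mirror

def cooperate_bot(my_hist, opp_hist):
    return "C"                                  # always cooperate

def defect_bot(my_hist, opp_hist):
    return "D"                                  # always defect

def match(a, b, rounds=100):
    """Play `rounds` of PD between strategies a and b; return total scores."""
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        pa, pb = PAYOFF[(ma, mb)]
        ha.append(ma); hb.append(mb)
        sa += pa; sb += pb
    return sa, sb

# A pool where retaliators are common (as they were in Axelrod's field).
pool = [("tft1", tit_for_tat), ("tft2", tit_for_tat), ("tft3", tit_for_tat),
        ("coop", cooperate_bot), ("defect", defect_bot)]
scores = {name: 0 for name, _ in pool}
for i in range(len(pool)):
    for j in range(i + 1, len(pool)):
        (na, fa), (nb, fb) = pool[i], pool[j]
        sa, sb = match(fa, fb)
        scores[na] += sa
        scores[nb] += sb

# TIT-FOR-TATs finish on top; DEFECT-BOT finishes last.
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```

The ranking mirrors Axelrod’s headline result: in a population where retaliators are common, TIT-FOR-TAT tops the table, COOPERATE-BOT does fine until DEFECT-BOT finds it, and DEFECT-BOT’s exploitation gains don’t cover what it loses against everyone who punishes it.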
The trouble with a statement like that is most people don’t have the slightest clue about the context or actual behavior of the earliest Christians (nor, as you may be feeling, why they have anything to do with solving Moloch-type coordination problems). So to even remotely begin to unpack Scott Alexander’s “why” question at the end, we first have to understand something about a type of counter-intuitive action—something I like to call “foolishness,” but a particular type of foolishness native to this exact conversation.
3.
Liv Boeree says there are two things needed for Moloch to thrive:
a poorly designed competitive system that turns into an ethical race to the bottom
a critical mass of people within that system who lack the wisdom to opt out of wanting to win that game in the first place
Let’s ignore #1 for now and just assume every system is relatively poorly-designed. So what about #2: a critical mass of people without the wisdom to opt out of the game?
That element opens the door to what’s most important and interesting about this conversation, and is an excellent moment to introduce the core concept from Holy Fools: an unintuitive and different concept of foolishness.
Normally, usage of the word “foolishness” means something like “unintentional stupidity.” But when I wrote about foolishness in Holy Fools, what I tried to sketch was something more like “intentional anti-optimization toward winning, whether by defaulting on the worthiness of the game or on the apparent means of winning, within a competitive context governed by game theoretic forces.” It can also mean something like “willingness to incur relatively-clear costs to advance relatively-unclear means or ends.” Or even, put maximally-simply, just something like “intentional stupidity.”
Recall that Moloch is essentially game theoretic pressure, powered by a desire to win that shapes ethics, operating on people who lack the wisdom to opt out or try something different. Liv Boeree says opting out of certain games is wisdom. But I would posit it doesn’t feel like wisdom most of the time; instead, it usually feels quite painful and counterintuitive. That’s why I prefer repurposing the term foolishness instead of calling what we’re describing wisdom: because it feels foolish, given the incentives you’re faced with.
So between these understandings of this sense of “foolishness,” and with some of the context of Moloch we’ve already sketched, you can start to see a conceptual picture of the type of counter-intuitive action I’m talking about.
As an aside, Moloch-like incentives define competitive intuition, in some ways: just like for the sole person sitting down at the sports game while everyone else stands, it’s usually pretty clear what the competitive move could or should be (i.e., stand up). So the counter-intuitive action is often clarified first by some strategic sense of how to win whatever game you happen to be talking about (or at least of what would contribute to winning, if identified).
Anyway, moving along: Scott Alexander is a rare writer in a lot of ways, not least in his accurate understanding of early Christianity as extraordinarily strange, in a way that’s basically unknown to the modern observer (who in the US might just think Trumpism = Christians, or whatever). In other words, Scott Alexander gets that the early Christians were legitimately foolish in the sense I’m redefining the word to describe—which in their case means they really were basically “COOPERATE-BOTS in a limited domain” (a phrase he coins later in his essay). By that I mean each of their varieties of foolishness was foolish in its own ethical category, where TIT-FOR-TAT could’ve been their play, yet they went COOPERATE-BOT instead.
To illustrate, let me first just liberally quote Scott Alexander’s examples, then I’ll reiterate mine from Holy Fools. First, Scott’s:
“[From the Bible’s] Matthew [Chapter] 5:
‘You have heard that it was said, ‘Love your neighbor and hate your enemy.’ But I tell you, love your enemies and pray for those who persecute you . . . If you love those who love you, what reward will you get? Are not even the tax collectors doing that? And if you greet only your own people, what are you doing more than others? Do not even pagans do that?’
Talk is cheap, but The Rise Of Christianity suggests the early Christians pulled it off. For example, even though pagan institutions would not help indigent Christians, Christians tried to give charity to Christian and pagan alike, even going so far as to help nurse pagans during the plague (when nursing a victim conferred a high risk of contagion and death). Even Emperor Julian, an enemy of Christianity, admitted it lived up to its own standards: ‘When the poor happened to be neglected and overlooked by the priests, the impious Galileans observed this and devoted themselves to benevolence . . . [they] support not only their poor, but ours as well, [when] everyone can see that our people lack aid from us.’
In 1 Corinthians 6, Paul is asked whether it is acceptable for one Christian to pursue a lawsuit against another Christian in a pagan court. He answers:
‘The very fact that you have lawsuits among you means you have been completely defeated already. Why not rather be wronged? Why not rather be cheated?’
We get a similar picture from the stories of the martyrs. Many of them prayed for the Romans while the Romans were in the process of torturing and killing them; Polycarp even cooked them a meal.
If the Christians had merely been TIT-FOR-TAT, it would be easy to tell a story of their victory. The Roman Empire was corrupt and decadent to the core. People were looking for a community they could trust. Christianity offered access to a better class of friends who wouldn’t immediately rob or betray you when your guard was down. By providing a superior alternative to the low-trust pagan world, it was irresistible on a purely rational economic basis.
But this story sounds more worthy of the mystery cults. Mystery cults are a great structure for mutual aid; we see this today in groups like the Freemasons (cf. Backscratcher Clubs). Everybody knows who’s on the inside (and needs to be mutually aided) and who’s on the outside (and can be ignored). The initiatory structure holds off freeloaders and makes sure the people on the inside are of approximately equal rank (so that you get as many benefits as you give) and can be held accountable if they don’t contribute.
Since Christianity did better than the mystery cults, there must have been some reason that COOPERATE-BOT beat TIT-FOR-TAT in the particular environment of Roman religion, defying all normal game theoretic logic.
His whole framing perfectly echoes philosopher Kierkegaard’s comments on early Christianity: “Take any words in the New Testament and forget everything except pledging yourself to act accordingly. My God, you will say, if I do that my whole life will be ruined. How would I ever get on in the world?”
In Holy Fools I unpack several examples of these “life-ruining” teachings at length, but here’s the quick versions for the sake of expediency here (mapped, even if squinting a bit, against TIT-FOR-TAT business-as-usual vs. COOPERATE-BOT-type foolishness):
Forgiveness
“The imperative to forgive is directly opposed to the ability to preserve power: thus, wisely (by the world’s standards), we avoid forgiveness in competitive contexts, lest we fail to “make an example” of our counterparty or we “encourage similar behavior” from other competitors. Let’s note that these objections aren’t wrong, per se – that’s the nature of the game – yet obedience to Jesus’ teaching here would disadvantage us (possibly even cause us to lose). In other words, they’d make us foolish – and thus they’re not to be followed.”
TIT-FOR-TAT would’ve punished those who warranted punishment. COOPERATE-BOTS forgive.
Generosity
“Notice how both the early church and the rich young ruler had property – they had assets, of the real estate asset class: yet the early church “sold the orchard” while the rich young ruler couldn’t bring himself to do so (for without financial assets what power do you really have, even in a pre-capitalistic economy like that of the Roman empire?). Radical generosity involves “selling the orchard,” but can a more foolish move be imagined in a capitalistic society? For the most part the vast majority of the world since time immemorial is born without resources to begin with – if you’re not impoverished, then to what end is it to return to poverty, to before assets and capital were accumulated, before the safety and comfort provided by wealth, back to the vulnerability of lack? It’s sheer foolishness by usual standards.”
TIT-FOR-TAT would’ve looked, at minimum, like what everyone outside their tribe did: collect money and use it to benefit only themselves. COOPERATE-BOT looks like generosity toward Christians and non-Christians alike (referenced by Scott Alexander), and elsewhere in Jesus’ teachings it looks like generosity to the point of one’s own destitution (referenced here).
Enemy-Love
“Consider the case of one indigenous population faced with the mortal threat of Spanish colonialists from the book Nonviolence: The History of a Dangerous Idea by Mark Kurlansky:
‘In 1542 de las Casas claimed to have seen the indigenous population of the Caribbean island of Hispaniola reduced from three million to two hundred survivors. The reason, he said, that this extermination was possible was that ‘of all the infinite universes of humanity, these people are the most guileless, the most devoid of wickedness and duplicity, the most obedient and faithful to their native masters and to the Spanish Christians whom they serve… Yet into this sheepfold, into the land of meek outcasts there came some Spaniards who immediately behaved like ravening wild beasts…’
In the face of “ravening wild beasts,” how does one interpret and express Jesus’ teaching about loving one’s enemies in Matthew 5?…
Recall the Olympics commissioner bemoaning the “suicide” of the competitors in the badminton tournament. Is there a more suicidal approach than loving one’s enemies in a competitive context? It’s utter foolishness to expect to love one’s enemies in the midst of a competition, whether mortal (as in this case) or merely a game with stakes.”
TIT-FOR-TAT fights back to defend oneself against “ravening wild beasts” seeking conquest. COOPERATE-BOT looks like quite literally cooking a meal for the people intending to publicly burn you in an arena (Scott Alexander’s Polycarp reference).
Anti-prestige: good in secret
“But there’s something in “practicing one’s righteousness before men to be seen by them” (a line from Jesus earlier in the passage) that Jesus critiques and warns against, something about raising one’s status by virtue. Doing good in secret seems relatively foolish when put in this light. But the foolish path is the path Jesus prescribes.”
TIT-FOR-TAT sees others burnish their reputations through public acts of good and does the same. COOPERATE-BOT doesn’t burnish their reputation through public acts of good, yet still does the acts of good.
Anti-prestige: descent, not ascent
“It’s virtually impossible to become and stay a leader – whether informally through influence and audience, or formally through power and position – without some degree of respect and prestige being offered to you.
Steven Pinker offers the following definition of status:
‘Status is the public knowledge that you possess assets that would allow you to help others if you wished to. The assets may include beauty, irreplaceable talent or expertise, the ear and trust of powerful people, and especially wealth. Status-worthy assets tend to be fungible. Wealth can bring connections and vice versa. Beauty can be parlayed into wealth (through gifts or marriage), can attract the attention of important people, or can draw more suitors than the beautiful one can handle. Asset-holders, then, are not just seen as holders of their assets. They exude an aura or charisma that makes people want to be in their graces. It’s always handy to have people want to be in your graces, so status itself is worth craving.’…
To be fair, anyone who’s held a place of honor knows it feels good, and how it feels special, and how it feels like you’re special. Recognizing that is basically table stakes to understand what’s going on here.
But there’s something about it – the elevation, the transcendence over others, the ascent – that Jesus critiques. He points in the opposite direction, and says to walk that way, not the way of ascent.”
TIT-FOR-TAT seeks high-status. COOPERATE-BOT seems to be authentically self-forgetful about status (the original Holy Fools essay unpacks this a little more clearly in the corresponding section).
Okay, so: the original essay unpacks the specific origin of each teaching, but for now, you can take my word for it that these ideas are concretely taught by the originator of Christianity (Jesus). And they’re super weird, especially in competitive contexts—which, in some sense, is the context humankind has always found itself in (competing for the security and safety of wealth, for romantic partners, for status and esteem, for safety and peace, etc.).
Just to explicitly connect the dots, note the functional equivalence between Scott Alexander’s COOPERATE-BOT and the notion of “foolishness” we’ve sketched here. Or to tie them together: the Christians were the ones who foolishly COOPERATE-BOT’d through an intensely competitive and difficult historical period. And yet somehow, two thousand years later, theirs is the largest religion on earth.
This only gets us up to speed on what Scott Alexander sees clearly, and on what Holy Fools unpacked at length. But the implications of the earliest Christians’ foolish example are, I believe, remarkably interesting and far-reaching, in unexpected directions.
4.
Why? Here’s one way: lest we think this is all only relevant to a religious sphere of life, let’s generalize and ask: if COOPERATE-BOT is a particular kind of apparent foolishness, what other kinds of apparent foolishness have yielded important civilizational innovations?
As one example, just port this conversation over into political philosophy. Specifically, put into political clothes the religious versions of COOPERATE-BOT we’ve already looked at and it looks a whole lot like this description of free speech by Scott Alexander:
“Not exactly the same, but maybe rhyming: what about modern liberalism? To the monarchs and dictators of the past, free speech might seem kind of like COOPERATE-BOT in a limited domain: the idea that elites shouldn’t make any forceful/legal effort to protect their ideological and spiritual position must sound almost as crazy as them not making any forceful/legal effort to protect themselves if attacked, or to prevent themselves from getting cheated. It is, in some sense, a unilateral surrender in the war of ideas; fascists and communists will do their best to crush liberalism, but liberals cannot ban discussion of fascism or communism. The fact that this, too, has worked, makes me think early Christianity wasn’t just a one-off, but suggests some larger point.”
Interesting—so COOPERATE-BOT is basically a way of operating within a “limited domain.” It’s an ethical posture that’s adaptable to different contexts. So what about practical decisions for intellectual or philanthropic movements?
“Since Christianity did better than the mystery cults, there must have been some reason that COOPERATE-BOT beat TIT-FOR-TAT in the particular environment of Roman religion, defying all normal game theoretic logic… Is this a consistent feature of COOPERATE-BOT strategies, or was it just luck?...
…I guess that question cashes out to “if you were involved in a movement, would you recommend COOPERATE-BOT as a strategy today?” The movements I’m actually involved in (rationalists, effective altruists) occasionally have slightly related debates. One of them involves PR: a pragmatist faction wants to stay away from hit-piece-writers, network with friendly journalists to ensure positive coverage, keep our best side forward, and de-emphasize (not deny or lie about) embarrassing bad sides. A COOPERATE-BOT faction thinks that’s what the Pharisees and tax collectors are doing, but that we’re trying to be more epistemically cooperative than everyone else and it’s our responsibility to just dump the exact contents of our brains out to anybody who asks us any question, without regard for the consequences.
There’s a parallel debate in charity funding. A pragmatist faction wants to make sure everything we fund is PR-friendly and won’t make everybody hate us or be incredibly embarrassing if it fails; a COOPERATE-BOT faction thinks we have a moral duty to fund the exact object-level highest-utility projects even if everyone will hate us for it and we’ll never get another penny of funding ever again. I wrote up an allegorical history of this conflict here. I lean towards the pragmatist side of most of these fights, if only because I’ve seen enough PR disasters to know that nobody gives you any slack for having stumbled into them only because of your exceptional moral purity.”
These are concrete modern contexts in which the decision to COOPERATE-BOT vs. operate more rationally and strategically comes to the forefront. And with this vivid depiction, we can see the early Christians’ foolishness even more clearly: if alive today, they’d likely be the faction that “thinks we have a moral duty to fund the exact object-level highest-utility projects even if everyone will hate us for it and we’ll never get another penny of funding ever again,” etc., etc.
So we return to the earliest question we posed, alongside some downstream questions as well: first, why apparently-lose instead of optimizing your odds of winning by doing the rational or predictably-higher-odds-of-success thing? How does a “loss” somehow alchemize into a “win,” and what does it have to do with coordination problems (of the type Moloch likes to instigate)? And specifically, what types of foolishness are generative (like free speech) vs. intentionally stupid toward a useless or unproductive trajectory?
For this, I think writer Alex Danco opens up some incredibly useful conceptual territory, particularly by unpacking a particular kind of foolishness in the most productive sector of the United States economy: the San Francisco technology industry, with its weird superposition of social vs. financial returns for investors (which solves the coordination problems inherent to financing startups). And from that specific example, I think we can generalize and start asking some very interesting questions that gesture at how Christianity eluded Moloch’s pressure to compete in the usual known ways (and what that means for superseding coordination problems today, both known and presently-illegible ones).
Stay tuned for the next essay in this series.
Love was, is, and will always be the key difference, and it is only a successful strategy because it has the teleologically sovereign power of God, who can raise the dead, behind it. "If we have put our hope in Christ for this life only," Paul concedes, "we should be pitied more than anyone." The reasonable response to any universe other than the one in which an omnipotent, omniscient, and omnibenevolent God covenants with you unto eternal life in glory, as expressed prophetically, in wisdom literature, and in the epistles, is that we should "eat, drink, and be merry, for tomorrow we die". In short, the difference is that the game is not perceived as final in Christianity, making the stakes of losing the game in this life trivial, by comparison.
This was an incredibly compelling and outstanding essay. Some points for further consideration:
1.) Depending on how extensively we define COOPERATE-BOT, I don't know if we can actually make the case that the early Christians exclusively acted in COOPERATE-BOT fashion as a homogeneous group. The study basically only gives the player two options: trust or punish. But there is a third option, which is generally available in most life situations, which is to quit playing. For example, there were time periods of persecution where the Romans literally went house to house asking if there were Christians in the house. Some were open and said "yes" before being dragged to the Colosseum. Others lied (or at least withheld crucial information from the Romans). We know this, because otherwise every single Christian would have been killed, and the religion would have died off. Were the less forthcoming Christians "punishing" the pagans? Or were they walking away from the game?
2.) I think it was Robert Gundry (?) who argued that strong tribal boundaries were a major element in the success of Christianity. Regardless, the church had very strong boundaries between those who were considered "in" and those who were considered "out." It was often a long and arduous process to become a Christian. Early Christians may have been generous toward the pagans, but they did not consider them "inner family."
Anyway, I do think you are on track to something really powerful and insightful. I look forward to reading more.