Do you want to put a chip in your child’s brain?
Do you want to merge your consciousness with a machine?
No? Me neither. No one actively wants to be inhuman.
Should robots take all the human jobs? *vigorous head shake* We’d love to hang onto those, please and thank you.
And what about those fully autonomous AI weapons systems that act without human direction? Not so popular in the grand scheme of things.
In summary: the trans-human/post-human fully-automated world is not high on our wish list.
So why are we sprinting there as fast as possible?
Quick caveat: yes, some people actually do want to become inhuman, invent Skynet, and bring about the singularity. But they are rare. And weird.
Almost every time a new article appears about scientists linking machines to human brains or building a fighter jet that can operate all on its lonesome, a friend asks me some version of the following:
“Why??? Don’t they understand this is how it starts? Have they never seen a scifi/horror movie before?”
I always give the same answer.
Prisoner’s dilemma is one of the “games” used in game theory to explain human behavior. The traditional description goes something like this:
Two people are arrested for participating in a crime. Both are guilty, but the police lack the evidence to convict either one. So they separate the prisoners and question them individually, offering each a reduced sentence in exchange for informing on the other.
If each prisoner were able to stay silent, both would be released without charge. They know this. Staying silent produces the best outcome for all.
But instead of staying silent, they inform on each other. Why?
The prisoners cannot know whether their co-conspirator will give them up, so neither is willing to risk staying silent and losing the reduced-sentence deal. They inform on each other because it is the most rational option—given the information they have in the moment.
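The prisoners' reasoning can be sketched in a few lines of code. The sentence lengths below are my own illustrative assumptions, not numbers from the story; the point is only that when you can't trust the other party, informing limits your worst-case outcome, while staying silent leaves you exposed to the worst one.

```python
# Payoff matrix for the two prisoners: sentences in years, so lower is
# better. The specific numbers are illustrative assumptions.
PAYOFFS = {
    # (my choice, their choice): (my sentence, their sentence)
    ("silent", "silent"): (0, 0),   # neither can be convicted; both walk
    ("silent", "inform"): (10, 1),  # I take the full charge, they get the deal
    ("inform", "silent"): (1, 10),  # I get the deal, they take the full charge
    ("inform", "inform"): (5, 5),   # both convicted, both somewhat reduced
}

def worst_case(my_choice):
    """The longest sentence I can end up with, given my choice,
    across everything the other prisoner might do."""
    return max(PAYOFFS[(my_choice, theirs)][0]
               for theirs in ("silent", "inform"))

print(worst_case("silent"))  # -> 10 (they inform, I take the full charge)
print(worst_case("inform"))  # -> 5  (informing caps my downside)
```

Mutual silence is the best joint outcome (0, 0), but since neither prisoner can verify the other's choice, each minimizes their worst case by informing—exactly the "rational but less beneficial" result the dilemma describes.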
When applying prisoner’s dilemma to our techno-bio-revolution, it can help to think of it with respect to nuclear weapons:
No one wants nuclear weapons to exist. No one wants multiple actors to have the power to destroy the human race with the press of a few buttons.
And if every state actor chose not to develop nuclear weapons, they wouldn’t exist. The best possible outcome.
But lack of trust prevails. One state actor cannot know—or expect—other states to act in the interest of the community. And if that state exercised restraint and chose not to develop nuclear weapons while others (secretly) did, then it would be at a disadvantage.
So the only rational choice is to build nukes.
Fear—rational fear—drives them toward the more rational but less beneficial outcome. (And yes, maybe nukes actually prevented the Cold War from going hot, but let’s not get into the weeds here.)
So it goes (and will go) with our dystopic techno-bio-revolution.
Example: In May 2016, a bunch of genomic scientists held a secret meeting in Washington to discuss a plan for the synthesis of an entirely artificial human genome. That is, using chemicals to create parent-less human beings from scratch.
Though the group released their plan a few weeks later, the fact that the meeting was held in secret caused much wailing and gnashing of teeth.
Even more common was the question “why even do this in the first place?”
Scientists are scientists. By nature, they like to push the boundaries. But I would wager that if pressed, most of those involved in this meeting would prefer that we not start growing fully artificial human beings on the regular.
But if we look at prisoner’s dilemma, the motivations make more sense. These scientists know that no matter how much revulsion society might feel toward manufacturing humans, it’s going to happen.
Someone will do it. If not them, then another actor—be it a country, company, individual, or collective of academics—is either working on it right now or will be soon. And it may be that this unknown actor doesn’t want it to happen either but is compelled by the same reciprocal, rational fear.
So these scientists in their secret meeting in Washington—I’m guessing most of them rationalized their plan with some version of the following:
This is coming whether we want it or not. So why not get in on the ground floor and exercise some influence?
Those parents who have their first children ten or fifteen years from now won’t necessarily want to modify them with bioelectronics. But they will do so out of fear their children will be left behind.
That autonomous AI fighter jet? We don’t really want it to exist, but we build it out of the (rational) fear that someone else will do so first.
None of us really want to put a chip in our brains. But the first time a company hires a modified human because she is vastly more capable than a purely biological one? It’s off to the races.
I don’t believe most CEOs want their companies to shed human workers. But prisoner’s dilemma says that if there is even the chance their competitors will do so, they have no choice but to do so first.
I’m realizing this post has been waaaaay more serious than my norm.
Quick, have to think of a joke. Thinking…
Joke joke joke…