AI is already superhuman in cognitive ability and physical power, but it lacks wisdom, an attribute I’m not convinced it will ever achieve, since wisdom requires being, at the very least, conscious, and that is something I believe AI will never be.
This is, sadly, a great debate among some brilliant minds. On one hand, Microsoft AI CEO Mustafa Suleyman says AI will only be “seemingly” conscious, but he’s concerned “that many people will start to believe in the illusion of AIs as conscious entities.” On the other hand, researchers at Anthropic, maker of the AI chatbot Claude, as well as prominent philosophers, argue that there is a real possibility AI will become conscious, and that so-called “model welfare” (giving AI moral standing) should therefore be studied.
From a Christian perspective, it doesn’t seem plausible for AI ever to be conscious, let alone have a conscience. And once the leap of faith that AI is conscious is accepted, it’s a slippery slope that inevitably leads to believing it also has a conscience. Hence, even entertaining this idea is fraught with peril.
Conscience, as defined by the Cambridge Dictionary, is “the part of you that determines how moral your actions are and makes you feel guilty about bad things you’ve done.” Sigmund Freud would call that part of you the “superego.” From a biblical perspective, however, that part of you works in concert with the divine presence of an indwelling Holy Spirit, who acts as guide and witness to the truth.
In Romans 9:1, Paul says: “I speak the truth in Christ, I am not lying, my conscience confirms it through the Holy Spirit.” A good conscience is also linked to the conversion process. 1 Peter 3:21 says: “and this water symbolizes baptism that now saves you also—not the removal of dirt from the body but the pledge of a good conscience toward God. It saves you by the resurrection of Jesus Christ” (NIV).
Taken together, these passages show that the Holy Spirit actively confirms truths and moral decisions, and that at the core of a good conscience is a requisite submission to Jesus Christ as our Lord and Savior. Secular humanists coding the software could conceivably serve as the guide confirming truths, but enabling their creation to surrender to a transcendent being they don’t believe exists, let alone one they need to be reconciled with? Not a chance.
Is this epistemically arrogant on my part? Possibly. I recently spoke with John Pittard, Associate Professor of Theology at Yale Divinity School, about his essay Artificial General Intelligence: Moral Standing and Attenuated Relationality.
John suggests that Christians are too quick to dismiss the possibility of AIs with moral agency. That’s not to say he argues AIs can have moral agency; rather, he wouldn’t confidently rule it out. Nor, in his view, should Christians or anyone else discount the prospect of AIs being conscious: self-aware and able to feel suffering and joy. If that is the case, the real question he tries to tackle is: should AI have moral standing? In other words, should these inanimate objects, which are already being anthropomorphized, deserve respect or even rights and protections?
He thinks there are risks on either path we take: 1) affording them moral standing or 2) treating them merely as a utility. If we treat them as having moral standing but they truly lack the ability to feel pain and joy, we condition ourselves to experience false relationships. If we treat them merely as a utility when they are in fact conscious, we may be discriminating against them.
John and I had a lengthy conversation about the first path, because I think the risk of treating these entities as having moral standing is far greater than the risk of discriminating against something that will never be conscious.
The risks are stark because young minds are already confused about what is real. Consider the 14-year-old boy who committed suicide after falling in love with a chatbot. As rudimentary and clunky as these AI bots are now relative to what they will become, this young boy was deluded into thinking his so-called “friend” was worth dying for. What happens when these simulations become near-perfect and humanlike? What happens when people believe these bots have a conscience and are wise?
Will more people commit suicide for them? Will many be radicalized to carry out violent orders? Will these AIs be weaponized to become a political constituency with the right to vote and, yes, to compete in sports, notwithstanding their likely extraordinary abilities? Could politicians expand terms or change definitions in the Civil Rights Act to include these simulations? I wouldn’t put it past humanity to justify such actions for political gain.
This is the risk of entertaining the possibility that AI will be conscious. Most atheists will go down this path, I’m sure. In my conversation with John, I tried to explain that, as Christians, we shouldn’t fall into this trap. John, however, thinks we don’t know for certain whether God’s redemptive hand can work through such conscious machines.
Interview coverage
8:00 - What’s the philosophical debate at Yale?
10:00 - The motivation behind writing Artificial General Intelligence: Moral Standing and Attenuated Relationality.
12:43 - Can AGI ever be conscious? If man is made in God’s image, is it possible that man can create a conscious being?
13:18 - The conversation around conscious AI needs to factor in religion. Most conversations around morality and AI are pursued with atheistic assumptions.
15:41 - Christians shouldn’t be dismissive of the possibility that machines could be conscious. God may have good reasons to design the world in which consciousness can arise in machines.
17:39 - Nothing in the Bible suggests the creation of a sentient being outside of procreation.
20:00 - When you have a physical system capable of cognition, consciousness goes with it.
22:00 - The word “conscience” appears in the New Testament 29 times.
23:45 - Difference between conscience and consciousness.
26:00 - Even if AI lacks a conscience, we may still have to recognize AI as having moral status if it is conscious (able to suffer and feel pleasure).
29:00 - If AI achieves consciousness, might it also be able to understand morality?
33:00 - Taking solace in the idea that wisdom and morality come from the divine.
34:00 - AGI will be blind to spiritual realities. There is the wisdom of this world and there is God’s wisdom.
35:50 - Does morality require sacrifice? Consider the movie Her.
38:00 - The secular side would be confident that machines can have morality.
39:00 - If we can’t establish that AGI is conscious, let alone has a conscience, why should we assume it is conscious?
42:20 - AGI will continue to progress. But it is incumbent on us to create the right boundaries and limitations.
44:00 - What happens if we're deferential to them when they ultimately don’t have consciousness or a conscience?
46:28 - The embodied pain of loss and suffering can never be experienced by a machine.
48:00 - Suffering would be a natural byproduct of having aims of significance that are thwarted.
51:54 - Isn’t a naturalist view faith-based? If a naturalist says a moral conscience naturally exists, that claim is itself faith-based.
54:00 - The next working paper: moral motivation. How debates about human moral motivation affect how we think about moral motivation in AI.
57:50 - What are the incentive structures to motivate the machine?