By Mark McQuain
Many philosophers argue that attaining the threshold of self-awareness matters more in determining a human’s right-to-life than simply being a living member of the human race. They require that a human being attain self-awareness (reaching so-called full “personhood”) before granting that human being an unrestricted right-to-life. Lacking observable self-awareness relegates one to non-personhood status and, though fully human, to a potentially restricted right-to-life. The philosophical argument seems to be that only self-aware things suffer harm, or at least do so to a more meaningful degree than non-self-aware things.
Consider the following thought experiment. Suppose I have finally designed a computer with sufficient complexity, memory, external sensors, and computational power (or whatever) that, at some point after the power is turned on, it becomes self-aware. The memory is volatile, meaning it cannot hold its contents without power. The self-awareness, and any memory of that self-awareness, exists only as long as the power remains on. If the computer is powered off and then on again, it retains no memory of having been self-aware (the volatile memory is completely erased and unrecoverable with loss of power), so it becomes newly self-aware, with new external sensory input and a new memory history. The longer the power remains on during any such power cycle, the more memory, or history of its current self-awareness, the computer accumulates. The computer’s hardware is bulletproof and is essentially unaffected by applying or disconnecting the power.
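The mechanics of the thought experiment can be sketched as a toy program. This is purely illustrative: the class name and methods are my own invention, and nothing here models actual self-awareness, only the stipulated behavior of volatile memory across power cycles.

```python
# Toy model of the thought experiment: a machine whose entire memory
# (and hence any record of its "awareness") is volatile, existing
# only while the power is on. Illustrative only.

class VolatileComputer:
    def __init__(self):
        self.powered = False
        self.memory = None  # volatile: holds contents only while powered

    def power_on(self):
        self.powered = True
        self.memory = []  # a fresh, empty history begins each cycle

    def observe(self, event):
        if self.powered:
            self.memory.append(event)

    def power_off(self):
        self.powered = False
        self.memory = None  # contents erased and unrecoverable


computer = VolatileComputer()
computer.power_on()
computer.observe("first moment of awareness")
computer.power_off()   # the first history is gone entirely
computer.power_on()    # a "new" awareness with no prior history
print(computer.memory)  # prints [] -- nothing carries over between cycles
```

The point the sketch makes concrete is that, by stipulation, there is no continuity between power cycles: the second period of awareness has no access to the first, and the hardware itself is unchanged throughout.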
In this thought experiment, do the acts of turning the computer’s power on, allowing the computer to become self-aware, and then turning the power off harm anything?
By stipulation of the thought experiment, the computer’s hardware is unaffected by these events so no harm has occurred to the physical computer. Also, by stipulation, subsequently turning the computer’s power on again results in the computer becoming newly self-aware, with absolutely no memory of its previous period of self-awareness. The prior self-awareness is neither presently aware nor even in existence – it existed only during the prior power cycle. Perhaps as the designer, I may be harmed if I miss interacting with the computer as it was during its first self-awareness. The same perhaps goes for any other similar self-aware computer that had constant power during the experiment and witnessed the power cycling of the first computer.
But, what about the first computer? Was that computer harmed when I turned the power off? If so, what, exactly, was harmed? Following power-off, the computer has no self-awareness to be self-aware of any harm. The self-awareness no longer exists and that same self-awareness cannot exist in the future. Non-existent things cannot be harmed. Looking for some measure of group harm by assessing any harm experienced by other self-aware computers witnessing the event appears to be a problem of infinite regress (“It’s turtles all the way down”), as their self-awareness of the first computer’s self-awareness is also transient and becomes instantly non-existent when they power off. We will ignore the designer for the purpose of this experiment.
Assume now that the initial computer is a human brain. Some consider the physical brain a single-power-cycle, self-aware computer. For most humans, at some point after conception, we become self-aware, though philosophers disagree and cannot define the exact threshold for self-awareness. We can lose that self-awareness to physical brain injury or disease. Most believe that self-awareness certainly ceases with physical death; that is, it is volatile like the self-aware computer in my thought experiment, since after death there is no longer a functioning physical brain to sustain it.
But if the thought experiment holds, requiring that human beings reach the threshold of self-awareness before granting so-called personhood privileges, such as an unrestricted right-to-life, is a meaningless threshold with regard to harm if that self-awareness is volatile and therefore not sustained in some manner after death. For self-awareness to be the determinant of harm in a living being, it must be non-volatile, meaning it is sustained beyond death. However, if self-awareness is sustained after death, then it is sustained in a non-physical manner (since the physical brain is, by definition of death, no longer functioning). If self-awareness can exist non-physically, might it also exist more fully than we can appreciate in a premature, a diseased, or an injured human brain prior to death?