Disclaimer

Many of my essays are quite old. They were, in effect, written by a person who no longer exists in that my views, beliefs, and overall philosophy have grown and evolved over the years. Consequently, if I were to write on the same topics again, the resulting essays might differ significantly from their current versions. Rather than edit my essays to remain contemporary with my views, I have chosen to preserve them as a record of my past inclinations and writing style. Thank you for understanding.

March 2014

Response to Susan Schneider's The Philosophy of 'Her'

This is an op-ed article published by H+ Magazine.

In a March 2nd, 2014 New York Times article, Susan Schneider used the recent film 'Her' to analyze some of the philosophical challenges to the concept of mind-uploading, whereby a person's mind is transferred from a brain to some sort of computer. She presented some common (although admittedly fascinating) hypothetical thought-experiments, drawing conclusions to match. Below, I offer alternative explanations for these scenarios.

Schneider claims right off the bat, of Samantha, the life-like artificial intelligence portrayed in the film, "few humans would want to join Samantha, for to upload your brain to a computer would be to forfeit your consciousness." She is putting the cart before the horse. The attitude of the public should follow from the public itself, namely in the form of a survey, whereas Schneider's claim comes across as a presumption that the public has already accepted her thesis and must therefore agree with her. Let's not get ahead of ourselves.

Schneider writes, "...we could never be certain that programs like Samantha were conscious...although you can know that you yourself are conscious, you cannot know for sure that other people are...all you can do is note that other people have brains that are structurally similar to your own and conclude that since you yourself are conscious, others are likely to be conscious as well." She is right on these counts. Namely, we deduce that other humans are conscious in the following manner:

  1. We recognize that all humans have similar brains.
  2. We accept as a premise the consciousness of our own brain.
  3. Therefore, we grant the assumption of similar consciousness to other human brains.

Given the inherent inaccessibility of subjective states (we definitively cannot verify the inner consciousness even of other humans, much less of Homo, Australopithecines, great apes, mammals, other animals, or computerized intelligences), we must simply move on from this philosophical dead end. To do so, we must decide upon the salient features of a mind. For those who hold that biological human brains are a requirement of consciousness, there is no further debate. They cannot be proven wrong, nor can they prove themselves right. So be it. For those who view the *behavior* of brains as the crucial property of minds, we can conceive of systems embedded in other physical substrates that exhibit comparable behavior, and which may therefore be conscious. This is the only launching point for further debate, so it is from here that I proceed.

Schneider offers a hypothetical scenario in which, to paraphrase, Theodore uploads his mind to a computer via destructive scanning and subsequent software modeling of the scan. She then asks, "would he succeed in transferring himself...Or would he...succeed only in killing himself, leaving behind a computational copy"? She argues, "Ordinary physical objects follow a continuous path through space over time...his mind would not follow a continuous trajectory...his precise brain configuration would be sent to a computer." Although this is true, the physical brain isn't the important entity here; the mind is. The mind is not a physical object *at all*, and therefore properties of physical objects (a continuous path through space and time) need not apply. The mind is akin to what mathematicians and computer scientists call "information", in brief, a nonrandom pattern of data (consult the venerated Claude Shannon for a thorough treatise). One might protest that data is non-changing, like a printed document, while the mind evolves over time, but this is a specious distinction. Audio clearly represents a signal changing over time, yet it may be "fixed" relative to time via recording in a physical medium (a record, tape, CD, etc.). The data can be embedded either "within" time or "outside" time by converting the time dimension to an additional dimension in space (the length of a tape, the groove of a record, etc.).
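
To make the time-to-space conversion concrete, consider the following minimal sketch in Python (the sample rate, the 440 Hz tone, and all names are my own illustrative choices, not anything drawn from Schneider's article): a signal defined over time is sampled into an array, and the array index, a spatial dimension akin to distance along a tape, stands in for the time axis.

    import math

    SAMPLE_RATE = 8000  # samples per second (an illustrative choice)

    def signal(t):
        """A signal changing over time: a 440 Hz sine tone."""
        return math.sin(2 * math.pi * 440 * t)

    # Embed the signal "outside" time: one second of it laid out in space,
    # with the array index substituting for the time dimension.
    recording = [signal(n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]

    # Re-embed it "within" time by stepping through the indices in order.
    def playback(samples):
        for n, amplitude in enumerate(samples):
            yield n / SAMPLE_RATE, amplitude  # the index recovers the time axis

The same pattern of information exists in both forms; nothing about it depends on whether it currently happens to be unfolding through time or lying fixed in space.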

A spatial dimension literally substitutes for the absent time dimension. Closer to our target of the mind, consider the ongoing physiological development of an organism (a growing plant or animal). Such growth is a pattern of changing information embedded both in space and time (obviously involving additional spatial dimensions over audio, or even video). We understand some biological growth systems in sufficient detail to practically describe them as three-dimensional videos playing out across time (gene expression, mitosis, meiosis, embryonic development, etc.). These systems present a pattern of information, embedded in a physical substrate, changing its structure (and consequent informational embedding) over time. I believe Schneider has utterly misconceptualized the issue when she speaks of physical objects remaining continuous in space and time in a way that mind-uploading would not. If the mind is a pattern of information embedded in physical matter, then its embedding can be converted "in" and "out" of time, or it may be transmitted just like any other information. Continuity through space or time is simply irrelevant.

We can now investigate Schneider's additional scenarios with remarkable lucidity. She writes, "[Theodore] could be downloaded to multiple other computers...As a rule, physical objects and living things do not occupy multiple locations at once. It is far more likely that none of the downloads are Theodore, and that he did not upload in the first place." Schneider proposes the popular scenario in which an uploading process yields multiple uploads. She concludes that since physical objects, as well as living things, do not occupy multiple locations, we should dismiss the uploads as failures *on this particular basis*. We needn't concern ourselves with the properties of physical objects, as I explained above, but "living things" require a more subtle analysis. Though conventionally physical in form, it is life's *behavior* that makes it so "life-like", and I previously characterized that behavior as a sequence of physical states (information embedded in physical structure). Consider a typical board game, say chess, clearly a sequence of board "states". Imagine a chess game that, part way through, is duplicated to two new boards with new sets of pieces. The first game is then wiped away entirely and the two "new" games continue in isolation. The earlier portions of their histories are identical, but from the split their states steadily diverge. Schneider would claim that were there a single copy, the "new" game might at least be *considered* a valid continuation, but that the mere presence of two copies prevents us from even entertaining the notion. This strikes me as absurd. I would argue that there are two games, each with equal claim to ownership of the game's identity, or more precisely, that the original game split into two distinct and equal games.
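
For the concretely minded, the chess analogy can be sketched in a few lines of Python (the class and the particular moves are purely illustrative assumptions): a game is identified by its sequence of states, so duplicating that sequence yields two games with identical earlier histories, each continuing independently with equal claim to the original's identity.

    import copy

    # A game identified by its sequence of board states, not by any
    # particular physical board or set of pieces.
    class Game:
        def __init__(self):
            self.states = ["start"]  # the history of board states

        def move(self, new_state):
            self.states.append(new_state)

    original = Game()
    original.move("e4")
    original.move("e5")

    # Duplicate the game part way through, then wipe the first board away.
    game_a = copy.deepcopy(original)
    game_b = copy.deepcopy(original)
    del original

    # From the split, the two games continue in isolation and diverge.
    game_a.move("Nf3")
    game_b.move("f4")

    assert game_a.states[:3] == game_b.states[:3]  # identical earlier histories
    assert game_a.states[3] != game_b.states[3]    # divergent from the split
    # Neither game has any stronger claim to being "the" game than the other.

Nothing in this sketch privileges one copy over the other; the asymmetry Schneider relies upon simply never appears.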

Schneider considers a final scenario: "imagine that the scanning procedure doesn't destroy Theodore's brain, so the original Theodore survives." The coup de grâce: Theodore's brain has survived. We copied the intermediate chess game to a new board, but left the first board intact! Surely, the upload cannot possibly be *the* Theodore if his brain still exists. The copied-to chess game must be a *mere* copy of the "valid" original if the copied-from game still exists. But in what meaningful sense is a chess game *identified* by the physical pieces or the board? A chess game is a unique sequence of board states, and many would argue that a mind is a unique sequence of brain states. Neither chess game may make any more reasonable a claim to primacy than the other. Likewise, the uploaded-Theo and the brain-Theo are equal in their claim to Theo's identity. One feels the upload process succeeded and one feels it failed...and they are both *equally* correct. We often grant superiority to the subjective perspective of the brain-Theo in these thought-experiments, but that is, in essence, an unadulterated bias, dare I say a prejudice.

So, Schneider concludes, "Humans cannot upload themselves to the digital universe; they can upload only copies of themselves," to which I offer the precise antithesis: "Humans cannot remain in their own brains after uploading; they can only successfully upload and leave a copy behind in their stead." Schneider's insistence that one statement is more valid than the other signifies an irrational favoritism of one subjective perspective over another.

I would really like to hear what people think of this. If you prefer private feedback, you can email me at kwiley@keithwiley.com.
