Software and the Philosophy of Mind

Some ideas I was pondering over lunch today, whilst reading Edward Feser’s Beginner’s Guide to Aquinas. I was at the chapter on Psychology, reading about the immateriality of concepts and the consequent immateriality of the intellect. And I got to thinking about software, because, hey, that’s what I do.

On a materialist account of mind, the brain is something like a computer, and the mind is something like software running on that computer. This is the fundamental principle for folks pursuing Strong AI: we know, they say, that it’s possible; all that’s left is to work out the engineering details. But anyway, I took what I know about software, and started trying to apply it to the mind, on the assumption (yes, I’m playing Devil’s Advocate, here) that the brain is a computer and the mind is simply the software running on it.

In this view, a concept must be something represented in the brain, say in the form of neuronal firing patterns. As such, it’s effectively data: either a program executed directly upon the brain’s hardware, or data operated upon by some program that executes directly upon the brain’s hardware.

Now let’s switch gears, and consider a statement written in a programming language—Tcl, say.

    set pi 3.14159
  • The efficient cause of the statement is the programmer. That would be me.
  • The material cause of the statement is the source code, entered in a file on a computer or written on paper: a sequence of characters.
  • The formal cause of the statement is its syntax: that which dictates how the characters are arranged to be valid Tcl code.
  • The final cause of the statement is its semantics: what it’s supposed to do.

Feser points out that the efficient cause and the final cause are always linked. In this case, the link is obvious. I wrote the line of code because I wanted the final cause: in this case, to assign the value 3.14159 to the variable “pi”.

Now, what gives the statement its semantics? There are two answers to that question.

First, the semantics are defined by the program that executes the statement: the Tcl interpreter. When the interpreter reads and executes the statement set pi 3.14159, it does something like the following:

  • The first word of the statement is the command, set.
  • The first argument to set is a variable name.
  • Create the variable if it does not exist.
  • The second argument to set is a value.
  • Assign the value to the variable, replacing any previous value.

Second, I do. I want to express that the value 3.14159 should be assigned to the variable “pi”, and that statement means that operation to me. In other words, Tcl is an intelligible language whether a program that interprets it exists or not.

But consider the Tcl interpreter again. It gives the statement its meaning, its semantics. But how does it do this? The Tcl interpreter is itself a computer program. The set command is defined as a function in a programming language called C. Each statement in the function is written in the C language, and has its own semantics; the collection of statements implements the semantics given above.

The efficient cause of the set command is another programmer (a man named John Ousterhout, as it happens); and its semantics are what he intended, expressed through the semantics of the C language.

The thing to note, here, is that a statement in a programming language, or an entire program, has semantics—meaning—only in the context of an interpreter: the Tcl interpreter, the C compiler, the microprocessor, the mind of the programmer.

One could continue to trace the semantics back along a number of branches. The Tcl interpreter is written in C, but the C code is compiled into machine language; at run time, the machine language is interpreted by the microprocessor. And the machine language has semantics. The C code is given its semantics by the C compiler, which is also a C program that compiles to machine code. There are programs interpreting programs interpreting programs, all of them ultimately running on a piece of hardware; and each program and the hardware itself were all designed and implemented by a human being.

In short, program semantics ultimately come from people.

The efficient cause of the semantics of a program is the programmer who wrote it.

The final cause of the semantics is what the programmer wants the program to do.

Now, let’s switch back to the mind, once again from the materialist point of view. A concept is like a statement in a programming language. It has some representation: neuronal firings instead of a bit pattern. And it has meaning, or it isn’t a concept. It has semantics.

So where did the semantics come from?

As we’ve seen in the case of a computer program, the semantics ultimately come from the programmer—and, though I haven’t developed the idea above, the end user. That is, from people. So the semantics of the program in my head must come from people. That is, from outside.

And yet, the concepts in my head clearly have meaning to me. It’s absurd to think that they don’t.

I’m not sure where to go from here, but it certainly strikes me as absurd that a program can give itself meaning, which implies that I am not a program.

3 Responses to “Software and the Philosophy of Mind”

  1. ginseng says:

    I think any software has a direct relationship with one’s mind, because the main structure of any software depends upon one’s logic. So there is a correlation between software and human minds.

  2. Bob the Chef says:

    I’m interested in the possibility or impossibility of AI, as far as Turing machines are concerned, also. I think we should begin with the observation that software is physical. We should not even talk about software as variables or numbers or what have you, because physically in the computer, there are no variables or numbers, just as triangles, as triangles, do not exist. By referring to numbers, we are projecting concepts into the real, where they do not exist. We really have two questions to answer, at least for me:

    1) If the mind is the function of the brain, then what does it mean for the mind to be immaterial? How are mind (immaterial) and brain (material) related?
    2) If the mind is the function of the brain, why cannot some kind of mind be the function of computer hardware?

  3. Will says:

    The question, as you point out, is to what extent is the mind a function of the brain? On a purely physicalist account, it’s entirely a function of the brain…and there’s no reason why you couldn’t build a mind on top of some other kind of hardware. On a Thomistic account, the mind is a function of both the brain and the soul, and there are some aspects of it that cannot be reduced to physical processes.