the conceptual computer


Like all words, 'computer' is an idea, or concept, in the minds of humans.

Like any idea, we all have the innate capacity to form almost exactly the same notion of computer -- even though this meaning is in the mind, and even though we don't understand how the mind works. Aristotle noted that, no matter the language used, we all have, and can lead each other to, the same thoughts. Recent fMRI studies have confirmed Aristotle's quite rational hypothesis. This shouldn't surprise anyone: we are all of the same species, after all. Our obvious commonalities don't reduce the value of our equally obvious idiosyncrasies.

Our technical terminology, or 'formal language', while still in the mind, is a bit different. There is an attempt, more or less difficult and more or less successful, to understand and explain how a technical term applies to some phenomenon we perceive in the external world. Often, natural words are recruited for this purpose, engendering much misunderstanding.

The external reality that a technical term attempts to describe is ultimately unknown, although we work hard to become enlightened about it. Whatever this reality's status in the natural sciences, the ideas behind the technical description will be quite different from it. We tend to forget this, confusing the 'map' for the 'territory'.


This site has three goals:

1) discover more about the natural ideas of 'computer', 'programming', etc., as they sit in the brain, composed, in some way, of some kind of elemental ideas or concepts. It's reasonable to hypothesize that such research will lead to a better understanding of what we do when we program and use computers, and make use of other formal tools and artifacts. This is an extension of something I announced on Computing Philosophy with an example psychological experiment, an essay entitled The Biology of Computer Programming.

2) re-examine our technical definitions, which were, after all, not constructed to discover computation in the external natural world, but to allow us to begin to build something in the world, something that we all could agree was a 'computer'.

3) discover how the idea of the computer is related to various innate and constructed perspectives on 'intelligibility' in the sciences.

This is a key part of my initiative to drag computer science towards becoming a natural science, something necessary and probably inevitable, but totally absent at the moment.

Greg Bryant
I. first theory:

We use the method of impression: we ask the informant,
i.e. the subject, what 'if' means in various circumstances.
We start with quite natural sentences, and then narrow it
down to the typical computer-programming meaning of 'if'.
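
As a minimal illustration (the language and example are my choice, not part of the study's procedure), the narrowed, technical sense of 'if' is nothing more than a guard that selects between branches based on a boolean test:

```python
# The narrowed, technical 'if': evaluate a boolean test,
# then take exactly one of two branches.
def describe(temperature):
    if temperature > 30:
        return "hot"
    else:
        return "not hot"

print(describe(35))  # hot
print(describe(20))  # not hot
```

The contrast with the natural-language 'if' (counterfactuals, suppositions, offers, threats) is exactly what the narrowing in the interview is meant to expose.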

At this point, we ask them to describe the meaning of 'if'
without using it, any common cognates, or any kind of
agreed-upon technical implementation.

Typically, this has been interpreted to mean that 'if', like
'not', 'and', and about a dozen other operators, is a 'law of
thought', in George Boole's phrase: an 'element of cognition'
which can then be arranged, with the others, to encompass any argument.
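
The formal side of Boole's claim does hold: a handful of operators suffices to express every truth function ('and' and 'not' alone are functionally complete). A small sketch, with my own illustrative definitions:

```python
# Functional completeness in Boole's spirit: 'or', 'implies',
# and a truth-functional 'if-then-else' rebuilt from just
# 'and' and 'not'.
def NOT(p): return not p
def AND(p, q): return p and q

def OR(p, q):            # p or q  ==  not (not p and not q)
    return NOT(AND(NOT(p), NOT(q)))

def IMPLIES(p, q):       # p -> q  ==  not (p and not q)
    return NOT(AND(p, NOT(q)))

def ITE(c, a, b):        # if c then a else b, as a truth function
    return OR(AND(c, a), AND(NOT(c), b))

# Verify against Python's built-in operators over all inputs.
for p in (False, True):
    for q in (False, True):
        assert OR(p, q) == (p or q)
        assert IMPLIES(p, q) == ((not p) or q)
```

This shows what the formal calculus can do with truth functions; the objection in the text is a different one, about whether the mental 'if' is captured by any such calculus.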

But, of course, there is no such calculus. This shouldn't surprise
us. Just because we have no access to the internal definition of 'if'
doesn't mean it is elemental in any way. It simply means that, when
impressed upon the subject, or 'externalized' internally from one 'part'
of the brain to another (or so is the impression), this 'if'
is _indicative_ of some factor in our thinking. That we cannot
introspect on it further does not indicate that it's elementary,
but rather that something very complex is going on.
The idea that we could make a calculus of this is ... let's call
it not fashionable.


II. experiments:

* Discussion on the smallest experiments
description
* unknown experiment 2
description
* unknown experiment 3
description

III. adjusted theory:

IV. adjusted experiments:

V. Some essays

Copeland

Formal machines and real machines: Multiple heads and halting problems