A brief overview of the development of Western Philosophy

Discussions worth keeping around later.
Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Post by Pthagnar »

Sophonce/sapience/time-binding/language/whatever-you-call-the-thing-that-makes-you-special-and-a-snake-not is quite a recent development, as are complex nervous systems really, and as you get close to pointing out, it is quite a contingent one. It evolved, sure, but, to use my own words, "just because an adaptation would be nice to have doesn't mean it either a) can or b) is likely to evolve." It's a shitty and anthropic kind of argument, really. Each of those situations you give is a case where you go backwards, which is really how this kind of evolutionary science is done in practice: we observe an adaptation; how can we explain it? This is excellent biology, but it does not go at all well in reverse.

In fact, you are explaining why and how mental maps are the "London Underground" model that came about for function and not fidelity! Every step you mentioned was a functional *improvement* and so presumably a more faithful representation of whatever noumena/Dinge an sich are out there. The examples I gave of scientific discovery showed that there are many things out there, seen and unseen, known and unknown, that evolved adaptations, sensu stricto, cannot grasp. And philosophical things, which is what this idea of limitations is all about, provide even more. It's a kind of special pleading to explain why philosophy is so awful -- they're treading in realms that Man Was Not Meant To Tread In and this is why it all sounds so gloopy and meaningless.

Oh well, the theologians are worse.

Incidentally, the examples you give are not really very good examples of Outside Context Problems. Habitat change and the arrival of new species go back to, shit, the Precambrian if you cast the net widely enough. Certainly the Cambrian. To further illustrate the problems with assuming that just because something would be nice means it will arise, I see you point out the worm strategy, which corresponds to r-strategies in r/K selection analysis. This provides an example where the development of a more complex internal representation of the world is not selected for -- spamming is instead. Such spamming r-strategies are favoured by unstable environments, and given how much of the world is unstable over time and space, it is not surprising that in many ways it is the "default" mode of reproduction.

Makerowner
Sanci
Posts: 20
Joined: Tue Jul 10, 2007 2:00 am
Location: In the middle of the Canadian Vowel Shift

Post by Makerowner »

Since the thread is already good and derailed...
zompist wrote:
As for the Tolstoy/Dostoevsky thing, I think you miss the point. You said earlier you thought you had a unified, hi-res visual field. The experiment proves you don't. You think you see a page of Tolstoy, when no such page exists. (Reading isn't the issue; you don't need a full visual field to read, as in fact the experiment also proves— all you need to read is that moving fovea.)

What your brain gets is a fuzzy overall picture plus an area in sharp focus. This isn't normally a problem since they coincide— look at any area and you get it in sharp focus. Since there is no way to focus on the area not in focus, we get the illusion that the whole picture is sharp.

(And your analogy doesn't work— the point here is that you are not seeing the "same thing" as other observers and simply interpreting it differently. The experiment messes with "seeing" itself in an unusual way.)
"What I see" != "what my brain receives". "My visual field" doesn't mean "what physiology tells us the eye sends to the brain", it means "what I actually see". And it's pretty clear that I don't see a black spot floating in front of me all the time, even though as we all know there is a blind spot in the retina. The most obvious illustration of this is that we don't see double all the time: I only see one page even though there are two different retinal impressions involved in the process.
Salmoneus's point, if I understand it correctly, is not that the subject of the experiment and an observer see the same thing in the sense of "have the same visual experience and then interpret it differently", but in the sense that the same physical object looks different to different people, which, as he noted, is a rather old discovery. A simpler example than this rather contrived experiment is colour-blindness: that classic picture with the green number in a circle of red dots looks completely different to a colour-blind person than it does to me. I see a number, he doesn't. It's not that he "actually sees" the number but thinks he doesn't, nor that I "actually don't see" the number but think I do; each of us sees something different, that's all.
Hig! Hig! Micel gedeorf ys hyt.
Gea leof, micel gedeorf hit ys, forþam ic neom freoh.
And ic eom getrywe hlaforde minon.

Aurora Rossa
Smeric
Posts: 1138
Joined: Mon Aug 11, 2003 11:46 am
Location: The vendée of America

Post by Aurora Rossa »

So tell us more about this common sense school of philosophy, Salmoneus. How does one make a philosophical movement out of something so straightforward and unphilosophical?

In my opinion, this stuff about the evolution of intelligence belongs in a separate thread.
"There was a particular car I soon came to think of as distinctly St. Louis-ish: a gigantic white S.U.V. with a W. bumper sticker on it for George W. Bush."

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Makerowner wrote:
zompist wrote:What your brain gets is a fuzzy overall picture plus an area in sharp focus. This isn't normally a problem since they coincide— look at any area and you get it in sharp focus. Since there is no way to focus on the area not in focus, we get the illusion that the whole picture is sharp.
"What I see" != "what my brain receives". "My visual field" doesn't mean "what physiology tells us the eye sends to the brain", it means "what I actually see".
Why are you so sure these are different? Do you really not believe that the brain is the seat of consciousness? Do you think the brain has a miniature TV set inside it that You, the nonmaterial soul, watch the corrected picture on? (If so, how does this homunculus's vision work?)

If you just mean that the brain itself does some visual processing, of course that's true. The eye does some, the optic nerve does some, the visual cortex does some. Our visual field has effects which can be pinpointed to each of these areas.
And it's pretty clear that I don't see a black spot floating in front of me all the time, even though as we all know there is a blind spot in the retina.
And you can perceive where it is:

http://serendip.brynmawr.edu/bb/blindspot1.html

I don't know why you'd expect something you can't see to appear as black... black is a perception. Are the objects behind your head black?
The most obvious illustration of this is that we don't see double all the time: I only see one page even though there are two different retinal impressions involved in the process.
Of course; the visual cortex integrates the two sets of data it gets.
Salmoneus's point, if I understand it correctly, is not that the subject of the experiment and an observer see the same thing in the sense of "have the same visual experience and then interpret it differently", but in the sense that the same physical object looks different to different people, which, as he noted, is a rather old discovery.
I'm sure it is, but it had nothing to do with my point, which is a demonstration that the uniformity of the visual field is an illusion. The external observer wasn't necessary; even by himself, the subject is seeing something (a page of Tolstoy) that simply isn't there, and he isn't even aware that 'something is wrong'.

To try to make this clearer: imagine being at this device reading the text from Tolstoy. You're halfway through. What is the last word of the text? Your eye hasn't looked there yet, and therefore the device has never put it up on the screen (it's only put up a different word, from the Dostoevsky passage). Do you still maintain that you have a sharp in-focus view of the entire page? If so, how does the brain supply the missing words?

Here's a simpler one: Take a playing card, and hold it up by your ear while looking straight ahead. You won't be able to tell its suit or its color (but you'll easily notice if it's moving). Move it slowly forward... when can you see what color it is, what suit, what number? It'll be surprisingly close to straight ahead.

Radius Solis
Smeric
Posts: 1248
Joined: Tue Mar 30, 2004 5:40 pm
Location: Si'ahl

Post by Radius Solis »

zompist wrote:Here's a simpler one: Take a playing card, and hold it up by your ear while looking straight ahead. You won't be able to tell its suit or its color (but you'll easily notice if it's moving). Move it slowly forward... when can you see what color it is, what suit, what number? It'll be surprisingly close to straight ahead.
Out of interest, I just tried this. I could tell it was diamonds when it was at roughly a 45 degree angle from my direction of sight; but the number on the card, hah! I was looking straight at your word "here's" from the quoted paragraph, and I did not manage to discern that the card was a 9 until it was covering up the word "simpler".

Makerowner
Sanci
Posts: 20
Joined: Tue Jul 10, 2007 2:00 am
Location: In the middle of the Canadian Vowel Shift

Post by Makerowner »

zompist wrote:
Makerowner wrote:
zompist wrote:What your brain gets is a fuzzy overall picture plus an area in sharp focus. This isn't normally a problem since they coincide— look at any area and you get it in sharp focus. Since there is no way to focus on the area not in focus, we get the illusion that the whole picture is sharp.
"What I see" != "what my brain receives". "My visual field" doesn't mean "what physiology tells us the eye sends to the brain", it means "what I actually see".
Why are you so sure these are different? Do you really not believe that the brain is the seat of consciousness? Do you think the brain has a miniature TV set inside it that You, the nonmaterial soul, watch the corrected picture on? (If so, how does this homunculus's vision work?)

If you just mean that the brain itself does some visual processing, of course that's true. The eye does some, the optic nerve does some, the visual cortex does some. Our visual field has effects which can be pinpointed to each of these areas.
Of course I'm not talking about a homunculus and I'm not making any kind of silly theory about an immaterial soul. All I mean is that "what I really see" is not something that can be determined by examining the makeup of my eye, optic nerve, brain, etc.; not because it takes place in some mysterious different realm, but because these are descriptions from two different angles. Just as "what this book is about" can't be explained by descriptions of the letters it's composed of, again not because the book is somewhere else, but because "what the book is about" is a different kind of description.
The physiological explanation given above is a theory; no matter how probable or well established it is, it's always possible that it could turn out to be wrong. It would be wrong if it predicted that we should see things in one way, when we actually see them in another; that is, its value as a theory is based (partly) on how well it predicts what we actually do see. (And just to make this clear, I'm not using the stupid "it's just a theory" line that Creationists do; what I mean is that how good a theory of perception is is determined (partly) by how well it matches our perceptions. Our perceptions are the data that the theory is supposed to explain; we don't learn what our perceptions are from the theory, but what causes them.)
And it's pretty clear that I don't see a black spot floating in front of me all the time, even though as we all know there is a blind spot in the retina.
And you can perceive where it is:

http://serendip.brynmawr.edu/bb/blindspot1.html

I don't know why you'd expect something you can't see to appear as black... black is a perception. Are the objects behind your head black?


OK black was the wrong word, but in any case I don't see a hole or an empty space in front of me, which your link mentions as well: "What's particularly interesting though is that you don't SEE it. When the spot disappears you still don't SEE a hole. What you see instead is a continuous white field [...]" What I "really see" is continuous; when I compare it to what I "really saw" a minute before I can say that my perception is inaccurate, but precisely the point of saying this is that what I do see doesn't reflect what I've determined by other means is there. The illusion is not that I don't really have a continuous visual field but think I do, it's that my continuous visual field includes one area of distortion, which is something I can only determine by comparing different perceptions. And note that to do this I have to also determine whether it's my perception that's illusory, or whether the thing that disappeared from my visual field has actually disappeared. The blind spot is a theoretical object discovered by a rather sophisticated process, not part of my perception. We could just as well say that all objects turn transparent at a certain distance from the human eye, but this explanation obviously conflicts with various other parts of our sciences; my point is that "the blind spot" is a consciously constructed theoretical explanation, not something immediately experienced.
The most obvious illustration of this is that we don't see double all the time: I only see one page even though there are two different retinal impressions involved in the process.
Of course; the visual cortex integrates the two sets of data it gets.
Yes, but that's not the point. The point is that your argument "there is a spot on the retina not sensitive to light, therefore there is a gap in the visual field" would work just as well in this case: "there are two retinas, therefore there are two visual fields". It's obvious that there aren't two visual fields, and this is obvious because "what we really see" is not determined by examining our organs. Physiology explains how our perceptions occur, not what they are. That there are two retinas and one visual field is the problem to be explained, and "the visual cortex integrates the two sets of data it gets" is AFAIK the prevailing explanation.
Salmoneus's point, if I understand it correctly, is not that the subject of the experiment and an observer see the same thing in the sense of "have the same visual experience and then interpret it differently", but in the sense that the same physical object looks different to different people, which, as he noted, is a rather old discovery.
I'm sure it is, but it had nothing to do with my point, which is a demonstration that the uniformity of the visual field is an illusion. The external observer wasn't necessary; even by himself, the subject is seeing something (a page of Tolstoy) that simply isn't there, and he isn't even aware that 'something is wrong'.


This is precisely my point: he's not aware that something is wrong because nothing is wrong, for him. "What he really sees" is a page of Tolstoy; he doesn't "think he sees" a page of Tolstoy, he really does--just as the colour-blind person in my example doesn't think he sees a uniform field of dots, he really does see one. In order to determine that this visual perception is inaccurate, he would have to, e.g., watch the experiment being performed on someone else, but in any case it's not that he "really saw" something different, it's that he revises the presumed relation between "what he saw" and "what is really there". An illusory perception is still a perception.
To try to make this clearer: imagine being at this device reading the text from Tolstoy. You're halfway through. What is the last word of the text? Your eye hasn't looked there yet, and therefore the device has never put it up on the screen (it's only put up a different word, from the Dostoevsky passage). Do you still maintain that you have a sharp in-focus view of the entire page? If so, how does the brain supply the missing words?

Here's a simpler one: Take a playing card, and hold it up by your ear while looking straight ahead. You won't be able to tell its suit or its color (but you'll easily notice if it's moving). Move it slowly forward... when can you see what color it is, what suit, what number? It'll be surprisingly close to straight ahead.
I never said anything about sharp or in-focus. It's also obvious that the visual field is blurry except for the focus area, but this is again precisely my point. We don't discover that the edge of the visual field is blurry by examining our organs; rather the study of our organs has as one of its goals to explain why the edge of the visual field is blurry.
Hig! Hig! Micel gedeorf ys hyt.
Gea leof, micel gedeorf hit ys, forþam ic neom freoh.
And ic eom getrywe hlaforde minon.

The Unseen
Sanci
Posts: 32
Joined: Thu Jul 10, 2008 2:49 pm

Post by The Unseen »

plz to get back to western philosophiez. kthxbai
[url=http://wiki.penguindeskjob.com/Aptaye]My conlang Aptaye. Check it outttt[/url]

Economic Left/Right: -0.50
Social Libertarian/Authoritarian: -8.97

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

This is philosophy.



I've given up the Zomp argument, mostly. However, I'd like to make one last point on zombies, and one point on experience.


Regarding zombies: you discount theories for being too unlikely to be true. But your notions of unlikeness ('absurdity') are based upon induction. What's more, they are inductions not from the facts, but from your perceptions, which themselves pre-include certain theoretical prejudices. Of course the idea of zombies is absurd - it goes against every intuition about the world that we have! But those intuitions are had precisely because we all assume, amongst other things, that zombies do not exist. So the fact you find it absurd tells us nothing about whether it is true or not, only about what your instincts are. The question then is where those instincts come from - and, as I have argued, their origin contains no sanction adequate to allow them to count as evidence.

Moreover, in discounting the theory, you rather miss the point. Nobody particularly cares whether everybody is a zombie or not - it's a trivial and uninteresting detail. What's important is whether everybody COULD be a zombie or not, which actually tells us things about the nature of our concepts. Those who accept zombies are committed to one set of positions (generally dualism or idealism), while those who deny zombies are likely to take a different set of positions (behaviourism, and other sorts of functionalism). The practical feasibility of a Zombieworld is mostly irrelevant - what matters is its conceptual coherence (or lack thereof).

[And, of course, its function as a skeptical argument - even if you do not like the theory, can you really be absolutely and indomitably certain that it is not true?]

----------------------

Regarding perception: yes, everything that Makerowner points out.

But I'd like to press one little dichotomy further, regarding your Tolstoy example:
- is it or is it not the case that the man who reads the 'imaginary' Tolstoy and the man who reads the 'actual' Dostoevsky with a moving blob of Tolstoy on it see the same thing?

- If they are seeing the same thing, then clearly they are interpreting it differently. This puts it in exactly the same position as the Latin/Italian experiment - we are aware of the same artifact, but extract different signification from what we are aware of. This is, again, a very old observation (Heraclitus makes much of the fact that, for instance, a bowl of water may be warm to one person and cold to another - which is little different from a text being Tolstoy to one person and Dostoevsky to another), and tells us nothing about vision itself. It just tells us that secondary qualities, like hotness, colour, taste, meaning and the like all have subjective factors. Fine - we've known that since at least Locke. This seems like a very convoluted experiment to demonstrate a well-known and not relevant point.

What's more, this is seemingly based on a confusion - between 'what I see' being the content of my perception, and 'what I see' being the object of my perception. If we see the same thing, but interpret differently, it is obviously true that "I do not see what I think I see" - in the sense that the object of my perception may not be what I first conclude it to be. This, again, is an old and uninteresting observation - the possibility of error. It does nothing to demonstrate that the CONTENT of my perception is not what I believe it to be.

- If they are, on the other hand, seeing different things ['you are not "seeing the same thing" as other observers', as you put it], then this just reinforces our point: we see exactly what we think we see. I think I see Tolstoy, and I do in fact see Tolstoy. If I in fact see Dostoevsky with a moving bit of Tolstoy, then we're all seeing the same thing, and the first point applies; it's only when I'm seeing Tolstoy that your argument becomes relevant, but when I'm seeing Tolstoy I'm seeing exactly what I think I see.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

The Unseen
Sanci
Posts: 32
Joined: Thu Jul 10, 2008 2:49 pm

Post by The Unseen »

Salmoneus wrote:
This is philosophy.
I meant the "history of" part but I was too lazy. I agree with Eddy that if this discussion feels like continuing, it should go to a different thread. Obviously you're the OP so you may not want to write anything more until the discussion ends, but I'm just adding my two cents.
[url=http://wiki.penguindeskjob.com/Aptaye]My conlang Aptaye. Check it outttt[/url]

Economic Left/Right: -0.50
Social Libertarian/Authoritarian: -8.97

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Makerowner wrote:Of course I'm not talking about a homunculus and I'm not making any kind of silly theory about an immaterial soul. All I mean is that "what I really see" is not something that can be determined by examining the makeup of my eye, optic nerve, brain, etc. ; not because it takes place in some mysterious different realm, but because these are descriptions from two different angles.
A full understanding of the brain (which we're far from having) should precisely determine "what you really see". The brain is a biological machine... if you think there's something that can't eventually be explained by science, then you do still have a theory of an immaterial soul.
It would be wrong if it predicted that we should see things in one way, when we actually see them in another; that is, it's value as a theory is based (partly) on how well it predicts what we actually do see.
Sure— but it works the other way as well. Science may revise what we can claim about our own perceptions.

As Daniel Dennett points out, people are apt to make greater claims for their own perceptions than are warranted. E.g.:
"What he really sees" is a page of Tolstoy; he doesn't "think he sees" a page of Tolstoy, he really does
Please read my description again. If he's seeing a page of Tolstoy, what's the last word on the page? I don't know how to make this clearer: it has never appeared on the screen. If the observer claims to have seen it, he's just wrong.

I'll gladly agree that the subject sees a page of text, and if he reads it he'll read a passage of Tolstoy. It will seem to him that he's seeing a page of Tolstoy, but (before he's read it all) the claim that he really is "seeing a page of Tolstoy" can only be maintained by supposing that his brain, or soul, can see things that aren't there.

Science often has to correct our perceptions in this way— e.g. it tells us that the sun doesn't move in the sky, it's apparent motion. It can be uncomfortable when it starts messing with our perceptions... but then, that bit about apparent motion was once a huge stumbling block too.
I never said anything about sharp or in-focus. It's also obvious that the visual field is blurry except for the focus area, but this is again precisely my point. We don't discover that the edge of the visual field is blurry by examining our organs; rather the study of our organs has as one of its goals to explain why the edge of the visual field is blurry.
Yours may be— check with an eye doctor.

I appreciate that you're trying to modify the claim based on what you know of physiology— but in this case you're going too far. It's not the case that the visual field is blurry— because there's another mechanism that corrects for this, namely movements of the eye that are so fast that you're not even aware of them.

Again, the process of understanding isn't one-way as you depict it. Of course we want explanations for why we see as we do. But we also learn things about the brain that turn out to affect perception. I doubt anyone really noticed the blind spot before the anatomy of the eye raised the problem, and the Tolstoy experiment I described (and how we would perceive it) would have been pure speculation before the eye-tracker was invented.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Makerowner wrote:The point is that your argument "there is a spot on the retina not sensitive to light, therefore there is a gap in the visual field"
An additional note to remark that I said no such thing. You can't just look at the eye and expect that everything you find there affects our vision directly and without modification. If that was the case, we'd see upside-down.

What we do find is that many, many aspects of physiology affect our vision in perceptible ways. I find this quite fascinating though obviously not everyone does.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Salmoneus wrote:[And, of course, its function as a skeptical argument - even if you do not like the theory, can you really be absolutely and indomitably certain that it is not true?]
I've already answered this. It's not as hard as you might think to be comfortable not making absolute claims about truth. You even suggested a reasonable workaround in the atheism thread (less strictness about "knowledge" depending on context).

As for the eye-tracker experiment, I also answered this question— the subject doesn't see the same thing as the outsider. As I responded to Makerowner, the precise wording here matters. Yes, of course you "see Tolstoy"— no one doubts that.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Makerowner wrote:Of course I'm not talking about a homunculus and I'm not making any kind of silly theory about an immaterial soul. All I mean is that "what I really see" is not something that can be determined by examining the makeup of my eye, optic nerve, brain, etc. ; not because it takes place in some mysterious different realm, but because these are descriptions from two different angles.
A full understanding of the brain (which we're far from having) should precisely determine "what you really see". The brain is a biological machine... if you think there's something that can't eventually be explained by science, then you do still have a theory of an immaterial soul.
No, he (she?) has a theory of a MIND. No serious scientist or philosopher has ever claimed that science should be able to explain everything. Science provides patterns of material causation of material phenomena. Science, in other words, tells us what we experience - it does not tell us HOW we experience it. It couldn't - that would require a fundamental confusion of vocabularies. In the same way that no amount of 'is' can produce an 'ought', so no number of descriptions of the paths of photons can produce the colour of a Caribbean sea. Science will tell us why we see one colour and not another, but it cannot tell us what that colour looks like - it has no words for that in its repertoire. It's not just a gap in our explanations, it's a fundamental incommensurability - a gap that we cannot even imagine being filled by anything that looks like physics.

An example: the sensation of Beethoven can never be described by physics. Imagine if it could be: you could look at a couple of sheets of paper, or a couple of hundred sheets of paper, all filled with equations and diagrams, and from reading that physical description you would know what Beethoven sounded like. The idea is ludicrous - there is always a gap. And if one set of descriptions does not give you information that you can get from another set of descriptions, the first set is not the same as the second: the vocabularies are incommensurable.

This doesn't mean that there is anything immaterial going on, save as far as we consider abstract things like functions and interactions to be immaterial. It just means that one set of functions is described in a different vocabulary to another set of functions. I'm unsure why this possibility is so disturbing to scientismists.

However, not only are the vocabularies distinct, but there is also a hierarchy between them: the mental is incorrigible. Imagine your wife screaming in agony - you will likely believe her to be in pain. Further - imagine yourself screaming in agony. Now imagine that science has developed to the point where we have what appears to be a complete description of the neurological process that correlates to pain - not too far-fetched, pain is one of the more simple phenomena. Now let us imagine that there is a disagreement between science and screaming - a scientist observes your wife's C-fibres and says that she is not in pain at all. Do you say to her, "don't worry dear, you aren't really in pain, even if you think you are. Science has demonstrated that you're not feeling any pain at all." When you are screaming in agony, but your C-fibres are serene, do you say to yourself, "oh! I thought I was in pain, but now I find that belief to have been erroneous. It's a good thing that Science Man is here to help us out, otherwise this morning might have been quite unpleasant!"

What most of us would do here is say "I'm sorry, but your theory is clearly wrong - my wife and I are in agonising pain, yet our C-fibres are undisturbed. Clearly, then, disturbance of the C-fibres is not a universal concomitant of the experience of pain".

It is a characteristic of commensurable sets of terms that terms from one can be replaced by terms from the other. Geometry and algebra, for instance, are commensurable to some extent, because problems in one can be rephrased as problems in the other. The same is true between many physical fields, or between different physical or mathematical traditions. This is not true, however, between the mental and the physical: the mental can never be replaced by the physical, because when experiential evidence conflicts with the theories of physics, it is the theories of physics that are shown to be wrong: if your theory does not predict when I experience pain, we say that your theory is wrong, not that I have not been experiencing pain after all. This underlies Maker's point: physics is employed to explain experience; experience is not subject to correction by physics.
It would be wrong if it predicted that we should see things in one way, when we actually see them in another; that is, its value as a theory is based (partly) on how well it predicts what we actually do see.
Sure— but it works the other way as well. Science may revise what we can claim about our own perceptions.

As Daniel Dennett points out, people are apt to make greater claims for their own perceptions than are warranted. E.g.:
"What he really sees" is a page of Tolstoy; he doesn't "think he sees" a page of Tolstoy, he really does
Please read my description again. If he's seeing a page of Tolstoy, what's the last word on the page? I don't know how to make this clearer: it has never appeared on the screen. If the observer claims to have seen it, he's just wrong.
Let us say that Tolstoy is a zombie, of the non-philosophical kind, who is sitting next to me and writing a book. He writes at about the speed that I read, but has a bit of a head-start on me, so I never catch up to the point where he has not yet written. When I start a page in his book and race him to the end but lose - what is the last word on the page when I begin reading it? Let us grant, even, that I only start reading the book when he is writing the final page (allowing the physical awkwardness of us having to hold the book open on two different pages, hurting our necks in the process) - when I begin, what is the last word in the book? There is none. But I am still reading a book of Tolstoy, and I am still reading a page of Tolstoy.

[Or, if you prefer, say that the 'last word' is the last word that has been written so far, and that in your experiment the last word on the page is the last word of Tolstoy that the machine has projected. Or, say that the last word on the page of Tolstoy is the word of Dostoevsky that is last on that page. All three descriptions are physically equal, and only a matter of how we choose to define our words - which shows how trivial and irrelevant the problem is. None of the three descriptions tell us anything profound about vision]

I'll gladly agree that the subject sees a page of text, and if he reads it he'll read a passage of Tolstoy. It will seem to him that he's seeing a page of Tolstoy, but (before he's read it all) the claim that he really is "seeing a page of Tolstoy" can only be maintained by supposing that his brain, or soul, can see things that aren't there.
Well, that's a claim that nobody can doubt! Have you never had a dream? Other people go further and have hallucinations. We all make mistakes, and many of us have seen optical illusions. We mistake one face for another. But this is all error in the OBJECT of perception, not in the CONTENT of perception - it is trivially true that the object of perception may not be what we believe it to be!
Science often has to correct our perceptions in this way— e.g. it tells us that the sun doesn't move in the sky; its motion is only apparent. It can be uncomfortable when it starts messing with our perceptions... but then, that bit about apparent motion was once a huge stumbling block too.
A false analogy. Science has not changed our perception of the sun whatsoever - it still looks to be doing what it looked to be doing. And it is doing exactly what it looks to be doing!

Wittgenstein famously used the same analogy. Somebody was defending those who had held the geocentric model, saying "well, we can't blame them too much - after all, it does LOOK as though the sun goes around the earth!". Wittgenstein replied: "Tell me, what would it look like, if it looked as though it were the earth that went around the sun?"

The point is, of course, that it would look - and does look - exactly the same. All that has changed is our interpretation of what that 'look' amounts to in physical terms. In other words, when we pass from the content of perception to a hypothetical object of perception, it is very possible for us to enter into error, and believe in a different object than the one that exists. But that does not mean that our perceptions had a different content to what we thought - only that that content had a different cause to what we thought.
I never said anything about sharp or in-focus. It's also obvious that the visual field is blurry except for the focus area, but this is again precisely my point. We don't discover that the edge of the visual field is blurry by examining our organs; rather the study of our organs has as one of its goals to explain why the edge of the visual field is blurry.
Yours may be— check with an eye doctor.

I appreciate that you're trying to modify the claim based on what you know of physiology— but in this case you're going too far. It's not the case that the visual field is blurry— because there's another mechanism that corrects for this, namely movements of the eye that are so fast that you're not even aware of them.
Well, the visual field IS blurry around the edges. This is very apparent to everyone, I think - it's called our peripheral vision, and we all understand that it isn't as good. However, if you doubt me, I've recently heard of a good way to demonstrate it: Take a playing card, and hold it up by your ear while looking straight ahead. You won't be able to tell its suit or its color (but you'll easily notice if it's moving). Move it slowly forward... when can you see what color it is, what suit, what number? It'll be surprisingly close to straight ahead.
This demonstrates that our peripheral vision is blurry. Most of us do not need this demonstrated, however.
Again, the process of understanding isn't one-way as you depict it. Of course we want explanations for why we see as we do. But we also learn things about the brain that turn out to affect perception. I doubt anyone really noticed the blind spot before the anatomy of the eye raised the problem, and the Tolstoy experiment I described (and how we would perceive it) would have been pure speculation before the eye-tracker was invented.
But this tells us nothing about perception, only about the mechanical processes underlying it. These are - don't get me wrong - useful to know about, but they are contingent and, in philosophical terms, uninteresting.
What we do find is that many, many aspects of physiology affect our vision in perceptible ways. I find this quite fascinating though obviously not everyone does.
But it is, as you say, a physiological issue - not a philosophical one. The anatomy of our knee-joints affects how we walk, which is very important for our lives, and therefore has an important role in what we are able to experience - likewise, the fact we have no wings and cannot fly has an effect on our vision, in that it makes us a lot less likely to see tall things from above. Similarly, the fact we cannot see through people's clothing has all sorts of cultural and sociological ramifications. These are not, however, strictly philosophical matters, however interesting they may be to the appropriate field of study.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Salmoneus wrote:[And, of course, its function as a skeptical argument - even if you do not like the theory, can you really be absolutely and indomitably certain that it is not true?]
I've already answered this. It's not as hard as you might think to be comfortable not making absolute claims about truth. You even suggested a reasonable workaround in the atheism thread (less strictness about "knowledge" depending on context).
I'm very comfortable not making absolute claims about truth - because I'm a skeptic. What isn't comfortable is not making such claims, and also claiming not to be a skeptic - given that that's virtually the definition.
As for the eye-tracker experiment, I also answered this question— the subject doesn't see the same thing as the outsider. As I responded to Makerowner, the precise wording here matters. Yes, of course you "see Tolstoy"— no one doubts that.
So what do you think I think I see that you think I don't see?

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Post by zompist »

Salmoneus wrote:No, he (she?) has a theory of a MIND. No serious scientist or philosopher has ever claimed that science should be able to explain everything. Science provides patterns of material causation of material phenomena. Science, in other words, tells us what we experience - it does not tell us HOW we experience it.
Such confidence in the unproveable! I'm a bit surprised that after summarizing 2000 years of philosophy, you haven't the slightest doubt that science will never encroach further on this domain.
Do you say to her, "don't worry dear, you aren't really in pain, even if you think you are. Science has demonstrated that you're not feeling any pain at all."
What's the point of a thought experiment that makes, then denies, its own supposition?
Well, the visual field IS blurry around the edges. This is very apparent to everyone, I think - it's called our peripheral vision, and we all understand that it isn't as good. However, if you doubt me, I've recently heard of a good way to demonstrate it: Take a playing card, and hold it up by your ear [...]
Yes, you're a laff riot.

One final factoid-- I took the eye-tracker example from a philosopher. But of course you are the ultimate arbiter of what is interesting and what is philosophy, and I won't trouble your thread any further.

User avatar
Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Salmoneus wrote:No, he (she?) has a theory of a MIND. No serious scientist or philosopher has ever claimed that science should be able to explain everything. Science provides patterns of material causation of material phenomena. Science, in other words, tells us what we experience - it does not tell us HOW we experience it.
Such confidence in the unproveable! I'm a bit surprised that after summarizing 2000 years of philosophy, you haven't the slightest doubt that science will never encroach further on this domain.
Such confidence in the inconceivable! Such blind faith! Though we cannot even imagine a language that would serve the purpose, let alone the words to be said in it, you still have faith that Science Will Know Everything. Why?

I don't see what the history of philosophy has to do with it. The history of philosophy has been, in at least one dimension, the history of the increasing division between science and what is now called philosophy - just as philosophy in the proper sense has abandoned the scientific domain, so too science has abandoned the philosophical domain. No, I don't see any reason why this will change (despite some 20th century philosophical trends, which we'll get to later).
Do you say to her, "don't worry dear, you aren't really in pain, even if you think you are. Science has demonstrated that you're not feeling any pain at all."
What's the point of a thought experiment that makes, then denies, its own supposition?
... have you not seen a thought experiment before? That's how they work. You suppose X, and then you demonstrate that the supposition is incoherent or unsupportable.
Well, the visual field IS blurry around the edges. This is very apparent to everyone, I think - it's called our peripheral vision, and we all understand that it isn't as good. However, if you doubt me, I've recently heard of a good way to demonstrate it: Take a playing card, and hold it up by your ear [...]
Yes, you're a laff riot.
Oh, dearest, if you don't like having your words quoted back to you, don't say them! And if you're this sensitive to irony, analytical philosophy is not the field for you at all. It's considered good style.
One final factoid-- I took the eye-tracker example from a philosopher. But of course you are the ultimate arbiter of what is interesting and what is philosophy, and I won't trouble your thread any further.
Ah yes, I'm the "ultimate arbiter" - in the sense of daring to give an opinion. I'm so sorry not to have agreed with you - I suppose that disagreement does make argument impossible.

I'm not surprised you got that from a philosopher (which one, out of interest?). Philosophers say all sorts of things and are rarely worth listening to. Indeed, a great many - perhaps most - modern philosophers talk a great deal of bollocks. I am firmly of the opinion that a lot of what philosophers have taken to talking about is not philosophy at all. Indeed, I think most philosophers are of this opinion - Analytic philosophers think that Continentals aren't doing philosophy, Continentals think Analytics aren't doing philosophy, different trends within each group think the other trends aren't doing philosophy, and there are many philosophers who see themselves as surpassing or sublating the C/A dichotomy - many of whom see all the others as not doing philosophy. Nor is this a new thing - not since Christ has there been any unity in philosophy. Believing that opponents are not doing philosophy is a time-honoured and important standpoint for a philosopher (not that I pretend to be that).

Mornche Geddick
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

Fidelity is a function of the senses, Pthug. If senses did not give you a model of your surroundings that was accurate at some level, they would be no use at all. Even a bacterium needs to know where the food source really is. A mutation which caused the glucose*-dependent chemoreceptors to be switched permanently ON or OFF would be quickly selected out of the population. A bacterium that had no glucose receptors, moving in a random walk pattern, would also be out-competed by a wild-type bacterium which did have them. The wild-type bacterium would get to the food faster than the random-walk bacterium.

Your remarks that complex nervous systems are recent developments and that they needn't have evolved seem to me to be beside the point, which is whether sense perception is accurate or merely "useful". A mammal has a complex perceptual model. A bacterium has a simple one. Both are accurate in that they are based on the senses and tell the organism what really is out there. Not everything that really is out there, but the mammal senses more than the bacterium does.

Complex brains did not evolve in order to enable the organism to survive earthquakes, volcano eruptions, the Great Depression, World War II, the collapse of Roman civilisation, or global warming. But we will really need all the help our complex brains can give if we are to survive the last.

(Incidentally, what is your benchmark for Outside Context Problems? Extraterrestrial invasion? New species may have been invading new environments ever since the Precambrian, but the rabbits can't remember the Precambrian. It's Outside Context by their standards.)

*Or whichever food source it uses.

User avatar
Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Post by Pthagnar »

fffffffffffffffffffffffffffff, are you trying not to get my point and get it at the same time? Accuracy *at some level* is PRECISELY THE POINT OF THE LONDON UNDERGROUND MAP ANALOGY. YOU UNDERSTAND WHAT SALMONEUS AND I WERE TRYING TO SAY, HOORAY!

I have already given several examples of Outside Context Problems -- radiation, meteors, etc.

If you want better examples, turn to philosophy. Salmoneus has been vague, probably because he is aware of how little he knows. I am not thus crippled, so I ask you to consider Morality.

Many otherwise intelligent people believe that the existence of "absolute morality" is a shining sign that God exists. It is not always God -- the general argument is along the lines of "Then what put this idea that such-and-such is *wrong* into your head? It cannot be entirely a matter of just following some arbitrary ethical code!" and so the idea of a more general Moral Law arises.

The basis for it is shaky and for most of history has been entirely metaphysical, which is why the above argument smells bad.

The form that this morality takes is, as the naive/philosophical/theological argument above suggests, very much a kind of thing *out there* -- particular things and situations *feel bad* because moral judgements like this are linked into the same world-navigation system we use to deal with everything.

It takes a particular kind of discipline to "step outside" of that and realise that a thing is bad not because it is a BAD THING, but because it *makes you feel bad* inside. This is an Outside Context Problem that the human world-processing system is not really very good at. The history of religion is, from one viewpoint, a story about dealing with this problem.

There is another problem, however. This external-morality is good at one thing -- other people. It is more appealing to consider somebody a BAD PERSON because if they are a BAD PERSON you can do things to them to make the BAD go away. Here is the problem: is this an accurate description of how the world works in the same way that your visual field is an accurate description of how the world works?

User avatar
Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

British Philosophy in the Nineteenth Century

Post by Salmoneus »

Philosophy in Britain (which is to say Scotland) did not immediately follow the route of philosophy in Germany – indeed, even key texts by Kant and his successors were not widely available in Scotland for many decades after they were written. Instead, philosophy was ruled by a different way of reacting to Hume: a way that attempted to maintain his general empirical assumptions and programme, while avoiding his sceptical conclusions. This philosophy is known as the Scottish School of Common Sense; its greatest exponents are Thomas Reid and William Hamilton. Broadly, the School sought to attack the Cartesian “way of ideas” that they believed led to Hume – that is, they insisted on an account of perception that did not rely on intervening ‘ideas’. Instead, Reid claimed that the initial sensation led immediately to a cognitive perception through the action of ‘common sense’; Hamilton went further by claiming that sensation and perception were two sides of the same coin, and not separable even in theory. “Common sense”, in this account, is a feature of human nature, and cannot be doubted – in this, it is not too distant from Hume’s ‘habits of mind’, except that common sense operates right from the moment of sensation, rather than only operating between ideas once they have been formed.

Common sense is also a methodological commitment: because all knowledge comes from sensory perception, which is itself conditioned by common sense, we cannot discount the perceptions of the vulgar, which are just as guided by common sense as those of the intellectual. This does not mean that the majority opinion must always win out, but rather that there must be a ‘dialogue of the vulgar and the educated’. This, in turn, refutes all sceptical and idealist views, which are contrary to common sense.
Reid argues that his faculty of perception is a gift from nature, and therefore that he will trust it; and, furthermore, that as reason is likewise a product of nature, it is futile to trust reason if we do not trust perception: “they came both out of the same shop, and were made by the same artist; and if he puts one piece of false ware into my hands, what should hinder him from putting another?” Therefore, those who argue that reason is more trustworthy than perception have no foundation.

The School’s influence limited the exercise of philosophy in Scotland to ‘the science of the mind’ – approximately, the studies of perception and logic. Metaphysics was entirely rejected, and it is likely for this reason that the greatest British philosophy of the nineteenth century was a moral philosophy that, unusually, did not base itself in any metaphysical programme. This system is known as Utilitarianism.

Utilitarianism began with Bentham, but had older roots, stretching back to Hobbes. Its seed can be seen in Hume’s evaluation of character traits by their ‘utility’ to society; Bentham took this idea and made it more systematic. The key principles of Utilitarianism can be stated simply:

1. The only good is pleasure and the only ill is pain.
2. Moral evaluation should be on the basis of ‘utility’ – whether something produces pleasure or pain.
3. The subjects of moral evaluation are not characters, but acts.
4. The utility of an act is not a result of any typological or classificatory features of the act itself, but instead the result of its consequences.
5. All pleasure and pain is commensurable, regardless of whose pleasure or pain it may be.
6. The morally right act, therefore, is the act that produces the greatest pleasure for the greatest number of people.

In making pleasure the sole good, Bentham returns to the hedonism of the Greeks – but his is a universal hedonism. Pleasure is pleasure, whether it is tomorrow or today, and whether it is mine or yours. He also differs from the Greeks in discussing acts and rules, rather than virtues and characters – perhaps because Bentham approaches the question from the angle of political and social reform, and hence through the medium of law.

Bentham’s successor, John Stuart Mill, sought to soften the Utilitarian doctrine by allowing that pleasures differed in quality as well as in quantity. This enabled us to say, for example, that it is better to be Socrates dissatisfied than a fool satisfied, and that it is better to be a human than a sea slug, because humans (and Socrates) have access to higher qualities of pleasure.

Utilitarianism was immensely significant as a radical reforming movement, on issues such as economic reform (where it was a cornerstone of the early Labour movement) and the equality of the sexes (Mill was one of the foremost champions of political and legal rights for women, typified in his work “The Subjection of Women”). However, it faced a large number of criticisms:

- What is the utility to be maximised? It began with a linear scale of pleasure. Mill complicated it with a second dimension, leading to concerns about how the two dimensions could be made commensurable. Others gradually distanced themselves from pleasure, moving to such alternatives as ‘fulfilment of preferences’.

- What is to be assessed? Is it individual acts, or should we instead judge rules of action – that is, should we refuse to perform an act that leads to the greatest good, if we know it violates the rule that in general leads to the greatest good? Or should we instead agree with Hume, and assess character traits – the traits that lead to the greatest good? Even if we agree on seeking the greatest good, the choice of which classes of things we should be judging can be shown to yield substantially different results in some cases.

- Should we not accept the Kantian doctrine of inalienable human rights? Can those rights be explained through Utilitarianism, or should they be ignored – or accepted as an additional constraint?

- Do moral laws have to be publicly acknowledged? The later Utilitarian, Sidgwick, observes that in some cases more pleasure may come from a society that believes itself not to follow Utilitarian laws - so is it the duty of the government to conceal its reasons for acting? Likewise, can it be true that an individual may better promote the good if he does not consciously follow Utilitarian principles, but only follows them indirectly while believing himself to follow other maxims? If this is so, does that mean that Utilitarians should never preach Utilitarianism?

- Relatedly, what room is there in this system for autonomy? Because it is based on subjective criteria, it essentially licenses all sorts of government manipulation and duplicity, providing that nobody ever finds out about it. Even if it is believed that in practice the government must be free and open because it cannot afford the risk of not being so and being found out, is it acceptable to believe that this is only a pragmatic issue and not one of fundamental moral value?

- Do Utilitarians maximise the average good, or the total good? If the latter, we must surely increase our population as much as possible – even if we all have only a tiny amount of pleasure, we’d still have more in total than a smaller, happier population. If it’s the average good, however, we should reduce our population to give each a greater share of the resources.

- Is death painful? If not, why is killing people wrong, on the Utilitarian account? In particular, if we avoid the population problem by saying that pain is not just a small amount of pleasure, but is actually negative pleasure, it would seem to follow that mass euthanasia of all those in the slightest degree of pain is the quickest way to improve total, or even average, happiness. The only way to avoid forced euthanasia (other than by introducing additional ‘human rights’ constraints, which are themselves problematic) is to assign death an infinite negative pleasure value – but this would demand that we instantly stop having children, since the best way to minimise death is to minimise birth.

These, and other, concerns have led to the extinction of Utilitarianism in its pure form; however, the various answers to these questions together comprise “Consequentialism”, the general doctrine that the right act is that which produces the most good consequences, generally defined in a primarily subjective way. Consequentialism, together with Kantian and Kantian-inspired deontological systems, is one of the two dominant ethical schools today.


-------


Mill, however, was more than a consequentialist. He is best known as a political theorist, the father of modern liberalism – for which he drew on Hegelian/Comtean concepts of social progress and sociology, advising a society that nurtured ‘bold experiments in living’. His political views are enormously influential, and probably now form the core of popular opinion on political issues, but are beyond the scope of these posts.
In epistemology and metaphysics, his most important contributions were negative: the destruction of the School of Common Sense, which thereafter was moribund. His positive contributions, though less influential, were nonetheless prophetic.

Humean scepticism raised two key areas of ignorance: firstly, ignorance of external objects, given that all evidence comes from the senses; and secondly, ignorance of the structure of the world, given that our schematic beliefs appear spontaneous, and are not based upon experience. In the second area, Mill agreed with Kant that these schematic concepts were unavoidable, but he did not derive any metaphysical consequences from this. Instead, he addressed the entire concept of knowledge: the sceptic argued that we could not obtain the certain knowledge Descartes had demanded, and so Mill responded that if such knowledge was never even possibly possible, it could not really be what we wanted after all. Instead, the knowledge we ought to seek respected the schematic necessities into which we were forced. For Mill, “must implies ought” – if we cannot avoid certain forms of reason (such as particularly basic forms of induction), there cannot be anything wrong in not avoiding them. Induction is therefore justified if and when it is inescapable. It might be expected that this would extend to justifying logical reasoning, and to mathematics – but instead, Mill believes that mathematical truths, and even many forms of deduction (such as syllogism), are only known from experience: while “2+2=4” represents a deep and basic fact about the universe, it is still only a fact about the universe, and is thus known from experience, with the possibility of error and correction – we may at any moment be shown to be wrong. Syllogisms, meanwhile, are not forms of deduction at all, as they are merely tautological – what appears to be a conclusion of reason is in fact only an unstated empirical hypothesis contained in the premise.
When it comes to knowledge of objects, however, Mill does not rely on absolution from necessity (in which light it should be added that Mill believed only in verbal necessity – which is to say that nothing in the material world is strictly necessary). Mill follows Berkeley in saving empiricism from scepticism by identifying objects with sensations – but, ingeniously, he identifies them not with actual, present sensations, but with conditional sensations. Objects are, as he puts it, “permanent possibilities of sensation” – not the sensation of red, but the conditional relation that if one looks, one will have the sensation of red. Objects are bundles of these conditionals – a chair not only looks in certain ways from certain angles, but supports you if you sit on it, creates heat and smoke if you set fire to it, and creates a certain knocking sound if you hit it. No ‘thing in itself’ is required to support these bundled conditionals – the object is the bundle. This doctrine, known as ‘phenomenalism’, would re-emerge a century later.

At the time, however, Mill succeeded in demonstrating the power of the Humean critique, without convincing many in his own response to it. Instead, he indirectly created a huge wave of interest in Leibniz, Kant and Hegel in Britain, which went on to dominate British philosophy until the First World War.

----

The British Idealism movement had many representatives, but the greatest is usually considered to be Francis Herbert (“F.H.”) Bradley. Like many British Idealists, Bradley believed in monism – the doctrine that only one thing existed, the Absolute, and that individuation was the work of our own minds (others, like James Ward, subscribed to the Leibnizian doctrine of monads). This monism demands idealism, the doctrine that all existence is mental – because everything is one, ‘matter’ must be one with our perception of it, so that our thoughts are the substance of reality – but it is an Absolute Idealism, distinct from the subjective idealism of Berkeley, because for Bradley being mental or ideal is not the same as being subjective: there is no one single viewer in Bradley’s system. [We should note that many, including the Absolute Idealists themselves, have argued that even Berkeley was not truly a subjective idealist.]

Bradley draws attention to relational propositions, and to three facets of them. Firstly, he observes that if a relation is itself a thing, it cannot relate two terms unless it itself is related in some way to both of them – but that these relations, being things, would then require their own relations, and so on into infinity. Relations cannot, then, be abstract entities in their own right, but must be dependent upon, and possibly internal to, objects (though this last point is debated – his opponents saw him as arguing for internal relations, but his supporters deny this). Furthermore, he believed that all propositions were conditional, not categorical: “the sky is blue” in fact means “if something is the sky, it is blue”. From this, two more important points arise. The first is that our concept of “the sky” is not itself simple, but contains a large number of beliefs – and these beliefs are themselves conditional. Each term in a relation is therefore a bundle of conditionals – not entirely dissimilar from Mill’s possibilities of sensation, although not conceived in strictly empirical terms. The conditionals so bundled themselves relate the term to other terms, themselves conceived of in conditional and relational ways. This has the consequence of undermining the old concept of the ‘idea’, conceived of as a picture or image of something – instead, the ‘idea’ must now be a process, a simulation of all the entangled conditionals, which cannot be distilled into a single stationary image. Furthermore, this simulation will branch out through the interlocked conditionals to encompass the entire world: every statement presupposes an entire model of the world, and helps to define that model. Therefore, even a statement like “the sky is blue” is not simply a statement about an atomic entity, ‘the sky’, but a statement about the entire world.
Moreover, because the statements are conditional, it is a statement not only about the entire actual world, but about the entire possible world. It is a statement about the Absolute.

The second major issue is the question of the nature of truth in this world. If a statement is a conditional, what are its conditions? The apodosis is true if the protasis is true – but as the Absolute encompasses all possible worlds, all things are true somewhere within the Absolute. The apodosis must therefore not be ‘true’, but real – it must reflect the world in which we are actually present. This world, however, is not to be identified in an ontological or realist way, but in a coherentist manner – beginning from our own embodied nature in the world, we stretch out our understanding of the world in a coherent fashion, through a series of conditionals, which hold true in different but overlapping sets of possible worlds. The ‘real world’ is the world in which our beliefs most greatly overlap. This, however, is not the whole of the world, but only one subset of the Absolute, in which we happen to dwell; yet our language, conditional as it is, is able to reach beyond the world of our experience. In this way, some propositions are true in more worlds than others, and are thus more greatly, or more absolutely, true; our truths, in grasping not only the actual world but the entire cosmos of possible worlds, therefore approximate more or less greatly to an Absolute Truth that is unknowable, unobtainable, and even inconceivable.


British Idealism broke, explicitly, with the British tradition of empiricism – and yet it still retained perhaps the most British of all its features – its negativity. The British philosophers since Hobbes had always been primarily philosophers of doubt, negation, and minimalism – and though theories like Bradley’s may appear bold and expansive in consequence, they were nonetheless negative and sceptical in origin and execution. It is here, in attacking preconceived notions (such as time, space, individuality, categorical statements, and the correspondence theory of truth), that they have the most lasting impact. The most famous – indeed, almost the only – piece of Idealist work still admitted by Analytics is McTaggart’s argument for the unreality of time. Likewise, Bradley is now best known for his short, critical first work, “Ethical Studies”, in which he surveyed and demolished previous ethical theories – in particular, the chapter “Pleasure for Pleasure’s Sake” is a powerful rejection of utilitarianism that is still sometimes quoted. The chapter “My Station and its Duties” is even more famous, although its subject is dealt with so sympathetically that careless later readers have often assumed it to reflect his own views, despite the flaws in it that he observes, and despite the fact that it appears only in chapter five of the seven-chapter work. This is not helped by the fact that other Idealists took the chapter as the basis for their own moral theories, to the extent that the moral theory it describes is now known by the title of the chapter.

Bradley’s ideas in ethics clearly owe much to Kant. Like Kant, he believes that morality is both freely willed and yet sometimes reluctantly obedient; like Kant, he believes in an internal moral law by which we choose to abide. For Bradley, this means that our moral duty is self-fulfillment, or self-realisation: the ‘actual’ self (which we see) is to be aligned with the ‘real’ self (which is what we truly are). According to Bradley, however, this real self cannot be seen as some individual, almost solipsist agent, as is often the case in Kantian accounts – instead, Bradley believes that just as the entire world is one Absolute, so too human individuality is an illusion of the actual, and that each man is not distinct, but is only the entire sum of humanity speaking and seeing through one particular reference point. His view can be defended without his metaphysics, however – by both evolution and by nurture, humanity is, he tells us, inherently social. To pursue our narrow, individual goals at the cost of others is not in our real interest, because the well-being of others is also our well-being. Self-realisation is therefore a process of harmonisation and universal love, in which we come to treat others just as we treat ourselves – because there is no difference.

One stage in society on the way to this is the doctrine of “My Station and its Duties”, which holds that each actual society has a number of different stations within it, to which it ascribes certain rights, and gives certain powers over other parts of society, and that these rights and powers in turn engender certain altruistic duties. This, in short, is the morality of the late Victorians – although it should be noted that the Idealists were not conservative at all, whatever later readers may have thought. Bradley goes beyond this model by observing that society itself may be rotten and in need of reform; Bosanquet attempts to deal with the same problems within the framework of ‘my station and its duties’ by including as one of the most fundamental duties of the citizen “the duty of revolution”. Instead, the Idealists held a peculiarly British form of ‘liberal conservatism’ – a doctrine that on the one hand called for strong traditions and obedience to social duties, and yet on the other hand allowed the possibility of change, and opposed all attempts by government or clergy to prevent change. [Another advocate may have been Tolkien, with his ‘anarchist’ conservatism]. The highest motto of the system was the prophetic “Die to Live” – a doctrine that taught that being true to one’s true self could sometimes require (and could only be demonstrated emphatically by) the willingness to sacrifice one’s own life in furtherance of one’s duties to others. The view was widely considered discredited by the use to which it was put in the First World War.




British Idealism did not go anywhere; its effect has been almost entirely negative. Analytic Philosophy began in 1903, and though it defined itself through opposition to Hegel, it defined Hegel through his representative on Earth, Bradley. Where Bradley contributed to Analytic Philosophy, his name and contributions were expunged – in logic, for instance, Bradley was at least as important a figure as Frege, but anything non-Fregean in Bradley was either eliminated or else claimed as an innovation by the Analytics. When Analytics eventually returned to considering systems similar to Absolute Idealism, they rejected the name, and any trace of Bradleyan contamination. For decades, Analytics were open about their refusal to teach Bradley – his writing was too powerful, they said, for the young to be exposed to it, in case it might provide temptation for them. Only in the last few decades has any interest in Bradley reawakened.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

hwhatting
Smeric
Smeric
Posts: 2315
Joined: Fri Sep 13, 2002 2:49 am
Location: Bonn, Germany

Re: British Philosophy in the Nineteenth Century

Post by hwhatting »

Salmoneus wrote: Instead, he addressed the entire concept of knowledge: the sceptic argued that we could not obtain the certain knowledge Descartes had demanded, and so Mill responded that if such knowledge was never even possibly possible, it could not really be what we wanted after all.
That's similar to what I came up with as response to Humean and similar scepticism - good to know that Mill had the idea before me. ;-)

User avatar
Aurora Rossa
Smeric
Smeric
Posts: 1138
Joined: Mon Aug 11, 2003 11:46 am
Location: The vendée of America
Contact:

Post by Aurora Rossa »

Salmoneus wrote: The British Idealism movement had many representatives, but the greatest is usually considered to be Francis Herbert (“F.H.”) Bradley. Like many British Idealists, Bradley believed in monism – the doctrine that only one thing existed, the Absolute, and that individuation was the work of our own minds (others, like James Ward, subscribed to the Leibnizian doctrine of monads). This monism demands idealism, the doctrine that all existence is mental – because everything is one, ‘matter’ must be one with our perception of it, so that our thoughts are the substance of reality – but it is an Absolute Idealism, distinct from the subjective idealism of Berkeley, because for Bradley being mental or ideal is not the same as being subjective: there is no one single viewer in Bradley’s system. [We should note that many, including the Absolute Idealists themselves, have argued that even Berkeley was not truly a subjective idealist].
So where did our multiplicity of seemingly separate minds come from if everything really consists of this unified Absolute? And why does this Absolute bother to create perceptions of reality rather than just remaining a unified point?
"There was a particular car I soon came to think of as distinctly St. Louis-ish: a gigantic white S.U.V. with a W. bumper sticker on it for George W. Bush."

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

Pthug, you appeared in your earlier posts to be saying that perception didn't have to be accurate at all - merely "useful". That was what bothered me and I apologise if it wasn't your intention. I, in turn, never intended to say that perception had to show every single thing that was out there. I'm sorry if that was what you thought. If perception shows the organism even one real object (like glucose to the bacterium) it deserves the names "accurate" and "true". To be sure, this limited perception doesn't deserve to be called "comprehensive", but it is at least not false.

If you'll excuse me, I'd rather not get drawn into a digression on morality and do any more damage to Sal's fascinating thread. We've been stuck on Hume for ages and he's now into the 20th century.

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Don't worry, I'm skipping back to the early 19th in a moment. Probably twice!

That said, I'm a little disappointed/surprised there's been no real comments so far on the 19th century, one of the most exciting and unusual times in philosophy. Kant, Fichte, Schelling, Hegel - they're not exactly intuitive thinkers! [no obscure pun intended]


Anyway, I seem to have written too much on Schopenhauer, so he'll have to have a post to himself - which is a bit odd, as he's not THAT important. But, I did study him, and he's a fairly panoptic thinker, so...

Then it'll be Kierkegaard and Nietzsche.

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

Sorry about that. I was really interested in Kant. I never knew he was the one who first suggested the Earth's rotation was slowing down, for a start. And I think his fourfold distinction (a priori vs a posteriori and synthetic vs analytic) is very interesting and useful. I have several questions about his philosophy.

1) The noumenal / phenomenal distinction.

The noumenal / phenomenal distinction was, I believe, partly anticipated in Dante's time, when educated people could speak of the difference between knowing a thing in its essence (noumenal, in Kant's term) and in its effects.
Your apprehension draws from some real fact
An inward image, which it shows to you
And by that image doth the soul attract.....

To each substantial form that doth compound
With matter, though distinct from it, there cleaves
Specific virtue, integral, inbound,

Which, save in operation, none perceives;
It's known by its effects, as in the plant,
Life manifests itself by the green leaves.
Now Kant, if I have understood, is saying that we can only ever know anything in its effects - specifically its effects on our minds. He's denying the possibility of knowing anything in its essence (maybe with the exception of the self? see below) You've also mentioned that Kant invented the idea of subjective and objective. Could you explain the terms as he defined them?

Specifically I want to know, does Kant identify them with noumenal and phenomenal or did he make another quadripartite distinction? noumenal objective vs noumenal subjective vs phenomenal objective vs phenomenal subjective?

According to C.S. Lewis, Kant even distinguished between a noumenal and a phenomenal self. Did he imply we could have no knowledge of our self-in-itself? Or could we experience the noumenal self?

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

What Dante is employing is the "way of ideas", which became universal in philosophy with Descartes, but which was present in literature for quite some time - Shakespeare also has such sayings.

The Cartesian idea is that:
a) things exist, outside of ourselves
b) in perceiving things, we create an image in our minds that corresponds to the things in themselves
c) therefore, things are only known through ideas

This, then, creates the skeptical question: how much can we learn about things from our ideas of them?

The Kantian idea is very similar. In fact, it's the same. The key is the next stage: from the Cartesian idea, once the skeptic establishes that little is known about the things in themselves, he moves on to establish that little can be known about the world. But Kant says that lots can be known about the world: because 'the world' resides in our ideas, not in things in themselves. There IS a noumenal world, but it isn't what we talk about, nor what we want nor need to know about: our world is the apparent world. We cannot argue that this apparent world is wrong, because it is grounded in necessity - we cannot have an apparent world that differs in certain regards from our own, therefore we are secure in those features of our ideas.


----

I think, though, that Dante may be speaking about something different. Dante is employing an Aristotelian conception of the world, in which there is both matter and, compounded with it, 'substantial form', each form having specific natures. These natures express themselves through their effects on the material: the fact that something is alive is known through the green leaves that issue from it, not directly. The 'effects' and 'operation' that he is talking about look to me to be actual physical effects, rather than perceptions.

-----

I'm not sure it's wrong, but it might be misleading to say that Kant talks about the effects things have on our minds. The key is that Kantian minds do not passively sit back and be acted on by objects - they go out and they create worlds, and structure information. Our perception of a table is not just an effect on us, it is something we have created - we have imposed categories like space, time, number and causation upon it.

Yes, Kant thought we were both noumenal and phenomenal. I don't know whether he thought we could KNOW the noumenal self, but the noumenal self certainly acts. If you read again, this is the basis of Kantian ethics - and, by extension, of the Fichtean system.

Post Reply