A brief overview of the development of Western Philosophy

Discussions worth keeping around later.
Mornche Geddick
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

Salmoneus wrote:Firstly, we normally call these modes of self-consciousness subjective and objective - but the idea was invented by Kant, who came after Hume, so these resources were not really available at the time.
Actually, "contemplation" could be either subjective or objective, depending on what it is directed to. (This subject probably deserves a thread to itself. Or maybe a book.)
Secondly, from one instance of runner+running, you cannot rationally extrapolate to the claim that there is never running without a runner. That universal claim can't be justified by any number of particular data points.
I'm a scientist. Justifying universal claims by accumulation of data is what I do. :)
In other words, you can be certain that running entails a runner - as a claim about the English language. But you can't know that the English language, on this point, correctly reflects the nature of the world.
On the contrary, it's only my language that even allows me to conceive of running without a runner. If I try to imagine it in my head, I get pictures of phantom legs without bodies (running + runners) or just the sound of people running (no running and no runners). Language frees me from picture thinking and allows me to separate the verb from its subject.

brandrinn
Avisaru
Posts: 575
Joined: Sat Sep 18, 2004 10:59 pm
Location: Seoul

Post by brandrinn »

Mornche Geddick wrote:
Salmoneus wrote:Secondly, from one instance of runner+running, you cannot rationally extrapolate to the claim that there is never running without a runner. That universal claim can't be justified deductively by any number of particular data points.
I'm a scientist. Justifying universal claims inductively by accumulation of data is what I do. :)
Fixed.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

By the way, Oliver Sacks in The Man Who Mistook His Wife for a Hat, chapter 2, expresses, much better than I have, my objection to Hume's shivered consciousness. He describes a man with Korsakov's syndrome, which leaves him with no long-term memories past a certain date... everything he sees, he forgets in a few seconds. Sacks calls him a "Humean being", whose moments of consciousness, without memory, are eternally disconnected.

Mornche Geddick
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

I've read that book too. He also feared that fate for someone with very severe Tourette's, whose psyche could be drowned by continual chaotic impulses coming every second.

Thanks for the fix, brandrinn. I hope Sal does a post on Karl Popper.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Zompist: but that example only hammers home the Humean point. If the unitary self-concept is so contingent that one man may have it, and another man, with a slightly different brain, may not have it, surely this undermines the idea that the concept gives us clear and certain knowledge about an ontologically distinct entity - the Self?

Firstly, how do we know that our brains, and not the Korsakov's brain, correctly represent the world? It could be argued that our brains have evolved to do so, but actually correctly representing the world, as I've said, is never necessary in evolutionary terms, and on occasion may even be counter-productive.

Secondly, rather than imagining that Korsakov's removes the ability to perceive the actually-existing Self, why can we not instead imagine that the neurotypical brain merely creates the concept and alleged perception of a non-existing Self?


-------

Brandrinn: well, that's the whole point: that for Hume, induction is not justified. And it's hard to see how it could be: even theories like those of Popper do not claim actual epistemological justification for induction, only pragmatic justification.

To explain my point further:
1. We start off with a piece of knowledge, or several pieces of knowledge
2. We apply reasoning
3. We end up with a piece of knowledge, or several pieces of knowledge, that are no less true than the knowledge with which we began.

This process is epistemologically justified - it preserves and can arguably extend our knowledge of the world, without leading us into error. Inductive "reasoning", on the other hand, cannot do this - because we can apply inductive reasoning to true premises and arrive at false conclusions (from any number of true observations of white swans, induction concludes that all swans are white - a conclusion the first black swan refutes). The procedure can be justified in other (moral, pragmatic, aesthetic) ways, but not epistemologically. Scientific 'knowledge' is not 'knowledge' in the traditional sense of knowledge about the world, but only knowledge about the coherence of a particular set of beliefs, which may or may not represent anything about the world - it is impossible to tell.

Why does this matter? Because the Enlightenment had, at its base, two key assumptions: firstly, that the purpose of philosophy (including all the sciences) was to attain a true representation of the actual nature of the world; and, secondly, that the proper method of philosophy was to begin with what was known, and move from that to all other things that could be known on that basis, making sure never to fall into error.

If we allow induction, one or both of these assumptions must be discarded. The least painful method is to divide 'philosophy' from 'science', and give the latter a pragmatic and error-strewn role as, essentially, the foundation of technology, where it doesn't matter what the scientific theories say about the world, or even if they 'say' anything at all, so long as the models are consistent; the former can be retained in its pure, representational, error-avoiding mode. This is what Kant did - with the caveat that certain forms of "induction" were not actually inductive at all, and could be epistemologically justified within philosophy. Fichte, Schelling and Hegel progressively eroded the distinction between science and philosophy as they found contradictions in Kant's account. Later, Analytic Philosophy established a new division; and that too has since been questioned. Much of modern philosophy - from 'naturalisation' in Angloamerica to 'postmodernism' on the Continent - is an attempt to destroy philosophy itself, conceived of in Kantian isolation. Because it is philosophy that has carried the Enlightenment Weltanschauung, this attack entails the final abandonment of the Enlightenment. This is a traumatic process, and its potential ramifications are severe. The process begins with Hume.

--------------------------

I'm thinking of doing something on philosophers of science, yes - the problem is, I'm not sure how to fit them in. They seem to have been rather insular, and disengaged from either major philosophical tradition. I'll probably have a go, though. To be honest, the whole 20th century is a bit of a mess when it comes to schematisation.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

hwhatting
Smeric
Posts: 2315
Joined: Fri Sep 13, 2002 2:49 am
Location: Bonn, Germany

Post by hwhatting »

Salmoneus wrote: Brandrinn: well, that's the whole point: that for Hume, induction is not justified. And it's hard to see how it could be: even theories like those of Popper do not claim actual epistemological justification for induction, only pragmatic justification.

To explain my point further:
1. We start off with a piece of knowledge, or several pieces of knowledge
2. We apply reasoning
3. We end up with a piece of knowledge, or several pieces of knowledge, that are no less true than the knowledge with which we began.

This process is epistemologically justified - it preserves and can arguably extend our knowledge of the world, without leading us into error. Inductive "reasoning", on the other hand, cannot do this - because we can apply inductive reasoning to true premises and arrive at false conclusions. The procedure can be justified in other (moral, pragmatic, aesthetic) ways, but not epistemologically. Scientific 'knowledge' is not 'knowledge' in the traditional sense of knowledge about the world, but only knowledge about the coherence of a particular set of beliefs, which may or may not represent anything about the world - it is impossible to tell.

Why does this matter? Because the Enlightenment had, at its base, two key assumptions: firstly, that the purpose of philosophy (including all the sciences) was to attain a true representation of the actual nature of the world; and, secondly, that the proper method of philosophy was to begin with what was known, and move from that to all other things that could be known on that basis, making sure never to fall into error.
I think the important point here - the one that was so difficult to accept (for philosophers and laymen alike) - is that there is no incontrovertible truth, no firm ground to stand on. All that we have is pragmatic reasoning along the lines of "this seems to work", but we can never be sure that any assumption that seems to work is actually correct. If I get it right, Hume is not saying "there is no self", he is just saying "we cannot be certain that there is a self". OTOH, if there can be no absolute certainty on anything at all, absolute certainty becomes practically irrelevant and pragmatically (aesthetically / morally) based knowledge becomes the goal and standard.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

That's certainly one interpretation - although accepting Hume's point doesn't have to lead to that particular conclusion, as there are other options.

Moreover, what "becomes the goal and standard" means is itself not simple. The Kantian attitude to the phenomenal and the Critical Rationalist attitude to the scientific are not identical.

Also, even if we reject absolute truth in some areas, that doesn't mean we reject it in all areas - a lot of people who don't believe in objective truth when it comes to science do believe in it when it comes to, say, time, or number theory, or logic. And then there is the great post-Enlightenment problem: if there is no certainty of knowledge, what standard should we use for judging arguments? In some areas, such as science, reasonable answers may have been given, but in other areas, such as morality, logic, and most everyday statements, it seems less clear.

[Also: I think Hume may even be skeptical about whether our belief in the self CAN be true or false. There seems to be a sense in which such claims are more meaningless than false, because they cannot be cashed out in terms of actual experiential evidence. But I leave that to a Hume expert]
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

German Idealism II.

Post by Salmoneus »

The culmination of the German Idealist project comes with Hegel. I’m not going to go on at length about Hegel – he’s a difficult philosopher who is the subject of considerable controversy. All I will do is attempt to offer a vague idea.

The whole of German Idealism may be seen as an attempt to answer certain problems found with Kant. Perhaps the two greatest problems are the Thing-in-Itself, which we can know nothing about, and the distinction between sensible and conceptual – the former, we are given, and the latter we produce ourselves. In this model, the human mind is responsible for the form and structure of consciousness, but not for its contents, which instead are given from outside. However, this scheme rests upon our ability to divide cleanly between content and form – and, just as the Cartesian dualism of mind and body led to problems over how the two could interact, so too the Kantian dualism of content and form led to problems over how the two could be divided. Already in Fichte, we see how ‘formal’ characteristics require particular content – the formal concept of self-limitation requires particular concepts of content, such as other minds, and external bodies. Thus the line is blurred – if the existence of other minds is presupposed by our formal concepts, and is therefore as indispensable, and as inescapable, as those formal concepts themselves, is this existence not itself a formal concept, part of our schematization of the world? Perhaps we simply cannot avoid believing in other minds – in which case, we should reconsider whether our belief in them is really founded on experiential evidence, rather than being an innate schematic feature.

In Fichte, this blurring is peripheral, because for Fichte the prerequisites of self-consciousness are mostly negative – all that is required is the existence of a check against the self, rather than the particular positive details of that check. Hegel begins on this level, but follows Fichte’s abortive lead by considering not only the action of the Self on the Other, but their interaction. By seeing the self not as an independent thing but as the product of an interaction between Self and Other, he brings far more of the content of the Other into the requirements of consciousness – in other words, he erodes the Kantian duality by showing how much of the content of our experience is just as necessary as the form of our experience, and how the content plays a role in shaping the form just as much as vice versa – until ultimately all content is form, and form content, and the whole structure of the world is knowable through conceptualisation, rather than through sensation. At the same time, by following Schelling’s work on the impossibility of extracting the Object from the Subject, and the productive faculties of ‘the Absolute’, which Hegel conceives of in mental terms, he does away with all need for the Thing-in-Itself.

The key part of this process is Hegel’s pursuit of what Schelling called ‘a history of consciousness’ – a history that Hegel makes both finite (perceived in space and time) and infinite (existing beyond such human concepts). Like Schelling, Hegel treats self-consciousness as something that has to be attained, not started from, and that therefore appears in many different stages, each inadequate in its own way. Each stage is undermined by its inherent contradictions, which are resolved, or ‘sublated’, by the shift to a new stage. This shift has certain conceptual prerequisites; and crucially, because this history is also being played out in finite, ‘real world’ history, it has certain physical prerequisites as well – but these two sets of prerequisites are not actually distinct, only different ways of seeing the same thing. The history of the world is therefore only a way of conceiving of the rise to self-consciousness of the Absolute; and, since this conception of self-consciousness is itself a part of self-consciousness (because the Absolute, in us, cannot conceive of itself without the formal structure of our experience), history is both the conception of, and the reality of, a struggle for self-consciousness. This process of a series of stages, each of which is undermined by its inherent contradictions, Hegel called a ‘dialectic’. Hegel conceives of this dialectic as encompassing the entire evolution of the world, and of man and his societies within it.

Four other things should be observed. Firstly, Hegel follows the later work of Fichte in turning from the individual mind to the mind of the nation, and finally of humanity as a whole. Consciousness as the consciousness of multiple, interacting, competing individuals is only one stage of the dialectic, riven by its own contradictions, which will be resolved in a new form of consciousness, a new type of state, in which there is a consciousness that is not particular.

Secondly, an important feature of Hegel’s dialectic is the concept of the master/slave relationship: when two consciousnesses encounter each other, they lose their pre-eminence, and struggle to re-impose it by mastering the Other. They engage in a fight to the death – one wins, through its lesser fear of death, and a master/slave relation is established. Because the self-consciousness of each is dependent on their recognition of, and also by, the Other, each is only imperfectly and dependently self-conscious. This is a deficiency that is rectified through the dialectic: the relationship between the two passes through a number of different forms, stages of which are represented in history by different types of state, and by different types of religion, until both Master and Slave are able to exist no longer in a state of dominance, and hence dependence, but rather in equality, and hence mutual freedom. One driving force in this dialectic is the fact that the Slave, by being forced into labour, comes to master his environment, and achieve greater self-consciousness, while the Master can do nothing except through the Slave, upon whom he becomes entirely, childishly, dependent – this balance of power conflicts with the ostensible power of the Master over the Slave, and this contradiction fuels the evolution to later forms of relationship. The highest form of evolution known to Hegel was the political system of Prussia in the 1830s, and the form of Christianity practiced there – a state of near-perfect reason, equality and liberty; accordingly it was there that Hegel was created to spread his message, for Hegel represents the latest, and perhaps the highest, possible form of consciousness of the World-Spirit.

Thirdly, it is important to note not only that the Slave gains his greater self-actualisation ultimately from his greater fear of death, but that in fact Death can always be seen as the ultimate instantiation of the Other, by which we are checked but over which we seek to re-establish pre-eminence, and that Death is in particular the Master to our Slave, whose callous demands compel us to labour and action, and hence to self-actualisation. For Hegel, then, being-for-ourselves is in the end dependent upon being-toward-death.

Finally, in his belief that history was one telling of the story of our struggle for self-consciousness, Hegel came to portray our own, individual struggles, as represented in the works of the philosophers, as only a part of the overall struggle, just as we as individuals are only a part of the World-Spirit. Vitally, this implied that each philosopher was limited to, and likewise gifted with, only the standpoint of their own particular point in the history of the World-Spirit – no articulation of belief could ever reach beyond its historical situation to claim an abstract and absolute truth (except, perhaps, Hegel’s own philosophy – the question seems debated by scholars), and none could ever be understood except from its particular historical position. The radical conclusion, if it is accepted that the self-actualisation of the World-Spirit will never end in our finite perception of the world, is that we cannot fairly judge philosophers by their closeness to the truth, but only by their closeness to the form and limits of truth as it could be known in their own time.

Hegel’s work has been hugely influential. If Kant set the question for the age, Hegel provided the definitive answer. Subsequent philosophy has largely been an attempt to avoid Hegel’s answer: sometimes, as in Marxism, Positivism, Existentialism and Phenomenology, by cutting out certain elements of Hegel, while retaining many facets of his presuppositions and terminology; other times, as in Analytic Philosophy, by essentially anathematizing all Hegel’s works and seeing what work can be done without drawing upon him in any way – which accounts for a large part of the current contempt in which the Analytic tradition holds the more Hegelian Continental tradition.

Because of the divisive impact of Hegel, philosophy from this point on becomes rather harder to narrate chronologically, as different traditions explored different responses. For this reason, the rest of this post will be somewhat ahistorical, regarding some of the more important direct descendants of Hegel, even when they became influential later than some of the anti-Hegelians I will come to in subsequent posts.

One approach to Hegel was to strip out what was considered most objectionable, or most counter-intuitive, which was generally believed to be his metaphysical content (which, indeed, some Hegel scholars now deny ever existed). An influential example of this approach was the work of Auguste Comte. Comte accepted the evolutionary account of Hegel: humanity, he believed, was progressing through different stages of knowledge, which corresponded to different stages of social organisation, and which approached but never reached an absolute truth. The stages of knowledge, Comte identified with the various sciences, which he believed were developed in a precise order: that which was most alien and distant to humanity was understood first, and later sciences progressively brought knowledge closer to our own experience: hence, science begins with such studies as mathematics and astronomy, and eventually reaches biology and then psychology. At each stage, knowledge was built – could only be built – on a systematic body of experiential, and specifically sensory, observations; but Comte also accepted the Idealist doctrine that no experience was ever pure sensation, but always required an element of pre-existing theory. Like Hegel, he saw this Kantian constraint in historicist terms, through the concept of a feedback between knowledge and theory: experience gives us knowledge, which gives us new theories, which provide new experiences, which provide new knowledge, and so forth.

Importantly for Comte, however, this feedback was not purely intellectual, but was also practical – like Hegel, he believed that particular social states were pre-requisites for different types of knowledge, and thus that the advance of science required that science reconstitute society. The 19th century was particularly vital, because it was then that the final science (or possibly penultimate science, on some accounts) would be developed: a science Comte called ‘sociology’. Sociology, the study of society through the method of systematic observations, was to be the great organising force of the age – and it would re-organise not only society, but all knowledge: sociology was the queen of all the sciences, and scientific knowledge would gradually be reassessed in light of sociology, beginning with a re-assessment of biology in sociological terms. A striking example of this future biology is given when Comte redefines the nature of the brain as being “that organ through which the dead have influence upon the living”.

Finally, through sociology, humanity could learn to scientifically reconstruct society, which for Comte necessitated a reform of religion – instead of the old Religion of God, there would be a new religion, the Religion of Humanity. Religion was essential, he believed, because it was a form of harmony and coherence in society – religion is to society what health is to the body. Comte’s Religion of Humanity (drawing on Kantian ideals) largely sought to retain the rituals of Catholicism while removing God from it, and instead adding liturgical feast-days for various dead heroes that represented the greatest in humanity. The core of this religion, and the core of everything, had to be love, expressed through worship, as love was the underlying principle of humanity – “we tire of thinking, and even of acting; we never tire of loving”. Doctrine and ritual were secondary to this, and could be constructed in whatever way would best enable loving worship; yet in the personal sphere, the heart had to be strictly regulated through the mind, because the heart was blind, and left to itself would quickly fall into incoherence; religion could therefore play the role in society that stoicism had played for the individual – guiding the heart in a coherent and harmonious way. The same role can be more generally played by science itself, as the supreme moral regulator – both for individuals and for society, through sociology and the scientific religion.

There is no doubt that Comte has been hugely influential: as a simple example, it is from him that we get our words “sociology” (the scientific study of society), “positivism” (the method of science by which knowledge is based on systematic observations and progressively improves itself), and “altruism”. He is also a father of scientific unity – his belief that all sciences, including religion and philosophy, are ultimately part of a single unified and coherent science that includes all valuable human knowledge. In the late 19th century, he was widely seen as a model for social and political reform, and he inspired a great many reformatory and revolutionary figures: Atatürk’s secularisation of Turkey, for instance, was founded upon Comtean ideals, and the Brazilian national motto is directly stolen from Comte’s famous slogan, “Order and Progress”. Yet the First World War ended the Comtean consensus – a belief in continual human progress no longer seemed feasible in the face of the scale of the bloodshed, and new advances in physics undermined the Comtean belief that science, having completed physics, had progressed on to sociology. Though positivism has survived and prospered, it is a neopositivism that confines itself to science, and abandons many of the sociological and historicist claims of Comte.


However, there can be no doubt that the greatest follower of Hegel, in terms of political impact, has been one Karl Marx. Marx’s dual move was first to ignore the Hegelian/Comtean focus on knowledge, and then to see economics as more important than politics. The Hegelian dialectic of social relations can thus be seen entirely in terms of economic systems, in which entire classes of people are the participants: each economic system is built on a power imbalance between Master and Slave, through which the Master exploits the Slave, but this exploitation contains internal contradictions which result in a shift to a new system of production. Crucially, Marx also did not believe, unlike many of Hegel’s students, that Prussia was the perfect state, nor capitalism the perfect economic system – he believed that capitalism was inherently exploitative, and that economic history would continue to progress until capitalism collapsed under the weight of its own contradictions and a new system of “communism” was ushered in by historical inevitability. Communism was to be the perfect economic system in which no exploitation existed, and all were equal – an economic equivalent of Hegel’s “Absolute Knowledge”.

Although Marx’s economic theories are now discredited, and the era of his chief political representative, Marxism-Leninism, has passed, Marxism itself remains hugely influential – in political and sociological theory, in the development of new systems of philosophy articulated by the previously marginalised and exploited (feminist philosophy, post-colonialist philosophy), and in particular in the Continental tradition, where the materialist dialectic of Marx has been remarried to the Hegelian insistence on the interrelationship between social and epistemological realities.


NEXT: British Philosophy in the 19th Century: Common Sense, Utilitarianism, Mill, and British Idealism
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Salmoneus wrote:Zompist: but that example only hammers home the Humean point. If the unitary self-concept is so contingent that one man may have it, and another man, with a slightly different brain, may not have it, surely this undermines the idea that the concept gives us clear and certain knowledge about an ontologically distinct entity - the Self?
How could it? Does knowledge of cataracts make us doubt the existence of eyes? Biology generally finds that we learn more about a function by studying its deficits.
Firstly, how do we know that our brains, and not the Korsakov's brain, correctly represent the world?
If there's a self, that doesn't immediately establish a world— not even for Descartes, who needed to pull in God to get over that step.
It could be argued that our brains have evolved to do so, but actually correctly representing the world, as I've said, is never necessary in evolutionary terms, and on occasion may even be counter-productive.
Of course. But evolution is likely to give primates very very good knowledge about some things (our immediate spatial surroundings, whether our bodies are in good shape), and quite good knowledge about others (our own consciousness, our social relationships, what things are healthy to eat, etc.).

To be clearer, I think evolution gave us a sense of self, because it's useful for a primate to have: the self is the central point in the web of social and moral connections; it's also the heightened alertness that allows us to solve problems (how to get that banana hanging from the ceiling). Frustratingly for us as philosophers, evolution didn't give us good tools to contemplate rather than enjoy consciousness, to use Lewis's terms, any more than the eye is designed to see the retina.
Secondly, rather than imagining that Korsakov's removes the ability to perceive the actually-existing Self, why can we not instead imagine that the neurotypical brain merely creates the concept and alleged perception of a non-existing Self?
Korsakov's is caused by physical degradation of the brain. Destroying bits of your brain is likely to remove functions, not to add them.

On the other hand I'm not really bothered if you want to call the self an illusion, especially if you do so in order not to reify it as The Soul. The brain gives us other useful fictions— e.g. we seem to perceive a hi-res, uninterrupted visual field in front of us, when it's clear from neurology that it's interrupted by a huge blind spot and the only hi-res area is a tiny moving region.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Salmoneus wrote:Zompist: but that example only hammers home the Humean point. If the unitary self-concept is so contingent that one man may have it, and another man, with a slightly different brain, may not have it, surely this undermines the idea that the concept gives us clear and certain knowledge about an ontologically distinct entity - the Self?
How could it? Does knowledge of cataracts make us doubt the existence of eyes? Biology generally finds that we learn more about a function by studying its deficits.
Which is exactly the point. Because we know that, for example, the lens in your eye can be deformed, and that if it is deformed we do not see the same way, this causes us to doubt the assumption that the way we see the world is precisely correct. Of course, in the case of lenses, we can go and see how light is distorted by different types of lenses, relative to how light is outside the eye, and find the least distortive lens. We cannot compare thought in the brain with thought outside the brain, so we cannot find the least distorting brain structure. Surely the existence of multiple possible brain structures that display the world in different ways is a prima facie reason to doubt the assumption that my particular brain structure portrays the world in a perfectly undistorted way?
Firstly, how do we know that our brains, and not the Korsakov's brain, correctly represent the world?
If there's a self, that doesn't immediately establish a world— not even for Descartes, who needed to pull in God to get over that step.
No, he believed in a world all along - God was only needed for knowledge of the *external*, *extended* world. The Self is still part of the world, in that it's (in traditional accounts) something we can say true and false things about and have there be a difference between the two.
It could be argued that our brains have evolved to do so, but actually correctly representing the world, as I've said, is never necessary in evolutionary terms, and on occasion may even be counter-productive.
Of course. But evolution is likely to give primates very very good knowledge about some things (our immediate spatial surroundings, whether our bodies are in good shape), and quite good knowledge about others (our own consciousness, our social relationships, what things are healthy to eat, etc.).
I can see no reason to think that this knowledge must be 'good' knowledge of the world, rather than 'useful' knowledge that is structurally analogous to certain salient features of the world.

An example: the Tube map. The tube map is not a good map of London. It requires two dimensions, but the 2d location of points on the map does not represent their location in the real two dimensions (hence people always believing Elephant and Castle is a long way south, because it's low down on the map, even though it's actually north of Victoria, half a map higher than it). Yet, if you give the map to somebody who travels primarily by tube, it can be very useful to him - more useful, perhaps, than a high-resolution printout of googlemaps and a book with all the tube, train and bus times in it. Certainly easier to give them, and easier for them to carry.

So, in a functional sense, the tube map is the best map to have - it would be the sort of map favoured by evolution. And yet, not only does it miss out a lot of information, even with the information it does have it is universally and unpredictably distorting - even the relative location of tube stations cannot be known from the map.

I don't see why evolution would not have given us exactly this sort of synoptic perception of the world - useful for the sort of tasks we typically need it for (like knowing which line to take and where to change), but hopeless when we go beyond that. Indeed, while I agree that it is likely to be USEFUL for certain spheres, note that usefulness does not imply fidelity about all factors in that field - it just has to faithfully represent *paths of action*, not external factors (like the actual relative position of stations).
To be clearer, I think evolution gave us a sense of self, because it's useful for a primate to have: the self is the central point in the web of social and moral connections; it's also the heightened alertness that allows us to solve problems (how to get that banana hanging from the ceiling). Frustratingly for us as philosophers, evolution didn't give us good tools to contemplate rather than enjoy consciousness, to use Lewis's terms, any more than the eye is designed to see the retina.
And yet surely when you start making claims about the ontological status and pre-requisites of consciousness (such as the existence of a unified self), this is "contemplatory"?

And I agree that it's useful for primates to have this idea - which is surely evidence against the claim that the idea must be true. If it is a useful idea, we would have it whether or not it were true - so the fact we have it is no evidence for its truth.
Secondly, rather than imagining that Korsakov's removes the ability to perceive the actually-existing Self, why can we not instead imagine that the neurotypical brain merely creates the concept and alleged perception of a non-existing Self?
Korsakov's is caused by physical degradation of the brain. Destroying bits of your brain is likely to remove functions, not to add them.
Yes. But "creating a useful but non-world-faithful idea" is just as much a 'function' as "creating an idea that is faithful to the world", so that point is moot. Indeed, many people would say the second requires less work by the brain (as the idea 'comes from outside' in some way), so that the former would be more vulnerable to brain damage.

The question is not whether functions are lost or gained, but WHICH functions are gained, and I do not see that it is self-evident which it must be.
On the other hand I'm not really bothered if you want to call the self an illusion, especially if you do so in order not to reify it as The Soul. The brain gives us other useful fictions— e.g. we seem to perceive a hi-res, uninterrupted visual field in front of us, when it's clear from neurology that it's interrupted by a huge blind spot and the only hi-res area is a tiny moving region.
Exactly - so why can't the unity of the self be one such fiction?* Also, I don't understand you - "the self exists" IS 'reifying' it. The whole point of it is the claim that the self is a real thing. If you don't reify it, you deny that it's a thing - that's what reification means. And I'm not sure what your distinction is between 'self' and 'soul' - the difference chiefly seems to be one of register and time period.


*I note you're being Humean here. Hume says our perceptions should be accountable to our sensations - most people hold them accountable to reality instead. So most people don't think the hi-res visual field is a 'fiction', but rather the truth - the neurology just indicates that the brain has to work to restore the true information from the encoded information acquired through the senses.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Makerowner
Sanci
Posts: 20
Joined: Tue Jul 10, 2007 2:00 am
Location: In the middle of the Canadian Vowel Shift

Post by Makerowner »

One thing I'd like to mention about Hegel is that "Absolute Knowledge" is his concern only in the Phenomenology of Spirit; in his later works, e.g. the Philosophy of History, the end point is not perfect knowledge but (as Salmoneus mentioned) perfect freedom. --Though of course, what Hegel means by freedom is quite different from what we mean in phrases like "freedom of religion", "freedom of the press", etc. So when he says that 19th-century Prussia is (almost) the height of freedom, pointing out its restricted suffrage or state censorship is not a direct counterargument (though it isn't entirely irrelevant either--nothing is entirely irrelevant to anything else in Hegel). There's a very important chapter in the Phenomenology on the "law of the heart" and the "beautiful soul" where he shows how our kind of "freedom" (the freedom to "do what you want") is fundamentally unfree.
Another thing is that the Phenomenology (which is widely regarded as his most important work, though not by Hegel himself) was intended as a kind of Bildungsroman in which the reader is the one who receives the education. Hegel's philosophy is fundamentally not about the result, but the process by which we come to know the result. He rejects the idea that truth in philosophy is anything like the answers to "When did Caesar cross the Rubicon?" or "How many feet are there in a mile?"--answers that simply erase the question. In philosophy the "questions"--the contradictions each successive stage of history contains within itself--can only be posed from the point of view of the answer, and the "answer" is nothing but the history of the successively posed questions.
Hig! Hig! Micel gedeorf ys hyt.
Gea leof, micel gedeorf hit ys, forþam ic neom freoh.
And ic eom getrywe hlaforde minon.

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Salmoneus wrote:Which is exactly the point. Because we know that, for example, the lens in your eye can be deformed, and that if it is deformed we do not see the same way, this causes us to doubt the assumption that the way we see the world is precisely correct. Of course, in the case of lenses, we can go and see how light is distorted by different types of lenses, relative to how light is outside the eye, and find the least distortive lens. We cannot compare thought in the brain with thought outside the brain, so we cannot find the least distorting brain structure. Surely the existence of multiple possible brain structures that display the world in different ways is a prima facie reason to doubt the assumption that my particular brain structure portrays the world in a perfectly undistorted way?
But who is saying that our vision, or any aspect of how we see the world, is "precisely correct" and "perfectly undistorted"?
I can see no reason to think that this knowledge must be 'good' knowledge of the world, rather than 'useful' knowledge that is structurally analogous to certain salient features of the world.

An example: the Tube map. The tube map is not a good map of London. It requires two dimensions, but the 2d location of points on the map does not represent their location in the real two dimensions (hence people always believing Elephant and Castle is a long way south, because it's low down on the map, even though it's actually north of Victoria, half a map higher than it). Yet, if you give the map to somebody who travels primarily by tube, it can be very useful to him - more useful, perhaps, than a high-resolution printout of googlemaps and a book with all the tube, train and bus times in it. Certainly easier to give them, and easier for them to carry.
It may not be a good map of London, but it's a good map of the Underground. It enables people to get where they want to go.

(I don't know how you judge that the tube map is "the sort of map favoured by evolution". How do you know our ideas of the world are (in analogy) as geometry-distorting as the tube map, rather than a road map? It's probably fair to say that if a given distortion reduces our fitness, it's subject to evolutionary pressure to minimize that distortion.)
To be clearer, I think evolution gave us a sense of self, because it's useful for a primate to have: the self is the central point in the web of social and moral connections; it's also the heightened alertness that allows us to solve problems (how to get that banana hanging from the ceiling). Frustratingly for us as philosophers, evolution didn't give us good tools to contemplate rather than enjoy consciousness, to use Lewis's terms, any more than the eye is designed to see the retina.
And yet surely when you start making claims about the ontological status and pre-requisites of consciousness (such as the existence of a unified self), this is "contemplatory"?
Yes, didn't I just say that? And because our mental tools aren't evolved to contemplate themselves, introspection (ours or Hume's) isn't that trustworthy as a guide to how our minds operate.
And I agree that it's useful for primates to have this idea - which is surely evidence against the claim that the idea must be true. If it is a useful idea, we would have it whether or not it were true - so the fact we have it is no evidence for its truth.
I don't know what you mean by "evidence" here-- you seem to mean "proof", and who's talking about proofs? And what ideas "must be true"? Didn't I call the self a "useful fiction"? I think you're arguing against someone else's absolutism. I'm skeptical about Hume's arguments against the self; that doesn't mean I somehow think we have objective access to ultimate truth.
Exactly - so why can't the unity of the self be one such fiction?* Also, I don't understand you - "the self exists" IS 'reifying' it. The whole point of it is the claim that the self is a real thing. If you don't reify it, you deny that it's a thing - that's what reification means. And I'm not sure what your distinction is between 'self' and 'soul' - the difference chiefly seems to be one of register and time period.
"Soul" carries a lot more associations, especially dualistic or theistic ones.

Perhaps the hangup here is the word "fiction"? If you take it as meaning "completely false and illusory, made up", then I don't agree that the self is a fiction. But you provided an excellent analogy yourself, the tube map-- something that usefully approximates the world though with errors and distortions. And I provided an analogy myself, the uniform visual field we think we have. I don't think we're far apart here.

Mornche Geddick
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

Zompist, are you sure our sense of self was directly selected for? It seems to me quite possible our sense of self may be a byproduct of our intelligence - that it was not possible to evolve the one without the other.
Salmoneus wrote:I don't see why evolution would not have given us exactly this sort of synoptic perception of the world - useful for the sort of tasks we typically need it for (like knowing which line to take and where to change), but hopeless when we go beyond that.
I do. Because we may need to go beyond our typical tasks at any time, when our environment throws us a totally new and unexpected challenge. Then we will need the most accurate perceptions we can get.

Pthagnar
Avisaru
Posts: 702
Joined: Fri Sep 13, 2002 12:45 pm
Location: Hole of Aspiration

Post by Pthagnar »

Mornche Geddick wrote:I do. Because we may need to go beyond our typical tasks at any time, when our environment throws us a totally new and unexpected challenge. Then we will need the most accurate perceptions we can get.
This argument holds for all evolutionary history. It has ever been an advantage for any organism to be able to "go beyond our typical tasks at any time". Wiwaxia, therefore, since it had need of the most accurate perception it could get, got it. So do cycads. So do limpets, etc.

You can see the flaw here -- just because an adaptation would be nice to have doesn't mean it either a) can or b) is likely to evolve. It would be very nice to evolve biological x-ray vision, biological radio communication, biological nuclear reactors and so on, just to name the flashiest of unlikely adaptations.

The other flaw is that you are being too mysterious in describing these "totally new and unexpected challenges". What sort of thing did you have in mind? You are probably right when you say that many adaptations come about due to their ability to provide fitness to individuals in unexpected situations, but it is characteristic of such adaptations that biologists are retroactively able to speculate as to what these causative situations were -- climatic change, the move to a new food source, mutations of dick shape, the arrival of a new top predator species, a third part of the sea becoming purple, the sky turning blue as lapis etc. etc.

This is where the human intelligence you mentioned comes in. By virtue of your possessing it, you ought to be able to answer the question of what these totally new and unexpected challenges are. If I may suggest an answer, I would say that it is "everything weird that science has discovered" -- no animals can detect gamma radiation (there's evidence that some fungi use it as an energy source, but let it pass) or see bacteria causing diseases (let alone viruses), yet nevertheless we have been able to comprehend and *act to prevent* these "totally new and unexpected" challenges.

Now, you could (but you might not) suggest that since human intelligence evolved, then in a woolly way, "intelligence" is "the most accurate perception we can get". This is sorta true, but it's not a very appealing explanation. It doesn't sound right to say that "humans evolved to see atoms, giving them the ability to tell which food was good to eat" or "humans evolved to see gamma radiation, allowing them to evade it". It is rather like saying "humans evolved to be able to write the Brandenburg Concerto".

It is also possible to suggest possible adaptations by appealing to theoretical science. There are physical phenomena we cannot yet even perceive directly with scientific apparatus. Dark matter (pace jburke) is apparently all around us, but there is no way to detect it. At any moment, demons made out of non-baryonic dark matter may appear and kill us. Being able to see them would be very useful. At any moment, an asteroid may crash into the earth and create another K/T extinction -- it would be very useful to know the position of all bodies likely to cause this so we could, I don't know, dig a hole maybe?

It is even possible to suggest the existence of possible adaptations by appealing to theoretical theoretical science. This is very hard to do sensibly, but by placing yourself 1000 or so years into the past, one can see that it is not inconceivable that the various corrections made to the naive (evolutionary) theories of physics and geometry by Galileo, Einstein, Dirac, Wheeler etc. are just the beginning of a long list.

And this is what you get without even appealing to philosophy!

zompist
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den

Post by zompist »

Mornche Geddick wrote:Zompist, are you sure our sense of self was directly selected for? It seems to me quite possible our sense of self may be a byproduct of our intelligence - that it was not possible to evolve the one without the other.
This is a manifestation of the philosophical zombie. How could we tell if someone lacked consciousness? Personally, I think such an idea can be maintained only by adding more and more absurd claims. E.g., why don't we just ask them? The zombies have to be given the ability to claim to be conscious and feign it in every way. But why would they lie about it— surely every conspiracy in history has at least one traitor? Perhaps they don't know they're zombies— i.e. we're now postulating that some mechanism simulates consciousness so they have the material to correctly feign it, and yet this mechanism somehow isn't consciousness.

Or compare consciousness to other biological abilities. Evolution and mutation work pretty quickly to mess up unneeded functions... our sense of smell, for instance, has deteriorated compared to other mammals. If consciousness wasn't necessary, we'd expect it to often not appear. If it's necessary, then derangements of consciousness should appear as noticeable, disabling handicaps; Oliver Sacks's book is in fact full of examples that this is so.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

I don't think you really get the zombie concept - your language presupposes that they don't exist. For instance, you ask: "why would they lie about it?", and suggest "perhaps they don't know they're zombies". But of course both lying about something, and knowing anything about themselves are states we attribute to things that aren't zombies. The zombies don't say that they aren't zombies. The zombies don't say anything - zombies just make a series of sounds that have no meaning to them, just as any other non-sapient phenomenon might. WE give those sounds meaning - we believe that those sounds are similar to the sounds we think of as language, and due to this familiarity we INTERPRET their sounds as words with meanings like "I am not a zombie".

It's worth noting that zombies do now exist - in the form of computer programs and certain robots. Now, it's true that current AI isn't good enough to fool us if we look hard, but is it really impossible for an AI to fool a human comprehensively and yet lack genuine sapience? [Yes, some people believe it is impossible - behaviourists and functionalists and so forth. But it seems wrong to just dismiss the possibility]

Personally, I don't believe in zombies - but I think your particular arguments against them beg the question.

--------

Perhaps even more importantly, denying the possibility of zombies is close to denying the ontological reality of consciousness. Yes, you could try to argue that zombies are physically impossible, but that seems very tendentious - most people who deny zombies will therefore deny consciousness, and instead say that consciousness and the self are just a redescription of functional characteristics.

------

On the issue of the necessity of self-concept:

- firstly, let's be clear what we are talking about. We were talking about the self, and now you're talking about consciousness, a very different thing.
- secondly, your argument doesn't hold, because it doesn't distinguish sui generis functions from consequential functions. Yes, it makes no sense to see either consciousness or the self as sui generis and superfluous, because evolution would have done away with them, most likely. However, if there is a function, Function X, which may be met by multiple mechanisms, and one of those mechanisms, Mechanism A, happens to have consciousness as an inevitable side-effect, evolution will not be able to get rid of consciousness, even if it is itself unnecessary, without a path-change to Mechanism B, which evolution is extremely loath to do. Consciousness may therefore be itself unnecessary while still occurring through evolution. There are many examples of such things in evolutionary history. One small example is our own sense of mint - the fact that we gain sensations of coldness from the taste of mint is in no way evolutionarily needed, or even helpful, but is simply a side-effect of the particular mechanism of oral temperature-detection. Or consider that there is a species of funnel-web whose venom is deadly to humans - a complicated evolutionary development, and yet entirely accidental, as the venom is actually designed for earthworms (and only affects earthworms (its prey) and primates (which it would not naturally encounter)). The human-killing function is complex and hence expensive, but is not necessary - but it is retained as the by-product of something that IS necessary.
Likewise, consciousness and self-concept could both be by-products of some other evolutionary function. [I don't think they are, as it happens, but that's irrelevant].


-----

Mornche: Pthug has answered your comment. The fact that a better map would be nice is no reason to think that evolution has given us one. Indeed, there are very few instances where it seems clear that a better map would even be useful - how important is metaphysics, really, to the survival of the species?
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Salmoneus wrote:Which is exactly the point. Because we know that, for example, the lens in your eye can be deformed, and that if it is deformed we do not see the same way, this causes us to doubt the assumption that the way we see the world is precisely correct. Of course, in the case of lenses, we can go and see how light is distorted by different types of lenses, relative to how light is outside the eye, and find the least distortive lens. We cannot compare thought in the brain with thought outside the brain, so we cannot find the least distorting brain structure. Surely the existence of multiple possible brain structures that display the world in different ways is a prima facie reason to doubt the assumption that my particular brain structure portrays the world in a perfectly undistorted way?
But who is saying that our vision, or any aspect of how we see the world, is "precisely correct" and "perfectly undistorted"?
When it's vision of ourselves - everybody! Or, at least, the Enlightenment tradition, according to which the contents of our own minds are entirely perspicuous to us.
I can see no reason to think that this knowledge must be 'good' knowledge of the world, rather than 'useful' knowledge that is structurally analogous to certain salient features of the world.

An example: the Tube map. The tube map is not a good map of London. It requires two dimensions, but the 2d location of points on the map does not represent their location in the real two dimensions (hence people always believing Elephant and Castle is a long way south, because it's low down on the map, even though it's actually north of Victoria, half a map higher than it). Yet, if you give the map to somebody who travels primarily by tube, it can be very useful to him - more useful, perhaps, than a high-resolution printout of googlemaps and a book with all the tube, train and bus times in it. Certainly easier to give them, and easier for them to carry.
It may not be a good map of London, but it's a good map of the Underground. It enables people to get where they want to go.
Which is my point. The self-concept is almost certainly useful - it gets us where we want to go. That's no reason to think that the self is as we think we perceive it, or even that the self exists at all.
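(To see how little of London's real geometry "getting where you want to go" requires, here is a sketch of route-finding from pure adjacency, with no coordinates anywhere. The stations and connections are simplified and partly invented, so don't navigate by them:)

[code]
from collections import deque

# A tube-map-style network: only which stations connect, nothing about
# where they are. (Connections simplified/invented for illustration.)
links = {
    "Victoria": ["Green Park", "Pimlico"],
    "Green Park": ["Victoria", "Oxford Circus", "Westminster"],
    "Westminster": ["Green Park", "Waterloo"],
    "Waterloo": ["Westminster", "Elephant & Castle"],
    "Pimlico": ["Victoria", "Vauxhall"],
    "Vauxhall": ["Pimlico"],
    "Oxford Circus": ["Green Park"],
    "Elephant & Castle": ["Waterloo"],
}

def route(start, goal):
    """Breadth-first search: fewest stops, using adjacency alone."""
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(route("Victoria", "Elephant & Castle"))
# ['Victoria', 'Green Park', 'Westminster', 'Waterloo', 'Elephant & Castle']
[/code]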
(I don't know how you judge that the tube map is "the sort of map favoured by evolution". How do you know our ideas of the world are (in analogy) as geometry-distorting as the tube map, rather than a road map?
I don't - but I see no way to claim to know that they AREN'T as distorting.
It's probably fair to say that if a given distortion reduces our fitness, it's subject to evolutionary pressure to minimize that distortion.)
But whether this hypothetical distortion (the self-concept) reduces our fitness is a matter for debate. I see no reason why it must. Being untrue does not simply mean being damaging to our fitness - or, if it does (as some do claim), then we're operating under an entirely different concept of truth from the one Hume had to deal with (and from the one that is current). So, some distortions may not reduce fitness - so would not be subject to evolutionary pressures. Indeed, some distortions may INCREASE fitness - and such convenient untruths would be subject to evolutionary pressure to MAXIMISE (or at least increase) that distortion.

We can't know which it is unless we can compare our fitness with and without the distortion. Since we don't know what the distortion is, or even if it exists, we cannot judge this - so such evolutionary arguments are taken out of the frame.
To be clearer, I think evolution gave us a sense of self, because it's useful for a primate to have: the self is the central point in the web of social and moral connections; it's also the heightened alertness that allows us to solve problems (how to get that banana hanging from the ceiling). Frustratingly for us as philosophers, evolution didn't give us good tools to contemplate rather than enjoy consciousness, to use Lewis's terms, any more than the eye is designed to see the retina.
And yet surely when you start making claims about the ontological status and pre-requisites of consciousness (such as the existence of a unified self), this is "contemplatory"?
Yes, didn't I just say that? And because our mental tools aren't evolved to contemplate themselves, introspection (ours or Hume's) isn't that trustworthy as a guide to how our minds operate.
Well then what are you arguing about? Firstly, you seem to concede the entire point - that Enlightenment introspection is not a sure guide to knowledge. That is, that we cannot know whether the self exists.

Secondly, you beg two major questions:
- firstly, that our mental tools aren't evolved to contemplate themselves. This is tendentious. You yourself have argued that consciousness has been evolutionarily selected for - and surely the essence of consciousness is a consciousness of consciousness? That is, if we have evolved to be conscious, then yes, our mental tools HAVE evolved to contemplate themselves, as that's what consciousness is.
- secondly, that things can only do what they were evolved to do. I repeat the examples of our cold-detectors also detecting mint, and spider venom killing humans as well as earthworms. Even if our mental tools haven't evolved to contemplate themselves, that doesn't mean that they can't do it.
And I agree that it's useful for primates to have this idea - which surely undermines any inference from our having the idea to its truth. If it is a useful idea, we would have it whether or not it were true - so the fact we have it is no evidence for its truth.
I don't know what you mean by "evidence" here-- you seem to mean "proof", and who's talking about proofs? And what ideas "must be true"? Didn't I call the self a "useful fiction"? I think you're arguing against someone else's absolutism. I'm skeptical about Hume's arguments against the self; that doesn't mean I somehow think we have objective access to ultimate truth.
No, I mean "evidence". If A does not make B more likely, A is not evidence for B.
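(In probabilistic terms - my gloss, not Hume's: A is evidence for B just in case P(B|A) > P(B), equivalently just in case P(A|B) > P(A|not-B). If the self-concept would be just as probable whether or not the self exists, then observing it cannot raise the probability that the self exists.)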

Perhaps I've misrelayed Hume's arguments? To summarise: "It is not true that we know that there is an essential self". Being sceptical about this looks like claiming that you do indeed know something that's true. Denying that we do not know is claiming to know!
Exactly - so why can't the unity of the self be one such fiction? Also, I don't understand you - "the self exists" IS 'reifying' it. The whole point of it is the claim that the self is a real thing. If you don't reify it, you deny that it's a thing - that's what reification means. And I'm not sure what your distinction is between 'self' and 'soul' - the difference chiefly seems to be one of register and time period.
"Soul" carries a lot more associations, especially dualistic or theistic ones.

Perhaps the hangup here is the word "fiction"? If you take it as meaning "completely false and illusory, made up", then I don't agree that the self is a fiction. But you provided an excellent analogy yourself, the tube map-- something that usefully approximates the world though with errors and distortions. And I provided an analogy myself, the uniform visual field we think we have. I don't think we're far apart here.
Well, claiming that the self is real is inherently dualistic - it divides things fundamentally into self and non-self. The problems of Cartesianism and Empiricism follow from this dualism - how can self and non-self interact, and how can the self know the non-self?

I don't understand how you could deny that the tube map is both false (as a map of London) and 'made up'. (though perhaps it may not be 'illusory'). It is pragmatically useful - but from that fact we can deduce nothing about the real nature of London - for all we know, the tube map could be upside-down, it would still work just as well! I'm not sure what sort of 'fiction' you are contrasting the map with - especially as this is a map that exists only in the mind, and not on paper.

[I still don't understand why you think we don't have a uniform visual field. I can see it - it is present in my mind. Best science shows that there is in fact a uniform field of photons coming into my eyes. Arguing that there 'really' isn't such a field because the eye scans actively and not passively seems like arguing that I have no real visual impression of a square because the electrons in my optic nerve are not arranged in a square. The retina, and blind spots and high-intensity spots and whatnot are equivalent to the optic nerve - they are tools to provide an image, not places where an image resides. It's as ludicrous to associate 'my real visual impressions' with the pattern of blind spots on a retina as it would be to associate them with the pattern of electrons in the optic nerve.

An analogy: a message is sent in morse code. The transmitter sends an English message, and the receiver gets a message in English, and they're the same. You seem to be saying "but you know, we don't really get an English message, because the telegraphist just received a string of dots and dashes".]



--------------------
------------
-----------------------------


I think this proves how relevant Hume is to our time. Note how people sat back and read about the wildest speculations with barely a "hmm, interesting". But once a philosopher dares to be skeptical about things we still take for granted, there's a flurry of protestations...
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
Yiuel Raumbesrairc
Avisaru
Avisaru
Posts: 668
Joined: Thu Jan 20, 2005 11:17 pm
Location: Nyeriborma, Elme, Melomers

Post by Yiuel Raumbesrairc »

Salmoneus wrote:
But who is saying that our vision, or any aspect of how we see the world, is "precisely correct" and "perfectly undistorted"?
When it's vision of ourselves - everybody! Or, at least, the Enlightenment tradition, according to which the contents of our own minds are entirely perspicuous to us.
There are a few psychologists who would disagree with you. Various studies actually seem to show us that we can easily have a highly distorted view of ourselves.

As for the whole thing about personal identity, could it be that we are just fooling ourselves? That is, that WE are the zombies, and that our identity is just a deceiving construct built from all the input we have (whether innate or acquired)? Indeed, our own body is "merely" a package of atoms assembled together (and these atoms are themselves packages). Why would "our mind" differ in any way?

I am jokingly oversimplifying here. All those atoms in our bodies are arranged in such a way as to form a coherent system. And the system functions pretty much as a single entity. But that entity is a construct, no matter how complicated it may seem. So why couldn't the mind itself be such a construct, ultimately deriving from the assembling of all those connections within our brain, working as a single system?
"Ez amnar o amnar e cauč."
- Daneydzaus

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Post by zompist »

Salmoneus wrote:I don't think you really get the zombie concept - your language presupposes that they don't exist. For instance, you ask: "why would they lie about it?", and suggest "perhaps they don't know they're zombies". But of course both lying about something, and knowing anything about themselves are states we attribute to things that aren't zombies. The zombies don't say that they aren't zombies. The zombies don't say anything - zombies just make a series of sounds that have no meaning to them, just as any other non-sapient phenomenon might. WE give those sounds meaning - we believe that those sounds are similar to the sounds we think of as language, and due to this familiarity we INTERPRET their sounds as words with meanings like "I am not a zombie".

It's worth noting that zombies do now exist - in the form of computer programs and certain robots. Now, it's true that current AI isn't good enough to fool us if we look hard, but is it really impossible for an AI to fool a human comprehensively and yet lack genuine sapience? [Yes, some people believe it is impossible - behaviourists and functionalists and so forth. But it seems wrong to just dismiss the possibility]
But I didn't say they couldn't exist; I said that to maintain their existence requires an increasing sequence of absurd claims. That's a heuristic (not a proof) for recognizing pseudo-science.

Your first paragraph is an example-- it seems to amount to saying that the zombies could just randomly and accidentally produce English. Well, Shakespeare could be written by randomly typing chimps, but it's not something Shakespeare scholars need to worry about.

AI is more interesting. But the more comprehensively you want to fool humans, the more complex your AI must be. In programming, we sometimes have to emulate other systems; it's an enormous hassle. It's almost never practical to emulate every last detail; to do so means (e.g.) reproducing minor bugs or inconsistencies in the model.
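A trivial illustration of the emulation problem (my own toy, not drawn from any real system): if the system being emulated has a long-standing truncation bug, a faithful emulator must reproduce the bug, because downstream consumers have come to depend on it.

[code]
def legacy_round(x):
    """The system being emulated: an old truncation bug, so 2.9 -> 2."""
    return int(x)

def correct_round(x):
    """What the spec says it does: ordinary rounding, so 2.9 -> 3."""
    return int(x + 0.5)

# A faithful emulator must ship legacy_round, bug and all; "fixing" it
# silently changes every downstream result.
assert legacy_round(2.9) != correct_round(2.9)
[/code]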

If there were an AI that could plausibly talk about artefacts of vision, such as optical illusions that depend on the function of the retina, it would have to have enormously complicated internals-- far beyond what's needed for sapience. (After all our brains don't need to simulate odd details about the retina; they're attached to actual retinas.) I don't know how you could be confident that a mechanism more complicated than the brain is not sentient.
However, if there is a function, Function X, which may be met by multiple mechanisms, and one of those mechanisms, Mechanism A, happens to have consciousness as an inevitable side-effect, [...]
Yes, of course-- Gould coined a term for such things, spandrels. I'm sure much of our cognitive function is composed of spandrels; the question is which ones.

But you still need an argument that a particular feature is a spandrel. If consciousness is one, how do we know it's nonadaptive, and what functions that are adaptive did it grow out of?

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Post by zompist »

Salmoneus wrote:Which is my point. The self-concept is almost certainly useful - it gets us where we want to go. That's no reason to think that the self is as we think we perceive it, or even that the self exists at all.
I don't see how you get to the second statements from the first! How can the self be useful yet not exist? And from what viewpoint could the self be different than our perceptions? (I can see different perspectives-- say, a neurologist's-- but why is that viewpoint more privileged than ours?)
We can't know which it is unless we can compare our fitness with and without the distortion. Since we don't know what the distortion is, or even if it exists, we cannot judge this - so such evolutionary arguments are taken out of the frame.
Only, I think, if you can describe what this distortion is and show that it's not susceptible to evolution. It's so vague that I don't know what you're referring to.
Secondly, you beg two major questions:
- firstly, that our mental tools aren't evolved to contemplate themselves. This is tendentious. You yourself have argued that consciousness has been evolutionarily selected for - and surely the essence of consciousness is a consciousness of consciousness? That is, if evolution has evolved to allow us to be conscious, then yes, our mental tools HAVE evolved to contemplate themselves, as that's what consciousness is.
I don't know about that. At the minimum, we need to have a self-model-- e.g. to know where we are in our primate society. Apes seem to have this (they understand their social position, they have concepts of fairness, they can deal with mirrors). But that doesn't mean that we also have a good model of our self-model. I don't see that our tools for introspection need be very good.

(The second point was on spandrels, which I addressed above.)
Perhaps I've misrelayed Hume's arguments? To summarise: "It is not true that we know that there is an essential self". Being sceptical about this looks like claiming that you do indeed know something that's true. Denying that we do not know is claiming to know!
It's really Hume's argument against Descartes that I'm skeptical about. E.g. Hume writes that we are "nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement." It doesn't match my introspection, and as Sacks points out, it seems to best describe a mentally ill person. I don't think that Hume succeeded in demolishing Descartes' cogito (but then I don't think that it establishes all that much anyway if you leave God out of it).
I don't understand how you could deny that the tube map is both false (as a map of London) and 'made up'. (though perhaps it may not be 'illusory'). It is pragmatically useful - but from that fact we can deduce nothing about the real nature of London - for all we know, the tube map could be upside-down, it would still work just as well!
I don't understand how you can not understand it. :) Are you so sure that "we can deduce nothing about the real nature of London"? Take a look at this map of the departments of France generated from nothing but abuttal data (which ones are next to which others):

http://strangemaps.wordpress.com/2009/1 ... uate-data/

Such a map could be made from the tube map, even though the tube map doesn't directly represent geographical location.
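Something in that spirit can even be coded up. Here is a toy of my own - not the strangemaps authors' method - that recovers rough relative positions from nothing but an adjacency list, by letting neighbours attract and everything repel (a crude force-directed layout):

[code]
import random

# Pure abuttal data: which regions touch which. No coordinates given.
edges = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("B", "D")]
nodes = {n for e in edges for n in e}
pos = {n: [random.random(), random.random()] for n in nodes}

for _ in range(500):
    force = {n: [0.0, 0.0] for n in nodes}
    for a in nodes:                       # weak repulsion between all pairs
        for b in nodes:
            if a == b:
                continue
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d2 = dx * dx + dy * dy + 1e-9
            force[a][0] += 0.01 * dx / d2
            force[a][1] += 0.01 * dy / d2
    for a, b in edges:                    # attraction along shared borders
        dx, dy = pos[b][0] - pos[a][0], pos[b][1] - pos[a][1]
        force[a][0] += 0.1 * dx; force[a][1] += 0.1 * dy
        force[b][0] -= 0.1 * dx; force[b][1] -= 0.1 * dy
    for n in nodes:
        pos[n][0] += force[n][0]
        pos[n][1] += force[n][1]

# Neighbours end up near each other: adjacency alone pins down a rough
# geometry (up to rotation and reflection -- the "upside-down" caveat).
print({n: (round(p[0], 2), round(p[1], 2)) for n, p in pos.items()})
[/code]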

I think you're overestimating the weirdness of Hume. What if I'm quite comfortable with absolute truth being quite limited?
I still don't understand why you think we don't have a uniform visual field. I can see it - it is present in my mind. Best science shows that there is in fact a uniform field of photons coming into my eyes. Arguing that there 'really' isn't such a field because the eye scans actively and not passively seems like arguing that I have no real visual impression of a square because the electrons in my optic nerve are not arranged in a square. The retina, and blind spots and high-intensity spots and whatnot are equivalent to the optic nerve - they are tools to provide an image, not places where an image resides. It's as ludicrous to claim associate 'my real visual impressions' with the pattern of blind spots on a retina as it would be to associate it with the pattern of electrons in the optic nerve.
It can easily be proven that you don't see what you think you're seeing.

You have a small hi-res field, the fovea; it moves very rapidly (something we're not conscious of). As the brain gets detail on whatever it wants to, it's fooled into thinking that the whole field is equally detailed.

But you can rig up a computer screen to track the movements of the fovea, and show something different there. E.g. you put a page of Dostoevsky on the screen, and wherever the fovea is, you replace that bit of text with the corresponding bit of a page of Tolstoy. (This is a real experiment, not a speculation.)

What will you see? You'll see a page of Tolstoy. You won't be able to see the Dostoevsky at all... indeed, you won't even be aware that this trick is being played on you. But anyone else watching the screen can see the page of Dostoevsky with a moving distortion on it.
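The logic of the trick is simple enough to mock up. A runnable toy - my reconstruction of the "moving window" idea, not the experimenters' actual software; the two text strings are padded to equal length to keep the alignment trivial:

[code]
dostoevsky = "It was a dark autumn night and the rain poured down in sheets"
tolstoy    = "All happy families are alike; each unhappy family is unhappy "

def frame(fixation, window=8):
    """What is on screen while the eye fixates position `fixation`:
    Dostoevsky everywhere, except Tolstoy inside the foveal window."""
    lo, hi = max(0, fixation - window), fixation + window
    return dostoevsky[:lo] + tolstoy[lo:hi] + dostoevsky[hi:]

# The fovea only ever samples the swapped-in window, so the reader
# gets pure Tolstoy, fixation by fixation...
read = "".join(frame(i)[i] for i in range(len(tolstoy)))
assert read == tolstoy

# ...while a bystander, whose gaze the window does not follow, sees
# mostly Dostoevsky with a moving patch of distortion:
print(frame(20))
[/code]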

(Again, apologies for derailing your thread... if you like we can move this to a different thread or drop it. I do want to read the rest of your summaries!)

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Salmoneus wrote:I don't think you really get the zombie concept - your language presupposes that they don't exist. For instance, you ask: "why would they lie about it?", and suggest "perhaps they don't know they're zombies". But of course both lying about something, and knowing anything about themselves are states we attribute to things that aren't zombies. The zombies don't say that they aren't zombies. The zombies don't say anything - zombies just make a series of sounds that have no meaning to them, just as any other non-sapient phenomenon might. WE give those sounds meaning - we believe that those sounds are similar to the sounds we think of as language, and due to this familiarity we INTERPRET their sounds as words with meanings like "I am not a zombie".

It's worth noting that zombies do now exist - in the form of computer programs and certain robots. Now, it's true that current AI isn't good enough to fool us if we look hard, but is it really impossible for an AI to fool a human comprehensively and yet lack genuine sapience? [Yes, some people believe it is impossible - behaviourists and functionalists and so forth. But it seems wrong to just dismiss the possibility]
But I didn't say they couldn't exist; I said that to maintain their existence requires an increasing sequence of absurd claims. That's a heuristic (not a proof) for recognizing pseudo-science.
But they aren't pseudo-science, because they don't pretend to be science. Nobody is putting forward a scientific theory based on them, to my knowledge (except those who believe that we are all zombies). So the mores of scientific etiquette are irrelevant here - many things may be true without (yet) being supported through science. Some things (ie much of philosophy) cannot possibly be supported by science.
Your first paragraph is an example-- it seems to amount to saying that the zombies could just randomly and accidentally produce English. Well, Shakespeare could be written by randomly typing chimps, but it's not something Shakespeare scholars need to worry about.
No, that's again begging the question - either zombies act like us because like us they are sentient, or they act like us due to random, accidental chance. Well, it may not be "random" (not that I think 'randomness' is a well-formed concept) and it may be essential. There are certainly theories that support essential zombiedom - for instance, the Fichte/Hegel theories about alterity being a prerequisite for consciousness imply that even if we were the only thinking being, we would nonetheless perceive a world that was full of what appeared to be other thinking beings.

Also, you seem to assume that sapience is the norm, and zombiedom is a strange coincidence. It could equally well be that zombiedom is the norm, and that you are the sole random almost-identical mutated exception to it.
AI is more interesting. But the more comprehensively you want to fool humans, the more complex your AI must be. In programming, we sometimes have to emulate other systems; it's an enormous hassle. It's almost never practical to emulate every last detail; to do so means (e.g.) reproducing minor bugs or inconsistencies in the model.
A hassle, yes, but you aren't a god, or a demon, or an inexorable and fundamental law of creation, all of which would be able to do things that are quite difficult for us, without any 'hassle' at all.
If there were an AI that could plausibly talk about artefacts of vision, such as optical illusions that depend on the function of the retina, it would have to have enormously complicated internals-- far beyond what's needed for sapience. (After all our brains don't need to simulate odd details about the retina; they're attached to actual retinas.)
They wouldn't have to simulate them either - they just have to simulate simulating them. That is, an AI doesn't have to experience sight, even in a simulation, to be able to describe it - it just has to be able to experience and replicate human descriptions of it, which is a far easier task.
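A crude sketch of what "simulating simulating" could look like - my own deliberately stupid toy: a system that answers questions about retinal artefacts without possessing anything remotely like vision, simply by parroting indexed human reports:

[code]
# Invented corpus of human descriptions; no visual machinery anywhere.
corpus = {
    "blind spot": "If I close one eye and fixate, things about 15 degrees "
                  "to the side of my fixation point simply vanish.",
    "afterimage": "After staring at a bright red square, I see a greenish "
                  "patch when I look at a white wall.",
}

def describe(query):
    """Echo back the stored human report sharing most words with the query."""
    words = set(query.lower().replace("?", "").split())
    def overlap(key):
        return len(words & set((key + " " + corpus[key]).lower().split()))
    return corpus[max(corpus, key=overlap)]

print(describe("what happens at your blind spot?"))
[/code]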
I don't know how you could be confident that a mechanism more complicated than the brain is not sentient.
Personally, I think it would be sentient - but that just supports the Humean view. Because on the Cartesian view, you've got an increasingly complicated mechanism all the time until suddenly BING! a mind appears attached to the mechanism. Hence, those who believe in the self/soul/mind/whatever will generally believe that AI can never be conscious.

In any case, I'm arguing the sceptical side - so I don't have to be confident. I just have to not see grounds to be confident in your beliefs - anything that introduces doubt or uncertainty plays into the hands of scepticism.
However, if there is a function, Function X, which may be met by multiple mechanisms, and one of those mechanisms, Mechanism A, happens to have consciousness as an inevitable side-effect, [...]
Yes, of course-- Gould coined a term for such things, spandrels. I'm sure much of our cognitive function is composed of spandrels; the question is which ones.

But you still need an argument that a particular feature is a spandrel. If consciousness is one, how do we know it's nonadaptive, and what functions that are adaptive did it grow out of?
I need no such argument! I just need an argument that any feature COULD be a spandrel. That introduces multiple possibilities, and hence undermines certainty and favours doubt.
Last edited by Salmoneus on Sat Nov 14, 2009 1:26 pm, edited 1 time in total.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

zompist wrote:
Salmoneus wrote:Which is my point. The self-concept is almost certainly useful - it gets us where we want to go. That's no reason to think that the self is as we think we perceive it, or even that the self exists at all.
I don't see how you get to the second statements from the first! How can the self be useful yet not exist? And from what viewpoint could the self be different than our perceptions? (I can see different perspectives-- say, a neurologist's-- but why is that viewpoint more privileged than ours?)
I didn't say the self was useful yet did not exist. I said the self-concept could be useful yet the self not exist. The concept of something can be useful even when the thing conceptualised does not exist - many past theories of medicine and physics have been useful, to some extent, even when relying on concepts that were ultimately false.

From what viewpoint could the self be different? Well, firstly the question presupposes that Hume's arguments have won out and the Enlightenment is dead - there is only one Enlightened viewpoint. Secondly - from the same viewpoint that all our other claims about what objects exist are evaluated - the viewpoint of cold, hard physics! Why is this viewpoint more privileged? I don't know, but it seems to be in our society - demons, ghosts, miasmas, animal magnetism and telekinesis have all been believed in fervently, but the viewpoint of science has shouted down such superstitions. What makes the self-concept immune from criticism?
We can't know which it is unless we can compare our fitness with and without the distortion. Since we don't know what the distortion is, or even if it exists, we cannot judge this - so such evolutionary arguments are taken out of the frame.
Only, I think, if you can describe what this distortion is and show that it's not susceptible to evolution. It's so vague that I don't know what you're referring to.
Which merely proves my point. It is so unknown that we cannot possibly prove that it IS susceptible to evolution, or if it is, in which way. Hence, arguments from evolution do not apply.
But, one example: some people believe in the existence of a unitary self, allegedly as a result of instinctive perception. The fact that these instincts have evolved does not prove their truth - it isn't even evidence for their truth. Indeed, if we believe the instinct is useful, the fact it has been given to us by evolution is, if anything, evidence against its truth - because evolution would be likely to provide us with it even if it were false, so we cannot ascribe our having it to its truth. If it were not useful at all for us to believe it, yet we believed it anyway, THAT would be evidence for its truth. The fact that we have such an obvious motive for believing it anyway casts doubts on any claims that we have it only for good reasons.
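(Spelled out in probability terms - my formulation, not Hume's: by Bayes' theorem, P(self | belief) = P(belief | self) x P(self) / P(belief). If evolution would install the belief regardless of its truth, then P(belief | self) and P(belief | not-self) are both close to 1, so P(belief) is too, and everything cancels: P(self | belief) comes back to P(self). Having the belief moves the probability nowhere, which is exactly what "no evidence" means.)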
Secondly, you beg two major questions:
- firstly, that our mental tools aren't evolved to contemplate themselves. This is tendentious. You yourself have argued that consciousness has been evolutionarily selected for - and surely the essence of consciousness is a consciousness of consciousness? That is, if evolution has evolved to allow us to be conscious, then yes, our mental tools HAVE evolved to contemplate themselves, as that's what consciousness is.
I don't know about that. At the minimum, we need to have a self-model-- e.g. to know where we are in our primate society. Apes seem to have this (they understand their social position, they have concepts of fairness, they can deal with mirrors). But that doesn't mean that we also have a good model of our self-model. I don't see that our tools for introspection need be very good.
If our 'self-model' does not involve modelling ourselves as modelling creatures, it is an appallingly inadequate model (incapable, for instance, of dealing with human language). However, if our instinctive faculties ARE that poor, that only adds weight to the sceptical argument against the truth of our instinctive categories. You may say it applies to Hume as well - but "our disinfection process is not perfect; therefore we should use things that have not been disinfected at all" is a poor argument - if our thoughts do not easily come to reflect our natures, that is an argument for MORE work on thinking, and AGAINST the idea of just accepting our instinctive beliefs (in, eg, an essential self).
Perhaps I've misrelayed Hume's arguments? To summarise: "It is not true that we know that there is an essential self". Being sceptical about this looks like claiming that you do indeed know something that's true. Denying that we do not know is claiming to know!
It's really Hume's argument against Descartes that I'm skeptical about. E.g. Hume writes that we are "nothing but a bundle or collection of different perceptions, which succeed each other with an inconceivable rapidity, and are in a perpetual flux and movement." It doesn't match my introspection, and as Sacks points out, it seems to best describe a mentally ill person. I don't think that Hume succeeded in demolishing Descartes' cogito (but then I don't think that it establishes all that much anyway if you leave God out of it).
Firstly: I have to wonder whether you and Sacks are simply mad, if that's really your opinion. Hume's description seems far more believable and familiar.
Secondly: Hume's description is not arrived at through introspection, like Descartes', but through reasoning about why introspections to the contrary were unreliable. You seem to be ignoring that argument and insisting 'but it doesn't agree with my introspection!'. This is doubly odd when you have just called into question our powers of introspection.
I don't understand how you could deny that the tube map is both false (as a map of London) and 'made up'. (though perhaps it may not be 'illusory'). It is pragmatically useful - but from that fact we can deduce nothing about the real nature of London - for all we know, the tube map could be upside-down, it would still work just as well!
I don't understand how you can not understand it. :) Are you so sure that "we can deduce nothing about the real nature of London"? Take a look at this map of the departments of France generated from nothing but abuttal data (which ones are next to which others):

http://strangemaps.wordpress.com/2009/1 ... uate-data/

Such a map could be made from the tube map, even though the tube map doesn't directly represent geographical location.
No, it couldn't. That map is based on geography - which things are next to which other things. This is the essence of geography. The tube map is not like that. For instance, given four stations joined by a cross, we can tell absolutely nothing about their relative location. The four stations could be positioned absolutely anywhere on a blank map, and yet be portrayed the same way. [Official tube maps attempt to avoid inversions and the like where possible, but unofficial tube maps may be drawn up to create whatever bizarre illusion you wish]
I think you're overestimating the weirdness of Hume. What if I'm quite comfortable with absolute truth being quite limited?
I think, by mostly phrasing him as a sceptic rather than a logical positivist, I'm probably UNDERestimating his weirdness. As for your question:
1. I wouldn't know what you meant - "absolute" and "limited" are opposites, so your view would be inherently contradictory.
2. I would assume you had already accepted a Humean view.
I still don't understand why you think we don't have a uniform visual field. I can see it - it is present in my mind. Best science shows that there is in fact a uniform field of photons coming into my eyes. Arguing that there 'really' isn't such a field because the eye scans actively and not passively seems like arguing that I have no real visual impression of a square because the electrons in my optic nerve are not arranged in a square. The retina, and blind spots and high-intensity spots and whatnot are equivalent to the optic nerve - they are tools to provide an image, not places where an image resides. It's as ludicrous to claim associate 'my real visual impressions' with the pattern of blind spots on a retina as it would be to associate it with the pattern of electrons in the optic nerve.
It can easily be proven that you don't see what you think you're seeing.

You have a small hi-res field, the fovea; it moves very rapidly (something we're not conscious of). As the brain gets detail on whatever it wants to, it's fooled into thinking that the whole field is equally detailed.

But you can rig up a computer screen to track the movements of the fovea, and show something different there. E.g. you put a page of Dostoevsky on the screen, and wherever the fovea is, you replace that bit of text with the corresponding bit of a page of Tolstoy. (This is a real experiment, not a speculation.)

What will you see? You'll see a page of Tolstoy.
You seem to have proved my point. You say "you'll see a page of Tolstoy". And, indeed, I will believe I see a page of Tolstoy. So, what I see will be exactly what I believe I see.
You won't be able to see the Dostoevsky at all... indeed, you won't even be aware that this trick is being played on you. But anyone else watching the screen can see the page of Dostoevsky with a moving distortion on it.
But we know that we see different things from one another. So this tells us nothing interesting.

An analogous experiment: I show you a sentence which has a meaning in Latin and a different one in Italian. You say "I'm reading Italian". I shout "no, you're reading Latin!", on the basis that I am reading it in Latin. In exactly the same way, I say "I am reading Tolstoy" and you say "you are reading Dostoevsky". But as you admit above, I'm ACTUALLY reading Tolstoy, so I'm correct - just as you're correct in saying you're reading Italian. The fact that I can extract a different meaning from the same physical setup does not imply anything about the veracity of your own sight.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

zompist
Boardlord
Boardlord
Posts: 3368
Joined: Thu Sep 12, 2002 8:26 pm
Location: In the den
Contact:

Post by zompist »

Salmoneus wrote:
zompist wrote:It's almost never practical to emulate every last detail; to do so means (e.g.) reproducing minor bugs or inconsistencies in the model.
A hassle, yes, but you aren't a god, or a demon, or an inexorable and fundamental law of creation, all of which would be able to do things that are quite difficult for us, without any 'hassle' at all.
As I said, the zombie idea can only be supported by making absurd claims. If you need supernatural mechanisms, I'm not worried about it.
That is, an AI doesn't have to experience sight, even in a simulation, to be able to describe it - it just has to be able to experience and replicate human descriptions of it, which is a far easier task.
Why would you think that? Artificial vision systems exist today; convincing AI conversationalists do not.
Personally, I think it would be sentient - but that just supports the Humean view. Because on the Cartesian view, you've got an increasingly complicated mechanism all the time until suddenly BING! a mind appears attached to the mechanism. Hence, those who believe in the self/soul/mind/whatever will generally believe that AI can never be conscious.
I think the sudden sentience idea is silly too. But simply removing the problem by declaring the self non-existent seems like just as much of a copout. (FWIW I think consciousness is a complex biological phenomenon, and like any such thing I expect to find partial or alternative forms of it in evolution or pathology. And I expect AIs to have something analogous— though for the reasons I mentioned I don't think they'll be completely convincing copies of humans.)
In any case, I'm arguing the sceptical side - so I don't have to be confident. I just have to not see grounds to be confident in your beliefs - anything that introduces doubt or uncertainty plays into the hands of scepticism.
Well, fine, I'm skeptical of Hume, so I don't need any more arguments either.
From what viewpoint could the self be different? Well, firstly the question presupposes that Hume's arguments have won out and the Enlightenment is dead - there is only one Enlightened viewpoint. Secondly - from the same viewpoint that all our other claims about what objects exist are evaluated - the viewpoint of cold, hard physics! Why is this viewpoint more privileged? I don't know, but it seems to be in our society - demons, ghosts, miasmas, animal magnetism and telekinesis have all been believed in fervently, but the viewpoint of science has shouted down such superstitions. What makes the self-concept immune from criticism?
First, "cold, hard physics" has no explanation of consciousness.

Second, "cold, hard physics" has no value system, so it can't be an answer to why any viewpoint is privileged.

And finally, "cold, hard physics" doesn't exist except as mediated by the self. If the self doesn't exist, why should I believe in your physics, cold and hard though it seems to you?

As for the Tolstoy/Dostoevsky thing, I think you miss the point. You said earlier you thought you had a unified, hi-res visual field. The experiment proves you don't. You think you see a page of Tolstoy, when no such page exists. (Reading isn't the issue; you don't need a full visual field to read, as in fact the experiment also proves— all you need to read is that moving fovea.)

What your brain gets is a fuzzy overall picture plus an area in sharp focus. This isn't normally a problem since they coincide— look at any area and you get it in sharp focus. Since there is no way to focus on the area not in focus, we get the illusion that the whole picture is sharp.

(And your analogy doesn't work— the point here is that you are not seeing the "same thing" as other observers and simply interpreting it differently. The experiment messes with "seeing" itself in an unusual way.)

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

Pthug wrote:You can see the flaw here -- just because an adaptation would be nice to have doesn't mean it either a) can or b) is likely to evolve.
Intelligence has evolved. So your point is?

The more complex the brain, the better the perceptual map. Mammals can see better than worms partly because we have two eyes instead of a photosensitive spot and partly because we have a visual cortex that enables us to construct a perceptual map of our surroundings. This map had better be accurate, because the rabbit would be in big trouble if it took a percept to be another rabbit when it was in fact a weasel. (Worms stay ahead of the game by breeding far more rapidly than a mammal can and laying eggs that can wait for years to hatch, not an option for a mammal.)

Here is one example of an unexpected challenge. Suppose a new creature appears (for example, a species of deer). The rabbits instinctively dash for cover, as they've never seen it before and it might be a predator. But if the deer establish themselves in the rabbits' environment, the rabbits have to learn whether the deer are dangerous or not. They aren't - so the rabbits carry on feeding when they see (or smell) one. Here we have some cognitive tools - recognition and memory.

The more intelligent a creature is, the faster it learns and the fewer mistakes it makes. A relatively simple creature learns by trial and error. A creature with a more sophisticated brain is able to look at the deer and compare it with the predators it already knows about. (Has it got fangs? Does it smell of meat? Does it resemble a fox, or a weasel, or a badger, or a cat?) This particular checklist wouldn't help the rabbits to recognise a python. However a still more intelligent mammal can recognise a python it has never seen, because it has been told about them (or read about them in a book, or seen them on TV).
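(That checklist is, in effect, a little feature-matching classifier. A toy version - my illustration, with invented features and an invented threshold - of how comparison against known predators beats bare trial and error:)

[code]
# Features of predators the rabbit already knows (invented for illustration).
known_predators = {
    "fox":    {"fangs", "meat_smell", "forward_eyes"},
    "weasel": {"fangs", "meat_smell", "long_body"},
    "cat":    {"fangs", "forward_eyes", "stalks"},
}

def looks_dangerous(features, threshold=2):
    """Flag a newcomer sharing enough features with any known predator."""
    return any(len(features & known) >= threshold
               for known in known_predators.values())

deer  = {"hooves", "side_eyes", "grazes"}
stoat = {"fangs", "meat_smell", "long_body"}
print(looks_dangerous(deer))   # False - carry on feeding
print(looks_dangerous(stoat))  # True  - dash for cover
[/code]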

A more permanent and serious example of a new and unexpected environmental challenge is the loss of forest habitat in Africa several million years ago, which forced a certain ape to move into the savanna, where it had to eat new foods, evade new predators, and probably form larger social groups (with the new cognitive and social skill set that entails).

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

zompist wrote:What your brain gets is a fuzzy overall picture plus an area in sharp focus. This isn't normally a problem since they coincide— look at any area and you get it in sharp focus. Since there is no way to focus on the area not in focus, we get the illusion that the whole picture is sharp.
Try staring fixedly at a single point and see what happens to the rest of your visual field.
