A brief overview of the development of Western Philosophy

Discussions worth keeping around later.
The Unseen
Sanci
Posts: 32
Joined: Thu Jul 10, 2008 2:49 pm

Post by The Unseen »

Eddy wrote:[rant]To be honest, I never understood this idea, since we have a powerful body of experience as well as faith to indicate the third option. If you throw a ball at a pane of glass (assuming you have a big enough ball and typical glass), it will shatter every time. Not one historical record describes a ball flying into a window only to bounce off like a wad of cotton or something. And furthermore, I guarantee that even Hume did not want children playing ballgames in his front yard. He may have claimed that causality did not exist, but he intuitively grasped that a ball kicked into the window would leave a nasty mess.[/rant]
Hume agrees with you. Hume says that it is custom and habit which dictate our concept of causation. He says that we know why the child doesn't hold his finger to the fire of a candle after doing it once and getting burned, but that this knowledge cannot be established by demonstrative logical proof, because any such proof would rest on circular reasoning and on the assumption that the future is like the past. That doesn't mean that kind of knowledge isn't useful or necessary.
[url=http://wiki.penguindeskjob.com/Aptaye]My conlang Aptaye. Check it outttt[/url]

Economic Left/Right: -0.50
Social Libertarian/Authoritarian: -8.97

Aurora Rossa
Smeric
Posts: 1138
Joined: Mon Aug 11, 2003 11:46 am
Location: The vendée of America

Post by Aurora Rossa »

A photon is not a bundle of light - it is a bundle of experimental behaviours that we posit. Whether there is a real photon underlying our model photon is almost by definition subject to debate (after all, we used not to believe in photons, and we may well not believe in them in the future). And even if there are real photons, this tells us nothing about the nature of causation.
My point has been that something concrete and consistent must exist to explain why everyone has the same perception of a photon. That need not require a physical object corresponding to our model of a photon, but I see no way to deny the existence of something behind those perceptions. Are you saying we can't prove the existence of this true reality, though, or just that we can't really describe it with any confidence?
Firstly, no, even a little experience will tell you that balls often don't break glass. It only breaks glass under certain conditions - and we haven't exhaustively defined those conditions, so we don't have any "every time without exception" rules about it.
Obviously, when describing the ball breaking glass, I meant with those conditions in mind. Certainly you can kick a ball into 10cm-thick bulletproof glass and not break it, but I wasn't talking about that scenario. Likewise, we assumed that small fires made with low technology would not lead to global warming, and we were right. The problem arose when we assumed that vastly larger fires from billions of people would have no effect either. The fact that we talk about conditions at all, though, suggests that actions have specific outcomes that follow predictable trends (more fires means global warming, thicker glass resists projectiles).
"There was a particular car I soon came to think of as distinctly St. Louis-ish: a gigantic white S.U.V. with a W. bumper sticker on it for George W. Bush."

Makerowner
Sanci
Posts: 20
Joined: Tue Jul 10, 2007 2:00 am
Location: In the middle of the Canadian Vowel Shift

Post by Makerowner »

Nietzsche interpretation is probably one of the most contested areas in philosophy, so I don't want to go too much into it, but there's one point I particularly want to comment on:
A key positive part of Nietzsche’s worldview is his concept of ‘will to power’; this is an acceptance of the Schopenhauerian idea that subjectivity is will, but it changes the object of that will from ‘life’ or ‘survival’ or ‘reproduction’ to ‘power’. The will to power is the chief motivation for all people. That power is valued more than life itself is demonstrated through the figure of the martyr: in Nietzsche’s interpretation, the martyr is somebody who is willing to sacrifice their life in order to demonstrate (and in the process make it so) that their opponents have no power over them. The martyr would rather die than recant, because recanting as a result of threats would place their accusers in a position of power over them: denying that the life that their accusers can take from them is the most important thing to them is denying that their accusers have any significant power.
The will to power is so much more than this! It's absolutely not a psychological concept (and not political either, despite the Nazis' interest in Nietzsche), though it has implications for psychology and politics, of course. I can only call it a metaphysical concept. Nietzsche would certainly have objected to its being called metaphysics, but the word has a broader meaning today than it did in his time. What I mean by that term is that it explains what it is for something to be: to have effects. Anything that is, simply is the effect it has on other things, which themselves are nothing but their effects. It's obvious here that calling these things “things” is only a convenient fiction: there are no things, only effects. Nietzsche several times uses the example of lightning: we say "the lightning flashes" as if there were a substance 'lightning' which happens to be flashing at the moment; but all there is is flashing, which we take one time as a thing and another time as an effect, and then claim that there's a relation of causality between them.
The effects of these forces are not static results, but constantly expanding processes or forces, each of which, if left unrestricted, would fill the entire universe. But of course, this is impossible: a force is constant expansion, so if there were only one force, it would have nothing to expand into and would thus not exist. There can only be a force if there is also another force, something it exerts its force against and that resists it. These forces are necessarily unequal: two balanced forces would not be expanding, and thus neither would exist. A force only is by its difference with another force—or rather, with all other forces, because a force is not a unit or a thing that can be counted, each force being itself composed of infinitely many forces. Here again, saying that a force is “composed” of other forces is a fiction: there are no unit-forces that are added up to produce higher-level forces: “there are no durable ultimate units, no atoms, no monads: here, too, 'beings' are only introduced by us […] 'units' are nowhere present in the nature of becoming”. The universe is a constantly shifting mass of forces struggling against each other, and this applies to all levels of existence, from the interactions of bare matter, to living organisms, to human beings, to artistic creations, to peoples, etc. But their struggle is not simply for geometrical expansion: in expanding, a force dominates other forces and makes its environment more favourable to its own existence, that is, its continued expansion. Thus a force, by its mere existence, makes itself more forceful, and it has no end, in either sense of the term. Power is not a goal that the will aims at, it's the structure of the will that makes any willing-something at the same time a willing-itself. This reflexive structure of the will is the centrepiece of Nietzsche's whole philosophy.
Because of the self-willing character of the will, the way to understand any will (eg. the “will-to-truth”) is to understand what kind of will that willing favours, or what kind of life is promoted by that willing. Thus the basic form of an interpretation (of a philosophy, a novel, a historical event, a culture) is “who is speaking?”, “what kind of will is being spoken for?”. Interpretation, however, is not one particular action that certain sophisticated human wills do, it's just another side of the will to power. Every interaction of wills (ie. everything) is an interpretation: by acting on something in this particular way, a will takes it to be in this particular way. This is what distinguishes Nietzsche's perspectivism from skepticism in its various flavours: it's not that we only experience "appearances" and don't know what might be behind them, it's that there is nothing behind them: "facts is precisely what there is not, only interpretations".
In psychology, the inherent multiplicity of the will to power means a complete shattering of the self. The wills Nietzsche talks about are not like Leibnitz's monads, where each of us is a monad, under which a certain number of other monads are organized, etc.--for Nietzsche, each of us "is" a shifting balance of forces, and what says "I" is something different each time. "[The] way is clear for new and refined versions of the hypothesis about the soul; in future, concepts such as the 'mortal soul' and the 'soul as the multiplicity of the subject' and the 'soul as the social construct of drives and emotions' will claim their rightful place in science." This leads to what seems like a paradox: "My proposition is: that the will of psychology hitherto is an unjustified generalization, that this will does not exist at all. "The will" as a human psychological faculty does not exist; there is no self or part/function/aspect/etc. of the self that causes our actions. We think here just like in the case of lightning: because we say "I think/will/etc.", we assume that there is a thing called 'I' that is the cause of thinking/willing/etc. " 'There is thinking' [es denkt], but to assert that 'there' is the same thing as that famous old 'I' is, to put it mildly, only an assumption, a hypothesis, and certainly not an 'immediate certainty'. And in the end 'there is thinking' is also going too far: even this 'there' contains an interpretation of the process and is not part of the process itself."


Nietzsche, above all philosophers, calls for first-hand exposure. First, because he's one of the greatest writers of Western history and one of the few philosophers who are actually fun to read. But more importantly, because of the difficulties of interpretation: you can't trust any one commentator (or even all of them together) to give you the full picture of his thought, if "full picture" even makes sense with Nietzsche.
Hig! Hig! Micel gedeorf ys hyt.
Gea leof, micel gedeorf hit ys, forþam ic neom freoh.
And ic eom getrywe hlaforde minon.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

I agree with the point about the dissolution of the self - I just couldn't find a place to put it in conveniently. Nietzsche discusses so many things (almost everything) that he's hard to sum up.

Two things on that line of thought: “’the doer’ is merely a fiction added to the deed; the deed is everything”; and, paraphrasing, the only thing that can defeat a desire is another desire. That is, Nietzsche follows Schopenhauer against Kant in claiming that the self is divided and warring, and that there is nothing to play the role of Kantian "reason" and overcome the desires, because even then it is only another desire, the desire to be rational, that has been victorious. There is only the will, and nothing can replace the will - the Schopenhauerian possibility of 'the will turning against itself' is denied (or, at least, transmuted into the autosadistic ascetic urge).

I think it's best to ignore what Makerowner calls "the reflexive structure of the will" - it's not only wholly peripheral to the general direction of Nietzsche's philosophy, it's also directly contradictory to it (what Makerowner has outlined is clearly metaphysics of exactly the sort that Nietzsche ceaselessly repudiates and builds his entire philosophy into a weapon against). If the man has occasionally adopted rhetorical positions (or indeed toyed with certain notions in his early unpublished notes) that are contradictory to his philosophy, we should not take him to be seriously defending them. In this case, Makerowner is describing a fairly typical Leibnizian dynamism of the sort made popular by Kant, and hence current in Nietzsche's education, only given a Schopenhauerian inflection; we shouldn't be surprised if he draws upon such cultural imagery from time to time.

In general, metaphysical interpretations of Nietzsche like this should be discarded for two reasons:
- they are put forward by Continentals who are more interested in appropriating Nietzsche for their own metaphysical obsessions than in genuine scholarship
- they are usually built heavily upon the half-formed, exploratory, inferior, achronological (and usually early) Nachlass, rather than on Nietzsche's serious views as expressed in the works he intended for publication.

Makerowner is quite right that the will to power is a central part of Nietzsche's philosophy - but only in the genuine, ethical/epistemological, dimension, not in any ontological sense. Continentals chose to interpret him differently to justify their own metaphysical excesses.

This is a common problem with Nietzsche, although primarily it is a problem only because Nietzsche has historically been a Continental province. In recent decades, real philosophers have started to address his thought, and so scholarly and informative works are becoming available. Nonetheless, for far too long Nietzsche has been interpreted with Continental freedom - as indeed is any text in the hands of the Continentals - as a blank slate upon which to write their own philosophies. Consequently, above all other philosophers, it's important to read Nietzsche himself before going on to the alleged "commentators". [This will make the gulf between his words and theirs more visible.]


EDIT: lest I seem too strongly to be talking out of my arse, I'll say that Nietzsche is the philosopher I feel most at home with - I did a module on him, loved him, and found him a lot less confusing than Wittgenstein (the other philosopher I studied fairly intensively). I've read BoT, Daybreak, Z, BGE, GoM, ToI, and parts of The Antichrist, as well as the earlier "On Truth and Lie in a Non-Moral Sense". I've also read the commentaries by Clark, Kaufman, Schacht, Danto and Deleuze (gods help me) (and even excerpts from Derrida), and the compendiums of essays edited by Janaway, Sedgwick, Solomon, and some others. In putting together these posts I checked the articles at the SEP and IEP (neither of which mention your interpretation of the WtP (which is, after all, a concept which is only mentioned a handful of times in the whole of Nietzsche), although the IEP does give a different 'cosmological' interpretation, while noting the arguments against it). I don't say any of this to show that I'm right, or to say that you've no right to your opinion (I know Nietzsche is supremely pliable to interpretation and wouldn't pretend to have the only possible view of him), but only to perhaps suggest that in being dismissive I'm not just talking out of my arse, but sharing the conclusion of considerable study on the matter. Make of it what you will - I can't pretend to remember enough Nietzsche by heart to argue scripture with you.
Last edited by Salmoneus on Tue Dec 08, 2009 8:10 pm, edited 1 time in total.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Eddy wrote:Are you saying we can't prove the existence of this true reality, though, or just that we can't really describe it with any confidence?
Neither. We can prove its existence, only not absolutely but perspectivally. We can describe it with confidence - why wouldn't we be able to? We would lack confidence if we thought our description had to match some "absolute description" we were seeking to emulate - but if there is no such description, we can be confident however we describe it (which is not to say that all descriptions are equal, of course).
The fact that we talk about conditions at all, though, suggests that actions have specific outcomes that follow predictable trends (more fires means global warming, thicker glass resists projectiles).
Of course. Who denies this?
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Americans!

Post by Salmoneus »

America was once a long way away from the world. It took a long time for news to travel to it, and little ever travelled from it. For most of the early centuries of European settlement there, it played little role in the story of philosophy – it provided only small vignettes, such as Berkeley’s time spent in New England (his imperialism and his educational fervour, demonstrated through his support for and gifts to the young universities of Harvard and Yale, made him a heroic figure for American educationalists for centuries to come, and he has both a college at Yale and a university (and city) in California named after him).

It is appropriate that the work Berkeley completed in America, ‘Alciphron’, is a work on religion, for religion was from the first the chief philosophical concern of America. The fervour of sectarian immigrants was amplified by the pioneer dialectic of lawlessness versus lawful piety to produce great waves of religiosity. At the same time, the mercantile dimension of colonisation, combined with a long habit of de facto independence from the motherland and the high degree of self-sufficiency required in an expanding and low-density colony, led to an ethos of individualism and liberalism. Much of American thought may be seen as the search for a synthesis of these two tendencies.

Early American philosophy, such as it was, was chiefly a reworking of European themes to include greater religious content. The first genuinely innovative American contribution was therefore a form of empiricism, which they named ‘Unitarianism’ – this was a doctrine, emerging from Congregationalism, that sought to reconcile empirical concerns inspired by Locke (a respected figure in America as a result of his political writings) with the authority of the Bible and the irrefutable truth of Christianity. As a result of their emphasis on experience over reason, they abandoned many precepts of traditional organised forms of Christianity – most notably the doctrine of the Trinity, from which rejection they acquired their popular name. However, the path to a Christian empiricism was difficult, with the paradoxes of empiricism seeming particularly cruel to religious faith: with no appeal to intuitive reason or to the dogma of tradition, the Unitarians had to hang their synthesis on the persuasive power of miracles – but in doing so they went up against Hume’s argument that miracles could never merit belief (because, he said, a miracle is only convincing if it happened, and we can only know that it happened from the testimony of others, and no person is so reliable that we should believe them if they report a miracle, unless we already believe in the possibility of miracles, and hence in God). It would be wrong to say that they lost – the Unitarian synthesis went on to play a part in the development of many new religious traditions, and has influenced how the modern world sees all religion – but attempts to overcome Hume’s argument invariably depended more on faith than on reason, and so have not been influential in philosophy proper.

It was, however, from within this American version of empiricism that an American version of Romanticism sprang, under the name ‘Transcendentalism’, led by such figures as Ralph Waldo Emerson and Henry David Thoreau. Transcendentalism ought properly to be seen as a mere regional flowering of the Romantic tradition – and yet it was also distinct from the European form in several ways. Although it mirrored the European turn to, and idolisation of, Nature, its conception of Nature was somewhat different – its strong religious content lent the American version of Nature a more benign, orderly and peaceful face, compared with the tempests and bloodshed of atheist European Nature. Much of this may also be a result of the isolation of America – Transcendentalism, as the name suggests, draws predominantly on Kant, and largely lacks the more tumultuous direct influence of Schelling. Moreover, while in Europe the critical works of Schleiermacher and Herder, who addressed Judaeo-Christian texts as though they were human artifacts rather than divine, led to an atheist re-evaluation of textual authority, in America, where religion was still so powerful it could not be gainsaid, their works encouraged a theistic re-evaluation of all human artifacts: if the Bible was the work of human hands, and yet was obviously also divinely inspired and authoritative, surely other human works could be likewise informed by divine truth? That is, where the Europeans drew the lesson that revelation had never occurred, the Americans drew the lesson that revelation could continue to be felt in our own times.

Transcendentalism may perhaps be seen as the application of Kant to the problem of religion. How is religion possible if faith is justified neither by experience nor by reason? Why, in the Kantian way! Faith is not justified by experience, because experience is only possible through the mediation of faith. Just as space and time and causality are intuited prior to experience, so too is the existence of God. Strangely, this does not seem to have given the Transcendentalists the idea that God, like time, is not a feature of the thing-in-itself; this we may perhaps blame on the movement's general character. Although Kant was a direct influence, the chief source of Transcendentalism was the second-hand philosophy of the Romantic writers, in particular the poets Coleridge and Wordsworth, and later the historian and essayist Carlyle, all of whom obviated the unknowable thing-in-itself to the point of nonentity, while stressing the power of the human mind to alter, even control, the lived world. As a result, Transcendentalism always eschewed precise and technical writing in favour of literary flair and religious power. Transcendentalism focused on the immediate intuition of God, which underlay all experience, and which made precise, rational details of doctrine superfluous. At the same time, the perfection of God, who was increasingly imagined in pantheist terms and personified in unspoilt nature, was contrasted to the inadequacy and villainy of man, displayed through the injustice of social relations. Where the European response was to emphasise the changeability of society and push for social progress, the primary reaction of the Transcendentalists was to retreat from society into the wilderness.

Unsurprisingly, the pendulum swung – the individualists might not disapprove of the rugged sufficiency of the Transcendental wilderness, but they did disagree with its assumptions about human iniquity. The contrary view was inspired by Comte, Schopenhauer, and proximately the biological theories of Charles Darwin, who proposed an intriguing new mechanism through which the Schellingian/Hegelian evolutionary process may have operated. This new ‘evolutionary philosophy’ is particularly associated with the work of Herbert Spencer, an English Utilitarian and Positivist, once considered the Aristotle of his age, but now largely discredited through the extremism of his American disciples. Spencer’s chief aim was to demonstrate that the evolution posited by Hegel and Comte was the principal law that governed all things in all sciences: to this end, he examined how the facts of physics, biology, psychology, sociology, and morality were all the result of evolutionary processes. In the process, he sought to reconcile otherwise conflicting traditions – for instance, he reconciled the soft and debatable philosophy of psychology with the hard, irrefutable science of phrenology by arguing that over time the originally arbitrary use of parts of the brain for different purposes had caused those parts to evolve specialisations that rendered them unfit for other uses, hardwiring different thoughts to different parts of the brain and giving different people and races different mental capacities.

For Spencer, the entire world was evolving toward a single aim: equilibrium. In some respects, the equilibrium position had already been reached, and so further innovations were to be opposed: in particular, in his ‘Man versus the State’, he argued vehemently against women’s suffrage and the ‘socialism’ of wealth redistribution by the government. His evolutionary theories were not dependent on any particular mechanism: primarily, he was a Lamarckian, but after Darwin published his own theories, Spencer reluctantly included them in his own later writings, although he believed that they could not explain the ‘higher’ evolution of culture. Famously, it was Spencer who gave us our most pithy summation of Darwinian selection: “the survival of the fittest”.

Spencer was an Englishman, not an American – and yet his greatest influence was in America, as by that time English philosophy was already drifting into British Idealism. In America, many thinkers latched onto what they saw as the core of Spencer’s thought – social evolution as the basis of morality, the non-existence of any sort of ‘human rights’ that had to be respected, the belief that a commitment to complete equality in law and rights would be sufficient to provide all the moral strictures a society needed, the belief that man progressed through competition, and a repudiation of government interference in the evolutionary process – and amplified them considerably. The result was a libertarian ‘Social Darwinism’, in which untrammelled competition was seen as weeding out the weak and worthless and enabling the biologically (and hence morally) superior men to flourish, and eventually to take over the race; government interference in this process (and indeed all forms of co-operation or other non-competitive behaviour) was seen as unnatural and hence immoral, and inimical to the progression of the species. The Darwinist point of view was put forward most notably by William Sumner, who scorned all non-capitalist thoughts of morality: “Every man and woman in society has one big duty. That is, to take care of his or her own self.” More vehemently, the industrialist and philosophical dilettante Andrew Carnegie explained that: “[The law of competition] is here, we cannot evade it… and while the law may be sometimes hard for the individual, it is best for the race, because it insures the survival of the fittest in every department…the law of competition [is] not only beneficial, but essential to the future progress of the race.” It is this philosophy that underlay the great American Gilded Age, and that to some degree continues to inform the attitude of the rest of the civilised world toward America (and to a lesser extent Britain, where Spencer’s ideas were less influential, but nonetheless significant).

However, like Transcendentalism before it, evolutionary philosophy was an import, not a native birth. It is therefore not for these but for its first (and perhaps only) home-born philosophical movement that America is noted: and that movement is known as ‘pragmatism’.

Pragmatism can probably be traced back to roots in Mill’s theory of phenomenalism: the doctrine that objects are possibilities of sensation. What this idea captures is the notion that what makes a thing X rather than Y is simply how it behaves – what sensations it produces in different circumstances. Central among those circumstances are our own actions – so, to a degree, the defining features are how we are able to treat or use it, and how it reacts to such treatment.

This, in a different inflection, is the core of Pragmatism, as first put forward by C.S. Peirce (also a renowned logician): “Consider what effects, which might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of those effects is the whole of our conception of the object.” This injunction is called the pragmatic maxim, and is meant to dissolve otherwise unsolvable philosophical arguments: many times, conflicting arguments will turn out only to appear to conflict, either due to an ambiguity in the question (where the two people consider different sets of practical consequences to be fundamental to the meaning of words in the question), or else because the two answers have the same practical bearing. A supposed example of the latter is the problem of free will: the claim that we have free will and the claim that we haven’t are thought to have the same practical consequences, so the apparent disagreement is illusory.

This maxim differs from Mill’s phenomenalism in two ways. Firstly, it focuses more upon our own behaviour: to say that a thing is hard, for instance, is to say that we will act in a different way to how we would act if it were soft. This turns the attention away from purely phenomenal criteria into the broader psychological realm. Relatedly, the second and more important difference is that the “objects” of Peirce are not physical objects alone, but all objects of consciousness, including abstract concepts and terms.

In this way, Pragmatism can be seen as an extension of Empiricist methodology to our very ideas. Peirce decries those who think that clarity comes from verbal definitions, arguing that nothing new is ever learned from analysing definitions; instead, his ‘laboratory philosophy’ allows us to test our concepts by clearly stating what effects we require them to have, and then seeing whether those effects obtain.

The most important example of the use of the pragmatic maxim is its application to truth itself, and the related concept of reality. Peirce asks us to consider what, in practical terms, is meant by “true” or “real”; he concludes that the true is what all investigators will come to believe eventually, by ‘predestinate’ convergence. Here, he takes science as his basis: if a thing is scientifically true, then both Russians and Americans, Christians and atheists, will all come to agree upon it if they study long enough, extensively enough, and honestly enough – those things that are only believed by one individual or group, and cannot be agreed upon by others, are not scientifically true at all, but merely matters of opinion. It is nonsense, in his view, to say that a scientific hypothesis is true but that no matter how much evidence we assemble we will never be convinced of it – “true”, for him, means precisely the claim that we will all believe it when we have seen the evidence.

This, clearly, is an objective theory of truth, much like those criticised by Nietzsche – the inviolable power of Transcendent Truth shines down upon us, and once we have seen its divine face we will all be powerless to deny it, however base and sinful we may be. However, because Peirce believes that everything is to be considered on the basis of practical effects, he has an actual basis for his claims of objectivity. “Truth” is emphatically not a matter of “correspondence” with some “external” being, but rather a matter of fact about whether in the future we will come to believe something. Peirce has escaped from the real/apparent dichotomy that Nietzsche attacks, but manages to retain objectivism by a Hegelian/Spencerian route: society, and in particular scientific society, advances in a single direction toward an end point, knowledge of the truth, and what we believe when society has reached that end point is what is true. This does not mean that truth is contingent, because Peirce does not believe we are free to shape where we evolve – the “real” is a slow but inescapable constraint on our beliefs. It should also be noted that this avoids empiricist skepticism: our “model” of the world is where truth now resides, and there is no distinction between a useful model and a true one.

This is taken a step further by the second notable Pragmatist, William James. James observes that what we come to agree on is not necessarily only what is scientifically proven. Our beliefs evolve to meet our needs – but our needs are not necessarily met only through the power to predict accurately. Therefore, science does not provide the only truth. The motivation behind this caveat should be obvious, and indeed James states it openly as the purpose of Pragmatism: to find a way for Americans to keep their religion while accepting empiricism. The solution is the “pragmatic theory of truth” – truth is whatever is ‘useful’ to us. What is useful is not entirely well-defined, but it is associated broadly with our own pleasure and happiness – the ability to predict is part of this, but does not exhaust it. Notably, religious beliefs may also be ‘useful’, if they produce comfort, harmony or stability. This is not to say that we should simply accept traditional religion as unquestionable, but rather that we should not rule it out by definition: which comforting truths are most useful must be discovered through actual historical experience, and there is no surety that, say, the beliefs of Christianity will actually be agreed upon by all reasonable men in the end. If, however, all reasonable men do agree upon Christianity, we cannot call it false on scientific grounds like ‘lack of evidence’, for that is to sneak back in the sort of correspondence theory of truth that Schopenhauer and Nietzsche discredited. We have no warrant for asserting truth or falsity other than our method of inquiry, and whatever that method produces we must accept.

In particular, the Pragmatists urge us to be confident in our own abilities to improve upon our beliefs. James says that in inquiry, we have two opposing motivations: the desire to find the truth, and the fear of falling into error. “Cartesianism” (the entire Enlightenment project) is possessed by an exaggerated fear of error – but why should we be so frightened of error that we fail to find some truths? If we take a more balanced road, it is true that we will fall more often into error – but these errors will surely be corrected later on. Indeed, they cannot but be corrected: if they are never corrected, they cannot be errors.

The third Pragmatist, John Dewey, stressed that “external” objects were the constructs of mind – what really exists is a series of events. Troubled by uncertainty, we seek to find whatever is stable and enduring in these events, and these commonalities we call objects – but we are never able to find a complete and pure stability, as change is constant. This results in what he called “the philosophic fallacy”: the desire for stability produces the desperate assumption that things are stable; then, we set out to discover what the nature of these stable things may be. This is what underlies dualist change/perfection systems from Plato to Descartes to Kant. In all of these systems, the model of inquiry is one of uncovering a deeper or more underlying truth; but Dewey rejects this. He instead takes up a Pragmatist theme going back to Peirce: a belief is only a disposition to act. Dewey’s model of inquiry is one where an organism (a man) faces a problematic situation in its environment, and comes to deproblematise it. What happens when we ‘uncover the truth’ is therefore not a metaphysical event in our own minds, an unclouding of our inner glassy essence, but a real, physical change in the situation obtaining: the organism orients itself toward its situation in a new way. In this light, it is important to reconsider our own reflections: they are not, says Dewey, an insight into our ‘true’ natures, but merely something that we do when we are faced with a problem. Natively, we experience the world in a very simple way, not greatly distant from how we commonly describe things – such notions as “ideas”, “sensations”, “hypotheses”, as well as the whole panoply of rules and laws we assume and methods we adopt, are not a constant feature of experience, but only mediating concepts that we produce when we face a problematic situation, and that fade away again when the problem has been resolved.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

hwhatting
Smeric
Posts: 2315
Joined: Fri Sep 13, 2002 2:49 am
Location: Bonn, Germany

Post by hwhatting »

Salmoneus wrote:Why are threefold choices better than twofold choices?
Let me rephrase what I said - often, when one posits only two possible alternatives, that means that one has overlooked a third (fourth, fifth, nth) possibility. So:

Your scenario is my third option. But the question is not "is A the cause of B?", but "is there causation?". We have the concept of causation - but if we have obtained the concept of causation, and the world actually has the sort of causation we think it has, we're reliant on divine providence if we think that we can obtain this basically correct idea by pure chance with no recourse to experience.

But if our concept comes from experience, we face the problem that all we experience is correlations of varying degrees of reliability, so the concept of a causation distinct from correlation cannot be obtained from experience alone.
It can be abstracted. We observe that, in our experience, event Y follows X and we can observe direct impact - e.g., if we drop something, it falls. We haven't had any counter-examples. We call that causation. More complicated cases we can explain - we can break them down into series of direct impacts. We see other events where we cannot find any chain of direct impacts or where Y does not always follow X. We call them correlation. Now, we cannot know whether we have classified all the cases correctly, but we can derive the classification from our experience - we don't need to have seen every possible instance of X, checking whether Y follows, in order to derive the idea of causation, the same way we don't have to see all possible swans to have the idea of a swan or all possible beautiful things to derive the idea of beauty. "Causation" is not different from any other phenomenon in the world - our identification of such phenomena depends on our experience. As long as you don't deny that we can observe objects (or at least certain aspects of objects that are observable to humans) and group similar things together, "causation" is as much an observation of the world as any other.
Likewise, we have a concept of time, but we do not directly experience time. If our concept of time does not come from experience, where does it come from?
Who says we don't experience time? Time is just the dimension in which we measure change, and we observe change. What we observe, we experience. We observe that certain things change while others stay the same, we observe that things change at different speeds, we observe that things change in certain rhythms; we relate these changes to each other and so gain the concept of time.
a) it comes from our way of organising sensory data and is just part of how we see the world (in which case we've no grounds for saying it is also part of the world)
b) it comes from some innate idea that we have that happens to be true - but then who gave us that idea, if not experience?
c) it just arises spontaneously in all of us AND happens to be true - which seems even less believable and more reliant on religious faith.
What I said is probably what you'd subsume under a), but to me "deriving something from experience" and "organisation of sensory data" are the same thing. And the way we see the world is an approximation of the world - to use an example of yours, we're trying to draw a map. The map is not the territory, but it's our model of the territory that we use to move around in the world. It describes those parts of the world that are important to us. It is important to us to know whether it's an accident that people die when they eat that berry (they choked on it or they accidentally had a heart attack while eating it) or whether there is a link (the berry is poisonous, or at least poisonous when you drink milk after eating it). The latter is what we call causation - a link between event X and event Y on our mental map that tells us something about what we should do or avoid. We are part of the world, the world impacts us, we impact the world, objects in the world impact each other. That's at least a good working assumption unless somebody proves to me that I live in a fantasy and my actions don't have consequences and the events I think I experience either don't exist or don't affect me. So the question of whether causation exists in the world is in the same category as, e.g., whether poisons exist - a class of things defined by the property of killing those who eat (or breathe or drink or touch) them. As long as causation is a concept that is useful in organising our mental maps and successfully correlating them to the world, it is a property of the world, as are time, space, life, beauty, justice, and all our other mental constructs that are abstracted from our observations.

Salmoneus
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

But our description is just repeating Hume, which makes disagreeing with him difficult. "We observe that, in our experience, event Y follows X and we can observe direct impact - e.g., if we drop something, it falls. We haven't had any counter-examples. We call that causation." - this is almost exactly what Hume says. But this isn't "abstracting" from sensations, because nowhere in my sensation of, say, letting go, or my sensation of the sound of a vase smashing, is there the idea of causation. Causation is a name we have given to certain correlations. It's not 'abstracted from', it's 'imposed on'.

"That's at last a good working assumption unless somebody proves to me that I live in a phantasy and my actions don't have consequences and the events I think I experience either don't exist or don't affect me." -I'm sorry, but I'm getting quite fed up with this. I refer you to the comments I directed at Eddy. This is just a ridiculous straw man of no great worth. I don't see why anytime "philosophy" is mentioned it has to be assumed that nothing more sophisticated is being said than might isue from the lips of an infant or a Hollywood scriptwriter.

"So the question of whether causation exists in the world is in the same category as, e.g., whether poisons exist " - hardly. The key here is that a) we can learn that poisons exist from experience, whereas causation is a necessary concept for learning from experience, and b) we can imagine a world without poison, but not one without causation.

So, using the analogy of a map again: time, space and cause-and-effect can be likened to the ink with which we draw the map. They are on the map, and we cannot draw a map without them (or equivalents, perhaps), but that is not to say that if we go to, say, Madrid we will actually see great lines of ink running down the streets. I mean, we might, but it takes a great leap of faith to assume that we will.


EDIT: anyway, Hume is not the only philosopher. I hope that some interest can be found somewhere else in philosophy; my post about Hume was a month ago, Hume himself was more than two centuries ago, and I've made six posts since then. His writings are all available for free on the internet, he doesn't need to be translated and his style is fairly accessible. Feel free to go and read him if he is so fascinating to everybody.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Bryan
Lebom
Posts: 134
Joined: Sun Dec 19, 2004 10:50 am
Location: Middlesex, England

Post by Bryan »

A bit OT, but does anyone here actually classify or think of themselves as "Epicurean" (in the sense of the ancient Greek thinker)? I do. Would be nice to know I'm not alone on here. :)

hwhatting
Smeric
Posts: 2315
Joined: Fri Sep 13, 2002 2:49 am
Location: Bonn, Germany

Post by hwhatting »

Salmoneus wrote:But our description is just repeating Hume, which makes disagreeing with him difficult. "We observe that, in our experience, event Y follows X and we can observe direct impact - e.g., if we drop something, it falls. We haven't had any counter-examples. We call that causation." - this is almost exactly what Hume says. But this isn't "abstracting" from sensations, because nowhere in my sensation of, say, letting go, or my sensation of the sound of a vase smashing, is there the idea of causation. Causation is a name we have given to certain correlations. It's not 'abstracted from', it's 'imposed on'.
Then what is abstraction, in your view? For me it means observing something and obtaining an idea of what is common to these observations. So the idea of causation is an abstraction from observing objects impacting other objects. In the same way I get the idea of movement by observing objects running / flying / drifting etc., or the idea of swans by observing certain birds - and if not I personally, because these ideas have been mediated by language and culture, then somebody got these ideas by observing and found words for them. "Causation" and "Correlation" are in the same class - the first is our (wrong or right) theory, gained from observation, that Y could not have happened if X had not happened before, while the second is the theory, gained from observation, that the sequence X-Y is accidental or just the common consequence of something else.
"That's at last a good working assumption unless somebody proves to me that I live in a phantasy and my actions don't have consequences and the events I think I experience either don't exist or don't affect me." -I'm sorry, but I'm getting quite fed up with this. I refer you to the comments I directed at Eddy. This is just a ridiculous straw man of no great worth. I don't see why anytime "philosophy" is mentioned it has to be assumed that nothing more sophisticated is being said than might isue from the lips of an infant or a Hollywood scriptwriter.
Sorry, but I just wanted to pre-empt the argument that we actually cannot observe anything or that there's nothing to observe or that objects don't have impacts on each other.
"So the question of whether causation exists in the world is in the same category as, e.g., whether poisons exist " - hardly. The key here is that a) we can learn that poisons exist from experience, whereas causation is a necessary concept for learning from experience, and b) we can imagine a world without poison, but not one without causation.
Well, I don't know about that - such a world would just be a chaotic blur of unrelated events, and that's imaginable.
So, using the analogy of a map again: time, space and cause-and-effect can be likened to the ink with which we draw the map. They are on the map, and we cannot draw a map without them (or equivalents, perhaps), but that is not to say that if we go to, say, Madrid we will actually see great lines of ink running down the streets. I mean, we might, but it takes a great leap of faith to assume that we will.
You know, I was going to use a similar analogy, but then decided not to. My analogy was different. On your map, you have a dot standing for city A. Five centimeters below (if the map hangs on a wall) is another dot, representing city B. If you have learned to read a map and know the scale, this tells you that in the world the map represents, city A is 100 km to the North of city B. So, in reality city A is not really "below" city B, the distance is bigger, cities are not black, circular entities, etc. But still the map represents something of the real world, and if you use the map, you'll be able to get from A to B. There is a relation between A and B (distance, direction) that is represented by different relations on the map (different scale, a vertical position instead of a position on a 3-dimensional object, etc.). The relationship between the idea of causation that we have in our minds and the effect objects have on each other in the world is similar to the relationship between the relations among symbols on the map and the relations among real objects in the world.

In short, what I object to is assuming that people can observe objects and detect correlations between them, but then treating the idea of causation (which is just an instance of a relationship between events) as an idea that cannot be abstracted from experience and so somehow must come from somewhere else.

EDIT: anyway, Hume is not the only philosopher. I hope that some interest can be found somewhere else in philosophy; my post about Hume was a month ago, Hume himself was more than two centuries ago, and I've made six posts since then. His writings are all available for free on the internet, he doesn't need to be translated and his style is fairly accessible. Feel free to go and read him if he is so fascinating to everybody.
I can't speak for others, but I tend to discuss things that I have thoughts on (even if you may view them as insufficient). I have been - independently of your posts - thinking about the issues Hume raises, and I like to discuss them. Others, like Nietzsche, seem to address questions that may make sense given their specific philosophical background, but are hard for me to discuss. The "Genealogie der Moral" is one of the few original philosophical works I've actually read, and at the time I felt that in several instances I disapproved of both his argumentation and his conclusions, but since with Nietzsche it's hard to say when he's serious and when he just wants to provoke, I don't even know whether I got him right most of the time. Anyway, I'm reading your summaries with interest, even if I don't comment on all of them, but allow me to discuss those things that I have the desire to discuss. If you feel no desire to continue this discussion, I'll certainly respect that. :)

Makerowner
Sanci
Posts: 20
Joined: Tue Jul 10, 2007 2:00 am
Location: In the middle of the Canadian Vowel Shift

Post by Makerowner »

Salmoneus wrote:
I think it's best to ignore what Makerowner calls "the reflexive structure of the will" - it's not only wholly peripheral to the general direction of Nietzsche's philosophy, it's also directly contradictory to it (what Makerowner has outlined is clearly metaphysics of exactly the sort that Nietzsche ceaselessly repudiates and builds his entire philosophy into a weapon against). If the man has occasionally adopted rhetorical positions (or indeed toyed with certain notions in his early unpublished notes) that are contradictory to his philosophy, we should not take him to be seriously defending them. In this case, Makerowner is describing a fairly typical Leibnizian dynamism of the sort made popular by Kant, and hence current in Nietzsche's education, only given a Schopenhauerian inflection; we shouldn't be surprised if he draws upon such cultural imagery from time to time.

In general, metaphysical interpretations of Nietzsche like this should be discarded for two reasons:
- they are put forward by Continentals who are more interested in appropriating Nietzsche for their own metaphysical obsessions than in genuine scholarship
- they are usually built heavily upon the half-formed, exploratory, inferior, achronological (and usually early) Nachlass, rather than on Nietzsche's serious views as expressed in the works he intended for publication.

Makerowner is quite right that the will to power is a central part of Nietzsche's philosophy - but only in the genuine, ethical/epistemological, dimension, not in any ontological sense. Continentals chose to interpret him differently to justify their own metaphysical excesses.
Just goes to show how difficult Nietzsche interpretation is, since we've read more or less the same things and come to about opposite conclusions. Of course this illustrates Nietzsche's point about interpretations too...
I am relying more on the Will to Power collection, but most of those notes are from 87-88 anyways, the same period in which he was writing Twilight of the Idols, etc. so chronology is not the issue. Even though he doesn't use the specific phrase 'will to power' very often in his published writings, all his post-Zarathustra writings seem to me to contain essentially the same will to power concept as is more explicitly described in the notes. Of course the published writings take precedence, but I don't see why we should ignore the notes. They often give the connections between aspects of his thought that seem disconnected in the published works, and the will to power seems to me to be one of those cases. Just about everything Nietzsche says on other topics seems to fit in with the will to power. I don't think it's a "metaphysical excess" to try to interpret his philosophy as coherently as possible. (I would point to BGE 20 here, about the systematicity of philosophical concepts; there is of course a question of whether Nietzsche includes himself under this necessary systematicity, since he sometimes calls himself a philosopher, and sometimes criticizes "the philosophers" as if he were not one of them.)
Nietzsche repudiates metaphysics in the sense of claiming to know what there "really is" behind appearances, but he certainly doesn't repudiate the interpretation of appearances, and Nietzsche of course knew that "everything is an interpretation" doesn't mean that all interpretations are equal. Nietzsche proposes the will to power as the best interpretation of appearances--for him and people like him of course (perspectivism)--and it's based on this interpretation that democracy, Christianity, pessimism and everything Nietzsche despises can be criticized.
BGE 22 wrote: Let me be pardoned, as an old philologist who cannot desist from the mischief of putting his finger on bad modes of interpretation, but "Nature's conformity to law," of which you physicists talk so proudly, as though—why, it exists only owing to your interpretation and bad "philology." It is no matter of fact, no "text," but rather just a naively humanitarian adjustment and perversion of meaning, with which you make abundant concessions to the democratic instincts of the modern soul! "Everywhere equality before the law—Nature is not different in that respect, nor better than we": a fine instance of secret motive, in which the vulgar antagonism to everything privileged and autocratic—likewise a second and more refined atheism—is once more disguised. "Ni dieu, ni maitre"—that, also, is what you want; and therefore "Cheers for natural law!"—is it not so? But, as has been said, that is interpretation, not text; and somebody might come along, who, with opposite intentions and modes of interpretation, could read out of the same "Nature," and with regard to the same phenomena, just the tyrannically inconsiderate and relentless enforcement of the claims of power—an interpreter who should so place the unexceptionalness and unconditionalness of all "Will to Power" before your eyes, that almost every word, and the word "tyranny" itself, would eventually seem unsuitable, or like a weakening and softening metaphor—as being too human; and who should, nevertheless, end by asserting the same about this world as you do, namely, that it has a "necessary" and "calculable" course, NOT, however, because laws obtain in it, but because they are absolutely LACKING, and every power effects its ultimate consequences every moment. Granted that this also is only interpretation—and you will be eager enough to make this objection?—well, so much the better.
I would phrase it differently than you do, but I would agree that I'm not especially interested in discovering what may have been running through Nietzsche's mind as he wrote these lines; he could have been thinking about what to have for dinner or whether to go for a walk. What interests me is: what can Nietzsche's work say to us today? I would say that my approach to Nietzsche interpretation is a Nietzschean one: not claiming to discover what he "really thought", or bewailing that we'll never know it, but asking: given the works ("appearances") we have, what interpretation gives them the most force?
Hig! Hig! Micel gedeorf ys hyt.
Gea leof, micel gedeorf hit ys, forþam ic neom freoh.
And ic eom getrywe hlaforde minon.

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Then why bother making reference to Nietzsche at all? Why not just scour through his notes until you find "I think I may take my umbrella today" or whatever and base all your views of Nietzsche on that sentence? You did after all interject when I was talking about Nietzsche - you're free to have your own theories of will to power, of course.

The long passage you quote I think quite well refutes the idea that Nietzsche is proposing a metaphysical theory. Clearly his suggestion of a Schopenhauerian universe is a rhetorical device intended to destabilise existing assumptions; I don't see him giving it any metaphysical status (it doesn't even look as though that metastasised 'will to power' has much to do with the ethical/epistemological will to power of the Genealogy).

I don't agree that this, or even any other conception of the will to power is behind his criticisms of democracy or Christianity or pessimism - in fact, I don't see how it COULD be. If democracy is an expression of the will to power, we cannot criticise it on those grounds, since all other theories are as well. If democracy is contrary to the will to power - well, Nietzsche makes clear what he thinks about the naturalistic fallacy in ethics. Nothing can defy its nature - and even if it could, that would be no grounds to criticise it. So, neither expressing nor not-expressing the will to power is grounds for criticism. His criticisms are therefore based on other things. Democracy, for instance, is opposed because it promotes values that problematise excellence, and because it is a totalitarian and hence life-denying ideology (but none of that is contrary to the will to power; on the contrary, it is a fine example of it).

[I also don't accept that Nietzsche despises Christianity - on the contrary, he is actually very flattering toward it at points.]
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Bryan: I don't consider myself an Epicurean, but I do consider it one of the most interesting ethical systems, and I sympathise with it. I probably want, at some point, to attempt to synthesise what I see as its strong points with those of Nietzsche and of Cynicism - but at the moment I'm too intellectually bogged down with potential consequences of Nietzsche to work out how to do it.

What I think is wrong with Epicureanism:
a) excessive passivity and pessimism. Surely we can do better than mere avoidance of pain?
b) excessive disengagement. While I appreciate that a degree of aloneness may be useful, I can't agree with the Epicurean retreat from society, and in particular from politics.
c) excessive identification of happiness with physical pleasure. I don't think this is viable.

What I would take from it: the general demeanour of the tetrapharmakon, the emphasis on happiness as indicative of flourishing, and a version of the kinetic/katastematic distinction.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
Aurora Rossa
Smeric
Smeric
Posts: 1138
Joined: Mon Aug 11, 2003 11:46 am
Location: The vendée of America
Contact:

Post by Aurora Rossa »

Democracy, for instance, is opposed because it promotes values that problematise excellence, and because it is a totalitarian and hence life-denying ideology (but none of that is contrary to the will to power; on the contrary, it is a fine example of it).
Wait, how could anyone call democracy totalitarian, and what the hell do they even mean by that? Surely the opposite of democracy, an aristocracy unaccountable to the people, would be even more totalitarian by most measures.
Image
"There was a particular car I soon came to think of as distinctly St. Louis-ish: a gigantic white S.U.V. with a W. bumper sticker on it for George W. Bush."

Culden
Sanci
Sanci
Posts: 24
Joined: Mon Nov 30, 2009 7:45 pm
Location: Murfreesboro, TN
Contact:

Post by Culden »

Eddy wrote:
Democracy, for instance, is opposed because it promotes values that problematise excellence, and because it is a totalitarian and hence life-denying ideology (but none of that is contrary to the will to power; on the contrary, it is a fine example of it).
Wait, how could anyone call democracy totalitarian, and what the hell do they even mean by that? Surely the opposite of democracy, an aristocracy unaccountable to the people, would be even more totalitarian by most measures.
Wouldn't it be because in a true democracy, there's the danger of tyranny of the majority?
http://superculden.angelfire.com

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

More than that.

In practical terms, democracy is ALWAYS a tyranny of the majority. It may sometimes be a benign tyranny that tries to not stamp too hard on people - but it's still dictatorial toward minorities. And everyone is in some minority.

In theoretical terms, democracy is technically "hegemonic" in the same way that communism or fascism are - it does not permit any legitimate opposition. Under a democracy, opposing democracy makes you an "enemy of the state" almost automatically. What's more, it is in practice totalising, because once the concept creeps in it does not confine itself to the political arena but tries to control all walks of life. Look at TV at the moment - there are shows about people VOTING on musical performers! Some theorists want companies to let their workers vote on their decisions - and all public companies let their "shareholders" vote. We select a "president of Europe" with seemingly no power - and people demand a vote on it! In some parts of America, they vote for judges! In all fields, the prevailing attitude is "if it's not voted for by the majority, it's not fair".

Now, if you think democracy is good, you'll be happy about all this. But it is still both hegemonic and totalitarian as an ideology, even if it doesn't always produce a totalitarian organisation.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

User avatar
Aurora Rossa
Smeric
Smeric
Posts: 1138
Joined: Mon Aug 11, 2003 11:46 am
Location: The vendée of America
Contact:

Post by Aurora Rossa »

In practical terms, democracy is ALWAYS a tyranny of the majority. It may sometimes be a benign tyranny that tries to not stamp too hard on people - but it's still dictatorial toward minorities. And everyone is in some minority.
But one could say the same about any political system, couldn't they? Theocracy is great for the clergy, but totalitarian for anyone who would question official dogma. Monarchy (unless tempered with democratic structures) gives the nobility and especially the king total power over the peasantry. Dictatorships give all power to the dictator (or maybe their lackeys) and subjugate everyone else. It seems hard to imagine what political system would disenfranchise the majority of the population without taking away their freedom.
Image
"There was a particular car I soon came to think of as distinctly St. Louis-ish: a gigantic white S.U.V. with a W. bumper sticker on it for George W. Bush."

User avatar
Delthayre
Lebom
Lebom
Posts: 222
Joined: Sun Mar 16, 2003 8:47 am

Triumph of inputs over outputs

Post by Delthayre »

What sort of democracy is at issue here? Is it the modern notion of representative democracy, some purist ideal of direct democracy or some other conception of it?
"Great men are almost always bad men."
~Lord John Dalberg Acton

Culden
Sanci
Sanci
Posts: 24
Joined: Mon Nov 30, 2009 7:45 pm
Location: Murfreesboro, TN
Contact:

Post by Culden »

I don't think it matters whether it's a representative or pure democracy: either the majority votes to win on the issue, or it votes in the people who will vote its way and, by virtue of being the majority, wins. Depending on how the system is set up, of course, this can be minimized, but I don't think it can be removed (and of course, just because the tyrannical majority vote for someone doesn't always mean that their candidate does what they think he/she will).
http://superculden.angelfire.com

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Thanks, whoever moved this here. Particularly if it was me.

Moving forward - almost finished!

----------------------------------------------------


Analytic Philosophy began in 1903, with GE Moore’s “Refutation of Idealism” and “Principia Ethica”, helped along by Bertrand Russell’s “Principles of Mathematics”. It’s never been entirely clear what Analytic Philosophy is – indeed, the later stages have entirely abandoned most of what was distinctive about it originally. Rather than being a system or a programme, it should probably be seen as a style or as a communal expedition. However, beginning with Russell and Moore, three principal features arose that marked a new style of philosophy:

1. A concern for such things as ‘meaning’, ‘propositions’, ‘logic’ and ‘language’.

2. An eschewing of grand ‘systematising’ and ‘synthesis’, in favour of the analysis of small, individual problems.

3. An emphasis upon “clarity”, “rigour” and “precision”, rather than rhetorical power or inherent poetry.

4. Not being Hegel. Not content with not actually being Hegel, the early Analytics were determined to be as unlike Hegel as possibly possible – Russell described the project as being driven by the desire to believe everything Hegel had disbelieved, and to disbelieve everything Hegel had believed. This was not a hatred born of ignorance, but of experience – Russell, like the other early Analytics, began as an ardent Hegelian in his own right, before feeling stifled by the ‘establishment’ of British Idealism. Hegel, to them, became the symbol of everything that was wrong with Britain, not only in philosophy, but also in politics, ethics and religion. It was with the First World War that they came to dominance, when the bloodshed seemed to have discredited the Hegelian path.

--------------------

The founder of Analytic Philosophy was GE Moore – legally named “George Edward”, he hated his names, and was known professionally only as “GE” (his wife, unable to cope with calling her beloved by his initials, took to calling him “Bill”). It was his belief that Hegelianism was not only slipshod and illogical in details, but also profoundly wrongheaded in intent. Following the lead of the old “School of Common Sense”, Moore argued that the meanings of common language sentences are readily apparent to everybody who uses them, and that, furthermore, their truth cannot be denied while using language. His most famous example is his proof that he had two hands that existed outside his mind: he waved his hands in front of his face, and said “look, I’ve got two hands”. Since we all know that “I’ve got two hands” refers to hands that exist outside our minds, and since we can all see that he has two hands, it must be the case that he has two hands that exist outside his mind.

The audacity of this argument is not unusual for Moore. In metaphysics, he claimed that not only objects but also propositions themselves were real, physically existing objects. This directly opposes the Idealist argument that only one thing exists with the opposite claim, that millions of things, perhaps even an infinite number, exist. Indeed, he at times went further, saying that only propositions existed, and not objects at all. These propositions need not be true in virtue of anything else, but simply ARE true or false – Russell expressed it by saying that propositions are true in the same way that roses are red.

Most strikingly of all, he argued that looking at pretty artworks was the basis for all morality. This, he said, could not be argued for – “good” was unanalysable, and could not be identified with either physical or metaphysical properties. Instead, we had to rely on “moral intuition” (a part of common sense) – and it was clear that intuition said that enjoying art was the highest good. From this basis, Moore manages to recreate a surprising amount of common sense morality: with delightfully bourgeois ingenuity, he argues that murder is wrong because it is inconveniently distracting – murder occurring makes people more fearful about murder, and if I am constantly going around being concerned about being blown up by anarchists I can’t properly enjoy my fine wine and my collection of rare Picassos. In order to enable art-owners to appreciate the beautiful objects they possess, the rest of us ought to jolly well keep quiet and not make too much disturbance.

Moore’s dramatic style of positive argument has not been greatly respected. Nagel, for instance, speaks of three reactions to the chasm of skepticism: the heroic, the tragic, and that of GE Moore: the heroic philosopher attempts to leap across the gap, the tragic philosopher sits and weeps because he knows he can never cross, and GE Moore walks up to the gorge, turns around, and loudly claims that he is now on the other side. Moore’s negative work, however, although now considered almost entirely wrong, has been greatly influential as a model for later philosophers.

Most importantly, Moore has been taken (perhaps contentiously) to mark the beginning of what is known as “the linguistic turn”: roughly, a turn from thinking in terms of concepts to thinking in terms of words and meanings. Making use of the idea that two different sentences can mean the same thing, Moore sought to ‘analyse’ philosophical problems by rephrasing in other, simpler and more immediately understandable, terms. This linguistic analysis was widely thought to be able to cut through all previous philosophical confusions.

------

Moore’s influence, though, was soon eclipsed by his colleague, Bertrand Russell, to the extent that Moore is now usually left off the obligatory trinity, replaced by Russell’s co-opted predecessor, Gottlob Frege, and by his appointed successor, Ludwig Wittgenstein. Russell’s chief divergence from Moore is over the nature of analysis: where Moore rephrases things in ordinary language, Russell believes that this lacks clarity; instead, he analyses sentences into an “ideal language”. This language is identical both with formal logic (for which he drew on Frege and Peano) and with mathematics in its purest sense, and is most systematically put forward in the 1910 “Principia Mathematica”, written with Whitehead. Here, Russell applies to mathematics the same principles of linguistic analysis: he seeks to “formalise” mathematics by demonstrating its truth in a more logical and undoubtable fashion on the basis of clearly stated axioms and rules of inference (in particular, on type theory – certain axioms regarding “sets”, engineered to avoid particular paradoxes inherent in Fregean set theory).
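
[The best-known of those paradoxes is Russell’s own, which he found lurking in Frege’s system: the set of all sets that are not members of themselves is a member of itself exactly when it is not. In modern notation (a sketch – not Frege’s or Russell’s own symbolism):

R = \{x : x \notin x\} \quad\Rightarrow\quad R \in R \leftrightarrow R \notin R

Type theory blocks the construction by stratifying sets into levels, so that a set may only have members of a lower type and “x \in x” is not even a well-formed claim.]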

Far smaller, and yet more significant for philosophy, was his 1905 paper, “On Denoting”, in which he rejects Moore’s “object theory of meaning” – the theory that the meaning of a sentence is the object or state of affairs that it references. This theory had produced problems when philosophers considered non-existing objects, like invisible pink unicorns – in the claim “unicorns do not exist”, are there some ghostly unicorns that exist that we are talking about, that fail to exist even while they exist? Russell’s solution was to “analyse” such claims as general statements about the world: “X does not exist” becomes “it is not the case that any thing, Y, exists, such that Y is identical to X”. Similarly, he defines “the” in a novel way: “the X is Y” means “there is a thing, Z, such that Z is X, and for all A that are X, A is Z, and Z is Y” – or, more simply, “the tree is tall” means “a tree exists, only one tree exists, and all trees are tall”. Thus, statements are converted from claims about specific entities to claims about the world. One famous consequence of this is that all statements about non-existent entities are false: “the unicorn is tall” is false, because the claim that “a unicorn exists” is false. [Statements claiming that non-existent entities do not exist are an exception, and can be true, because “existence is not a predicate”]
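
[For those who like symbols: in modern predicate-logic notation (my rendering, not Russell’s own 1905 symbolism), “the tree is tall” comes out as an existence claim, a uniqueness claim and a predication –

\exists x\,(\mathrm{Tree}(x) \wedge \forall y\,(\mathrm{Tree}(y) \rightarrow y = x) \wedge \mathrm{Tall}(x))

– while “the unicorn does not exist” comes out true because the negation takes the whole description in its scope:

\neg\exists x\,(\mathrm{Unicorn}(x) \wedge \forall y\,(\mathrm{Unicorn}(y) \rightarrow y = x))

If the existence conjunct fails, as with the unicorn, any sentence that merely predicates something of “the unicorn” is false.]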

It may appear that this is nonsense: when we claim that “the tree is tall”, we aren’t saying that there is only one tree, or that all trees are tall. Russell, however, believes that we ARE saying that, we just don’t realise it. In this way, Russell abandons Moore’s commitment to common sense meanings: the real meanings of what we say may be quite different from what we think we are saying. This, after all, is where philosophical confusions come from: we naively believe that when an MP says “the honourable member is correct”, it is possible that he may be correct – but of course, as there is more than one honourable member, that claim must always be false.

Both these works exemplify the method of “ideal language analysis”; both also work toward a new metaphysical theory, known as logical atomism, which did not however come fully into coherence until after the publication of the Principia. Logical atomism is often considered the distinctive creation of Russell – but in fact Russell himself acknowledged that he was only attempting to explain certain ideas taught to him by his “pupil”, Ludwig Wittgenstein.

--------------

Wittgenstein was born to one of the wealthiest families in Europe, but this probably did not translate into a peaceful childhood – three of his four brothers killed themselves. His childhood hero was the physicist, Ludwig Boltzmann, but Boltzmann killed himself before the two could meet. Nonetheless, Wittgenstein became a physicist and engineer, before discovering mathematics – and in particular the formal mathematics of Frege and Russell. Through communication with Frege, he was advised to seek to study under Russell, and accordingly the young man turned up unannounced one evening at the door of Russell’s rooms in Cambridge in 1911. Within months, Russell abdicated from philosophy, on the grounds that having seen Wittgenstein’s genius, "I could not hope ever again to do fundamental work in philosophy." Wittgenstein had had no formal training in philosophy – indeed, he never had any academic degree, except for the PhD granted to him by Russell and Moore long after he had become famous – and was notably ignorant about the philosophers of the past, with the exception of Schopenhauer, whose project he seems to have seen himself as continuing. He had little interest, in fact, in academic philosophy, and spent only a few years as a professor – much of his time was spent in a hut in Norway, or as a schoolteacher, or an architect, or a hospital porter, or a gardener. He briefly considered becoming a monk (they wouldn’t take him), or a manual labourer in the Soviet Union (nor would they), and he gave away all his money – to the rich, so as not to further corrupt the poor. As a professor at Cambridge, he made a point of banning from his classes any student who appeared too interested in them; when members of the Vienna Circle discussed his work in front of him, he turned his back and recited Bengali poetry. Nonetheless, he is widely considered the greatest philosopher of the century.

Logical atomism, as epitomised in Wittgenstein’s “Tractatus Logico-Philosophicus” (named in honour of Spinoza’s Tractatus Theologico-Politicus; like Spinoza’s masterpiece, the Ethics, Wittgenstein’s Tractatus is presented in numbered bullet points in the style of a mathematical proof) states that all meaningful propositions can be reduced to a number of “atomic” propositions, joined together by truth-functional connectives (the Principia Mathematica admits several such, but Wittgenstein reduces them all down to the single Sheffer Stroke). Atomic propositions themselves represent atomic facts – and atomic facts are simple (irreducible, non-complex) states of affairs that link together different “objects”. These objects have substance – that is to say, they remain present and the same in all possible worlds and cannot not exist. The atomic objects are unchanging – what changes are the facts that connect them. If, for instance, my house is green, there is a certain fact relating the house and greenness, and a fact relating the house and me. These facts are reflected by atomic propositions in ideal formal language: say, “Fab” and “Gac” [although it should be noted that my house, and probably greenness, is not actually a simple object – I pick the example purely to demonstrate the concept more clearly]. These atomic propositions are then combined into a “molecular” proposition, “Fab^Gac”. The molecular proposition is then represented by words in an actual language. For a sentence to be meaningful, it must represent a proposition, and that proposition must either be elemental or must connect elemental propositions by truth-functional connectives. Those elemental propositions must then represent an atomic fact. If any part of this chain is broken, then my words are meaningless.
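
[To see how a single connective can do all the work: write p \mid q for the stroke, “not both p and q”. Then, in the standard textbook reduction (the Tractatus itself prefers a generalised joint-negation operator, but the idea is the same):

\neg p \equiv p \mid p
p \wedge q \equiv (p \mid q) \mid (p \mid q)
p \vee q \equiv (p \mid p) \mid (q \mid q)
p \rightarrow q \equiv p \mid (q \mid q)

so any molecular proposition, however complex, can in principle be rewritten using the stroke alone.]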

Alongside this is an important distinction between “showing” and “saying”: what a proposition “says” is only a truth-condition, but what the proposition “shows” is the fact itself: a proposition is a sort of “picture” of a fact, and similarly a thought is a sort of picture of a proposition – there is an isomorphism between the internal structures of the three. A proposition, Wittgenstein says, “shows” what things would be like if it were the case, and then “says” that it IS the case. In this image, we may see a return to Hume’s Fork – all meaningful claims are about matters of fact or relations of ideas – but it also shows a way in which the same proposition can address both simultaneously, by showing one thing while saying another. Many things cannot be said, but only shown – such as the principles of ethics and religion. These things are strictly meaningless and senseless, but not necessarily nonsense – it is only true nonsense if it has the form of something that claims to be meaningful. Unexpectedly, Wittgenstein includes all logical and mathematical propositions in the category of “meaningless” things – they only “show” us the structure of concepts, and do not “say” anything about the world (a linguistic reincarnation of Hume). The propositions of the Tractatus are therefore themselves meaningless. Accordingly, we should abandon them once we have understood them:

“My propositions are elucidatory in this way: he who understands me finally recognizes them as senseless, when he has climbed out through them, on them, over them. (He must so to speak throw away the ladder, after he has climbed up on it.) He must transcend these propositions, and then he will see the world aright.”

As a concrete example of “throwing away the ladder”, we may consider the problem of Tractarian objects. The Tractatus says that they exist – indeed, that they must exist – yet it also implies that existence is something that is only meaningful in complex propositions. Strictly speaking, therefore, a Tractarian object is “that for which there is neither existence nor non-existence” – when we say that they exist, we are using a different meaning of “exists” from that used ordinarily to say, for instance, “the car exists”, because “the car exists” is ultimately cashed out in terms of the relations of particular simple objects, and a claim about one simple object by definition cannot be reduced to any relation between simple objects (else it would not be simple). Therefore, the claim that simples exist, although fundamental to the Tractatus, is a demonstration of trying to “say” what can only properly be “shown” – it is meaningless, but it informs and shapes our thoughts. This is an Analytic return to Schopenhauer’s views on the world-in-itself, knowable only through flawed but indispensable perspectives, beyond even being and non-being.

Another example of the meaningless is inductive reasoning, which Wittgenstein, following Hume, insists is only “psychologically” valid, not actually so – because induction is “reasoning” on the irrational basis of what the picture of our experience shows us, not genuine, linguistic, reasoning on the basis of what our experience says. What is shown is beyond logic, because logic is a set of rules governing what should be said.

In this respect, it is important to note that Wittgenstein had a problematic relationship to religion, both Judaism and Christianity – although he was not a believer, he did sometimes consider himself either a Jew or a Christian in a religious sense, and he had great respect for religion. The Tractatus was inspired not only by Russell and Frege, but also by his reading of Tolstoy and Kierkegaard; in the trenches, he was known to his fellow soldiers for handing out religious tracts, and called “the man with the gospels”. Consequently, we should not imagine that when he rules religion, or ethics, or mathematics, “meaningless”, he necessarily considers them unimportant. Indeed, the Tractatus ends with the gnomic intonation: “7. Whereof we cannot speak, thereof we must be silent”, and this was intended to be the foundation of his new life: the Tractatus, indeed the whole of philosophy, was only a preface, to be dealt with quickly before turning to the things that really mattered – the realities of human experience, or what he called “the mystical”.

The Tractatus was sent to Russell from an Italian POW camp, five years after they last communicated with each other, at a time when Russell did not even know that Wittgenstein was still alive. With the exception of one small article, it was the only work that Wittgenstein ever published.



------------------------------------------



Wittgenstein’s immediate influence on philosophy was immense, and he became one of the few 20th century philosophers to be genuinely famous in his own day. Although the details of his logical atomism are not now widely accepted – and were indeed entirely abandoned by Wittgenstein himself later on – his programme helped to set the course and style of analytical philosophy for half a century. Wittgenstein’s theories allowed sharp dividing lines to be drawn between the philosophy of saying and the unaccountable, irresoluble, speculative pseudo-philosophy of showing – the former was to be the province of the “analyst”, calm and scholarly and usually collaborative, while the latter was the realm of the metaphysician, obscurantist and attention-seeking, typified in history by Hegel and in the modern age by Heidegger.

Wittgenstein, however, faded into the background after the publication of the Tractatus, due to his refusal to publish and his reluctance to teach. Instead, he was succeeded by a school known as Logical Positivism; this was typified by the Vienna Circle, a philosophical discussion group that centrally concerned itself with Mach, Einstein, Russell and Wittgenstein. The group varied in constitution over the years, but key members included Otto Neurath, Moritz Schlick, Rudolf Carnap and Friedrich Waismann; they were popularised in Britain by AJ Ayer, and had the support of many of the prominent members of the original Analytic movement, including the mathematician and logician, Frank Ramsey, and the economist, JM Keynes; Kurt Gödel was an avowed member of the circle, although he disagreed with the other members on several issues, while their compatriot, Karl Popper, was published by them and in turn helped publicise their work; the American, WVO Quine, attended meetings on his travels around Europe.

Logical Positivism follows the Comtean heritage through the scientific lens of Mach: it believed in the importance of positive science, conducted through experimentation. From its Empiricist heritage, it inherited the dogma that all knowledge comes from experience, and all experience comes from the senses, which meant that “experimentation” had to have results that could be distinguished by the senses. Like the Pragmatists, they believed that the meaning of a concept (or word) was its ‘practical consequences’, but because of their positivist/empiricist heritage, they interpreted this to mean testable sensory consequences, and like Wittgenstein they moved the emphasis from single concepts to propositions. If the consequences are not testable, they are not practical, which means that testability becomes central to meaning. For example, “diamond is harder than glass” can be tested by scratching one against the other and seeing which is damaged. There is no skeptical gulf here – the test does not merely provide evidence for the proposition, but actually proves the proposition (we can continue to doubt that the test has been performed accurately, but if we grant that it has, its results are definitive). It is important to note that a test is not an object, but rather a procedure – this gives us “the verification theory of meaning”, according to which the meaning of a claim is the procedure by which we test its truth, which includes the “truth conditions” – the sensory data we will receive if the theory is true. Essentially, all propositions are now to be seen as scientific hypotheses.

One consequence of this is a return to Millian phenomenalism. Because nothing can be said about an object that is not a proposition, and each proposition is testable, and each test identifies verifying sensory criteria, this means that the object itself is identified only through sensory criteria. We cannot talk of having all the sensory evidence to show that something is glass and yet doubting that it is glass – if the sensory evidence is there, it is glass. This evidence cannot practically be assembled, because it probably involves a whole range of hypothetical sensations we receive from running a large, perhaps infinite, number of tests, but nonetheless the object has in theory no criteria other than the evidence. The object can thus be considered to BE those ‘permanent possibilities of sensation’; we cannot object “but there’s more to it than anything we can sense”, because we cannot, through our senses, test whether there is anything we cannot sense. The ‘thing-in-itself’ is, as in Kant, excluded from our thought, but because we have taken the linguistic turn it is also excluded from our language – which means that we cannot be talking about the thing-in-itself at all, and if we insist that we are we are deluding ourselves. All there is is evidence, and all evidence is evidence for the hypothesis that we will receive such-and-such other evidence if we do such-and-such; all there is is evidence, and there is nothing ultimately ‘behind’ the evidence. It is illogical to even suggest that there may be – the words literally have no meaning, and we cannot think what we may think we think.

This phenomenalism/empiricism is then imported into the Tractatus: the Circle interpreted the Tractatus as talking about elementary sensations when it spoke of objects, with those sensations combined into ‘facts’. With the Tractarian scythe, the Circle ruled that all things that were not “saying” something about real or possible sensations were meaningless – and they were not as generous toward the meaningless as Wittgenstein himself was. Accordingly, all ethics and metaphysics were to be eliminated – although they were puzzled by the way that Wittgenstein extended meaninglessness to the Tractatus itself. They accepted that the Tractatus, like the rest of logic and mathematics, was meaningless in the sense of not describing the world, but only our concepts – but nonetheless considered it important and true, not in a propositional sense, but as a series of guidelines for action. As an example: according to the verification principle, universal statements (“all (actual and possible) sheep are mammals”) are meaningless, because they cannot ever be verified; however, Wittgenstein tells us, and the Circle (initially) accepts, that universals can be taken as guidelines for the formation of particulars. So, “all sheep are mammals” tells us “if X is a sheep, it is legitimate to form the claim ‘X is a mammal’”; however, it does not SAY that we may do this (because it cannot), but the form of the universal SHOWS us the forms of the particulars that we may construct.

The result of this is scientism and the doctrine of unified science: all knowledge comes from science, and all sciences are only branches of one science. Results in any scientific discipline must be translatable into results in any other: if, for instance, psychology is to be meaningful, it must be translatable into verifiable theories in neurology, and ultimately in physics. Meanwhile, the task of philosophy is to deal with the language through which science is conducted and communicated: although this linguistic effort is strictly meaningless, it is important to render it as unentangled as possible so that the meaning of the science itself is more accessible. Philosophy therefore has two objectives: to winnow out all claims that are not scientific, and to analyse those that are into verification criteria that scientists can then devise tests for. Logical positivism is therefore all about “operationalisation” – it itself is an operationalisation of Russell and Wittgenstein (showing what their work means for practical science) and it believes that all philosophy should assist in operationalising the otherwise vague claims of language.

Because ethics, religion and so forth are rendered meaningless, in their conventional senses, and yet are clearly important to people, logical positivism had to provide some other account of their import. This is the doctrine known as “non-cognitivism” or “emotivism” – the surface form of statements in the domain of X (eg ethics or religion or aesthetics) may mirror the surface form of propositions of science (eg “murder is wrong” looks as though it ascribes a quality, “wrongness” to an entity, “murder”), but when such statements are analysed we see that the underlying form is really quite different. “Murder is wrong”, for instance, is actually an expression of an emotion: “Murder? Ugghh!” (for ‘emotivists’) or “Don’t kill people!” (prescriptivists) or “I accede to the established social rule against killing people” (norm-expressivists).

Some logical positivists – most notably Carnap – attempted to validate the method of induction by describing “probability” not as a physical property but as a logical/epistemological one: under his ‘confirmation theory’ of probability, to say that there is a 50% chance of something happening means that if it does happen, a theory that predicted it happening is only 50% confirmed. Theories from induction, therefore, may never be entirely confirmed, but can be confirmed to increasing degrees by the mounting up of evidence. From this, Carnap attempted to build a formal “inductive logic” that can incorporate and quantify degrees of confirmation – unfortunately, despite the work of many years, he discovered that in every formalisation he could create, all universal laws were always 0% confirmed. Late in his life, he began again, attempting to address probability (and hence induction) on the basis of thermodynamic entropy, but he died before getting very far.
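
[A caricature of why the zeroes kept appearing – Carnap’s actual confirmation functions are subtler than this sketch: if a universal law quantifies over an unbounded domain, and each as-yet-unexamined instance independently retains some fixed probability \epsilon > 0 of being a counterexample, then with n instances still unexamined the law’s probability is at most

(1-\epsilon)^{n} \rightarrow 0 \quad \text{as } n \rightarrow \infty

so no finite body of favourable evidence ever lifts a genuinely universal law above zero.]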

Logical positivism was an enthusiastic attempt to return to sure footing. On the basis of “given” sensory evidence and linguistically inescapable rules, it attempted to construct logically and formally an account of what could be known – all those things that could not be arrived at from this foundation could be ‘cast into the flames’. In this way, logical positivism was a restatement of Humean skepticism, and Hume’s Fork: all knowledge is of matters of fact (now refigured as the sensory verification of positivist hypotheses) or of relations of ideas (now reconfigured as the structure of language, and of the ‘ideal language’, formal logic, that underlay it); relations of ideas tell us nothing about the world. Fittingly, one of the most deadly blows against the school was from an axe left lying by Hume: induction. A great deal of what we take for granted is not available without induction, and induction cannot be justified from experience, nor formalised controllably. In particular, the principle of verification itself was imperilled, because of its inductive subtext: verification criteria only apply in “normal conditions”, a background state of affairs created by inductive reasoning; we have no way to formally know which conditions we can take as normal and enduring (and hence not consider relevant to our hypotheses), and which are fleeting (and hence must be considered); moreover, even if we did know this, we have no way of specifying them, as the possible conditions are not only complex but formally infinite.

Moreover, the logical positivist dependence upon science became vexing when science failed to meet the standards the school demanded. In mathematics and logic, it became increasingly difficult to construct sufficiently complete and consistent logical systems to model the sort of natural language and inference used by scientists (Carnap innovatively attempted to escape by allowing multiple, even inconsistent, logical systems); in experimental physics, scientists were increasingly turning to unverifiable and indirect theories and meta-theories, often lacking complete real interpretations, and it proved impossible to distinguish endeavours such as quantum mechanics from the old-fashioned metaphysics or theology that logical positivism had attempted to exorcise. Finally, in the work of Quine and Sellars, core precepts of empiricism itself were called into question. Combined with the work of the Later Wittgenstein, this caused logical positivism to slowly die away after WWII.
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

My first thought after reading GE Moore was "at last! Somebody sensible!" Then when I read the aesthetic theory of ethics I found a criticism. Some good actions are not pretty - life-saving surgery for example. I admit the end result is a lot less ugly than death, but the process is not something an aesthete would want to watch. And then, the idea leads to some very undesirable consequences. 1) That beautiful people are worth more than plain, ugly, or just ordinary people. 2) That people with "sensibility" who can appreciate high art, are more valuable than the common herd. 3) That artists are more valuable than the rest of us. 4) That we ought to make judgements of life and death, and justice accordingly. It also means that if there is a fire in the National Gallery, we ought to concentrate on saving the paintings before we save the visitors.

Further question, who gets to decide what is art? For example I can appreciate the Shakespeare sonnets and the Beethoven symphonies, but not a painting by Picasso or a fine wine. So would I be entitled to aesthete privileges or not?

User avatar
Salmoneus
Sanno
Sanno
Posts: 3197
Joined: Thu Jan 15, 2004 5:00 pm
Location: One of the dark places of the world

Post by Salmoneus »

Regarding surgery: no, surgery is not good. But it may be right - the right produces the good. If you save a man's life, and he then goes off and appreciates art, you were right to do it.

Some complexities should be admitted here:

- Moore does not say that appreciating beauty is the ONLY good; in fact, he believes there are a plurality of irreducible goods, including knowledge and pleasure. It's just that the appreciation of a) beautiful objects and b) human intercourse are together the greatest and most important good

- Moore is a rule-consequentialist: he believes the right action is that which obeys the rule that in general is most likely to produce the outcome with the greatest good. In the above example, for instance, it doesn't matter if the man you save ACTUALLY is going to appreciate art, only that the rule of saving lives will in general lead to the most appreciation. Indeed, he's a strict rule-consequentialist, in that he believes that you should always follow the best rule, no matter how much evidence there is for it being counter-productive in a particular instance

Regarding your 1) - no. Well, yes, the existence of beauty is in itself good - but the existence of appreciation of beauty is more important. It's not the beautiful who are valued, but those who most appreciate beauty.
2) Er... yeah, probably.
3) No - he places no value on the creation of beauty. At least, a low value. Unless there's not enough beauty, that is.
4) Not so simple. Those gallery-goers appreciate the art. You'll probably get more art-appreciation from one painting and ten people than you will from ten paintings and one person. The existence of beauty is a necessary condition, but once there is 'enough' beauty, having the people able to appreciate it becomes more important

Who gets to decide? Nobody - this is ethics, not a political system.

----

Three other interesting ethical points about Moore:
- he was a non-naturalist: he argued that no natural facts could explain moral facts
- he was a realist: he believed there really were moral facts
- he believed that the sum could be greater than the parts (which he took from the Idealists) - appreciating imaginary beauty was good, and the existence of beauty was good, but appreciating actually existing beauty was far better than the sum of these two

===

General observation: he looks mad on ethics because he's mad. If you adopt a "common sense" view, you're going to end up saying things that look ridiculous to later generations, because common sense changes greatly with the times, and between individuals. By relying on his authority, rather than on logic, he made it impossible to be even vaguely confident of his results.

--

(His views on ethics were extremely influential at the time among the british intelligentsia - most notably, among a group of his followers known as the Bloomsbury Set)
Blog: [url]http://vacuouswastrel.wordpress.com/[/url]

But the river tripped on her by and by, lapping
as though her heart was brook: Why, why, why! Weh, O weh
I'se so silly to be flowing but I no canna stay!

Mornche Geddick
Avisaru
Avisaru
Posts: 370
Joined: Wed Mar 30, 2005 4:22 pm
Location: UK

Post by Mornche Geddick »

It's quite true that when St Paul's was hit by a bomb in WWII, people risked their lives to save it. But the value of St Paul's Cathedral isn't just in its beauty, but also because of its place in history and because it is the most famous building in London. Its appearance is so familiar that it probably gets less appreciation in that way than it deserves. When St Paul's was in danger, many people with little power to appreciate architecture suddenly found that they loved it.

User avatar
sangi39
Avisaru
Avisaru
Posts: 402
Joined: Sun Mar 01, 2009 3:34 am
Location: North Yorkshire, UK

Post by sangi39 »

Mornche Geddick wrote: But the value of St Paul's Cathedral isn't just in its beauty, but also because of its place in history and because it is the most famous building in London. Its appearance is so familiar that it probably gets less appreciation in that way than it deserves.
I'd say "one of the most famous", at least in modern times. Im from Yorkshire and I couldn't pick out St. Paul's Cathedral if I tried. Now, give me Tower Bridge or the Houses of Parliament and I'd recognise them straight off. I do agree though, that when I do look at St. Paul's Cathedral it does give you that nice feeling of national history that would make you want to save it and preserve it, just like York Minster standing high above the rooftops of York which makes all Yorkshiremen regardless of belief or not feel proud to be from Yorkshire to the point where they'd die for that building.
You can tell the same lie a thousand times,
But it never gets any more true,
So close your eyes once more and once more believe
That they all still believe in you.
Just one time.

User avatar
Bryan
Lebom
Lebom
Posts: 134
Joined: Sun Dec 19, 2004 10:50 am
Location: Middlesex, England
Contact:

Post by Bryan »

Salmoneus wrote:Bryan: I don't consider myself an Epicurean, but I do consider it one of the most interesting ethical systems, and I sympathise with it. I probably want, at some point, to attempt to synthesise what I see as its strong points with those of Nietzsche and of Cynicism - but at the moment I'm too intellectually bogged down with potential consequences of Nietzsche to work out how to do it.

What I think is wrong with Epicureanism:
a) excessive passivity and pessimism. Surely we can do better than mere avoidance of pain?
b) excessive disengagement. While I appreciate that a degree of aloneness may be useful, I can't agree with the Epicurean retreat from society, and in particular from politics.
c) excessive identification of happiness with physical pleasure. I don't think this is viable.

What I do would take from it: the general demeanour of the tetrapharmakon, the emphasis on happiness as indicative of flourishing, and a version of the kinetic/katastematic distinction.
Hi there, Sal. Thanks for your interesting post.

IF we were talking about religions, I would have to say that I am not an "orthodox" Epicurean, but I do cleave to almost all of its fundamental tenets so consider myself "Epicurean". But, of course, I accept anything which I see as true, be it in Stoicism, Christianity, whatever. For example, I do believe Democrito-Epicurean atomistic physics (as wonderful and groundbreaking as they were) to be slightly superseded, shall we say, in 2009... Anyway...

I've got one of Nietzsche's works, but I haven't read it yet. Is there any particular edition or publisher (etc) that you would recommend to me for appreciating Nietzsche the best?

On to your three points about Epicureanism...

I don't really agree with your analysis of Epicureanism in point a. Epicureans say that the moment pain is removed is the greatest moment of pleasure. However, they do admit active as well as passive pleasures. There's many quotes to this effect. Before I bring them up and bore everyone and waste my time, I ask you: would you like me to post some of these quotes? You probably already know at least many of them, but I'm curious as to your interpretation, because I simply do not agree that Epicureanism is mere avoidance of pain (altho it of course does emphasise the passive pleasures, pleasures which some schools of thought do not even admit!).
Also, this "passivity" is usually not said with pessimism but the opposite. Indeed, there is an active component here: the constant rational choosing what to take part in and what to stay away from! .... quite unlike Marcus Aurelius whose philosophy, wonderful as it is, oft-times seems nowt but pure pessimism. I think it was Bertrand Russell who summarised that version of stoicism as, 'You can't be happy, so just settle for being good'. A fair summary I would say; and that is pessimism!

About your point b, I have to say I pretty much agree. Epicurean disengagement from politics and wider society has a lot of things to be said for it. Particularly for the poor and people never-to-have-power. However, I agree that Epicureanism goes too far in that area. Altho, it does concede some security comes from engagement with the wider world including politics. It nonetheless seems to me that we must engage in society fully, not shrink from it.

Your point c is strange because it seems to me to more-or-less gainsay your point a. But in any case, would you like to say more on this point c? I'm interested in your views here. Me, I understand the Epicurean emphasis on physical pleasure in a certain way which is evidently different to yours. Epicurus said that the greatest pleasure was the removal of pain, and also that the man who least needs extravagance enjoys it the most, and also, "Bring me a small pot of cheese so that I may feast whenever I want", and in Lucretius we have reflections on the sufferings we could have and gratitudes for what we do have etc etc etc. The emphasis here is that physical pleasure and security is the basic root or origin, but that this is greatest when moderated, with the greatest pleasure still being that of the mind's reflection on certain realities. For example, Epicurus referred to the power and necessity of natural science for dissolving our fears about heavenly phenomena. This all points to a more nuanced view than the one which you put forward. But as I say, please expand if you would, since your points are not synopses, as it were, and I cannot necessarily infer anything correctly from them as to your more subtle/developed views.

Peace. :)

Post Reply