|A brief overview of the development of Western Philosophy
|Page 5 of 11|
|Author:||Pthagnar [ Tue Nov 17, 2009 8:27 am ]|
|Author:||Makerowner [ Tue Nov 17, 2009 12:10 pm ]|
|Author:||Aurora Rossa [ Tue Nov 17, 2009 1:14 pm ]|
|Author:||zompist [ Tue Nov 17, 2009 6:49 pm ]|
|Author:||Radius Solis [ Tue Nov 17, 2009 7:03 pm ]|
|Author:||Makerowner [ Wed Nov 18, 2009 1:58 pm ]|
|Author:||The Unseen [ Wed Nov 18, 2009 2:56 pm ]|
|Author:||Salmoneus [ Wed Nov 18, 2009 3:38 pm ]|
This is philosophy.
I've given up the Zomp argument, mostly. However, I'd like to make one last point on zombies, and one point on experience.
Regarding zombies: you discount theories for being too unlikely to be true. But your notions of unlikeliness ('absurdity') are based upon induction. What's more, they are inductions not from the facts, but from your perceptions, which themselves pre-include certain theoretical prejudices. Of course the idea of zombies is absurd - it goes against every intuition about the world that we have! But those intuitions are had precisely because we all assume, amongst other things, that zombies do not exist. So the fact that you find it absurd tells us nothing about whether it is true or not, only about what your instincts are. The question then is where those instincts come from - and, as I have argued, their origin contains no sanction adequate to allow them to count as evidence.
Moreover, in discounting the theory, you rather miss the point. Nobody particularly cares whether everybody is a zombie or not - it's a trivial and uninteresting detail. What's important is whether everybody COULD be a zombie or not, which actually tells us things about the nature of our concepts. Those who accept zombies are committed to one set of positions (generally dualism or idealism), while those who deny zombies are likely to take a different set of positions (behaviourism, and other sorts of functionalism). The practical feasibility of a Zombieworld is mostly irrelevant - what matters is its conceptual coherence (or lack thereof).
[And, of course, its function as a skeptical argument - even if you do not like the theory, can you really be absolutely and indomitably certain that it is not true?]
Regarding perception: yes, everything that Makerowner points out.
But I'd like to push one little dichotomy further, regarding your Tolstoy example:
- is it or is it not the case that the man who reads the 'imaginary' Tolstoy and the man who reads the 'actual' Dostoevsky with a moving blob of Tolstoy on it see the same thing?
- If they are seeing the same thing, then clearly they are interpreting it differently. This puts it in exactly the same position as the Latin/Italian experiment - we are aware of the same artifact, but extract different signification from what we are aware of. This is, again, a very old observation (Heraclitus makes much of the fact that, for instance, a bowl of water may be warm to one person and cold to another - which is little different from a text being Tolstoy to one person and Dostoevsky to another), and tells us nothing about vision itself. It just tells us that secondary qualities, like hotness, colour, taste, meaning and the like all have subjective factors. Fine - we've known that since at least Locke. This seems like a very convoluted experiment to demonstrate a well-known and irrelevant point.
What's more, this is seemingly based on a confusion - between 'what I see' being the content of my perception, and 'what I see' being the object of my perception. If we see the same thing, but interpret differently, it is obviously true that "I do not see what I think I see" - in the sense that the object of my perception may not be what I first conclude it to be. This, again, is an old and uninteresting observation - the possibility of error. It does nothing to demonstrate that the CONTENT of my perception is not what I believe it to be.
- If they are, on the other hand, seeing different things ['you are not "seeing the same thing" as other observers', as you put it], then this just reinforces our point: we see exactly what we think we see. I think I see Tolstoy, and I do in fact see Tolstoy. If I in fact see Dostoevsky with a moving bit of Tolstoy, then we're all seeing the same thing, and the first point applies; it's only when I'm seeing Tolstoy that your argument becomes relevant, but when I'm seeing Tolstoy I'm seeing exactly what I think I see.
|Author:||The Unseen [ Wed Nov 18, 2009 5:53 pm ]|
|Author:||zompist [ Wed Nov 18, 2009 6:34 pm ]|
|Author:||zompist [ Wed Nov 18, 2009 7:24 pm ]|
|Author:||zompist [ Wed Nov 18, 2009 7:46 pm ]|
|Author:||Salmoneus [ Wed Nov 18, 2009 8:29 pm ]|
|Author:||Salmoneus [ Wed Nov 18, 2009 8:42 pm ]|
|Author:||zompist [ Wed Nov 18, 2009 11:08 pm ]|
|Author:||Salmoneus [ Thu Nov 19, 2009 7:37 am ]|
|Author:||Mornche Geddick [ Thu Nov 19, 2009 8:45 am ]|
Fidelity is a function of the senses, Pthug. If senses did not give you a model of your surroundings that was accurate at some level, they would be no use at all. Even a bacterium needs to know where the food source really is. A mutation which caused the glucose*-dependent chemoreceptors to be switched permanently ON or OFF would be quickly selected out of the population. A bacterium that had no glucose receptors, moving in a random walk pattern, would also be out-competed by a wild-type bacterium which did have them. The wild-type bacterium would get to the food faster than the random-walk bacterium.
Your remarks that complex nervous systems are recent developments and that they needn't have evolved seem to me to be beside the point, which is whether sense perception is accurate or merely "useful". A mammal has a complex perceptual model. A bacterium has a simple one. Both are accurate in that they are based on the senses and tell the organism what really is out there. Not everything that really is out there, but the mammal senses more than the bacterium does.
Complex brains did not evolve in order to enable the organism to survive earthquakes, volcano eruptions, the Great Depression, World War II, the collapse of Roman civilisation, or global warming. But we will really need all the help our complex brains can give if we are to survive the last.
(Incidentally, what is your benchmark for Outside Context Problems? Extraterrestrial invasion? New species may have been invading new environments ever since the Precambrian, but the rabbits can't remember the Precambrian. It's Outside Context by their standards.)
*Or whichever food source it uses.
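Mornche's bacterium argument above is really a claim about two foraging algorithms: a receptor-guided (chemotactically biased) walk versus a purely random one. A toy simulation makes the point concrete - all the numbers here (step bias, distances, run counts) are hypothetical illustrations, not anything from the thread:

```python
import random

def forage(biased: bool, steps: int = 200, food: int = 50, seed: int = 0) -> int:
    """Walk on a 1-D line with food at position +`food`.
    Returns the closest distance to the food achieved during the walk."""
    rng = random.Random(seed)
    pos = 0
    best = abs(food - pos)
    for _ in range(steps):
        if biased:
            # Receptor-guided: step up the chemical gradient 75% of the time.
            step = 1 if rng.random() < 0.75 else -1
        else:
            # Receptor-less mutant: unbiased random walk.
            step = rng.choice([1, -1])
        pos += step
        best = min(best, abs(food - pos))
    return best

# Averaged over many runs, the biased walker gets much closer to the food.
runs = 500
biased_avg = sum(forage(True, seed=s) for s in range(runs)) / runs
random_avg = sum(forage(False, seed=s) for s in range(runs)) / runs
print(biased_avg < random_avg)
```

The biased walker drifts steadily toward the food while the random walker's expected displacement stays at zero, which is the selective advantage Mornche describes: even a crude, merely *statistically* accurate sensor beats no sensor at all.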
|Author:||Pthagnar [ Thu Nov 19, 2009 9:10 am ]|
fffffffffffffffffffffffffffff, are you trying not to get my point and get it at the same time? Accuracy *at some level* is PRECISELY THE POINT OF THE LONDON UNDERGROUND MAP ANALOGY. YOU UNDERSTAND WHAT SALMONEUS AND I WERE TRYING TO SAY, HOORAY!
I have already given several examples of Outside Context Problems -- radiation, meteors, etc.
If you want better examples, turn to philosophy. Salmoneus has been vague, probably because he is aware of how little he knows. I am not thus crippled, so I ask you to consider Morality.
Many otherwise intelligent people believe that the existence of "absolute morality" is a shining sign that God exists. It is not always God -- the general argument is along the lines of "Then what put this idea that such-and-such is *wrong* into your head? It cannot be entirely a matter of just following some arbitrary ethical code!" and so the idea of a more general Moral Law arises.
The basis for it is shaky and for most of history has been entirely metaphysical, which is why the above argument smells bad.
The form that this morality takes is, as the naive/philosophical/theological argument above suggests, very much a kind of thing *out there* -- particular things and situations *feel bad* because moral judgements like this are linked into the same world-navigation system we use to deal with everything.
It takes a particular kind of discipline to "step outside" of that and realise that a thing is bad not because it is a BAD THING, but because it *makes you feel bad* inside. This is an Outside Context Problem that the human world-processing system is not really very good at. The history of religion is, from one viewpoint, a story about dealing with this problem.
There is another other problem, however. This external-morality is good at one thing -- other people. It is more appealing to consider somebody a BAD PERSON because if they are a BAD PERSON you can do things to them to make the BAD go away. Here is the problem: is this an accurate description of how the world works in the same way that your visual field is an accurate description of how the world works?
|Author:||Salmoneus [ Thu Nov 19, 2009 2:54 pm ]|
|Post subject:||British Philosophy in the Nineteenth Century|
Philosophy in Britain (which is to say Scotland) did not immediately follow the route of philosophy in Germany – indeed, even key texts by Kant and his successors were not widely available in Scotland for many decades after they were written. Instead, philosophy was ruled by a different way of reacting to Hume: a way that attempted to maintain his general empirical assumptions and programme, while avoiding his sceptical conclusions. This philosophy is known as the Scottish School of Common Sense; its greatest exponents are Thomas Reid and William Hamilton. Broadly, the School sought to attack the Cartesian “way of ideas” that they believed led to Hume – that is, they insisted on an account of perception that did not rely on intervening ‘ideas’. Instead, Reid claimed that the initial sensation led immediately to a cognitive perception through the action of ‘common sense’; Hamilton went further by claiming that sensation and perception were two sides of the same coin, and not separable even in theory. “Common sense”, in this account, is a feature of human nature, and cannot be doubted – in this, it is not too distant from Hume’s ‘habits of mind’, except that common sense operates right from the moment of sensation, rather than only operating between ideas once they have been formed. Common sense is also a methodological commitment: because all knowledge comes from sensory perception, which is itself conditioned by common sense, we cannot discount the perceptions of the vulgar, which are just as guided by common sense as those of the intellectual. This does not mean that the majority opinion must always win out, but rather that there must be a ‘dialogue of the vulgar and the educated’. This, in turn, refutes all sceptical and idealist views, which are contrary to common sense.
Reid argues that his faculty of perception is a gift from nature, and therefore that he will trust it; and, furthermore, that as reason is likewise a product of nature, it is futile to trust reason if we do not trust perception: “they came both out of the same shop, and were made by the same artist; and if he puts one piece of false ware into my hands, what should hinder him from putting another?” Therefore, those who argue that reason is more trustworthy than perception have no foundation.
The School’s influence limited the exercise of philosophy in Scotland to ‘the science of the mind’ – approximately, the studies of perception and logic. Metaphysics was entirely rejected, and it is likely for this reason that the greatest British philosophy of the nineteenth century was a moral philosophy that, unusually, did not base itself in any metaphysical programme. This system is known as Utilitarianism.
Utilitarianism began with Bentham, but had older roots, stretching back to Hobbes. Its seed can be seen in Hume’s evaluation of character traits by their ‘utility’ to society; Bentham took this idea and made it more systematic. The key principles of Utilitarianism can be stated simply:
1. The only good is pleasure and the only ill is pain.
2. Moral evaluation should be on the basis of ‘utility’ – whether something produces pleasure or pain.
3. The subjects of moral evaluation are not characters, but acts.
4. The utility of an act is not a result of any typological or classificatory features of the act itself, but instead the result of its consequences.
5. All pleasure and pain is commensurable, regardless of whose pleasure or pain it may be.
6. The morally right act, therefore, is the act that produces the greatest pleasure for the greatest number of people.
In making pleasure the sole good, Bentham returns to the hedonism of the Greeks – but his is a universal hedonism. Pleasure is pleasure, whether it is tomorrow or today, and whether it is mine or yours. He also differs from the Greeks in discussing acts and rules, rather than virtues and characters – perhaps because Bentham approaches the question from the angle of political and social reform, and hence through the medium of law.
Bentham’s successor, John Stuart Mill, sought to soften the Utilitarian doctrine by allowing that pleasures differed in quality as well as in quantity. This enabled us to say, for example, that it is better to be Socrates dissatisfied than a fool satisfied, and that it is better to be a human than a sea slug, because humans (and Socrates) have access to higher qualities of pleasure.
Utilitarianism was immensely significant as a radical reforming movement, on issues such as economic reform (where it was a cornerstone of the early Labour movement) and the equality of the sexes (Mill was one of the foremost champions of political and legal rights for women, typified in his work “The Subjection of Women”). However, it faced a large number of criticisms:
- What is the utility to be maximised? It began with a linear scale of pleasure. Mill complicated it with a second dimension, leading to concerns about how the two dimensions could be made commensurable. Others gradually distanced themselves from pleasure, moving to such alternatives as ‘fulfilment of preferences’.
- What is to be assessed? Is it individual acts, or should we instead judge rules of action – that is, should we refuse to perform an act that leads to the greatest good, if we know it violates the rule that in general leads to the greatest good? Or should we instead agree with Hume, and assess character traits – the traits that lead to the greatest good. Even if we agree on seeking the greatest good, the choice of which classes of things we should be judging can be shown to yield substantially different results in some cases.
- Should we not accept the Kantian doctrine of inalienable human rights? Can those rights be explained through Utilitarianism, or should they be ignored – or accepted as an additional constraint?
- Do moral laws have to be publicly acknowledged? The later Utilitarian, Sidgwick, observes that in some cases more pleasure may come from a society that believes itself to not follow Utilitarian laws - so is it the duty of the government to conceal its reasons for acting? Likewise, can it be true that an individual may better promote the good if he does not consciously follow Utilitarian principles, but only follows them indirectly while believing himself to follow other maxims? If this is so, does that mean that Utilitarians should never preach Utilitarianism?
- Relatedly, what room is there in this system for autonomy? Because it is based on subjective criteria, it essentially licenses all sorts of government manipulation and duplicity, providing that nobody ever finds out about it. Even if it is believed that in practice the government must be free and open because it cannot afford the risk of not being so and being found out, is it acceptable to believe that this is only a pragmatic issue and not one of fundamental moral value?
- Do Utilitarians maximise the average good, or the total good? If the latter, we must surely increase our population as much as possible – even if we all have only a tiny amount of pleasure, we’d still have more in total than a smaller, happier population. If it’s the average good, however, we should reduce our population to give each a greater share of the resources.
- Is death painful? If not, why is killing people wrong, on the Utilitarian account? In particular, if we avoid the population problem by saying that pain is not just a small amount of pleasure, but is actually negative pleasure, it would seem to follow that mass euthanasia of all those in the slightest degree of pain is the quickest way to improve total, or even average, happiness. The only way to avoid forced euthanasia (other than by introducing additional ‘human rights’ constraints, which are themselves problematic) is to assign death an infinite negative pleasure value – but this would demand that we instantly stop having children, since the best way to minimise death is to minimise birth.
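The total-versus-average dilemma in the list above is, at bottom, simple arithmetic, and a toy calculation shows why the two views pull in opposite directions. The populations and pleasure scores below are invented for illustration only:

```python
def total_utility(pleasures):
    """Sum of everyone's pleasure - the 'total' view."""
    return sum(pleasures)

def average_utility(pleasures):
    """Mean pleasure per person - the 'average' view."""
    return sum(pleasures) / len(pleasures)

small_happy = [10, 10, 10]   # a few people, each very happy
large_meagre = [1] * 40      # many people, each barely above neutral

# The total view prefers the huge, barely-happy population...
print(total_utility(large_meagre) > total_utility(small_happy))      # 40 > 30
# ...while the average view prefers the small, happy one.
print(average_utility(small_happy) > average_utility(large_meagre))  # 10 > 1
```

Any population large enough, however miserable its members, can outscore a small happy one on the total view - which is exactly the population problem the post raises (later dubbed the 'repugnant conclusion').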
These, and other, concerns have led to the extinction of Utilitarianism in its pure form; however, the various answers to these questions together comprise “Consequentialism”, the general doctrine that the right act is that which produces the most good consequences, generally defined in a primarily subjective way. Consequentialism, together with Kantian and Kantian-inspired deontological systems, is one of the two dominant ethical schools today.
Mill, however, was more than a consequentialist. He is best known as a political theorist, the father of modern liberalism – for which he drew on Hegelian/Comtean concepts of social progress and sociology, advising a society that nurtured ‘bold experiments in living’. His political views are enormously influential, and probably now form the core of popular opinion on political issues, but are beyond the scope of these posts.
In epistemology and metaphysics, his most important contributions were negative: the destruction of the School of Common Sense, which thereafter was moribund. His positive contributions, though less influential, were nonetheless prophetic.
Humean scepticism raised two key areas of ignorance: firstly, ignorance of external objects, given that all evidence comes from the senses; and secondly, ignorance of the structure of the world, given that our schematic beliefs appear spontaneous, and are not based upon experience. In the first area, Mill agreed with Kant that these schematic concepts were unavoidable, but he did not derive any metaphysical consequences from this. Instead, he addressed the entire concept of knowledge: the sceptic argued that we could not obtain the certain knowledge Descartes had demanded, and so Mill responded that if such knowledge was never even possibly possible, it could not really be what we wanted after all. Instead, the knowledge we ought to seek respected the schematic necessities into which we were forced. For Mill, “must implies ought” – if we cannot avoid certain forms of reason (such as particularly basic forms of induction), there cannot be anything wrong in not avoiding them. Induction is therefore justified if and when it is inescapable. It might be expected that this would extend to justifying logical reasoning, and to mathematics – but instead, Mill believes that mathematical truths, and even many forms of deduction (such as syllogism) are only known from experience: while “2+2=4” represents a deep and basic fact about the universe, it is still only a fact about the universe, and is thus known from experience, with the possibility of error and correction – we may at any moment be shown to be wrong. Syllogisms, meanwhile, are not forms of deduction at all, as they are merely tautological – what appears to be a conclusion of reason is in fact only an unstated empirical hypothesis contained in the premise.
When it comes to knowledge of objects, however, Mill does not rely on absolution from necessity (in which light it should be added that Mill believed only in verbal necessity – which is to say that nothing in the material world is strictly necessary). Mill follows Berkeley in saving empiricism from scepticism by identifying objects with sensations – but, ingeniously, he identifies them not with actual, present sensations, but with conditional sensations. Objects are, as he puts it, “permanent possibilities of sensation” – not the sensation of red, but the conditional relation that if one looks, one will have the sensation of red. Objects are bundles of these conditionals – a chair not only looks in certain ways from certain angles, but supports you if you sit on it, creates heat and smoke if you set fire to it, and creates a certain knocking sound if you hit it. No ‘thing in itself’ is required to support these bundled conditionals – the object is the bundle. This doctrine, known as ‘phenomenalism’, would re-emerge a century later.
At the time, however, Mill succeeded in demonstrating the power of the Humean critique, without convincing many in his own response to it. Instead, he indirectly created a huge wave of interest in Leibniz, Kant and Hegel in Britain, which went on to dominate British philosophy until the First World War.
The British Idealism movement had many representatives, but the greatest is usually considered to be Francis Herbert (“F.H.”) Bradley. Like many British Idealists, Bradley believed in monism – the doctrine that only one thing existed, the Absolute, and that individuation was the work of our own minds (others, like James Ward, subscribed to the Leibnizian doctrine of monads). This monism demands idealism, the doctrine that all existence is mental – because everything is one, ‘matter’ must be one with our perception of it, so that our thoughts are the substance of reality – but it is an Absolute Idealism, distinct from the subjective idealism of Berkeley, because for Bradley being mental or ideal is not the same as being subjective: there is no one single viewer in Bradley’s system. [We should note that many, including the Absolute Idealists themselves, have argued that even Berkeley was not truly a subjective idealist].
Bradley draws attention to relational propositions, and to three facets of them. Firstly, he observes that if a relation is itself a thing, it cannot relate two terms unless it itself is related in some way to both of them – but that these relations, being things, would then require their own relations, and so on into infinity. Relations cannot, then, be abstract entities in their own right, but must be dependent upon, and possibly internal to, objects (though this last point is debated – his opponents saw him as arguing for internal relations, but his supporters deny this). Furthermore, he believed that all propositions were conditional, not categorical: “the sky is blue” in fact means “if something is the sky, it is blue”. From this, two more important points arise. The first is that our concept of “the sky” is not itself simple, but contains a large number of beliefs – and these beliefs are themselves conditional. Each term in a relation is therefore a bundle of conditionals – not entirely dissimilar from Mill’s possibilities of sensation, although not conceived in strictly empirical terms. The conditionals so bundled themselves relate the term to other terms, themselves conceived of in conditional and relational ways. This has the consequence of undermining the old concept of the ‘idea’, conceived of as a picture or image of something – instead, the ‘idea’ must now be a process, a simulation of all the entangled conditionals, which cannot be distilled into a single stationary image. Furthermore, this simulation will branch out through the interlocked conditionals to encompass the entire world: every statement presupposes an entire model of the world, and helps to define that model. Therefore, even a statement like “the sky is blue” is not simply a statement about an atomic entity, ‘the sky’, but a statement about the entire world.
Moreover, because the statements are conditional, it is a statement not only about the entire actual world, but about the entire possible world. It is a statement about the Absolute.
The second major issue is the question of the nature of truth in this world. If a statement is a conditional, what are its conditions? The apodosis is true if the protasis is true – but as the Absolute encompasses all possible worlds, all things are true somewhere within the Absolute. The apodosis must therefore not be ‘true’, but real – it must reflect the world in which we are actually present. This world, however, is not to be identified in an ontological or realist way, but in a coherentist manner – beginning from our own embodied nature in the world, we stretch out our understanding of the world in a coherent fashion, through a series of conditionals, which hold true in different but overlapping sets of possible worlds. The ‘real world’ is the world in which our beliefs most greatly overlap. This, however, is not the whole of the world, but only one subset of the Absolute, in which we happen to dwell; yet our language, conditional as it is, is able to reach beyond the world of our experience. In this way, some propositions are true in more worlds than in others, and are thus more greatly, or more absolutely true; our truths, in grasping not only the actual world but the entire cosmos of possible worlds, therefore approximate more or less greatly to an Absolute Truth that is unknowable, unobtainable, and even inconceivable.
British Idealism broke, explicitly, with the British tradition of empiricism – and yet it still retained perhaps the most British of all its features – its negativity. The British philosophers since Hobbes had always been primarily philosophers of doubt, negation, and minimalism – and though theories like Bradley’s may appear bold and expansive in consequence, they were nonetheless negative and sceptical in origin and execution. It is here, in attacking preconceived notions (such as time, space, individuality, categorical statements, and the correspondence theory of truth), that they have the most lasting impact. The most famous – indeed, almost the only – piece of Idealist work still admitted by Analytics is McTaggart’s argument for the unreality of time. Likewise, Bradley is now best known for his short, critical first work, “Ethical Studies”, in which he surveyed and demolished previous ethical theories – in particular, the chapter “Pleasure for Pleasure’s Sake” is a powerful rejection of utilitarianism that is still sometimes quoted. The chapter “My Station and its Duties” is even more famous, although its subject is dealt with so sympathetically that careless later readers have often assumed it to reflect his own views, despite the flaws in it that he observes, and despite the fact that it appears only in chapter five of the seven-chapter work. This is not helped by the fact that other Idealists took the chapter as the basis for their own moral theories, to the extent that the moral theory it describes is now known by the title of the chapter.
Bradley’s ideas in ethics clearly owe much to Kant. Like Kant, he believes that morality is both willed freely and yet sometimes reluctant and obedient; like Kant, he believes in an internal moral law which we choose to abide by. For Bradley, this means that our moral duty is self-fulfillment, or self-realisation: the ‘actual’ self (which we see) is to be aligned with the ‘real’ self (which is what we truly are). According to Bradley, however, this real self cannot be seen as some individual, almost solipsist agent, as is often the case in Kantian accounts – instead, Bradley believes that just as the entire world is one Absolute, so too human individuality is an illusion of the actual, and that each man is not distinct, but is only the entire sum of humanity speaking and seeing through one particular reference point. His view can be defended without his metaphysics, however – by both evolution and by nurture, humanity is, he tells us, inherently social. To pursue our narrow, individual goals at the cost of others is not in our real interest, because the well-being of others is also our well-being. Self-realisation is therefore a process of harmonisation and universal love, in which we come to treat others just as we treat ourselves – because there is no difference.
One stage in society on the way to this is the doctrine of “My Station and its Duties”, which holds that each actual society has a number of different stations within it, to which it ascribes certain rights, and gives certain powers over other parts of society, and that these rights and powers in turn engender certain altruistic duties. This, in short, is the morality of the late Victorians – although it should be noted that the Idealists were not conservative at all, whatever later readers may have thought. Bradley goes beyond this model by observing that society itself may be rotten and in need of reform; Bosanquet attempts to deal with the same problems within the framework of ‘my station and its duties’ by including as one of the most fundamental duties of the citizen “the duty of revolution”. Instead, the Idealists held a peculiarly British form of ‘liberal conservatism’ – a doctrine that on the one hand called for strong traditions and obedience to social duties, and yet on the other hand allowed the possibility of change, and opposed all attempts by government or clergy to prevent change. [Another advocate may have been Tolkien, with his ‘anarchist’ conservatism]. The highest motto of the system was the prophetic “Die to Live” – a doctrine that taught that being true to one’s true self could sometimes require (and could only be demonstrated emphatically by) the willingness to sacrifice one’s own life in furtherance of one’s duties to others. The view was widely considered discredited by the use to which it was put in the First World War.
British Idealism did not go anywhere; its effect has been almost entirely negative. Analytic Philosophy began in 1903, and though it defined itself through opposition to Hegel, it defined Hegel through his representative on Earth, Bradley. Where Bradley contributed to Analytic Philosophy, his name and contributions were expunged – in logic, for instance, Bradley was at least as important a figure as Frege, but anything non-Fregean in Bradley was either eliminated or else claimed as an innovation by the Analytics. When Analytics eventually returned to considering systems similar to Absolute Idealism, they rejected the name, and any trace of Bradleyan contamination. For decades, Analytics were open about their refusal to teach Bradley – his writing was too powerful, they said, for the young to be exposed to it, in case it might provide temptation for them. Only in the last few decades has any interest in Bradley reawakened.
|Author:||hwhatting [ Fri Nov 20, 2009 6:54 am ]|
|Post subject:||Re: British Philosophy in the Nineteenth Century|
|Author:||Aurora Rossa [ Fri Nov 20, 2009 10:15 am ]|
|Author:||Mornche Geddick [ Fri Nov 20, 2009 11:50 am ]|
|Author:||Salmoneus [ Sat Nov 21, 2009 4:58 pm ]|
Don't worry, I'm skipping back to the early 19th in a moment. Probably twice!
That said, I'm a little disappointed/surprised there's been no real comments so far on the 19th century, one of the most exciting and unusual times in philosophy. Kant, Fichte, Schelling, Hegel - they're not exactly intuitive thinkers! [no obscure pun intended]
Anyway, I seem to have written too much on Schopenhauer, so he'll have to have a post to himself - which is a bit odd, as he's not THAT important. But, I did study him, and he's a fairly panoptic thinker, so...
Then it'll be Kierkegaard and Nietzsche.
|Author:||Mornche Geddick [ Sun Nov 22, 2009 10:47 am ]|
|Author:||Salmoneus [ Sun Nov 22, 2009 11:52 am ]|