Monday, March 29, 2010

Ethics and suicide

On Mon, Mar 22, 2010 at 20:35:51
Amalie asked this question:

This question is about why Kant's imperative about not using mankind only as a means rules out suicide.

I take a course in practical philosophy where we are now reading Grundlegung by Kant (we read it in Norwegian, so please excuse any strange translations). In class the other day we couldn't seem to agree on a question that showed up:

When talking about the second formulation of the categorical imperative, 'Act as if you use mankind (including yourself) as ends in themselves and not as means to an end' Kant presents some examples to illustrate it.

We found the first example hard to interpret.

He is testing the following maxim: is the action of committing suicide consistent with the idea of mankind as ends in themselves? Kant says it is not, because if one destroys oneself to escape a loathsome condition, one uses one person only as a means to maintain a bearable condition until life ends.

Here the problem appears: we think we do understand his imperative about not using mankind only as a means; what we don't understand is the formulation above: when Kant says 'one person' is that the person that thinks about committing suicide, or is it persons around him that have to bear with him until he kills himself? In other words, if Kant says that one uses oneself as a means, we find a logical limitation: how can one use oneself only as a means? But if he says that one uses someone else when thinking about committing suicide, we don't understand why one necessarily uses someone else as a means before one dies.

I do hope my question was clear, and I do hope someone finds it worth answering.

On Fri, Mar 26, 2010 at 14:12:39
Alvin asked this question:

I was reading about Mill in Philosophy Now magazine and I find that he champions the desire for happiness too loosely. He said that the right moral action is the action which brings the greatest happiness to the greatest number of people; alright, it makes sense. But suppose one day we humans became crazy and violent due to the outbreak of a virus from a botched biological experiment. At the same time, we are sufficiently sane to be able to talk normally. Presidents all over the world declare that mandatory suicide becomes a law and everyone should do it immediately. Everyone agrees and they are happy to oblige. And so a mass suicide takes place and humans are wiped out forever.

The people are feeling happy when they decide to take their own lives, but it seems obviously wrong, doesn't it? You might say that it's coercion (i.e. the virus) and that coercion doesn't lead to happiness, but they are still happy with their twisted ideas, so does that count?

I am taking Amalie's and Alvin's questions together, not just because they both mention suicide but because they illustrate in the most dramatic way two diametrically opposed views of ethics based on the idea that universalizability is the essential defining characteristic of ethical judgement: Kant's Categorical Imperative, and preference utilitarianism.

Perhaps this is not so obvious in Alvin's case, as utilitarianism is known as a 'consequentialist' ethics by contrast with the 'deontological' ethics of Kant. However, in his book Utilitarianism Mill stated that he regarded his 'Greatest Happiness' principle as equivalent to Kant's Categorical Imperative. There is an element of truth in this rather odd claim, borne out in the moral philosophy of R.M. Hare.

Both Hare and Kant start off from the same point: how can there be such a thing as an ethical command? No factual claim is sufficient to generate an ethical command: as David Hume argued, you can't derive an 'ought' from an 'is'.

Kant's solution was to derive ethical commands from the general formula giving the form of what would be an ethical command, supposing that such a thing were possible. A hypothetical imperative, 'Do X if you want Y' can never be the form of a moral command because the motivation for doing X depends on the contingent assumption that you want Y. Kant is thus led by what seems a logically compelling inference to the Categorical Imperative, 'Act only on that maxim that you can at the same time will to be a universal law', and subsequent formulations which he claims are in some sense equivalent to the original formulation.

What emerges is the key idea that human rationality is the only thing in existence that is an end in itself, rather than a mere means to an end. The value of human beings resides wholly in their being 'lawmaking members of the Kingdom of Ends'. Everything else has merely instrumental value, as a means to that singular end.

Hare is best known as the advocate of the meta-ethical theory known as 'Prescriptivism'. Ethical statements, which on the surface appear descriptive in form, are in reality commands. The only constraint on what can be an ethical command is that it be universalizable.

There is a way of understanding this, according to which ethical beliefs and statements have no logical basis in reality. Anything can be an ethical belief or 'command' provided that it satisfies the formal requirements. If I believe that toothpaste tubes should always be squeezed from the bottom, then this is an ethical belief provided that I regard the statement as applying to everyone in all circumstances. If you squeeze a toothpaste tube from the top, you are doing something which in my view is 'ethically wrong'.

The obvious difficulty is that, on this view, everyone is free to formulate his or her own 'ethical' rules. You always brush your teeth before breakfast, but I don't agree with that. It depends on whether or not I am in a hurry to get out. Whereas you don't agree with my ethical rule regarding squeezing toothpaste tubes, because some tubes are hard to squeeze from the bottom, especially if you have small hands.

Hare's solution is to apply a further crucial stage of universalization: The universal rules which constitute genuinely ethical laws are those, and only those, which everyone can agree to. My belief that everyone should squeeze toothpaste tubes from the bottom is what Hare would term fanatical, because I am, in effect, unreasonably insisting that everyone share my values. But who am I to set myself up as a legislator for values? Hare's solution is simple and very elegant: the only valid basis for ethical commands — the only way to avoid fanaticism — is to hold that each and every person's set of preferences counts for the same, regardless of the content of those desires.

One important consequence of this view is that the ethically right action is one which maximizes the total surplus of satisfaction of desires, over non-satisfaction of desires, either for all intelligent beings or — in the case of Hare's former pupil Peter Singer — all sentient beings.

This position is known as preference utilitarianism. This was not, in fact, what Mill held. On the contrary, Mill is committed to the idea that what will make people truly 'happy' does not always consist in getting what they desire. Some pleasures have a higher value than others. It is possible to be wrong about what will make you most happy. However, from Hare's perspective, this notion is merely a form of fanaticism. Who am I to judge what kinds of activity or satisfaction are the ingredients for happiness? It is up to each person to decide for him or herself.

It should be clear by now that Alvin's scenario, where the human race is infected by a viral plague which makes everyone want to commit suicide, is a prima facie challenge to Hare's preference utilitarianism, but not to Mill's utilitarianism. Mill would say that we must act on the assumption that there is a possibility that a person can achieve happiness which they thought was not possible, which may involve being forcibly prevented from committing suicide. To simply allow everyone to commit suicide because that's what they want is to accept that there is no possible future scenario where the human race, despite their presently suicidal tendencies, achieves a positive balance of happiness over unhappiness, or pleasure over pain.

The preference utilitarian has resources for dealing with this objection, strong though it may be. He can point out that no-one has just one desire. The desire for suicide, be it ever so strong and incapable of being argued with, nevertheless has the potential to clash with other things that a person desires. It is not fanatical, from Hare's point of view, to engage people in dialogue in order to get them to see the inconsistency in their desires, with the ultimate aim of changing their view of what they really want. Maybe. At any rate, there is sufficient unclarity in the idea of determining what a person 'really' desires, all things considered, to provide sufficient room for manoeuvre.

All this, of course, has no bearing on the question whether it is wrong on Hare's theory for an individual person to commit suicide. It is consistent with Hare's view to hold that an individual who sincerely wishes to do away with himself, who won't be terribly missed and is meanwhile making everyone's lives a misery with his constant complaining, ought to be permitted to have what he wants, the termination of his unhappy existence. The rest of humanity, who do not desire to commit suicide, will be better off, while the individual concerned will have his preference for non-existence satisfied.

This could not be further away from Kant. Suicide is wrong, in any circumstance whatsoever, because it contradicts the Categorical Imperative. However, I can quite understand the difficulty Amalie and her classmates are having with this idea.

First of all, Kant is not saying that by committing suicide I am using any other particular person as a means. It is true that other persons may be affected by my action, but that is a contingent question. That would not suffice to show that suicide is wrong in any circumstances whatsoever — for example, if Robinson Crusoe committed suicide before he had the opportunity to meet Man Friday. Kant means what he says: in committing suicide, I am making 'humanity in my person' a mere means to an end, namely the cessation of my suffering.

By 'humanity in my person' Kant is referring to all of humanity, literally everyone who has ever or will ever exist. By taking my own life, I effectively demonstrate that I view humanity as such, as a means to my end. The value — as a member of the Kingdom of Ends — that I deny to my own person through the maxim of my action, 'I will end my life if it is not sufficiently pleasing,' I thereby deny to all. From a certain perspective, this is contempt for humanity on a truly colossal scale.

In order to see how one could be led to this conclusion, one needs to understand that Kant's view, by contrast with Mill and Hare, is profoundly anti-hedonistic. Pleasures and pains are the things that push and pull us in a deterministic universe, but they are not part of what gives human beings their ultimate value. Only rationality — the one thing that sets us apart from the rest of creation — is suitable for being an end. Moreover, this rationality has to be understood not as a mere tool, or 'slave of the passions' as Hume calls it, but as something with intrinsic value, in itself.

Happiness, misery, pleasure, pain — these are all things that pass. F.H. Bradley in Ethical Studies calls them 'perishing particulars'. The greatest sensual enjoyment, thrilling though it may be at the time, passes and is gone. You can savour the memory, but that too is just something that passes away in time. Value is permanent or it is nothing. A work of art, for example. You and I have value, insofar as we exercise our capacity for rationality for its own sake.

It is difficult to make coherent sense of this, except in teleological terms: human beings have a purpose, a teleology, which they do not give themselves but which is given to them, namely, the capacity to form a community governed by the principle of ethical respect for one another as ends, in which each rationally legislates for the actions of all.

The idea is not a thousand miles removed from Plato's vision of The Republic. Plato does not deny that human beings have desires and emotions, in the absence of which we would not have any capacity for a meaningful existence. However, it is only through the opportunity which they give for the exercise of rationality that desires and emotions acquire positive value, by fulfilling their assigned functions in the ordered soul: the law-respecting citizen of the ideal Republic. On any other view, we are no better than brute animals.

I am no Kantian — or Platonist — but I can appreciate the majesty of Kant's conception. We live in a very I-centered world, where society is seen as the mere sum of individual units, each pursuing its own agenda for consumption. Besides my likes and dislikes, I am nothing. This view not only justifies suicide but taken to its logical conclusion requires euthanasia — including non-voluntary euthanasia for those infants judged at birth sufficiently incapable of leading a 'happy' life.

Is that the only choice? Is there no middle way between a Brave New World and Kant's Kingdom of Ends? Possibly there is. Maybe the question of suicide is the key. Is there any way in which one could defend the view that suicide is wrong, but nevertheless must sometimes be permitted? Or is that mere double-think?

Monday, March 15, 2010

Uniqueness of the self

On Sun, Mar 7, 2010 at 03:01:20
Erin asked this question:

If there is such a thing as a 'self', then is it possible that there are two completely identical human beings in the same universe?

I feel a song coming on...

There is always someone
For each of us they say
And you'll be my someone
For ever and a day
I could search the whole world over
Until my life is through
But I know I'll never find another you

'I'll Never Find Another You' by the Seekers

The Seekers' hit is an example of the genius of pop music in getting to the core of a philosophical problem. I could search the whole world over, and find lots of people more or less like you, but none of those people can be an adequate substitute. You are unique and irreplaceable.

Many years ago when I was an undergraduate at Birkbeck College London, I did a stint (1974-5) as President of the Birkbeck Philosophy Society. One of the invited speakers was Arnold Zuboff, a Lecturer at University College London (who, amazingly, is still in the same post, 35 years later). Zuboff's paper was on the topic of Love.

It was from Zuboff, over dinner prior to the talk, that I got the example which I still use on my students today, of the fantastic coincidence that my parents met, and their parents met, and their parents and so on.

Zuboff wasn't content to go over the standard questions about love. He wanted to know, amongst other things, why is it that we want to stick our tongue into the mouth of our beloved and taste her spit? Why do we desire to do things with our beloved, which would disgust us if we were invited, or forced, to do them with anybody else?

It is not beyond the bounds of possibility that Zuboff's paper, and the lively discussion we had at that meeting of the Phil Soc planted a seed in my mind, which eventually grew into my book Naive Metaphysics. (Or at least one of the seeds. Another was being reminded by David Hamlyn about Wittgenstein's remark about 'two godheads' in the Notebooks 1914-1916, an insight which Wittgenstein seems to have quietly discarded by the time he came to compose his Tractatus.)

The tongue thing is about trying to 'touch' that which is metaphysically beyond reach, or at least that's how I recall the discussion of Zuboff's paper went. To be fair to Zuboff, he wasn't exactly sure himself what it showed, given that people have different gut reactions on this particular point.

As it happens, it's coming up to the first anniversary (March 25th) of the death of my wife, June Wynter-Klempner. In my Dedication to Naive Metaphysics (1994) I wrote:

For June-Allison:

No personal relationship is so secure that it has not, on some occasion, been unexpectedly thrown into question by a word or gesture. The sense of certainty, of which we were perhaps not even conscious, gives way to intimations of something unknown and dangerous; an unexplored region, a depth that has never been plumbed, an order threatened by chaos. Before the threat has time to materialize, the moment passes and certainty returns. And so it is with our relation to the world itself. Unconsciously taken for granted as the backdrop to all our experience and action, the world suddenly becomes visible as a subject towards which one stands in a precarious relation. At such a moment, the very attitude of certainty seems a distortion of reality; the world is and will always remain something absolutely other than I, it is not mine to take for granted. But then, as before, the moment passes and is forgotten.

I don't write like that any more. It isn't me, and wasn't even then. It was more about what I thought the way a 'philosopher' (or the kind of philosopher I aspired to be) should write. But sometimes you have to remind yourself of things, like the scary moments in every relationship when you get ever so close to the brink of realizing... what, exactly?

My Dedication implies that there are two states in a relationship, the normal state where we 'take things for granted' and the abnormal state when the abyss opens up and we gaze into the chaotic depths. This seems reminiscent of Heidegger's claim that the motivation for metaphysics comes from certain experiences — existential 'anguish' — which irrupt into ordinary life. Or maybe it is closer to Emmanuel Levinas, the idea that the characteristically 'metaphysical' experience arises with my discovery of the otherness of the Other.

There will never be another June, that's for sure, nor would I want there to be. One real relationship is enough for any lifetime. Forgive me if I express a certain sense of horror at the thought of coming that close to another person. Or maybe what I was really trying to say in my Dedication is that love secretly co-exists with horror. Could Sartre be the philosopher for me?

But that doesn't get us much closer to answering Erin's question. I was originally going to write about Leibniz' Law and Medieval disputes over the 'Principium Individuationis'. But actually these logical acrobatics get no closer to the essential point. What I wanted to say to Erin was that the self isn't like other Aristotelian 'substances'. It is unruly. It doesn't obey normal logic.

Timothy Sprigge in his book for Penguin Theories of Existence (1985) considers the key question from Nietzsche's Theory of the Eternal Recurrence: when the universe turns round again, will it be you and me next time around, or only people physically and mentally exactly like you and exactly like me? Sprigge says:

For myself, so far as the whole doctrine seems intelligible at all, I think it does amount to the view that we our very selves will be here again doing the very same things again, for it is hard to see what stronger form of survival there might be than to survive with precisely the same character and feelings, perhaps even with the same 'matter' to one's body, and besides these factors the temporal interruption seems to have little weight.

Timothy Sprigge Theories of Existence p.112

I don't see things this way. You don't need to make the 'recurrence' temporal. It could just as easily be spatial. Imagine a mega-universe or megaverse in which the known universe (the universe of physics) exists in a spatial or quasi-spatial array of identical universes (in one, two or three dimensions, take your pick). Unlike the physical universe, the megaverse is infinite. So there are infinitely many identical 'me's and infinitely many identical 'you's.

(Don't tell me that this is inconsistent with the 'known' facts about the physical universe. It's a thought experiment. It is always possible that we are wrong about what 'the physical facts' are.)

In Nietzsche's eternal recurrence, as I write these words I know that GK will write these words again, and again, an infinite number of times. Sprigge thinks that GK must be I. Would he say the same about the megaverse? Why not? Why can't I be every single one of the infinitely many GK's? What difference would it make?

This is where the logical unruliness of the self becomes all too apparent. You can say what you like. You are really not saying anything. P.F. Strawson makes substantially the same point in his article, 'Self, Mind and Body' when he argues, following Kant's 'Paralogisms of Transcendental Psychology' (Critique of Pure Reason) that there is no way, in principle, to count Cartesian souls. It makes no meaningful difference whether there is one Cartesian soul associated with the body of GK, writing these words, or a thousand Cartesian souls, all thinking qualitatively identical thoughts; or whether, instead of one Cartesian soul persisting through time, 'I' am an endless sequence of momentary souls, each of which transmits its state to the next like a line of colliding billiard balls (which is originally Kant's point).

In Naive Metaphysics I consider a variant of the megaverse thought experiment, where all that exists is this room, endlessly reduplicated throughout infinite 'space'. As I walk in through one door, I see 'my' back, as 'I' walk through the next door along. Do I know with absolute certainty that I am me and not that other person? Not at all. I can be both. Just as I can not 'be' either (if I only exist in the present moment, so that the GK who goes through the door is no longer 'I').

I realize that for some people, these kinds of metaphysical speculation put philosophy beyond the pale. What is the point? I was only trying to understand the philosophy behind the Seekers song. Maybe there's nothing to 'understand'. Maybe we've come to the point where understanding just comes to an end. But in that case, I'm not happy just to acknowledge that. Why does it end, just where it ends? Why there? What am I, what are you, that we can pose these questions?

Monday, March 8, 2010

Collingwood on absolute presuppositions

On Thu, Mar 4, 2010 at 22:12:33
Tim asked this question:

R.G. Collingwood saw history as a rational process but is it rational to ignore the fact that our absolute presuppositions may be true or false? If we say no then we are back to the problem of investigating absolute presuppositions without any of our own absolute presuppositions to start the enquiry.

Is there an answer to this problem? Can we investigate the truth of absolute presuppositions without any of our own absolute presuppositions?

This is a great question which takes me back to the time when I was writing my D.Phil thesis The Metaphysics of Meaning and reading everything I could lay my hands on which had anything to do with metaphysics. My supervisor was John McDowell. I was supposed to be writing something on the philosophy of language, but all I could see was theorists of meaning trying, and failing, to do metaphysics.

The only answer was to go to the source: Plato, Kant, Hegel, Bradley, Whitehead, Heidegger.

The short answer to Tim's question — which I will explain in a minute — is that Collingwood hasn't 'ignored' the putative 'fact' that our absolute presuppositions may be true or false. According to Collingwood, truth is not 'correspondence with fact' but rather an answer to a question. Every question has presuppositions. Some of these are 'relative' and can therefore be questioned. But you can't question absolute presuppositions, because they are in a very real sense the ground you are standing on. There is no vantage point or place to stand from which one could regard one's absolute presuppositions as a 'proposition' with a 'truth value'.

It is understandable why many philosophers have regarded this as deeply unsatisfactory; it is the main reason why Collingwood has been branded a 'historicist' about metaphysics. Collingwood appears committed to the view that when we study the history of metaphysics, we are merely describing the thoughts of metaphysicians in relation to their time. There is no way to meaningfully raise the question whether these thoughts are 'true' or 'false' in a non-historically relative sense.

I first got onto Collingwood reading a book by Leslie Armour The Concept of Truth (Van Gorcum 1969), and Armour's follow-up book Logic and Reality (Van Gorcum 1972). I can highly recommend these to any philosophy student with a sense of adventure who is looking for a walk on the wild side, especially the second which attempts the (some would say) impossible feat of doing what Hegel attempted in his Science of Logic, only doing it right. This is thrilling stuff, speculative philosophy of the first order.

To get back to truth, Armour goes through all the standard theories of truth — correspondence, coherence, pragmatist — and finds each of them wanting, mainly for reasons which have been discussed in the literature, although with a few clever dinks of his own. So far, OK. But then he argues for a view which anyone who thought 'eclectic' was a word for something bad would be appalled by. Each of the theories is kind-of true, but lacks something. However, if you put all the theories together, you get something which approaches a true account of truth. Collingwood's theory of truth as 'an answer to a question' is the leavening in the cake.

I never got round to reading Collingwood's The Idea of History. I read and re-read An Essay on Metaphysics and An Autobiography. As with Armour, I can recommend these to any student who has an interest in metaphysics as a speculative, foundational inquiry.

One of McDowell's favourite quotes from Wittgenstein was:

If I have exhausted the justifications I have reached bedrock, and my spade is turned. Then I am inclined to say: 'This is simply what I do.'

Philosophical Investigations Para. 217

Wittgenstein is talking about explanations you can give for why you follow a certain rule. But he could just as easily have been talking about Collingwood's 'absolute presuppositions'. In philosophy, there is a point below which you cannot dig — a warning which according to McDowell philosophers like Dummett and Quine fail to heed in their attempts to reductively analyze linguistic meaning (see e.g. McDowell 'Truth Conditions, Bivalence and Verificationism' in Essays in Semantics Evans and McDowell Eds. OUP 1976).

I wasn't altogether convinced by the Armour and Collingwood line — minimalism about truth seemed, and still seems, more attractive and a lot less effort to defend — but Collingwood's critique of the traditional view of truth made me realize the key issue in any attempt to construct a metaphysic. You have to start somewhere. You need axioms from which to deduce your metaphysical theorems. But how do you defend your axioms? How do you prove that your axioms are true?

Descartes' 'I exist' is an example of a famous metaphysical axiom, which first-year philosophy students use as an introductory exercise. If 'I exist' is true whenever I think it, how does it follow that I existed in the past, or that I will exist in the future? Does there even have to be a 'subject' that 'thinks' in order for a thought to exist?

I'd studied Wittgenstein's private language argument and I knew better (or so I thought at the time). The ego is just an illusion generated by grammar. All first-person truths are necessarily supervenient on third-person truths, that is to say, on what can be communicated in language.

Then I had a brainwave. All the argument over 'realism' versus 'anti-realism' about truth and meaning can be dealt with in exactly the same way, as a critique of the truth illusion. There is no ego, there is no truth. Nothing 'in here', nothing 'out there'. There is no starting point for metaphysical inquiry. All there is, is the power of logic which the philosopher can bring to bear on any alleged metaphysical axiom or theory. Metaphysics is a dialectic of illusion. (See my 1982 D.Phil Abstract.)

For anyone looking for the great truths of metaphysics, this is a bitter pill to swallow. A Pyrrhic victory. But I was undismayed. I had discovered something, a negative truth. I'd plumbed the depths. To plumb the depths and know that there's nothing down there is knowledge they don't have. I mean, all the philosophers throughout history who have entertained the idea that there could be a 'true' metaphysical theory.

So Collingwood was dead right. You can study metaphysics. It's a fascinating logical exercise. But you know before you even set out that you are not going to find anything true. At best, all you will discover are the consequences of assumptions which, at the time, were thought to be beyond question.

But I agreed with those who were uneasy with historicism. It seems just too damned contingent to view metaphysics as merely consequential on human history, or intellectual history. I preferred Kant's idea that there are in some sense necessary illusions, which arise from the very nature of the mind. But, contra Kant, there was no way you could prove that these particular illusions had to arise. You just had to accept the illusions — the standing temptations which set us on the road to metaphysical inquiry — as a given.

And so I was led to a rather weird conclusion:

[T]he propositions of a system of metaphysics can serve only to refute metaphysical illusion; once one departs from that negative function there is nothing upon which to base the development of the system except the appeal to an 'incorrigible metaphysical intuition'. But that is just what the task of 'identifying the source of the illusion' would require us to do. So long as the dialectic is confined to its negative function it can yield only illuminating redescriptions of the illusion; we may cast those descriptions in ever more revealing forms, but the source of the illusion itself remains untouched.

G.Klempner D.Phil thesis The Metaphysics of Meaning 1982 p.222

The remedy?

Identifying the source of the illusion is indeed a necessary task; but it is not a task for metaphysical inquiry. For its necessity belongs, not to metaphysics but to psychology. It is that necessity which differentiates the explanation of the source of metaphysical illusion from the explanation of a mere error, rather than the discipline for which the explanation is set as a necessary task.

Ibid. p.223

Or, in plain terms, if you want to know where metaphysics ultimately comes from, get yourself psychoanalyzed. I really thought that! (But that's another story.)

— One up on Collingwood, eh?

Monday, March 1, 2010

On the possibility of comparison

On Fri, Feb 19, 2010 at 19:48:35
Brian asked this question:

I was talking to someone the other day and we stumbled on a question which like all good ones seems so obvious once it is asked, but which has stumped me:

How is comparison possible?

What is it to compare one thing with another — do we compare things or properties of things? can we only compare like with like? but if so haven't we already presupposed a comparison?

Is comparison a basic 'category'? is it prior or anterior to other concepts e.g. identity, difference, metaphor.

Which philosophers discuss the methodology of comparison?

Is this one of the 'good ones'? I've already picked it as the Ask a Philosopher Prize Question for February — the best of a not terrifically great bunch — but I'm still not sure just how good a philosophical question it is. Let's see.

How do you compare two peas in a pod? Or apples and oranges? Or my ear and the moon? 'Shall I compare thee to a summer's day?...'

Are we dealing with a basic logical category, like identity and difference? or could it be even more basic? or is it merely derived?

Ricoeur wrote a book on metaphor (The Rule of Metaphor 1981). I can't imagine a philosopher writing a book on comparison. There's something meaty about metaphor. Comparison seems too general a topic, compared with metaphor. But that's just my first reaction. I could be wrong.

Let's start with something easy. Does comparison have a methodology? Let's say I run a research team for a washing powder manufacturer. One of the things we might regularly do is compare different formulations of washing powder. Because this is a laboratory and not a laundry, the tests have to be strictly controlled, and the points of comparison clearly defined. Powder A is more effective on egg stains than Powder B at 40 degrees Centigrade.

However, a consumer might be more interested in which powder makes clothes smell nicer. How do you test this? what methodology do you apply? I read somewhere that deodorant manufacturers employ people to sniff the armpits of volunteers, in order to determine which formulation is more effective at preventing offensive odour. The training may not be quite as rigorous as for wine tasters but the job still requires a special skill. The aim is to get as objective an assessment as is possible given the inherent subjectivity of judgements of nice or nasty smell.

In order to make a specific comparison, a methodology may or may not be appropriate. In choosing the Prize Question of the month, I simply go through all the questions in my email in-box and make a short list. Then I run through the short list two or three times and pick the one I consider the best. That's how I chose Brian's. I didn't employ a 'methodology'. I just used my judgement. (There was a question on solipsism which I quite liked, but the questioner seemed a bit too confused: it wasn't sufficiently clear what the question was.)

But we're still circling round the problem. Brian seems to think that there is a potential paradox here: 'can we only compare like with like? but if so haven't we already presupposed a comparison?'

'You're comparing apples and oranges' is something you'd say to someone who asks for a comparison between two things which are too unlike to form a sensible judgement. But you can still compare apples with oranges: you can ask which fruit is richer in Vitamin C, or which is better value at the local supermarket this week. However, that presupposes that you have already identified apples as 'like' oranges in respect of their nutrition, or as value for money. Then again, you can compare an apple with a tennis ball (both good for a game of catch, although apples don't bounce).

We don't first acquire concepts and then discover that things falling under different concepts can be compared. They are different aspects of one and the same skill.

The ability to apply a concept, like 'red' or 'fragile' or 'intelligent', involves the ability to compare red things, or different objects with respect to their fragility, or different people with respect to their intelligence. But how do you do this? Doesn't the ability to make comparisons presuppose that you have a standard — e.g. for what counts as red, or fragile, or intelligent? But then, how do we judge that the standard is the correct standard for the thing it's for?

In the opening pages of The Blue and Brown Books Wittgenstein considers the following case:

If I give someone the order 'fetch me a red flower from that meadow', how is he to know what sort of flower to bring, as I've only given him a word?

Now the answer one might suggest first is that he went to look for a red flower carrying a red image in his mind, and comparing it with the flowers to see which of them had the colour of the image.

Blue and Brown Books Blackwell, 1969 p.3

Well, what's so wrong with that?

...consider the order 'imagine a red patch'. You are not tempted in this case to think that before obeying you must have imagined a red patch to serve you as a pattern for the red patch which you were ordered to imagine!

Ibid. (exclamation mark added)

Wittgenstein is making an important point here about the nature of concepts. My ability to recognize, e.g. a red flower as 'red' is, partly, what my grasp of the concept of red — or, in the linguistic mode, my understanding of the use of the word 'red' — consists in. The idea that I need an internal standard of red to compare red things with in order to tell whether or not they are red leads to a vicious regress.

I think Brian was kind of hoping that the concept of comparison is paradoxical because of the implicit threat of a vicious regress. Well, there isn't one, and it isn't. At least, not for that reason.

Concept use involves judgements of 'identity' and 'difference'. You can't be said to have a concept unless you are able to make judgements about the things that fall under the concept (identity) or the things that do not fall under it (difference). The ability to make judgements of numerical as opposed to qualitative identity and difference — the 'same man' or 'same horse' — is somewhat more sophisticated. Aristotle was the first philosopher to really explore this topic.

Imagine a world much simpler than the actual world, where objects differ only in kind and not in degree. There is no 'more' or 'less' (except in a strictly numerical sense), no 'shades', no borderline cases. In short, no scope for comparing which of two objects is closer to some given standard. In this imaginary world, for any concept F, and any object x, either x is an example of F, or x is not an example of F. There is no other possibility.

I have just demonstrated (I think!) that the concept of comparison is not derived from the concepts of identity and difference. As I conceded in the parenthesis above, even in this imaginary world, you can compare numbers: there can be more objects which satisfy a given description or concept than those which don't. Numerical comparison is a matter of simple arithmetic. I think Brian would agree that that isn't the notion of 'comparison' he had in mind.

For the same reason, I don't think we can say that identity and difference are derived from the concept of comparison. In the simple universe, objects either match (or satisfy) or fail to match (or satisfy) a given description or concept. Which leaves one remaining alternative: that comparison is an equally basic category, alongside identity and difference. That seems to make sense.

In the more complex universe we inhabit, objects fall at different points on a smoothly sliding scale with respect to a given concept or quality. Things are vague, blurred, have fuzzy edges. This is a huge philosophical topic. When logicians and philosophers of language debate the topic of vagueness, it can sometimes seem as if the existence of expressions which do not have a precise definition is an unfortunate quirk of ordinary language. Frege, the father of modern logic, thought so. Two centuries earlier, Leibniz dreamed of a characteristica universalis, a form of precise notation which would render every philosophical problem soluble.

But this gets things completely back to front. In the real world, things are not simply, 'F' or 'not-F' without qualification. They are more or less good examples of F, with the less good examples shading off into cases where it's difficult to form an opinion, which in turn shade off into cases which look more like not so good examples of not-F. While the canonical forms of human language appear to cut things up into the categories of 'same' and 'different', ordinary reality contradicts and subverts this ideological image at every turn.

Aristotle viewed human beings as creatures who categorize. To be rational is to possess the ability to sort things into species and genera, or recognize a valid syllogism. But it is surely closer to the truth to regard human beings as creatures who compare and evaluate. This was something Aristotle did consider, especially with regard to ethics. But ultimately, in an Aristotelian universe, logic comes first.

— It has just occurred to me that the 'golden mean' is Aristotle's contribution to the methodology of comparison. A brilliantly simple but deep idea. A topic for another question, perhaps.

Good question, Brian.