Hello. Today is not Monday. It is Tuesday. Despite this, I am still going to write my 'Philosophy Monday' bit. I hope that is ok with you.

In class this Philosophy Monday, we actually spoke about something that I (sort of) agree with. This article, by Haidt, tries to support the idea that 'Moral Reasoning' (essentially, reasoning that we use to justify something as being good or bad) is, in my words, not to be trusted. 

In the article, Haidt begins by providing an account of the 'prevalent' models of morality – those based on rationality. Roughly, rationalist accounts of morality assume that we have control over and understanding of our moral actions – rationalist moral theorists believe that we can come to a rational conclusion about the 'best' way in which to act, and that this is different and separate from mere intuition, instinct or emotion. The idea here is that we can trust ourselves to have good reasons that both justify and explain why what we do is good.

To briefly summarise Haidt's argument, I think he is claiming that, as there exists 'evidence' suggesting that our explicit reasons for making particular moral judgements are not necessarily what really causes us to make them – that is, the 'externally valid' reason we give to justify why X is bad is not the actual reason we judge that X is bad – it is possible that all our moral reasoning is merely a rationalisation of and justification for our own intuitions and prejudices. I think I am presenting a version of his argument here which is slightly less absolute than the theory he is proposing – he seems to be saying that all moral reasoning is merely an after-the-fact justification for our intuitive snap judgements – but I think my account still captures his general idea.

I am going to make a response to this in three sections.

In the first section, I am going to make a couple of selected criticisms relating to ways in which Haidt undermines his overall argument by making a couple of 'tactical errors'. In the second I will try to sketch what I consider to be a stronger way of presenting his basic argument. In the third, I will cry a little bit and whinge about why Philosophy makes me sad. If you have no interest in reading the article (which is very long, although I reckon you could probably get the gist by reading the first page and skimming the examples), you can probably skip to the third section. That said, I will try to be brief.

The First Section

One way in which I think Haidt undermines his argument is in the way that he frames the debate as a battle between rationalist and 'social intuitionist' accounts of morality – which one is the one true theory? Haidt's answer lies in what he refers to as 'social intuitionism' – the suggestion that morality works not by us sitting around thinking about what the theoretically right thing to do is (rationalism) but by us having a set of socially-evolved and learned intuitions which inform us as to right and wrong. While I agree with Haidt that 'social intuitionism' seems a more 'accurate', descriptive account of how moral judgements are generally made (In Real Life), philosophers such as Kant or Mill were not attempting to come up with descriptive accounts of morality. Rather, they were proposing rationally-justified, prescriptive accounts of morality – here are good, rational reasons, unmarred by intuition or too much 'common sense', for why X is morally good. As such, most moral theorists are not trying to describe how morality works but to prescribe a system in which it could or should work – an example of such a system:

• in any situation, a morally good action is the one which maximises people's preferences and minimises harm.

A common problem with prescriptive, rationalist systems such as the one sketched above is that actions that are, by such systems, 'morally good' can feel wrong (or distasteful) and are therefore hard to carry out In Real Life. Such a theory, then, while perhaps rationally justified, is not necessarily descriptive of how things actually work.

In framing the debate as 'which moral theory more accurately describes how humans work', Haidt has possibly already won his argument (on his terms) and thus leaves everyone else not particularly interested. Rationalist models were never trying to be descriptive but were proposed as a rationally-justified ideal – Kant did not think that he was describing how we come to moral decisions; he was proposing a reasoned, justified way in which we could. The debate should not, then, be about which theory is more in accordance with the evidence – framed that way, Haidt's argument looks like a red herring to moral theorists already subscribed to rationalism – but about which is more useful.

Haidt's next tactical error comes in his opening example. Right at the beginning of his piece, Haidt tells a story in which two siblings, Mark and Julie, sleep together. He then describes how people, having heard this story, feel an automatic, intuitive reaction of disgust and attempt to come up with rationally-justified moral reasons for it. As the example subverts all the obvious arguments for the badness of Mark and Julie's act – they used contraception, no-one's feelings were hurt, they enjoyed it and there were no negative after-effects – Haidt suggests the 'common' last response, after running out of 'reasons', is: "I can't explain it, I just know it's wrong". Haidt uses this to support the claim that 'moral reasoning' exists merely to justify our automatic, unreasoned intuitions, rather than to form them.

To a critical, handsome and intelligent reader, this argument seems to be directed at a straw man and therefore weakens Haidt's point. While I think he was actually just using this example to simplify his view and make it intuitive, rather than relying on it to pull any real argumentative weight, its place right at the beginning of his article somewhat undermines his argumentative credibility.

Anyone committed to the idea of a rationalist moral theory would, by definition, upon having no remaining reasons to justify why Mark and Julie are bad, agree that, despite intuition, they are not bad. As such, the rationalist can reply to Haidt that this example, while maybe applying to those with tiny brains who stick stupidly to their intuitions despite rationality, definitely does not apply to those to whom Haidt is supposedly making his point. Once again, I agree with Haidt, but I think this example, by being weak, does worse than merely failing to support his argument: it undermines his credibility to make the grand claims that he is attempting to make.

Just quietly, I think the undermining of rationality that, ultimately, Haidt is arguing for is a view toward which many Philosophers feel some distaste. In accordance with Haidt's own theory, he should be aware that these same Philosophers, with considerable argumentative skills and a vested interest in justifying their distaste, will exploit any weaknesses in his argument to do so. As such, also in line with his theory, he needs to be especially careful not to present his argument in a way in which his credibility can be undermined separately from his main argument.

My final broad criticism of Haidt's piece relates to his examples. As I have already spent too much time in this section talking about things that I don't really want to talk about, I will just state that they, too, are too easily refuted. Again, while I agree with the broad strokes of his argument, the way in which he makes it is too easily rebutted due to what I consider to be tactical errors. One consequence of Haidt's theory is that people's (moral) judgements are more effectively swayed by intimidation, social pressure or rhetoric than by pure reason. As such, trying to present an evidence-based, essentially rationally-justified argument which undermines much of what Philosophers – who basically determine the rules of 'rational justification' – hold dear is probably not the best way to get his point across. Additionally, if he is going to try and undermine rationalism on rationalism's terms, to avoid being deemed irrelevant to 'real' thinkers, he needs his argument to be rationally airtight and valid. For the reasons given above, among others, Haidt's argument is not.

Was that brief?

In the upcoming second section, I am going to 'briefly' sketch how I think Haidt's argument can be made more strongly. Without anything more than anecdotal evidence from my brain, of course.

The Second Section

In my understanding of what he is essentially saying, I agree with Haidt. I think that morality, more than Philosophers like to believe, has a lot more to do with social convention, conditioning and intuition than with rationality and reason. I also believe that a lot of the justification for our actions is separate from them and, likely, after the fact – we often make judgements or act based on intuition, then provide reasoned justification afterwards.

This seems obvious to me. I think I think this way because, for as long as I can remember, I have been extremely skilled at coming up with 'reasonable' explanations for things. Without having any actual knowledge, I have pretty much always been able to come up with reasons for why I did something or explanations for how something works – I have always had the ability to analyse and 'appear-to-understand' systems in general.

As far as I can tell, everyone does this to a certain degree. To blow my own trumpet for a short second, I think I do it quicker and 'more effectively' than most people. My reason for thinking this is the constant admonition and teasing I receive for having 'thought about things too much', as if I have rote-learned, prepared and organised my thoughts and explanations before every conversation, when really I feel like I am winging it the whole time. Self-congratulation aside, my ability to do this – come up with often contradictory yet equally 'valid' 'explanations' and 'reasons' despite a complete lack of 'actual knowledge' – suggests that all my 'reasons' can't possibly be true. So, while I might just be stuck in my own confused head, my intuition is that, as reasons and explanations are so cheap and easy to come by – I can pull them out of my arse and believe them, regardless of whether I have any method of verification – we shouldn't put so much stock in them. So that's where I'm starting from – I am suspicious of my seemingly-unending ability to bullshit.

My argument for why every rational person should be similarly mistrustful of their own justifications and reasons is based on the principles of Rationality itself. I am going to make my point with an example that appears to present my argument weakly, because it is so obvious and common-sensical. While this example may enable people with an investment in trusting themselves to minimise my claim's importance, hopefully it will provide enough of a niggle to at least provoke a restless night's sleep.

I can only hope.

The first premise of my argument is the assumption that 'rationality' likes 'consistency'. As to what I mean by 'consistency', I mean whatever I mean such that it suits me, basically. To try and pin it down, I sort of mean that the moment we see one counterexample to a theory – that is, an inconsistency – it is not 'rational' to hold that the theory is true in all cases. For example, to hold to the theory that 'all swans are white' after seeing a black swan is not in accordance with rationality.
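For the formally inclined (this formalisation is mine, not Haidt's, and not anything I need for the argument), the swan point is just the standard falsification schema – a universal claim and its counterexample cannot both be held:

$$\forall x\,\big(\mathrm{Swan}(x) \rightarrow \mathrm{White}(x)\big) \quad\text{versus}\quad \exists x\,\big(\mathrm{Swan}(x) \land \lnot\mathrm{White}(x)\big)$$

Once you have seen the black swan, 'consistency' in my sense obliges you to drop the universal claim, however well it has served you up to that point.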

My example is simple. Optical illusions are common. As such, we know that sometimes we 'see' things that are not real. The specific example I want to refer to is that of a stick held half into a pond – when half-submerged, the stick looks bent, even though it's not. Refraction is such a common phenomenon, and many of us have a basic enough understanding of the physics of it, that this illusion doesn't worry us. The first time we see an optical illusion, though, isn't some part of our reaction: "shit, how can I ever believe my eyes again?" Our acknowledgement of the existence of optical illusions – the acknowledgement that, yes, sometimes our eyes 'lie' to us – builds into our worldview a licence to keep trusting them despite the inconsistency. In a way, I would suggest that this is not purely 'rational'. Trusting our eyes is well supported, intuitive and has worked well for us so far, but none of these are purely 'rational' reasons for doing so. They are pragmatic, intuitive and, in some ways, emotional.
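For anyone who wants the physics made explicit (this is standard optics, nothing from Haidt's article): the bent-stick illusion follows from Snell's law, which relates the direction of a light ray on either side of the air–water boundary:

$$n_{\mathrm{air}}\sin\theta_{\mathrm{air}} = n_{\mathrm{water}}\sin\theta_{\mathrm{water}}, \qquad n_{\mathrm{air}}\approx 1.00,\; n_{\mathrm{water}}\approx 1.33$$

Because the light bends at the surface, our eyes trace the rays from the submerged half of the stick back along straight lines to a place where the stick isn't. The eye reports the light faithfully; it is the inference from light to object that goes wrong – which is rather my point.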

Now, I am not suggesting that we should all get labradors or walk around with sticks (unless you're blind, obviously). Rather, I am suggesting that in an extremist, push-it-to-its-logical-conclusion sense, it is not rational to trust that your vision is reporting 'truthfully' to you 100% of the time – we have evidence that it's not. In a way, by our experience with 'The World' and optical illusions, we have supposedly built up a pretty good idea of when to trust our eyes and when not to. When we see the bogeyman coming out of the closet, or a snake on our towel rack, we know that our eyes are 'playing tricks'. As such, I would suggest that, whether we explicitly think about it or not, we know not to trust our eyes 100% of the time. Or to shorten that sentence, just for brevity, we know not to Trust our eyes. 

In a similar vein, I believe there is evidence that we should not Trust our reason. If, as I am assuming, rationality relies upon consistency, one instance in which reason does not do what we think it does – one instance in which the reason we give for judging something as good is not the actual reason – is enough evidence to suggest that, in a binary sense, we should not Trust our reason. Surely we can acquire enough anecdotal evidence – by second-guessing those around us, or upon brief introspection as to why the good-looking woman/man is somehow smarter, funnier and, let's face it, more Good than the ugly, hairy one – to find one instance in which we bullshit to ourselves. And believe it.

I could argue further along this vein, but I think those who don't like my conclusion have already disagreed by this stage. Perhaps my premise is too convenient?

Rather than respond to such imagined responses – the above view is what I happen to believe at the moment alongside what I consider to be a fairly rational basis for doing so and I am not really interested in adopting other people's intuitions and assumptions (that's their job) – I would like to flesh out what I mean by my conclusion.

I am aware that, especially by using the example of vision, I have opened myself up to someone agreeing with my argument and concluding: "of course we shouldn't trust ourselves 100%. That idea has been around for a long time and is obvious. I want my 20 minutes back. Idiot." When all is said and done, I'm pretty happy with that response. What it gives me is an in – by agreeing to this argument, my imaginary foe then opens themselves up to the question of to what degree they are rationally justified in trusting themselves. As soon as you accept that it is not wise to trust your reason 100% of the time, you then have to justify how you can verify whether or not you should trust yourself at any given time. Additionally, it leads to the question of whether 'trust' is a continuous or binary concept – can you really trust something 'a little bit'? Isn't that like being 'a little bit pregnant'?

As such, I'll leave that there for now and move to the third section, where I will try to vent and express my frustration with this week's particular Philosophy Monday experience.

The Third Section (read this – it's not really about Philosophy but my personal angst. For a change)

I am going to stop trying to be clever and diplomatic and will instead attempt to express why, even though the content of this week's Philosophy Monday was right up my alley – I was reading the paper thinking "Yes, this is exactly the sort of discussion I want to have. Finally!" – I was incredibly frustrated. I was also mildly upset and full of self-doubt.

What else is new?

If you have read the first two sections, thank you. If you haven't but are reading this here right now, also thank you.

Not quite as much as the people who read it all, mind you, but still thanks. 

In case you did skip bits, basically, I think I have a pretty solid argument, within the realms and by the rules of rationality and reason, to support the idea that we shouldn't, at the very least, completely trust rationality and reason. Essentially, I think that, if you are going to consistently and properly apply 'Rationality and Reason', then they undermine themselves – rationality supports the conclusion that it shouldn't be trusted.

To be clear, I have felt like this for a long time. I have grown up skeptical. I was never indoctrinated into any organised system of belief other than an unrealistically high standard for knowledge – one which concludes that there really are no answers. This is my intuition and bias – I think 'knowledge', as I assume 'real philosophers' understand it and want to justify things by, is too high a bar for us to ever reach. I have arguments as to why I think this definition of knowledge is 'intuitive' and 'widely-held' and therefore valid for use in a general skeptical argument, but they are not relevant here. Rather, it is enough to say that this is the intuition I cannot seem to shake. My frustration with Philosophy, with Psychology, with University and with the world in general is that this intuition and its related beliefs seem to be 'distasteful'.

It seems like the goal of Philosophy, a lot of the time, is to try and prove that people who think like me are, somehow, wrong. The best thing 'people like me' have going for them is that Philosophy has been around for thousands of years and people like me are still unconvinced. I take that as Skeptics 1 - Other Philosophers 0.

While winning is nice, what this means is that my views, from the outset, are not, I feel, treated as 'valid' – my views are the ugly alternative that the institution of Philosophy is trying to avoid. As such, being generally thoughtful, philosophical, alienated and skeptical before going to University, I have been asking 'the wrong questions' throughout my whole Philosophy degree. The thing that is most frustrating with this is that I do not feel that I have ever received a good, rational argument to convince me that my belief is wrong. What people who argue with me seem to do most frequently is misunderstand what I'm saying (fair enough) or just disagree for no good (as far as I can tell) reason.

Of course, I think this is because there isn't one. I think that my arguments are so self-referential and undermining that they can't be disproved. I'll agree that skepticism may not seem 'helpful' or, in a lot of ways, interesting, but neither of these reasons can distract me from the fact that, by my understanding of rationality and reason, it seems right.

I'm not saying that my arguments are awesome and that I am the smartest man alive and no-one else can match my intellectual might – not much, anyway. I think that my way of thinking is a glitch – a flaw in the system of rationality and reason that everyone seems fine with ignoring, because facing it doesn't seem to allow them to do Philosophy in the sense that they understand it. My problem is not that I am a super-smart genius, but rather that I have one dead pixel on my screen and I can't help but stare at it.

Figuratively, that is. Also, I think this pixel takes up the whole screen. 

Anyhow, that is my general struggle in Philosophy (and life). Now for my specific frustration from the latest Philosophy Monday.

As I roughly outlined above, there are flaws in the structure and tactics of Haidt's argument. My first frustration in class was that, as a class, we seemed content to pick apart these flaws and, I believe, mistake a poorly-presented argument for a poor argument. In the context of the class, as I am a competent Philosophy student whose bread and butter is undermining and picking apart arguments, I was all for pointing out Haidt's missteps. What annoyed me was that, because I was criticising Haidt's execution of his argument, I felt I was misunderstood and interpreted as disagreeing with Haidt. To compensate, I had to go all militant about making my view clear – that I agree with Haidt and believe, ultimately, that it is not rational, and is somewhat unwise, to trust ourselves.

Especially Philosophers.

First of all, this makes me uncomfortable, because it makes me out to be a weirdo. The common reaction to my making clear my extreme skepticism and personal mistrust often goes along the lines of "how do you function like that?" (Not well, apparently.) In addition to my perhaps-imagined weirdo-ness, the lecturer made a comment – joking, I believe – in response to my 'joking' attack on Philosophers for believing our own bullshit: that I was obviously an honours student near the end of my course, stressed by having to write a lot of essays.

I didn't realise it until a little bit after, but this annoyed the shit out of me.

Two reasons. One, it's wrong. I am an honours student, but I am doing it extremely part-time, have no real stress due to Uni (other than feelings of annoyance and alienation) and have had my 'problems' with self-trust, in some way, pretty 'consistently' for as long as I can remember (at least 15 years or so, probably more). So there.

Two, in an incredibly irritating twist of irony (or something), the lecturer, in trying to undermine my argument in one way, ended up supporting what I was arguing for. She was, essentially, suggesting that my views, judgements and beliefs were due chiefly to my emotional state – not to my reasoning capacities.

That's what I was saying.

If this meant that she agreed with me, that would be OK. I would agree – I definitely feel and think the way I do because of my upbringing, my experience, my experiences of alienation, etc. I believe that all these things influence my reason. Quite a lot. While the picture she was painting is not, I believe, accurate – that I childishly hate philosophy because it's hard work – the general gist is correct. What really got my goat right up there, however, was the fact that, essentially (again, because of my various complexes and insecurities), I felt she was treating me like an idiot. Rather than responding to me as if I had a valid viewpoint and a rationally-justified argument, she was attempting to undermine my views by saying 'well, you're grumpy today, aren't you?'

I could argue that this irritates me because it's inherently inconsistent but, really, that's not it. It just made me feel shit and unappreciated, and it fed into all my feelings of alienation and frustration.

So, I'm a contrary little shit.

From these feelings of alienation and frustration, I then began to worry that maybe I'm not really special or different. That in itself shouldn't be too much of a problem, except for the fact that I have established being special and different as central to my sense of self.

Whoops.

As such, I am paranoid that I am merely contrary because, essentially, I get off on not fitting in. This worry is particularly relevant to me and my present set of life crises because, with relation to my about-to-become-awesome music career, being contrary, not fitting and being antisocial will most likely stop me from reaching 'My Full Potential™'. If I am different, that's ok, I will have to trust myself and, if I am different in a good way, it will work out in the end. If, however, I am just boring and 'normal' (like YOU) (no, that was a joke. Sorry) then I am kidding myself and stabbing myself in the foot. And face. And arm. And... whatever.

Although, 'boring and normal' doesn't really exist – everyone is special.

I just need to feel MORE special.

So, I'm a contrary little shit.


Wow. That was a lot of that and it's really late. I will finish now.


Love from Jacob.