From the collective to the individual

IN some parts of Western society individualism rules supreme and reaches its apogee in neoliberalism in which the only relation that exists between individuals is transactional. This relationship is encapsulated within the mythical figure of Homo Economicus who is supposedly driven solely by rational self-interest and becomes a consumer and spectator in society rather than a participant. Philosophically, it is expressed in its purest form as methodological individualism, which asserts that all attempts to explain social or individual phenomena are to be rejected unless they are couched wholly in terms of facts about individuals. The problem with this position, however, is that it excludes any individual for whom a sense of community is constitutive of how she perceives her being.

It should be said that other political philosophies are available and one such is provided by the economists Paul Collier and John Kay in their book Greed is Dead. They argue that extreme individualism ‘is no longer intellectually tenable’, raising the question as to whether it was ever intellectually tenable. It could be argued, of course, that when the luminaries of the Enlightenment suggested that we should use a bit more reason in our lives, they were also promoting the rights of the individual against the oppressive State. Perhaps this was necessary at the time – and the rise of universal human rights still rightly protects the individual in this sense – but perhaps now the dominance of the individual has gone too far and we need to address the imbalance because, as Collier and Kay point out, ‘human nature has given us a unique capacity for mutuality’.

Is greed dead?

The importance of mutuality was also stressed by the Russian thinker and anarchist Kniaz Petr Alekseevich Kropotkin in his 1902 work Mutual Aid: A Factor of Evolution, in which he writes ‘besides the law of Mutual Struggle there is in Nature the law of Mutual Aid, which, for the success of the struggle of life, and especially for the progressive evolution of the species, is far more important than the law of mutual contest’. The use of the word ‘progressive’ here is significant because it sets Kropotkin up in a communitarian tradition that is very different from that of Collier and Kay, as we shall see.

Kniaz Petr Alekseevich Kropotkin

Nevertheless, the authors do agree with Kropotkin that biology, ‘far from lending support to the premises of individualism, undermines them’, and they point to myriad organisations that are neither individualistic nor statist, including families, clubs and associations as well as, one might add, political parties and trade unions. Adam Smith, primarily a moral philosopher but regarded, on the strength of The Wealth of Nations, as the founding father of modern economics, fully acknowledged the complexity of humans and their relationships – and yet his work has been trivialised by neoliberals into a celebration of the invisible hand of self-interested individuals, a metaphor that he used just once in his otherwise rich and rewarding book.

Adam Smith

Collier and Kay make their position clear when they write: “And we claim that agency – moral, social and economic – is not polarized between the individual and the state, but that society is made up of a rich, interacting web of group activities through which individuals find that fulfilment.” It’s probably fair to say that the authors fit firmly in what might be called the conservative tradition of communitarianism, alongside Edmund Burke’s ‘little platoons’ and Hegel’s ‘civic community’, although they would probably balk at Hegel’s valorisation of the State. Indeed, their central argument is that there has been too much centralization in the State. Even our much-loved NHS comes in for severe criticism and they argue that health care is ‘well suited to decentralized provision’, although they don’t say how this would happen without health care descending into a postcode lottery.

As we have seen, this is very far from the communitarianism of Kropotkin or, indeed, his fellow anarchist Noam Chomsky who wrote in his book On Anarchism that while he looks forward to a post-capitalist society and the ‘dismantling of state powers’ he also acknowledges that ‘certain aspects of the state system, like the one that makes sure children eat, have to be defended – in fact defended very vigorously’. And Karl Marx explored the notion of communitarianism, which he rooted in our material being, in that the ‘sum total of these relationships of production constitutes the economic structure of society – the real foundations on which rises a legal and political superstructure and to which correspond definite forms of social consciousness’. Famously in A Contribution to the Critique of Political Economy he wrote: “It is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness.”

Lest we forget, and in response to the individualism of Martin Buber in a previous blog, religion can also deliver a more communitarian approach. In his book The Way of St Benedict the former Archbishop of Canterbury Rowan Williams writes: “Or, to pick up our earlier language, it is the unavoidable nearness of others that becomes an extension of ourselves. One of the things we have to grow into unselfconsciousness about is the steady environment of others.”

St Benedict

One of the most striking aspects of communitarianism, then, is that it appeals to thinkers across the political, moral and religious spectrum. And one of the most appealing aspects of Greed is Dead, at least for Salisbury Democracy Alliance, is that the authors’ decentralizing communitarianism leads them to regard Citizens’ Assemblies as an ‘interesting innovation in democratic practice’.

What needs to be stressed in all of this, however, is that communitarians are not advocating that the collective should crush the individual but, rather, that given sympathetic conditions the individual, properly understood, emerges out of the collective, is shaped by the latter but, in turn, given the opportunity, helps to shape the collective. As Collier and Kay write, it is through the collective that individuals find their fulfilment.

To ambiguity and beyond!

I and Thou is a concept introduced by the philosopher and theologian Martin Buber in his book Ich und Du, usually translated as I and Thou. Buber believed that the two basic word pairs – I-Thou and I-It – are essential to understanding how one responds or communicates to another.

HOW the collective emerges out of the individual or how the individual emerges out of the community are questions that go to the heart of modern society. Of course, two possible solutions are either that there is no such thing as community or that there is no such thing as the individual. But for the purposes of this blog we will assume that both exist and attempt to work out how, if at all, the individual becomes part of the collective without losing an ethical perspective.

In I and Thou Martin Buber, as the title of his book implies, starts with the individual. For him the social means ‘the community that is built up out of relation’ but he is aware of the problem posed by the ‘collection of human units that do not know relation – modern man’s palpable condition of lack of relation’. Bearing in mind that this book was written in the 1920s, one wonders what Buber would think about today’s atomized society. Be that as it may, Buber writes: “Then we find only the one flow I to Thou unending, to one boundless flow of the real life.” But then Buber argues that the ‘religious man stands as a single, isolated, separated being before God, since he has also gone beyond the state of the moral man, who is still involved in duty and obligation to the world’. The moral man is ‘still burdened with responsibility for the action of those who act’. Here Buber becomes somewhat opaque as he argues at the same time that although the individual is not ‘freed from responsibility’ he has nevertheless ‘abolished moral judgements for ever’.

Martin Buber

If this sounds familiar it may be because Buber was influenced by Soren Kierkegaard, for whom God becomes dispensable if He is drawn into the ethical sphere and will then, eventually, disappear. Interestingly, Buber references Nietzsche and it’s hard not to draw a parallel with his notion of Beyond Good and Evil, which transfers amorality from God to a post-God world, a world in which morality has no meaning unless it is ruled by the Ubermensch, for whom morality means whatever is good or bad for him.

Friedrich Nietzsche

Another profound philosophy of the individual is existentialism, although it is often argued that Kierkegaard and Nietzsche were its precursors. Jean-Paul Sartre was acutely aware of the problem that self-creation and individual freedom posed for ethics. He always intended to address this problem after his magnum opus Being and Nothingness but he never did. It was left to his lover Simone de Beauvoir to tackle it, which she attempted in The Ethics of Ambiguity, where she claims that it is the essential tensions we experience in life – the chief of which is that between life and death – that lead to the place where ethics, politics and metaphysics intersect. What this means concretely is that at the very point that we become aware of our own existence we also become aware of sharing it with others so that, for example, ‘I’ cannot ‘will my own freedom without, at the same time, willing the freedom of others’.

The Ethics of Ambiguity

John Rawls in his A Theory of Justice also attempts to extrapolate from the individual to the sort of society he or she would choose if they had no idea what their position in that society would be. But as we have seen in a previous blog he does this in a question-begging sort of way by excluding any kind of communitarian solution in which a communal role is, in part at least, constitutive of the individual’s identity.

It seems, then, that the very notion of ethics is precarious when one starts with the individual. Either it disappears with the appearance of God (Buber and Kierkegaard) or it disappears with the death of God (Nietzsche). Alternatively, it rests in the restless world of ambiguity (de Beauvoir) or begs the question against anyone who proposes a more communitarian approach.

In the next blog we will focus on communitarianism to see if it can do any better and move on from ambiguity.

Long live the Idiot!

OVER the past 30 years or so humanity has been stealthily infantilized as advertisers, powerful lobby groups, thinktanks, governments and social media giants have infiltrated our brains. Whether it’s the relentless pursuit of instant gratification or the echo chambers of Facebook, our willingness to surrender our privacy and even the direction of our lives has become truly Kafkaesque.

Ever since Daniel Kahneman’s seminal book Thinking, Fast and Slow we have known that our thinking involves two modes. According to Kahneman: “System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control” while “System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.” Use of System 2 can help to improve on judgements and counter bias but ‘it is reluctant to do so because it is indolent’ and ‘little can be achieved without considerable effort’. It does not take much of a leap of imagination to see that the advertisers et al have grasped this aspect of our brain and successfully tapped into System 1 while systematically discouraging the use of System 2, so that we crave the instant gratification of buying now and demand constant unnecessary upgrades.

One might be tempted to ask whether any of this matters. Many would argue that the answer is an emphatic YES because it has a direct impact on our freedom, agency and our particular form of representative government. In The Age of Surveillance Capitalism Shoshana Zuboff traces how the likes of Google suddenly realised that they could bundle up all the waste data that they accumulated and sell it on to whoever would find it useful. It was the psychologist B. F. Skinner who realised the political value of all this when he ‘viewed the creative and often messy conflicts of politics, especially democratic politics, as a source of friction that threatens the rational efficiency of the community as a single, high functioning super-organism’.

It is the serendipitous marriage between neoliberalism and the social media giants, some would argue, that has led to what the philosopher Byung-Chul Han calls ‘psychopolitics’ in his book of the same name.

Byung-Chul Han

Han describes how neoliberalism ‘makes citizens into consumers’ and how as consumers ‘today’s voters have no real interest in politics – in actively shaping the community’. And participation now amounts to little more than ‘grievance and complaint’, which has given rise to ‘spectator democracy’. Leaving aside the question as to whether we actually have a democracy, rather than what might more accurately be described as representative government, this combination of neoliberalism and social media means ‘we are entering the age of digital psychopolitics’, which involves ‘passing from passive surveillance to active steering’.

Han points to how Big Data now enables political parties to micro-target voters. As Channel 4 News recently highlighted, during the 2016 US presidential election the Trump campaign acquired detailed information on 200 million Americans and was able to target potential Clinton-voting black citizens with adverts designed to dissuade them from voting. In many of these areas the black vote collapsed. Han drives the point home when he states that ‘Big Data can even read desires we do not know we harbour’, entering the ‘collective unconscious’ territory of Carl Jung.

So, what, if anything, is to be done? Zuboff argues that we should be the grit in the system, slowing down the smooth transition to a hive society by declaring ‘no more’. Han takes this one step further by urging us to become idiots, presumably in the mould of Dostoevsky’s ‘positively beautiful man’ in The Idiot, who clashes with the emptiness of his 19th-century Russian society. “The idiot is idiosyncratic,” writes Han, setting the idiot in conflict with ‘smart’ devices and their acolytes. “Idiotism stands opposed to the neoliberal power of domination, total communication and total surveillance.”

The Idiot – Prince Lev Nikolayevich Myshkin

The idiosyncratic idiot has nothing to do with the attenuated Homo Economicus of neoliberalism, in which the only relationship we have with others is transactional. This is the atomized society which makes it easy for Big Data to isolate and inhabit. Rather, the idiot is the genuine embedded citizen who emerges from his or her social being. It is this individual that will become the grit, that will side with Doug against Kevin Bacon’s annoying EE cypher, will declare ‘no more’ and reclaim Big Data for the people. Long live the Idiot!

The ragged trousered Classicists

The Athenian Agora

Whenever Boris Johnson makes a reference to the Classics he hopes to demonstrate his membership of the intellectual elite. At the same time, however, he also provides evidence of the way that the Classics have been appropriated by the ruling clique. Indeed, it seems as though the Classics are largely confined to expensive private schools and, perhaps, surviving Grammar schools but have been eliminated from vast swathes of State schools as being in some way irrelevant.

But in a remarkable new book by the historians Edith Hall and Henry Stead called A People’s History of Classics it is argued that this was not always the case. The authors write: “Our book refutes wholesale the argument that classical education must be intrinsically elitist or reactionary; it has been the curriculum of empire but it can be the curriculum of liberation.” It is their contention that it was fear of the rebellious influence of the Classics that led to their being suppressed among the working class. So, while Edmund Burke feared that the ‘pearls of intellectual culture would be besmirched by the swinish masses’, Thomas Hobbes ‘frets that classical literature inspires people to revolutions’.

In the early part of the book the authors show how the Classics helped to construct the identity and ‘psychological experience of substantial groups of working class Britons’, which inspired educational institutions like the Workers’ Educational Association, the Council of Labour Colleges and the Plebs League. But the authors also point out that the Classics have been inspirational for individual members of the working class and that Classical material features in ‘poor people’s expression of class dissatisfaction and frustration, disaffection, anger, deprivation, psychological trauma (even diagnosis of insanity) and dispossession’. Many British radicals were inspired and motivated by the Greeks and Romans between the ‘American and French Revolutions and the collapse of the Chartist movement’. Thomas Paine, author of the Rights of Man, for example, while critical of the use of the Classics by the ruling clique, nevertheless thought that a ‘grasp of human history, including the markedly political history and developed secular ethics of ancient Greece and Rome, were essential to modern democrats’ understanding of the past’. And for Paine the Greek philosophers needed to be read ‘because they recommended benevolent moral systems’. In short Hall and Stead claim that the ‘democrats of the 1790s…were immersed in and inspired by ancient philosophers and history read in translation’.

Anyone who has been involved in the trade union movement will know about the influence of the Classics on the multifarious and colourful banners paraded on marches – the Tolpuddle Festival, normally held in July, is a good place to see many of them from all over the country. Indeed, the authors themselves trace the ‘proud and colourful use of figures from classical mythology and history in Trade Union banner art and in emblems of positive self-definition amongst craftspeople’ including the ‘ironies of the intense relationship between mining and the ancient world’.

This is a serious and scholarly work of history designed to liberate the Classics from the ruling clique for ‘progressive and enlightened causes’. “Our book, therefore, is not just about the past, but a rallying cry to modern Britain to support the case for the universal availability in schools of classical civilization and ancient history.”

One of the most obvious examples of how the Classics have influenced working class culture is the remarkable book The Ragged Trousered Philanthropists by the Irish author Robert Noonan, writing as Robert Tressell, and drawn from his experiences as an underpaid painter and decorator. But at a higher level the book is clearly based on Plato’s allegory of The Cave in The Republic, which is used to explore the idea of workers’ false consciousness.

Plato’s allegory of the Cave

Tressell’s book starts: “The house was named ‘The Cave’,” and it is where much of the action takes place. The book essentially adapts Plato’s allegory to analyse the workmen’s ‘sedation by alcohol and unthinking reproduction of the false ideas required to perpetuate their oppression’.

The ancient Greek philosophers were renowned for their clear-sighted take on humanity and, in the case of the Stoics at least, promoted the idea that a ‘decent life is about cultivation of one’s character and concern for other people (and even Nature itself) and it is best enjoyed by way of a proper – but not fanatical – detachment from mere worldly goods’, as Massimo Pigliucci has it in How to be a Stoic. In this sense the Greek philosophers could be seen to counter the extreme individualism of neoliberalism, its insularity and its tendency to turn citizens into consumers. However, it is something of a stretch to conclude that the study of the Classics could have had the sort of revolutionary implications so feared by Hobbes. And it is thin gruel indeed when compared with the red meat of Marx and Engels in The Communist Manifesto in which they declare: “The proletarians have nothing to lose but their chains. They have a world to win. Working men of all countries, unite!”

Marcus Aurelius – Roman Emperor, general and Stoic philosopher, but hardly a revolutionary!

But if Hall and Stead overreach themselves with regard to the revolutionary aspects of the Classics, they make a very good case for bringing classical education into the mainstream and wresting it from the clutches of people like Johnson.

The meaning of life is not 42!

IMAGINE you are on your smart ‘phone (assuming you have one) and someone comes up to you and asks whether you believe in electricity. You might be forgiven for looking askance at this person and assuming that they were mad. But think about this for a moment – 500 years ago Europeans would have been equally nonplussed had they been asked whether they believed in God. It is this sort of insight that marks out A Wonderful Life by the philosopher and psychology researcher Frank Martela.

God’s presence was everywhere in the life of our ancestors. “Theirs was a world dominated by the supernatural – spirits, demons and magic,” writes Martela. “The existence of God and spirits wasn’t a question of belief but an immediate certainty.” Just like electricity is today. However, if you ask many people about God now it is unlikely to elicit quite the same degree of certainty. It is often thought that people began to seriously question the meaning of life after the publication of Charles Darwin’s On the Origin of Species. But the findings of geologists had been undermining the veracity of the Bible for years and as early as 1834 the essayist, satirist and historian Thomas Carlyle is thought to be the first person in the English-speaking world to coin the term ‘meaning of life’ in his extraordinary book Sartor Resartus (meaning tailor re-tailored). Indeed, Carlyle makes his protagonist Herr Teufelsdrockh question everything. “Doubt had darkened into Unbelief,” says he; “shade after shade goes grimly over your soul till you have the fixed, starless, Tartarean black.” Today, we might call this an existential crisis and in fact Soren Kierkegaard, often thought of as a forerunner of existentialism, was also active at this time.

Soren Kierkegaard

For Martela the ‘combination of losing touch with religion through the rise of the scientific world view plus the Romantic notion that, to truly live, you must experience your life as highly meaningful, formed a perfect storm that gave rise to the concept of the existential crisis and conditions endemic to our modern culture today, a society where the lack of meaningfulness can become all-consuming’. The rise of the cult of the individual also contributed to the search for meaning. The sense of the autonomous individual, however, is a relatively recent phenomenon. Indeed, as Martela notes, ‘the whole idea of a private inner self beyond one’s public self started to appear in the literature only from the 16th century onwards’.

According to Martela, what we actually need to do to make sense of all this is to shift from the ‘meaning of life’, which is ‘about something beyond life in question justifying its meaningfulness’, to ‘meaning in life’. And once you make that move then you find that you already have ‘many relationships, experiences, and emotions in your life that already feel meaningful to you regardless of a rational explanation as to why’. And it is not Sartre who expresses this but Simone de Beauvoir, who emphasises that we’re ‘already situated’. In an introduction to Beauvoir’s The Ethics of Ambiguity it is claimed that she ‘emphasises the inter-subjective dimensions of existence’ and argues that ‘I cannot will my own freedom without, at the same time, willing the freedom of others’, echoing Kant’s categorical imperative.

From all this Martela identifies ‘autonomy, competence, relatedness, and benevolence’ as the key factors in what makes for a meaningful life. These are fine as far as they go. However, it could be argued, as Daniel Dennett does in From Bacteria to Bach and Back, that many animals display competence but it’s the injection of comprehension that introduces consciousness, which is a central feature of humanity if not exclusive to it. Again, ‘relatedness’ is a thin word for what might better be described as our social being. Indeed, for Rutger Bregman in Humankind – A Hopeful History our social being was our super-power over the more individualistic Neanderthals, whose superior intelligence could not spread knowledge as widely or as quickly as the more social Homo Sapiens could. It is something that Martela does at least recognise in passing when he writes that ‘we need to work together to strengthen the forms of community available to us’. ‘Benevolence’, too, is a vague word for what might better be described as altruism, which some evolutionary biologists – including Richard Dawkins in The Selfish Gene – argue forms part of our genetic make-up, even if it competes with our more ego-centric traits.

In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy a supercomputer designed to find the answer to the meaning of life eventually replies ‘forty-two’. As Martela says, the answer points to the ridiculousness of the question itself.

Clearly there are still many people who do draw meaning from a higher authority but for those who don’t, equally clearly, finding meaning in life – rather than of life – is a more fruitful quest. But there is a danger that starting from the perspective of the individual and working from there to inter-subjectivity is starting from the wrong end. Contrary to our current obsession with the individual – exemplified by the metaphysical Homo Economicus – it could be argued that our meaning starts with our social being and the problem then becomes how we create the conditions in which the full potential of the autonomous citizen can be realised as it emerges from the collective – without abandoning the latter.

What is the Self?

IT’S one of those questions that has intrigued philosophers for centuries. Once you have stripped away things like your name, address and occupation, what is left? Is there a core essence that is unmistakably the Self? Or, as some Buddhists say, is the Self an illusion we have to cure ourselves of? As Stephen Batchelor writes in Buddhism Without Beliefs: “There is no essential me that exists apart from this unique configuration of biological and cultural processes.” When Batchelor searches for his Self in meditation ‘I find it is like trying to catch my own shadow’, rather like Gilbert Ryle’s ‘ghost in the machine’, which he uses in The Concept of Mind to ridicule Descartes. Nevertheless, Batchelor also concedes that even if the Self is not something ‘neither is it nothing…It is simply ungraspable, unfindable’.

The ghost in the machine

Notions of the Self certainly drift into themes revolving uneasily around dualism of the material and the immaterial – and when the Self is associated with the latter it sounds something like a soul. Not so for materialist Galen Strawson in The Subject of Experience. For him the Self is a ‘conscious subject of experience’, which most certainly does not involve ‘some sort of belief in the immaterial soul, or in life after death’. He adds: “Philosophical materialists who believe, as I do, that we are wholly physical beings, and that the theory of evolution by natural selection is true, and that animal consciousness of the sort with which we are familiar evolved by purely physical natural processes on a planet where no such consciousness previously existed, have this sense of the mental as strongly as anyone else.” But for him the mental self is ‘a thing or entity, in some robust sense’. What is interesting about Strawson is that not only is he a materialist in a field that is dominated by immaterialists, he also argues that ‘one can have a full sense of the single mental self at any given time without thinking of the self as something that has long-term continuity’, a view that certainly flies in the face of mainstream thought in which continuity of something, whether bodily or mental, is often thought to be vital. Strawson can even accommodate Buddhist thought. “I believe the Buddhists have the truth when they deny the existence of a persisting mental self, in the human case, and nearly all of those who want there to be a self want it to be a persisting self.” At the same time he believes the famous metaphor of the ‘stream of consciousness’, first proposed by William James in The Principles of Psychology, is false. In contrast Strawson argues that the ‘phenomenal form of our consciousness is that of a gappy series of eruptions of consciousness as if from a substrate of non-consciousness’.

What does it mean to have a word with yourself?

For Strawson, who uses his own experiences of meditation to inform his argument, selves exist as ‘subjects of experience that are single mental things’ and he does agree with James when he writes that ‘the same brain may sub-serve many conscious selves’ in what Strawson likens to a string of pearls. It has to be said, as Strawson acknowledges, that this view of the Self may not be sufficient for many people who want there to be a Self. It is also frighteningly fragile: “If, finally, someone says that any sense of the self as a thing may dissolve in the self-awareness of meditation, I will agree, and reply that in that case self-experience, of the kind that is at present of concern will also have dissolved (this being, perhaps, after all the aim of meditation).” It could be argued, however, that it is difficult to reconcile this fragility with Strawson’s claim that the Self or, as he prefers to put it, the ‘subject of experience’, is a ‘physical object’ and that even if it ‘may be short-lived…it is nonetheless real, and it is as much a physical object as any piano’. The obvious objection to make is that pianos are not so easily dissolved.

But having made the case for the transience of the Self, Strawson then argues that one’s sense of permanence or transience may, after all, be psychological with some Diachronic people having a strong sense of Self over time while Episodics – like Strawson himself – do not. At the same time, however, he floats the idea that we might exist on a spectrum between the Diachronic and Episodic.

Are you Episodic or Diachronic?

There is much more in this book to make one ponder and puzzle, including an argument against the common claim that we are nothing but a narrative of ourselves that we create for and of ourselves. For Strawson ‘we live beyond any tales that we happen to enact’ and he believes that story-telling in this sense can lead to an ‘inauthentic view of ourselves’. The correct position is one of ‘discovery, not creation or constitution’, although one might wonder whether there is much to discover if the subject of experience can be dissolved so easily, like a will-o’-the-wisp. And it’s interesting also that although Strawson is not convinced by the narrative view of the Self he never tackles the problem of the infinite regress that besets the story-telling view. After all, if there is a narrative there must be a narrator…and so on.

But it is the shifting nature of the subject of experience, which is at once a robust physical object and a fragile entity, that makes one feel most queasy. It is a bit like Schrödinger’s cat, which is both alive and dead until it is observed. Under Strawson the Self becomes both subject and object and seems to disappear in a puff of its own logic, as Douglas Adams says somewhere in The Hitchhiker’s Guide to the Galaxy.

The dark night of Buddhism

IT’S rare for anyone to look at the dark side of Buddhism. We all know about the violence that can be engendered by various religions but people disillusioned by the great monotheistic religions often turn to what they perceive to be the gentler vision of Buddhism. Indeed, there are some people, including Stephen Batchelor – author of Buddhism Without Beliefs – who argue that Buddhism isn’t even a religion and, rather, should be thought of as a guide for life. So, how are we to account for the way Buddhists treated the Muslim Rohingya in Myanmar?

There have also been well-documented incidents during which individuals have appeared to experience severe mental illness, particularly after periods of intensive meditation in retreats. In their book The Buddha Pill Doctors Miguel Farias and Catherine Wikholm unearth evidence showing that many people experience at least one negative effect – and that a significant number have ‘profoundly adverse effects’ – following retreats. One particularly troubling case involved a psychiatrist called Dr Russell Razzaque who found himself ‘descending into a deeply meditative state; I somehow travelled through the sensations of my body and the thoughts in my mind to a space of sheer nothingness that felt, at the same time, like it was somehow the womb of everything’. Although he initially regarded this as a blissful state, things deteriorated in the following days as he was pulled in the opposite direction, into a manic state. Eventually, he managed to stay grounded although, as the authors point out, most people who meditate are not experts in psychiatric diagnosis.

According to the authors, western practitioners are ‘aware that not all is plain sailing with meditation’ – they have even named the emotional difficulties that can arise from their meditative practice the ‘dark night’. Part of the problem seems to be the collapse of the ‘narrative of the self’ (the notion that the self is a narrative is itself contested, and will be explored in the next blog), which can result in a ‘sense of vertigo rather than blissful realization of the emptiness of the self’. And research has shown that adverse effects are not always confined to intensive retreats.

And then we come to the knockout punch – ‘meditation wasn’t developed so we could lead less stressful lives or improve our wellbeing’. They add: “Its primary purpose was much more radical – to rupture your idea of who you are; to shake to the core your sense of self so that you realize there is ‘nothing there’.” This is not how we are encouraged to see meditation in the West where ‘meditation has been revamped as a natural pill that will quieten your mind and make you happier’.

Of course, in the wider context Buddhism is often regarded as being a peaceful practice and, like the early Christians, as a turn away from violence. “But a cursory glance at the news broadcasts about Buddhist countries challenges this peaceful image,” the writers observe. And this brings us back to the question at the beginning of this blog about the treatment of the Rohingya. Just as Christianity developed the notion of the Just War so Buddhism developed ‘its own theory of compassionate killing’. In a bizarre twist in Buddhist thought – which Batchelor might argue coincided with it becoming seen as a religion – one consequence of the idea of emotional indifference and selflessness is that practitioners are not ‘morally responsible for their actions because they act without self-interest’. In other words ‘without clear ethical rules that very spiritual selflessness can serve all kinds of ill purposes’, as happened with Japanese Buddhism during World War Two.

According to Zen priest and historian Brian Victoria the ‘Japanese military used Zen Buddhist ideas and meditation techniques’ to support the war. He shows how Buddhist priests regarded ‘warfare and killing’ as ‘manifestations of Buddhist compassion, selflessness and dedication to the Japanese emperor’. There is literally nothing to lose in killing or dying once you realize the ‘emptiness of the self’, an idea that becomes the equivalent of the promise of eternal life in religions like Islam and Christianity.

It is fair to say that all of this comes as a shock to anyone sympathetic to Buddhism and meditation, including the authors of The Buddha Pill. But they are not about to dispense with meditation. They write: “Perhaps meditation was never supposed to be more than a tool to help with self-knowledge; one that could never be divorced from a strong ethical grounding, who we are and the world we live in.” And as Batchelor writes: “A culture of awakening cannot exist independently of the specific social, religious, artistic, and ethnic cultures in which it is embedded.”

Furthermore, the authors write of meditation: “If we admit its frailties and limits, that it takes other things for the techniques to make real positive change – the right intentions, a good teacher and moral framing – they can still prove effective engines of personal change.”

All of which puts the Western appropriation of meditation – what Ron Purser dismisses as McMindfulness – and its decoupling from any moral and political context under severe scrutiny. It’s the idea of self-optimizing and using mindfulness as a way of helping us to cope with the stresses of modern life without tackling the socio-political causes of that stress – including the atomization of society – that needs to be addressed by the mindful community. Social change requires collective action, not meditation, which should, it could be argued, be put to use to sustain that action.

The knight of faith

“Do the gods love holiness because it is holy, or is it holy because they love it?” So asked Socrates as reported by Plato in the Euthyphron. It’s a deceptively simple question but one that has had wide-ranging ramifications down the millennia and remains one of the most important ever asked. For if the answer to the first part of the question is ‘yes’, then the possibility arises that the holy – or as we might say today, the ‘good’ – is not dependent on the gods, or God, and is, therefore, independently accessible to humanity. If, however, the answer to the second part of the question is ‘yes’ it means that whatever God loves is good and humanity has no idea what the good is – and, perhaps, neither does God. Morality, as we know it, disappears.

Socrates

Although he doesn’t formally acknowledge it, the Euthyphron question courses through Soren Kierkegaard’s book Fear and Trembling, which he wrote under the pseudonym Johannes de Silentio. The central premise in this beautifully written book is what Kierkegaard calls the Teleological Suspension of the Ethical.

Soren Kierkegaard

What this means is an ethics that eschews consequentialist normative theories like Utilitarianism. In fact Kierkegaard sets himself up in opposition to Friedrich Hegel, for whom the ethical life involves behaviour that is moral only when it adds to the good of society. But presumably even Immanuel Kant’s categorical imperative is also unacceptable. In the Groundwork of the Metaphysics of Morals Kant writes ‘act only in accordance with that maxim which you can at the same time will that it become a universal law’. Kant is often categorized as a non-consequentialist but in Kierkegaard’s eyes the categorical imperative still retains a teleological element.

To make his argument Kierkegaard chooses the extraordinary story of Abraham, in which God tests him by demanding that he sacrifice his much-loved son Isaac, as told in Genesis 22.

Abraham’s hand is stayed at the last minute

Abraham’s faith is such that he is moved to carry out the order without question, only to have his hand stayed at the last minute by an Angel, allowing father and son to return home unscathed.

For Kierkegaard this is the ultimate test of what he calls the knight of faith. It is the second of two moves; the first he calls the knight of resignation, who can find ‘peace and repose’ in retiring from life, as in a monastery.

The knight of resignation

But the knight of faith takes the next step because his faith in God is absolute. This faith remains constant even if, unlike the story in Genesis, the Angel does not intervene. Kierkegaard writes: “Let us go further. We let Isaac actually be sacrificed. Abraham had faith…God could give him a new Isaac, bring the sacrificial offer back to life. He believed in the strength of the absurd, for all human calculations had long since been suspended.” At the heart of this is the paradox ‘that the single individual is higher than the universal though in such a way be it noted…that having been in the universal the single individual now sets himself up apart as the particular above the universal’. One might be forgiven for seeing in this the religious version of Nietzsche’s ubermensch.

So, what are we to make of all this? The Swedish philosopher Martin Hagglund in his This Life – Why Mortality Makes us Free argues that to say, as some do, that Kierkegaard ‘has found a way to combine devotion to God with devotion to finite life’ rings hollow because the ‘double movement of religious faith actually denies the experience of finitude by precluding the experience of irrevocable loss’. Hagglund contrasts this with what he calls ‘secular faith’ in which every time ‘you care for someone who may be lost or leave you behind, every time you devote yourself to a cause whose fate is uncertain you perform an act of secular faith’. On the other hand if there is a ‘God for whom everything is possible, then anything can be permitted, even the killing of your own child for no other reason other than God’s command’, because even in death, as Kierkegaard acknowledges, Isaac will be returned to him.

Writing in the latest edition of Philosophy Now, Roger Caldwell goes further, suggesting that it ‘is not difficult to find passages in his writings to make one suspect he is somewhat unbalanced’. In all of this it is astonishing that the Euthyphron question is not mentioned, although Caldwell comes closest when he writes that ‘if it is possible for divine commands to take precedence over human ethics, then faith is higher than morality’. Indeed, it could be argued, as previously suggested, that if you answer in the positive to the second part of the question then morality itself vanishes for humanity – and for God. And there is a dark symmetry here because, according to Clare Carlisle in her biography Philosopher of the Heart, Kierkegaard warned in Fear and Trembling that ‘once God is absorbed into the ethical sphere he will become dispensable, and eventually disappear altogether’. So, the stark choice is between morality and God. For those who don’t believe in God, of course, there is no choice, but it does remain for believers. One of Kierkegaard’s aims in the book is to draw a distinction between what he regarded as dead Christianity, where people merely go through the motions of faith, and his live Christianity as exemplified by Abraham. For most Christians, one suspects, the move to the knight of faith is a step too far and for atheists and agnostics it’s a very good reason not to believe in God.

Back to the commons!

FOR more than 50 years the idea of commonly owned land has been blighted by Garrett Hardin in his hugely influential article The Tragedy of the Commons. Hardin claimed that environmental disaster would ensue if land was in common ownership as the population grew because he assumed that individuals would only think of their short-term gain rather than the long-term collective benefits.

But what if he was wrong? There are two fundamental assumptions that Hardin makes which skew his conclusion. The first is that the most fundamental starting point is the individual; and the second is that altruism forms no part of our genetic make-up. As a result, as Tine De Moor makes clear in her book The Dilemma of the Commoners, he ‘assumed that individuals would be unable to communicate and organize to prevent over-harvesting of the resources’. As she also points out he also assumes that ‘human nature is such that greed and selfishness will always lead to free-riding and, subsequently, excessive exploitation’. And she adds: “Although since its publication the metaphor ‘the tragedy of the commons’ has been extremely popular in various scientific disciplines and with policy-makers, many researchers have given proof of the opposite: individuals, commoners, and others are capable of preventing free-riding by institution building.”

Like many others De Moor paddles her canoe against a torrent of individualistic neoliberal ideology. It could be argued, in contrast, that we are actually social beings – indeed some argue that our sociability was Homo Sapiens’s big advantage over the Neanderthals, who were probably more intelligent than us but less social and, therefore, less able to spread innovations. If this is true then neoliberalism has got it the wrong way round. The problem is not how a fully formed individuated consciousness becomes a social being – if that is indeed an aim of neoliberalism – but how social beings become fully formed individuated consciousnesses. As De Moor says, individualism and collectivism are not incompatible – ‘both are part of the emancipation of the individual: first from family ties and later from other collectivities’, she writes, tellingly starting with the collective and, crucially, not abandoning that collective in the forging of the individual. De Moor outlines a brief history of the rise of the commons and guilds from 1000AD. Interestingly, although the commons sought to protect themselves against the vagaries of the market, that does not mean that they were against the market itself – indeed, many also engaged with the market. Of course, common land continued up until the infamous enclosures began during the 16th century and it is estimated by J. L. and Barbara Hammond in The Village Labourer that even by 1685 three fifths of all cultivated land in England was still ‘farmed on the old common-field system’.

To make her case more fully De Moor spends a considerable amount of time analysing the history of the commons in Florence between 1500 and 1850 and concludes that collectively-owned commons could and should play a role in modern society alongside the Market and the State. “It is peculiar,” she writes, “that people believe the market can solve all kinds of problems, and that citizens cannot and do not deserve the same level of trust given to market institutions.” As part of this diversification, writes De Moor, the ‘biggest challenge, and the greatest potential, lies in the dialogue that needs to be developed between government and civil institutions’. And of course co-operation in the sense that she means it can work equally well in modern-day service and industrial economies as in the common ownership of land for cultivation. As Joshua Greene writes in Moral Tribes, our natural tendency towards co-operation ‘evolved for the amoral purpose of successful competition’. He continues: “And yet somehow we, with our overgrown primate brains, can grasp the abstract principles behind nature’s machines and make them our own.”

It can seem hopelessly idealistic in today’s attenuated social and political life to call for something different, either in the form of De Moor’s collectives or the communities of resistance that featured in a previous blog. But we should remember that things can change very quickly. The founders of neoliberalism like Hayek and Mises were outriders for years until the political door was pushed open with the emergence of Thatcher and Reagan. Salisbury Democracy Alliance can’t do much at a regional or national level to effect change, but it can exert influence on the local political scene. Deliberative Democracy can work well in conjunction with local collectives in a way that rebalances the relationship between the individual and the community while lessening the stranglehold of the ‘elective dictatorship’ of representative government.

Is this the end for Original Sin?

Original Sin

ARE humans fundamentally good or bad? It’s a question that runs through the history of human thought. According to Immanuel Kant: “Out of the crooked timber of humanity, no straight thing was ever made.” The two philosophers who perhaps best represent the pessimistic and the optimistic views are, respectively, Thomas Hobbes and Jean-Jacques Rousseau.

According to Rutger Bregman in his new book Humankind, these two thinkers ‘continue to be pitted against each other in the philosophical boxing ring’. Their respective positions go to the heart of this deep divide. And indeed, Bregman thinks that the ramifications are far-reaching, embracing ‘harsher punishments versus better social services, reform school versus art school, top-down management versus empowered teams, old-fashioned breadwinners versus baby-toting dads – take just about any debate you can think of and it goes back, in some way, to the opposition between Hobbes and Rousseau’. This is a big claim and one at which followers of Karl Marx and Adam Smith might raise an eyebrow or two. After all it could be argued that as important as the ‘good’ versus ‘bad’ divide is the conflict between individualism and communitarianism, but with that caveat in place, it’s difficult not to recognise the importance of human nature.

Thomas Hobbes

For Thomas Hobbes, human life in the state of nature was, in his words, ‘solitary, poor, nasty, brutish, and short’ because humans ‘are driven by fear’, leaving us in a ‘condition of war of all against all’, writes Bregman. But fear not, because chaos ‘can be tamed and peace established if we all just agree to relinquish our liberty’ into the hands of a ‘solitary sovereign’ whom he dubs the Leviathan – the name that also graces his magnum opus (as a matter of interest Hobbes was born near Malmesbury and there is an early edition of The Leviathan in the town’s museum). Bregman characterizes this position as: “Give us power, or all is lost.” It is civilization that is our saviour from brute nature. The idea that humanity is fundamentally bad is also contained within the notion of Original Sin which, regardless of whether you are Christian or not, is embedded in much of Western culture and can, perhaps, be observed with chilling effect in the words of the traditional Baptism service.

Jean-Jacques Rousseau

Rousseau, on the other hand, takes a completely opposite view. For him we are naturally good in a state of nature and it is only the institutions of civilization that warp that natural goodness. He argues that ‘civil society is not a blessing, but a curse’. Rousseau understood that ‘man is naturally good, and that it is from these institutions alone that man becomes wicked’. In contrast to Hobbes, writes Bregman, Rousseau is saying: “Give us liberty, or all is lost.”

The conflict between good and evil.

Bregman is in no doubt that Rousseau is right. For him, for most of human history we ‘inhabited an egalitarian world without kings or aristocrats, presidents or CEOs’, and if anyone got a bit uppity they were quickly swatted by the community. But according to Bregman problems began about 10,000 years ago. “From the moment we began settling in one place and amassing private property, our group instinct was no longer innocuous. Combined with scarcity and hierarchy it became downright toxic. And once leaders began raising armies to do their bidding there was no stopping the corruptive effects of power,” he writes, acknowledging at the same time the importance of community attachments in countering this process.

The problem is that Hobbes won the argument, ably assisted, as we have seen, at least in the West by Original Sin. And of course Bregman has some work to do to convince us that Rousseau was right. What about William Golding’s Lord of the Flies? What about the infamous psychological experiment by Stanley Milgram during which 65 per cent of volunteers gave what they thought were potentially lethal shocks to ‘learners’ who got memory tests wrong? The experiment was immortalized in the book Obedience to Authority and helped to explain everything from the Holocaust to the supposed veneer of civilization and the justification of Hobbes. According to Bregman, however, a more detailed examination of the shock experiment shows how resistant the guinea pigs were to obeying the orders. Bregman argues that ‘evil doesn’t live just beneath the surface; it takes immense effort to draw it out’, particularly if, as in the case of the shock experiment, ‘they think they are being evil for the greater good’. He brings the same argument to bear on the perpetrators of the Holocaust, particularly Adolf Eichmann, who was characterized by Hannah Arendt as the epitome of the ‘banality of evil’. Bregman, however, believes that Eichmann, and others like him, didn’t do what they did because they wanted to do evil but because they thought they were doing the right thing, they thought they were doing good, however misguided they were (it should be said here that this is a hugely simplified account of his argument).

There is an occasional whiff of confirmation bias in Humankind because Bregman is so committed to his thesis. But, refreshingly, he is well aware of this problem and does his best to eliminate it. On the other hand he is paddling heroically against a tide of human thought that simply assumes that Hobbes was right. And to be clear Bregman is not arguing that we should all abandon civilization and become hunter gatherers. He acknowledges that things have become better in at least some parts of the world over the past 200 years, even if there has been a regression in recent years. In the end his claim is remarkably simple – that most people are pretty decent most of the time and he finishes with an optimistic rallying cry: “So be realistic. Be courageous. Be true to your nature and offer your trust. Do good in broad daylight and don’t be ashamed of your generosity. You may be dismissed as gullible and naïve at first. But remember, what’s naïve today may be common sense tomorrow.” Oh, and don’t watch the news!