The price of fear

Why are we so polarized? It is an important and perplexing question. Sure, social media rightly get their fair share of the blame. But are they a symptom, an exacerbating factor, rather than a cause? Surely they are. We cannot simply blame everything on social media and sit back thinking we have resolved the problem. So, what else is there? This blog has previously identified the tendency of our representative government to infantilize the polis, to exclude people from the policy-making process. Indeed, that was the express aim of the founding fathers of the American constitution. Now, no-one can seriously propose that every member of society can or should think deeply about public affairs, even though Socrates said that in a fully functioning democracy every person’s thought matters. But one should expect society to furnish people with the opportunity to partake beyond voting every few years. It is why Salisbury Democracy Alliance is so keen to introduce an element of deliberative democracy in the form of the existing Salisbury Democracy Café and its continuing campaign to bring Citizens’ Juries to the city. But still, it seems, we haven’t got to the root of the problem.

Why are we so polarized?

According to Martha C Nussbaum in The Monarchy of FEAR it is, as the title shouts out, fear that lies at the heart of the problem. As she writes: “Thinking is hard, fear and blame are easy.” She claims that fear ‘is not only the earliest emotion in human life, it is also the most broadly shared within the animal kingdom’. Fear, she argues, is also profoundly anti-social. When we feel compassion we reach out and consider what is happening to other people. Fear, on the other hand, is ‘intensely narcissistic’. It drives out all thought of others. An infant’s fear is ‘entirely focused on its own body’ but, given stable and loving care, it can ‘start to become capable of generosity and altruism’.

When we are afraid we are thinking only about ourselves.

As an American philosopher Nussbaum focuses on the States, but her arguments can be equally applied to the UK. For her America is an angry country and, she claims, anger is the ‘child of fear’. Public anger contains not just protest at wrongs, a reaction that is healthy for society when the protest is well founded, but also a burning desire for revenge, as if the suffering of someone else could solve the group’s or the nation’s problems. It is not difficult to see how the likes of Trump and conspiracy theorists in general feed off this sense of retribution as they fuel fear and anger.

So far so bad. But what is Nussbaum’s solution to the fear and anger that seem to permeate our societies? Well, it is the fostering of ‘loving, imaginative vision (through poetry, music, and the other arts), and a spirit of deliberation and rational critique, embodied in philosophy, but also in good political discourse everywhere’. And overarching this is ‘hope’, which has to be active and committed. She writes: “Practical hope, not idle hope, since you get to work to produce a good future.” Hope is the exact opposite of fear: “Hope expands and surges forward, fear shrinks back.”

How do we foster a spirit of deliberation?

All this seems to be rather pie-in-the-sky, a just-so narrative in which, with one bound, we are free of fear. With our fractious societies and a dominant ideology that seems intent on infantilizing us, it is difficult to see how we can foster the kind of environment in which these aspirations can be achieved. To help her with this Nussbaum refers to the psychoanalyst Donald Winnicott, who investigated ways in which people could grow and flourish rather than shrink. He called for a ‘facilitating environment’, which starts with a loving stability within the family. In the wider context, writes Nussbaum, ‘families cannot make children secure and balanced, capable of withstanding onslaughts of fear, if they are hungry, if they lack medical care, if children lack good schools and a safe neighbourhood environment’.

So, while Nussbaum does not look at or suggest detailed policies, she does outline some strategies. And she is also clear that the creation of a ‘facilitating environment’ on a national scale is a pre-condition for creating the kind of deliberative society she calls for. She also outlines 10 crucial capabilities that need to be fostered, including bodily health and integrity; being able to use the ‘senses to imagine, think and reason’; having the emotional intelligence to ‘have attachment to things and people outside ourselves’; being able to ‘live with and towards others’; and having the ‘social bases of self-respect and non-humiliation’. Perhaps the most important capability, at least from the perspective of Salisbury Democracy Alliance, is: “Being able to participate effectively in political choices that govern one’s life; having the right of political participation, protective of free speech and association.”

A facilitating environment.

Although Nussbaum does not enter the murky world of representative government, it seems clear that neoliberalism is not the sort of governance she believes will achieve the kind of facilitating environment of which she speaks. But turning society round to a more deliberative, facilitating bent is not easy and can test one’s ‘active hope’ to breaking point. We do know, however, that things can change quickly and, although social media have their downsides that can help to consolidate the status quo, they also have the potential to facilitate change much more quickly than in the past. So, although ‘active hope’ seems to be a thin thread at the moment, it is the only thread we have to hold on to.


A matter of consciousness

WHAT is consciousness? It is a question that has exercised the minds of neuroscientists and neuro-philosophers for decades – and plain old philosophers for centuries. And it continues to do so. For many – mostly philosophers – consciousness, and the related question of the mind, is immaterial. In some sense it is not of the material world in which we live but exists outside of our everyday lives. This is what the philosopher Gilbert Ryle described disparagingly as the ‘ghost in the machine’. For others – mostly neuroscientists – it is entirely material.

What is consciousness?

It is fair to say that most neuroscientists believe that consciousness is part of the physical world. In The Idea of the Brain Matthew Cobb takes as his starting point Francis Crick’s materialist assumption that everything we feel and perceive is ‘in fact no more than the behaviour of a vast assembly of nerve cells and their associated molecules’. Cobb largely agrees with this assumption but concedes that understanding how consciousness emerges remains elusive and is likely to remain so for many decades. That is not to say scientists are thrashing around in the dark. He writes: “Probably the most precise agreed localisation is that the level of consciousness is largely determined by the brainstem and the basal forebrain, while its content – what is being perceived – is processed by the cortex, hypothalamus and so on.”

Is consciousness and the mind just a matter of matter?

All this is important because consciousness largely determines who we are and whether or not we have free will – as our everyday experience tells us we do. The concept of free will was challenged by a series of experiments by the neuroscientist Benjamin Libet that seemed to show that decisions we think have been taken by our conscious minds have in fact ‘already been taken by your nervous system’. That conclusion has itself been challenged by those who hold that ‘Libet’s experiment holds only if subjects are making arbitrary choices, not if they are making important, deliberate decisions’. For example, when you are driving you are normally unaware of the actual process of driving except when you are taking action to avoid an accident.

Do we have free will?

Interestingly, a key element of Cobb’s trawl through the history of brain research is that philosophers and scientists alike have used metaphors to help explain the function of the brain and to prompt further research. Much of this has been fruitful, although, as Cobb points out, there always comes a stage when the understanding that metaphors allow is ‘outweighed by the limits they impose’. Past metaphors have included the brain as a form of clockwork and later as a tree. The current metaphor is the brain as a computer. And Cobb believes that we are approaching the end of the computer metaphor. What is not clear, however, is what will replace it. And of course once we fully understand consciousness – if we ever do – then there will be no need of metaphors.

For now, though, the computer model remains the dominant one, and one of the fruitful lines of enquiry suggests that consciousness is an emergent property which begins its journey in the brain. This involves the claim that the resulting effect – consciousness – is greater than the sum of its parts and, as Cobb puts it, ‘obeys its own lawfulness’. But one of the problems of the computer model – and most other models for that matter – is that it excludes the environment in which the brain is embedded. Cobb writes: “This might seem trivially obvious, but neither the body nor the environment feature in modelling approaches that seek to understand the brain.” This really is strange because the brain obviously interacts with the body and with the external environment. Indeed, one of the astonishing features of consciousness is the ability to look out from the dark theatre of the skull to observe, interpret and construct the external world. “Excluding these aspects from the model, or from the experimental set-up, will lead at best to an inadequate understanding,” writes Cobb.

The dark theatre of the skull.

The complexity of the brain and the resulting consciousness, mind and sense of Self is such that Cobb believes that we will spend the next century making advances before we have finally solved the riddle. Maybe the various computational projects will come good. “Or a theory will somehow pop out of the vast amounts of imaging data we are generating. Or we will slowly piece together a theory (or theories) out of a series of separate but satisfactory explanations.” Cobb continues like this with several possibilities of the way forward. But just to emphasise the difficulty of the way ahead he finishes the book with simply: “Or…”

Of course, one possibility that Cobb does not countenance is that scientists will never solve the enigma of consciousness. And, maybe, consciousness is not after all a material thing. But at least the scientific approach allows for enquiry, research and the accumulation of evidence – as has and is happening. If, on the other hand, consciousness is an immaterial entity then, by definition, it is not susceptible to scientific enquiry and will remain beyond our understanding for ever.


The history of Capitalism

THE common narrative of Capitalism is that it is just part of the natural order of humanity, that hurdles like feudalism needed to be swept away for Capitalism to emerge and bring freedom to all. Even Karl Marx thought that Capitalism was a great leap forward and, potentially at least, a release of human potential. If you want a paean to the dynamism of Capitalism you could do worse than read The Communist Manifesto. Of course, he saw it as only a stepping stone to Communism, but still it was, he thought, an advance.

Was feudalism just a hurdle for Capitalism to overcome?

But there is another narrative, which challenges the notion that Capitalism is in some sense endemic, ahistorical and a kind of force of human nature. This, according to Ellen Meiksins Wood in The Origin of Capitalism, is just another narrative, which she calls the Commercialization Model. In this model Capitalism, the ‘highest stage of progress, represents a maturation of age-old commercial practices (together with technical advances) and their liberation from political and cultural constraints’. However, Wood argues, versions of this explanation share ‘certain assumptions about the continuity of trade and markets from their earliest manifestations in exchange to their maturity in modern industrial capitalism’. In this model it is ‘feudalism that represents the real historic rupture, interrupting the natural development of commercial society’.

Even during feudalism, according to the model, the intrinsic logic of the free market simply lay dormant. From the start, ‘rationally self-interested individuals maximising their utilities by selling their goods for profit’ were the fundamental driving force of society, artificially suppressed by feudalism.

Wood, however, argues that Capitalism is historically located and represented a completely new relation of production due to a unique set of circumstances. She writes: “A market economy can only exist in a market society, that is, a society where instead of an economy embedded in social relations, social relations are embedded in the economy.” Indeed, it could be argued that before Capitalism people simply would not have understood the idea of an ‘economy’ as a separate entity. And she quotes the economic historian and anthropologist Karl Polanyi, who argues that, far from being the natural order of things, the system of self-regulating markets was so disruptive ‘not only to social relations but also to the human psyche…’ that ‘its implementation had to be at the same time the history of protection from its ravages’. And without ‘protective counter measures, particularly by means of state intervention, human society would have been annihilated’.

Society had to be embedded in the economy for Capitalism to thrive.

Interestingly, Wood challenges a common view that Capitalism represents the pinnacle of human freedom because the ‘dominant characteristic of the capitalist market is not opportunity or choice but, on the contrary, compulsion’. She writes: “Material life and social reproduction in capitalism are universally mediated by the market, so that all individuals must in one way or another enter into market relations in order to gain access to the means of life.”

Wood defines Capitalism as a ‘system in which goods and services, down to the most basic necessities of life, are produced for profitable exchange, where even human labour-power is a commodity for sale in the market, and where all economic actors are dependent on the market’. This system, she claims, is unique and very different from all previous ways of ‘organizing material life and social reproduction’. And yet explanations of the origin of Capitalism have been ‘fundamentally circular: they have assumed the prior existence of capitalism in order to explain its coming into being’ – a logical fallacy known as begging the question.

Wood argues that Capitalism did not, as is often assumed, start in towns and cities but in the countryside. Prior to the rise of Capitalism much of the land in England was owned by peasants, although the concept of ownership was different from our understanding of it. It meant something more like access to land, which often involved more than one person. And there was always common land. That is not to say that life for the peasant was easy. But rather than economic imperatives impoverishing the peasant, it was the nobility and landlords, often using brute force, who extracted crippling payments from them. But as the English ruling class was demilitarized before its counterparts in continental Europe, and political power split away from the nobles into the State in the form of the Monarchy, nobles had to rely more and more on improving production to earn their keep – at the expense of the peasant. Their philosophical justification was provided by John Locke.

Wood writes: “The theme running throughout his (Locke’s) discussion is that the earth is there to be made productive and profitable, and this is why private property, which emanates from labour, trumps common possession.” But crucially Locke argues that there is no direct link between labour and property because ‘one man can appropriate the labour of another’. Wood adds: “It appears that the issue for Locke has less to do with the activity of labour as such than with its profitable use.” And: “The point is…that the landlord who puts his land to productive use, who improves it, even if it is by means of someone else’s labour, is being industrious, no less – and perhaps more than the labouring servant.” It is this appropriation of property and labour that is distinctively Capitalist.

And it is these purely economic imperatives in agrarian Capitalism that led to the notorious enclosures, industrialization, wage-labour and the rise of the proletariat. Of course, simply identifying Capitalism as a unique phenomenon does not mean that it must be overcome. With all its faults Capitalism has produced unprecedented economic prosperity and, to misquote Churchill, it may be the worst way of organizing society apart from all the others. But Wood asserts that Capitalism is incapable of promoting sustainable development precisely because of its unique economic imperatives that privilege exchange value over use value – profit not people. If that is true then the conundrum for humanity is how to create a more sustainable society without going down the path of the kind of dictatorships that blighted the 20th century.


From human to ahuman

FOR millennia humans have regarded themselves as being superior to the rest of the living world. It often takes the form of a kind of exceptionalism that assumes that the rules that apply to the rest of nature do not apply to humans.

At its most extreme it is exercised by nations, which see themselves as being exceptional or better than any other nation. We live in the anthropocene, a proposed geological epoch in which human activity has a significant negative impact on the Earth’s ecosystems including climate change. One response is what is called posthumanist theory, which posits the idea of people existing in a state beyond humanity.

However, Patricia MacCormack in The Ahuman Manifesto argues that posthumanism ‘seems to have exhausted itself in a morass of nihilism and despair’. And in what for many people is likely to come as an existential shock, MacCormack is proposing the end of humanity itself. She writes: “The death of the anthropocene opens up thousands of voices, trajectories and necessary activisms. I use death here as it will be used in the entire manifesto: both advocating for the deceleration of human life through cessation of reproduction, thus the death of humans (though, as will be clear, with care as we live out the lives we have), and the absolute end of the perception that apprehends all living organisms and relations through an anthropocentric-signifying system.” To which one reaction might be ‘wow’!

Is posthumanism itself now outdated?

She compares her position to that of Franco Berardi (aka Bifo) who is looking for an ‘ethical method of withdrawal from the present barbarism’ and to find ‘new ethical values’. In an uncompromising response MacCormack writes: “I share these ambitions with Bifo, but we diverge when he asks how can we remain human. My response is we should not want to.” Her fundamental position comes from the belief that we ‘humans are simply part of a thing known as earth’. We are not special in any way and what she is arguing for is ‘ultimately the end of humans’ violent occupation of Earth’. Her call to action is to:

Forsake human privilege.

Practice abolitionist veganism.

Cease reproduction of humans.

Develop experimental modes of expression beyond anthropocentric-signifying systems of representation and recognition – and

Care for the world at this time until we are gone.

“Ultimately, The Ahuman Manifesto is a call to activism for the other at the expense of the self, not as a form of martyrdom, but because life in this book is understood ecosophically as a natural contract.” Interestingly, however, MacCormack combines the desire to kill off the human species with a new love of humanity as it disappears. It is a kind of death love one might show to someone with a terminal disease who wants to die. But this death love has a contradiction at its heart because if, as we slowly disappear, we begin to love ourselves and the natural world more and more, then there is less and less justification for ending humanity.

Another important point to make is that while she argues against human exceptionalism, she seems to be arguing that only humans, as far as we know, can even contemplate ending their time on Earth – thus demonstrating that we are truly exceptional. And, since it is extremely unlikely that humans as a whole will agree to cease their reproduction, the only realistic way of achieving MacCormack’s goal would appear to be some kind of forced process, so undermining her desire for humans to love each other more.

And yet there is truth in this book as well. There is truth when she draws out the paradox of a species that knows it cannot live for ever, yet covets ‘transhumanism, religious afterlives, eternal reincarnation or living on through our art or our children’.

We know we must die yet covet the afterlife

There is truth when she points out that ‘capitalism has converted pleasure to measure and desire so mechanized that we are striving to be equal to inanimate luxury objects even while claiming to be superior to sentient nonhumans’.

In a review of the book in issue 152 of Philosophy Now Dr Stephen Alexander argues that her ‘moralism triumphs over her own confessed world view’, although he agrees that her position is not ‘philosophical nihilism, but a form of ethical affirmation’, which itself seems hard to justify.

One can get a lot out of this book if one sees it as a kind of thought experiment along the lines of what would happen to the world if humans extracted themselves from it? Central to this question, of course, is climate change. But in trying to slow down or even reverse climate change we are talking about saving humans in particular and biodiversity in general – the Earth itself will continue regardless until such time as the Sun collapses in on itself.

Ultimately, simply arguing against human exceptionalism does not in itself justify bringing humanity to a slow end. Although we are a part of nature, not separate from it, we are in many ways extraordinarily exceptional. So, perhaps what we should be doing is trying to turn that exceptionalism into a positive force rather than a negative one. Whether this is any more achievable than posthumanism or ahumanism is not clear.


The fall and rise of philosophy

IN the middle of the 20th century philosophy was on its knees. A group of intellectuals in Vienna – known as the Vienna Circle, led by Moritz Schlick, their ideas brought to the UK by the brash young philosopher A J Ayer – declared war not on a field of philosophy but on philosophy itself.

A J Ayer

At a public meeting on philosophy in Oxford Ayer stood up and declared: “You are all facing an early extinction. The armies of Cambridge and Vienna are already upon you.” For some this was as shocking as when Nietzsche declared ‘God is dead’ in Also Sprach Zarathustra.

The Vienna Circle was trying to make sense of the world after the devastation wreaked by the First World War. They thought that empiricism could rescue representative government and humanism. And in the process, by declaring that if a proposition could not be verified by empirical observation then it was nonsense, they dismissed metaphysics. In response to anything even vaguely metaphysical a logical positivist – as they called themselves – would respond: “What on earth do you mean by that?” More of a battle cry than a question. But it wasn’t just metaphysics that was facing extinction. Ayer wrote: “What possible observation could verify ‘one ought to help one’s neighbour’?” And if it could not be observed, then it could not be verified and was therefore nonsense. All moral philosophy was entirely subjective and could not be objectively verified – so dump it.

However, there were four great female philosophers who were having none of this. They made the bold counter-claim that humans were metaphysical animals. And it is these four remarkable thinkers – Iris Murdoch, Mary Midgley, Elizabeth Anscombe and Philippa Foot – who are at the centre of a book called Metaphysical Animals – How Four Women Brought Philosophy Back to Life by Clare Mac Cumhaill and Rachel Wiseman. Their plan was a bold one because it not only rejected the brutalism of logical positivism but also attempted to close the is/ought divide identified by David Hume, who argued that an ‘ought’ statement cannot be derived directly from an ‘is’ statement. The quartet united against the Vienna Circle and Ayer to declare a joint ‘NO’.

Foot, for example, rejected Ayer’s subjectivism. The authors write: “She wanted to be able to say to the Nazis ‘but we are right and you are wrong’.” She wanted the idea of an objective moral reality against which action could be judged wrong or bad, and not just inconsistent or irrational, as some philosophers claimed in a partial response to the Vienna Circle.

Philippa Foot

In particular, she was concerned about the philosophy of Richard Hare, who had been a PoW under the Japanese. Hare basically accepted Ayer’s position but argued that it needed morally firmer ground to account for the suffering imposed by the Japanese army on prisoners. He found it, or thought he had, by claiming that all a morality needed to be was consistent and rational. For our quartet that wasn’t enough, because there was no way to distinguish between the consistency and rationality of Hare and that of the Nazis or the Japanese army.

According to Mac Cumhaill and Wiseman, each of the four women ‘found different ways to balance our animality with the fact that we are language-using, question-asking, picture-making creatures’. And they continue: “As metaphysical animals, our invention, symbols and artworks change our Umwelt (self-centred world) and, to some degree, our very nature.” Foot – who invented the famous trolley problem thought experiment, which asks whether you would be prepared to sacrifice one person to save several – is today considered to be one of the most significant analytic moral philosophers of the 20th century.

Meanwhile, Elizabeth Anscombe helped revive Aristotelian virtue ethics, along with Foot.

Elizabeth Anscombe

Virtue ethics essentially declares that a virtuous act is one that a virtuous person would typically do and – to avoid circularity – grounds that virtue in the achievement of a state of eudaimonia, or flourishing.

Iris Murdoch, as well as being one of the most important philosophers of the 20th century, also wrote 26 novels. In one of her most important philosophical books – Metaphysics as a Guide to Morals – she attempts to find a morality that does not depend on the ‘literal truth’ of religion and to defend morality against technology, science and, of course, logical positivism.

Iris Murdoch

Apropos of the latter, in her book she acknowledges David Hume’s contention that ‘moral value cannot be derived from fact’ but contends that a strict separation of fact and value, as attempted by the logical positivists, ignores the point that a survey of facts will ‘involve moral discrimination’ and moral evaluation is often influenced by facts.

Mary Midgley spent a lot of her time fighting, unsuccessfully, the closure of university philosophy departments under the Conservatives led by one Margaret Thatcher. She did this because she believed that philosophy was not a luxury. Mac Cumhaill and Wiseman write that for Midgley philosophy is ‘something we humans need in order for our lives to go well’.

Mary Midgley

And she argued ferociously against the belief that we can ‘entrust our future to technology and artificial intelligence’.

In What is Philosophy For? she wrote: “What actually happens to us will surely still be determined by human choices. Not even the most admirable machines can make better choices than the people who are supposed to be programming them. So we had surely better rely here on using our own Minds rather than wait for Matter to do the job.

“And, if this is right, I suspect that…philosophical reasoning – will now become rather important.” Amen to that, but the question remains – has it become important? Probably not.


The evolution of altruism

THE idea of altruism is attractive. The very possibility that at least some of the time humans are able to act with the intention of benefiting others either at some cost to oneself or at least without expectation of a reward is important to secular ethics. For religions it is problematical because every act is in some sense mediated through the faith of the agent.

Of course, there are those who argue that there is no such thing as altruism, that every act is guided by self-interest. This is not the place to argue against egoism as such, although it is relevant to point out that it is very difficult for egoists to prove that there has never been an act of altruism, while acknowledging that this is in itself not proof that there actually has been one.

Altruism might be on firmer ground if it could be shown that it is part of our genetic make-up that has evolved. But if altruism is a result of the blind forces of evolution, then that could be a problem for ethics because agency is removed from the equation. If altruistic acts are determined, then how can they form part of an ethical framework, which requires conscious agency? For Ian Vine in Embracing the Other the initial paradox is that the apparent selfishness of the human genome is required to make it evolutionarily successful. At the same time, as Vine points out, even Darwin had ‘acknowledged prosocial instincts in animals, involving feelings of “sympathy”’. But how can these two aspects co-exist? Vine calls this the ‘biological paradox of altruism’. Although he believes that this paradox is real enough, Vine points out that ‘traits like readily risking one’s own life on another’s behalf are too widespread in nature to be trivial anomalies’.

Part of the answer comes from biologists who have shown how altruism could have evolved along with selfishness, although they see bio-altruism and bio-selfishness ‘non-teleologically – without any reference to conscious motives, and simply in terms of actual consequences of behaviour for fitness’. But, as Vine suggests, this biological explanation does not explain why ‘humans may make high-risk or predictably costly sacrifices to help non-kin or even out-group members’ as when non-Jews risked torture and death to help Jews during the Holocaust.
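The fitness-based account the biologists offer is usually summarised as Hamilton’s rule – a gloss of mine, not a formula Vine sets out – which says a gene predisposing its carrier to altruism can spread whenever the benefit to the recipient, discounted by genetic relatedness, outweighs the cost to the altruist. A minimal sketch in Python, where the function name and the numbers are purely illustrative:

```python
# Hamilton's rule: a gene for altruistic behaviour can spread when
# the benefit to the recipient (b), weighted by the genetic
# relatedness between altruist and recipient (r), exceeds the
# fitness cost to the altruist (c) - i.e. when r * b > c.

def altruism_favoured(r: float, b: float, c: float) -> bool:
    """Return True when selection favours the altruistic act (r * b > c)."""
    return r * b > c

# Full siblings share half their genes on average (r = 0.5), so a
# sacrifice costing just under one unit of fitness (c = 0.9) that
# yields two units of benefit (b = 2.0) is favoured...
print(altruism_favoured(0.5, 2.0, 0.9))    # True
# ...but the same sacrifice for a first cousin (r = 0.125) is not.
print(altruism_favoured(0.125, 2.0, 0.9))  # False
```

On these numbers kin-directed sacrifice can evolve with no conscious motive at all, which is precisely why, as the passage above notes, the account struggles with high-risk help given to non-kin or out-group members.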

Vine argues that intelligent ‘purposiveness means that we direct actions teleologically towards states of affairs that can be consciously anticipated’, which is in stark contrast to the non-teleological nature of evolution. And he adds that while the ‘extent of our power to [pursue] our maladaptive goals is an empirically open issue’ it is ‘foolhardy to write off all traits at variance with fitness a priori as errors and products of manipulation and self deception’. However, he writes, it ‘remains necessary to outline a biologically coherent account of the means by which we may have become able to enter new realms of social purposiveness, motivated by altruistic concerns for another’s well-being’.

In his attempt to achieve this Vine redefines altruism to incorporate ‘pure altruism and egoism, with mixed motives in between’. “Acts qualify as more or less altruistic as long as concern for another’s interests is a sufficient goal to cause them,” writes Vine.

And he tentatively places our ability to perform acts of altruism in our ‘capacity to feel shared identities with other persons’. It may be that this goes back to the dynamics of ‘mother/infant interactions’, which suggest a ‘pre-programmed readiness to attain early forms of self-with-other’.

"If initial awareness of self are 'we-cognition' of self-with-other and 'we-volition' of self-for-other, the possibility of altruism becomes real." This means we can escape the 'enclosed worlds of evolutionary imperatives'. And that also opens up the possibility that human ethics transcends material determinism. Presumably, too, once ethics is set free, we are also capable of consciously promoting our pro-social altruistic behaviour, even if our less pro-social, more self-centred motives are never far away.

It has to be said that Vine's argument is somewhat convoluted. Maybe the answer to the 'biological paradox of altruism' is that it isn't a paradox at all. Remember that evolution takes place at the gene level, not the genome level, and while the latter can pretend to be altruistic, the former cannot. So, if altruism exists at the gene level, then it has to be genuine altruism, not the self-contradictory notion of reciprocal altruism so beloved of many evolution scientists. Obviously, genuine altruism must have evolved because it has some evolutionary benefit to the genome – the happy but entirely unintended consequence is real altruism at the genome level.
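The gene's-eye argument can be made concrete with a toy replicator-dynamics model – not anything from Vine's book, and with all parameters invented for illustration. If carriers of an 'altruism' gene tend to cluster with kin who carry the same gene (here, perfect assortment), the benefit b they confer lands on fellow carriers, so the gene spreads whenever b exceeds the cost c, even though no individual intends anything:

```python
def next_generation(freq, b=0.3, c=0.1, base=1.0):
    """One generation of gene-level selection with perfect kin assortment.

    Altruists pay a fitness cost c, but because they are grouped with
    fellow altruists each also receives the benefit b from its kin.
    The selfish pay nothing and receive nothing.
    """
    w_altruist = base - c + b            # fitness of the altruism gene's carriers
    w_selfish = base                     # fitness of non-carriers
    w_mean = freq * w_altruist + (1 - freq) * w_selfish
    return freq * w_altruist / w_mean    # standard replicator update

# Starting from a rare gene (10% of the population)...
freq = 0.1
for _ in range(100):
    freq = next_generation(freq)
# ...the altruism gene approaches fixation, because b > c.
```

With b smaller than c the same update drives the gene out – which is precisely the 'blind', non-teleological sense in which bio-altruism can be selected for without any conscious motive.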


Why the poor get the blame

ONE of the features of modern society in the UK is the belief that there are deserving and undeserving poor. In fact, it's not just a feature of the modern world – it has been a common refrain for centuries as ruling cliques attempt to justify their position by claiming what they believe to be the moral high ground. It has often been the case that the Establishment has deliberately made life difficult for the least well off while insisting that it is beneficial for them to dig themselves out of the problems created by their supposed superiors.

For Darren McGarvey in The Social Distance Between Us, the core problem is class – a much-derided concept in recent decades, with some prominent figures claiming that it no longer exists. But McGarvey – a writer with a working-class Glaswegian background – brings class back front and centre. He writes: "Britain's central problem is class and the distance it drives between those who have done well under the current economic settlement and those who are suffering because of it." And a perfect storm was created by the marginalization of trade unions, which contributed to the creation of the precariat along with a 'depressing resignation, acceptance even, among vast swathes of a young twenty first century workforce that insecure work and poverty wages are normal'.

On the other side of the social divide, he writes, in a thinly veiled reference to Boris Johnson, that an ‘academically unremarkable boy may rise to the very apex of British society, despite being of very low character or ability, as a result of the resilient social connections, a sense of entitlement and relative protection from serious accountability that a fee-paying education can provide’.

Our ruling cliques do everything in their considerable power to discourage us from becoming critically engaged citizens – indeed, it could be argued that representative government is expressly designed to do that, as the fathers of the American constitution, people like James Madison and John Adams, knew only too well. And, writes McGarvey, it is in this smokescreen of ignorance that ‘politicians are given a free pass as poverty is broken down into bitesize sub-genres, each with its own poverty industry growing up around it, while the real story of its systemic nature is rarely told’.

Ignorance is encouraged by our ruling cliques.

For McGarvey it all comes back to class conflict, although the system always allows enough people to advance their aspirations to blunt the desire for radical change. But class ‘remains the primary dividing line in society’. McGarvey says this was demonstrated during the pandemic when ‘half the country went on to Zoom, while the other half delivered their alcohol, sex toys and bread-makers’.

A truly transformative society, he argues, begins with education, pointing out that the 'various education systems across the home nations are broadly segregated according to social class and where pathways to further and higher education, as well as the labour market, are set'. What is needed is an education based on the 'principle that every child has a right to the same quality of education, irrespective of the class position of their parents'. To this end 'all fee-paying in education must be abolished and replaced by a fully comprehensive system defined by equal access, where school allocation is lottery-based'. And if this is too strong then, as a first step, we should at least start by 'revoking the charitable status enjoyed by independent schools and pegging the funding of the state sector to at least 80 per cent of what private schools generate per head'.

McGarvey also urges ‘rebalancing industrial relations by strengthening worker representation’. He adds: “Why are citizens acting in a free market as consumers deemed to be behaving rationally but when they organize as workers for better pay, eyebrows are raised?” He argues for a Universal Basic Service, paid for by a new wealth tax, to improve and provide better public transport, childcare and free further and higher education for all.

His final recommendation relates to strengthening our representative form of government with greater accountability for the House of Commons and the House of Lords. Voting, he argues, should be made compulsory to increase participation – and some form of PR should be introduced.

The problem here is that McGarvey falls into the trap of assuming that what we have is democracy in its entirety, rather than what should more properly be called representative government. Anyone who has read previous blogs here will know that up until the end of the 18th century democracy as practiced in ancient Athens (the exclusion of slaves and women notwithstanding) was regarded as incompatible with representative government as practiced by the American republic. So, reforming representative government by making voting compulsory and introducing PR is a bit like rearranging the deckchairs on the Titanic. What is needed is the introduction of deliberative democracy and Citizens' Assemblies from the parish through district, county, regional and national governance, including a Peoples' Assembly to replace the House of Lords, based on the random selection of citizens but stratified to ensure demographic balance.
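Random-but-stratified selection of the kind proposed is a straightforward procedure. The Python sketch below is purely illustrative – the demographic groups, population shares and volunteer pool are all invented – but it shows how sortition can be constrained so that each group receives seats in proportion to its share of the population rather than its share of the pool:

```python
import random

def stratified_sortition(pool, strata, seats, seed=None):
    """Select a citizens' assembly by lot, stratified so that each
    demographic group gets seats in proportion to its population share.

    pool   -- list of (name, group) tuples: everyone who put their name in
    strata -- dict mapping group -> share of the wider population (sums to 1)
    seats  -- total number of assembly places
    """
    rng = random.Random(seed)
    assembly = []
    for group, share in strata.items():
        candidates = [person for person in pool if person[1] == group]
        quota = round(seats * share)                  # seats owed to this group
        assembly.extend(rng.sample(candidates, min(quota, len(candidates))))
    return assembly

# Invented pool: 80 volunteers from a population that is 60% urban, 40% rural.
pool = ([(f"citizen{i}", "urban") for i in range(50)]
        + [(f"citizen{i}", "rural") for i in range(50, 80)])
assembly = stratified_sortition(pool, {"urban": 0.6, "rural": 0.4}, seats=10, seed=1)
# The 10 seats split 6 urban / 4 rural, matching the population, not the pool.
```

Real sortition processes stratify on several attributes at once (age, sex, region, attitude), but the principle is the same: the lot decides who serves, while the quotas keep the assembly demographically balanced.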

Darren McGarvey will be delivering the Reith lecture on Freedom from Want on Radio 4 on Wednesday 14 December at 9am.


The infantilization of humanity

WHAT a spectacle! During the Queen's funeral hundreds of thousands of devoted subjects queued for hour after hour, over several days, to file past the catafalque. Broadcasters cleared the decks, with the BBC showing a 24/7 feed of deferential subjects paying their respects, often bowing or curtseying. The Establishment closed ranks and claimed that this was the nation coming together to cement the status quo as the best of all possible worlds. The dissident voice was drowned out or censored itself. But this is only one part of the Establishment in its pomp, turning citizens into loyal subjects.

The Queen’s catafalque

One might have hoped that in the 21st century the various arms of the ruling clique would have encouraged people to be more critically engaged citizens. In fact, you hardly ever hear the general public referred to as citizens. As we have seen, we are subjects of the Monarch. In the courts we are defendants – often, it seems, the least important actors in the theatre of the absurd that is our judicial system; in our form of representative government we are the 'electorate' or the 'voters' or 'constituents' – rarely citizens; in Christianity we are the children of God or Christ; in the economy we are 'consumers' or 'customers' encouraged to buy NOW rather than wait and save up; in the NHS we are 'patients' or 'users'. The idea of the critically engaged citizen is drowned out by a wave of euphemisms, as we are reduced to witless spectators – infantilized. We are actively encouraged not to worry our little heads about the constitutional monarchy, criminal justice, government at any level from parish to national, religion, the economy or health.

It is a far cry from the polis of ancient Athens, when free men (and it has to be conceded that it was only free men, rather than women and slaves, but it is the principle we are talking about here, not the specific practice) were expected to take part in public life. As Aristotle said, humans (men, of course) are political animals, and the private life was thought to be inferior – indeed, that is where the word 'privation' comes from.

The private life was considered to be inferior to the public life in ancient Athens.

The politician and military commander Pericles said: "We alone regard the man who takes no part in public affairs, not as one who minds his own business, but as good for nothing." Those who preferred their own counsel were idiotes, the origin of the word 'idiot'. In the fifth and fourth centuries BCE most Athenian public bodies – the Council and the juries – were filled by lot, while the citizen Assembly, open to all, was the nerve centre of power. Indeed, right up until the end of the 18th century representative government and democracy, in the Athenian sense, were regarded as being incompatible. As the second US president John Adams wrote: "Is not representation an essential and fundamental departure from democracy? …Representation and democracy are a contradiction in terms."

At some point, however, the word ‘democracy’ was tagged on to the word ‘representative’ and the focus of campaigners switched to extending the franchise. But as universal suffrage became a reality in parts of the world, so interest in any participation in the polis was drastically reduced as citizens became ‘voters’ or the ‘electorate’ and ‘constituents’ – mere spectators.

Even protestors are ultimately spectators in representative systems.

In The Next Revolution Murray Bookchin reflects this conflation of two incompatible concepts when he draws a distinction between 'statecraft' and 'politics'. The first, he argues, is the situation we have now, in which the influence of the citizen is steadily diminished because of the limitations of representative government – although even this is infinitely better, of course, than the dictatorship of people like Putin, because at least one can get rid of representatives through the ballot box. The second – politics proper – involves citizens having direct participatory control over their government and communities. It should be said that statecraft encompasses all representative institutions, from Parliament in the UK to the smallest parish council. The nearest citizens get to direct democracy is being invited to comment on proposed policies, with the final decision still resting with their representatives.

Bookchin writes: “As I have written elsewhere, historically, politics did not emerge from the state – an apparatus whose professional machinery is designed to dominate and facilitate the exploitation of citizenry in the interests of a privileged class. Rather, politics, almost by definition, is the active engagement of free citizens in the handling of their municipal affairs and in their defence of freedom.” On this account, politics has been almost entirely extinguished from modern society, in which we are, rather, infantilized by the state.

Bookchin advocates a form of Confederalism, which is a network of 'administrative councils whose members or delegates are elected from popular face-to-face democratic assemblies in the various villages, towns and even neighbourhoods of large cities'. He adds: "The members of these confederated councils are strictly mandated, recallable, and responsible to the assemblies that choose them for the purpose of coordinating and administering the policies formulated by the assemblies themselves." He does not refer to them specifically, but Citizens' Assemblies could form a part of the confederation. Either way, it is a system that completely overturns the status quo, with power residing in citizens, not a ruling clique.

The idea that what we have now is democracy is so deeply embedded in our consciousness that it is almost impossible to imagine an alternative worthy of the name ‘democracy’. But, as Bookchin points out, blind acceptance of the status quo as though there is no alternative is, arguably, the greatest barrier to social change – as we saw in the previous blog.


Why are we stuck in a rut?

ARE human beings – and human life itself – fundamentally good or bad? It is a question that has taxed philosophers for millennia. In one of its most recent manifestations it is represented on the one hand by Thomas Hobbes, who regarded life before civilization as 'nasty, brutish and short' – something we could only escape by surrendering our freedom to a supreme leader, a leviathan, which gives its name to his magnum opus. On the other hand we have Jean-Jacques Rousseau, who argued the complete opposite – that the innocence of humanity was corrupted by civilization.

According to Hobbes, life is naturally ‘nasty, brutish and short’.

But what if the very idea of whether humans are good or bad is a category error? That is the claim made by the late David Graeber and David Wengrow in The Dawn of Everything – A New History of Humanity. They point out that the terms 'good' and 'bad' are purely human concepts. No one would claim that a non-human animal or a plant was good or bad. "It follows that arguing about whether humans are good or evil makes as much sense as arguing about whether humans are fundamentally fat or thin," they write. In fact, their research shows that human society before the Agricultural and Industrial Revolutions was neither brutish nor idyllic. They claim, on the contrary, that the 'world of hunter-gatherers as it existed before the coming of agriculture was one of bold social experiment, resembling a carnival parade of political forms'. Further: "And far from setting class difference in stone, a surprising number of the world's earliest cities were organized on robustly egalitarian lines, with no need for authoritarian rulers, ambitious warrior-politicians, or even bossy administrators."

From their research, the authors found that neither Hobbes nor Rousseau was right. There was no single pattern. "The only consistent phenomenon is the very fact of alteration, and the consequent awareness of different social possibilities," they write. So, when we ask what the origins of social inequality were, we are asking the wrong question. "If human beings, through most of our history, have moved back and forth between different social arrangements, assembling and disassembling hierarchies on a regular basis, maybe the real question should be 'how did we get stuck?' How did we end up in a single mode?" That mode, they argue, is one of eminence and subservience – once seen as temporary expedients or even grand theatre – now embedded as 'inescapable elements of the human condition'.

How did we get stuck in a rut?

The key for David Graeber and David Wengrow is not that inequality has its roots in pre-civilization society and is now an inevitably permanent feature of human society. It is not that we have lost a kind of innocence, as in the Christian myth of Original Sin. What we have lost is the ability even to envisage different social and economic orders. They write: "The contrast with our present situation could not be more stark. Nowadays, most of us find it increasingly difficult even to picture what an alternative economic or social order might be like. Our distant ancestors seem, by contrast, to have moved regularly back and forth between them."

If we are to break the mould of class division and economic inequality then, the authors say, we must rediscover three freedoms – and not just negative freedoms like freedom of speech. They are: 1 – the freedom to move or relocate from one's surroundings; 2 – the freedom to shape entirely new social realities, or shift back and forth between different ones; 3 – the freedom to ignore or disobey commands issued by others. Another, overarching freedom which enables these three is positive freedom, which empowers people to act rather than be merely passive recipients. So, for example, while we have the negative freedom to relocate, the ability to actually do so depends on whether or not you have the required social and economic security. It means developing the idea of active citizenship – being citizens rather than just 'consumers', 'constituents', 'taxpayers' or 'subjects'. (This will be the subject of a future blog.)

Above all we need to recognise that civilization and complexity need not come at the price of human freedom, and that participatory democracy – maybe in the form of citizens' assemblies – is not something possible only in small groups and impossible to scale up to city or national level. We need to rediscover our ability to imagine alternatives and to consign to the dustbin of history the toxic phrase 'there is no alternative'.


Do trees have brains?

“THUS, from the war of nature, from famine and death, the most exalted object which we are capable of conceiving, namely, the production of the higher animals, directly follows.” So wrote Charles Darwin in the last paragraph of On the Origin of Species. The sense one gets is that all species below the 'higher animals' are in some sense inferior products of evolution by natural selection. But is there more to the lower forms of life, even to plant life, than this? Well, according to Peter Wohlleben in The Hidden Life of TREES, there most certainly is.

Charles Darwin

Indeed, the titles of his chapters tell their own stories. Titles like 'Friendships', 'The Language of Trees' and 'Social Society' sound more like a sociological thesis than a book on trees. On the culture of trees, he writes: "It appears that nutrient exchange and helping neighbours in times of need is the rule, and this leads to the conclusion that forests are super organisms with interconnection much like ant colonies." Not only that, trees are able to distinguish their own roots from those of others and 'even from the roots of related individuals'. On the other hand, the delinquents of the tree world are planted coniferous forests, which cannot network with each other because their 'roots are irreparably damaged when they are planted'.

When Wohlleben started writing the book he was managing a forest in the Eifel mountains in Germany, and the work began to change his experience of the forest. "When you know that trees experience pain and have memories and that tree parents live together with their children, then you can no longer just chop them down and disrupt their lives."

We tend to believe that language is confined to humans and some other 'higher' species using words, symbols or signs. However, there are other ways of communicating, and trees in particular communicate using scent. For example, it has been observed on the African savannah that acacia trees give off a warning gas to signal to others that they are under attack.


Beeches, spruce and oaks 'all register pain as soon as some creature starts nibbling them'. In fact, writes Wohlleben, trees communicate through smell, vision and electrical impulses. And if trees can demonstrate something akin to friendship, then it seems that they also have something resembling a social security system. In undisturbed beech forests, trees share resources by synchronising their photosynthesis levels so that they are all 'equally successful'. "The trees, it seems, are equalizing differences between the strong and the weak," writes Wohlleben.

Now, perhaps the most controversial claim made by Wohlleben is his contention that trees have brains. He argues that since studies show that trees can learn, they must be able to store their knowledge. Nevertheless, the idea that trees have a brain may sound to some like a bit of a stretch. Wohlleben writes: “For there to be something we would recognise as a brain, neurobiological processes must be involved, and for these, in addition to chemical messages, you need electrical impulses.

Do trees communicate through electrical impulses?

“And these are precisely what we can measure in trees, and we’ve been able to do so since as far back as the nineteenth century.” Brain-like structures can be identified at root tips, and a powerful analogy can be found in the use of the word ‘dendron’ (from the Greek meaning ‘tree’) for certain processes in the human brain.

An interesting question is whether all of this has any resonance with the concept of human consciousness. Of course, we have to be cautious here because it has not yet been established whether or not trees do actually have a brain. But we can get round that by using the conditional if/then. So, we can say, if trees have brains, then does that help in our understanding of consciousness?

A schematic portrayal of dendrons.

Well, one of the markers of consciousness is intentionality, or aboutness. That is, for an organism to have consciousness it must be able to be aware of stuff outside itself. One of the claims made by those who are opposed to the idea that the material brain has consciousness is that intentionality has no place in pure matter. But if organisms like trees can be said to have intentionality, then this strut of the anti-materialists is knocked away. This is not the end of the matter, however, because there are more struts for the anti-materialists to cling to.


Covid v Neoliberalism

IT has become increasingly obvious that when Covid hit in early 2020 the UK was disastrously unprepared. It was the most pervasive pandemic since the Spanish Flu after World War I. But it was the political decisions of the last 30 years – decisions, ironically, hell-bent on eliminating political decision-making – that exacerbated the impact.

The Covid virus

The aim since the 1980s has been to reduce political decision-making on the assumption that the so-called 'free economy' and the much-vaunted rational choice theory, which lies at the heart of neoliberalism, know best. However, Covid exposed this theory as hopelessly inadequate.

As Adam Tooze writes in Shutdown – How Covid Shook the World Economy, Covid required a 'willingness to contend with political choices, choices about resource distribution and priorities at every level'. And he adds: "That ran up against the prevailing desire of the last forty years to avoid precisely that, to depoliticize, to use markets…to avoid such decisions. This is the basic thrust behind what is known as neoliberalism, or the market revolution – to depoliticize distribution issues, including the very unequal consequences of societal risks, whether those be to structural change in the global division of labour, environmental damage, or disease." It should be said here that New Labour did do a lot to reverse the damage done to public bodies like the NHS and to reduce poverty, especially among the young, with policies like the minimum wage, Sure Start, and child poverty reduction programmes. But, arguably, it still supported the basic premises of neoliberalism and thought it could ride the tiger to reduce its worst effects – until the tiger bit back, of course.

Getting back to the run-up to Covid, however: for the briefest of moments it looked as though things were about to change, that inequality and the globalized lifestyles of the richest cliques were about to be challenged.

This new challenge was epitomized by that part of the left on both sides of the Atlantic fired up by Jeremy Corbyn and Bernie Sanders. But, as Tooze writes: "The promise of a radicalized and re-energised left, organized around the idea of the Green New Deal, seemed to dissipate amid the pandemic." And, one might add, in the UK amid Brexit. One should also point out that the Corbyn venture had failed before the pandemic happened. There was, however, a bitter irony in all of this: "Even as the advocates of the Green New Deal went down to political defeat, 2020 resoundingly confirmed the realism of their approach." The response to Covid was massive fiscal life support, even bigger than in 2008 after the financial meltdown, thus confirming the 'essential insights of economic doctrines advocated by radical Keynesians and made newly fashionable by Modern Monetary Theory (MMT)'. State finances are not limited like those of a household. As Keynes himself wrote: "Anything we can actually do we can afford." And the actual history of so-called small-state neoliberalism has been a 'series of state interventions in the interests of capital accumulation'.

Covid also demonstrated just how dependent the economy is on the stability of nature. Tooze writes: "A tiny virus mutation in a microbe could threaten the entire world economy." And it tore down the partitions neoliberalism had erected dividing the 'economy from nature, economics from social policy and from politics per se'. According to him the pandemic exposed the 'illusion that there is a thing called the economy that is separate from society'. And when government intervened the markets recovered remarkably well – but the recovery was unequal. "Worldwide, the wealth of the billionaires rose by $1.9 trillion in 2020, with $560 billion of that benefiting America's wealthiest people. Among the surreal and jarring juxtapositions of 2020, the disconnect between high finance and the day-to-day struggles of billions of people around the world stood out," writes Tooze.

Tooze does not provide a solution to the problem except to say that we must shift our worldview in order to be ready to meet the challenges that face us now and may face us in the future. He writes: "If 2020 taught us anything it is how ready we must be to revise our worldview. The Green New Deal was brilliantly on point, but it imagined climate as the most urgent threat to the Anthropocene. It too was overrun by the pandemic." He recommends a kind of open-mindedness 'commensurate with the times we live in'.

We need to change our worldview

It has to be said that this is a pretty thin response to the problem we face today. Sure, openness is an important quality in an increasingly splintered and polarized world. But to say that the Green New Deal was overrun by the pandemic is not to say that it was wrong. We need something like the Green New Deal that was offered by the radical left in the run-up to Covid. The mealy-mouthed lip-service that the candidates paid to climate change during the Conservative Party's unedifying leadership contest, and the economic disaster that has ensued since, just won't do. And it is worrying that Russia's invasion of Ukraine is being used to justify the status quo in the West.

Defending the status quo!

If we are going to save Homo sapiens and other animals, then we need to turn apathetic citizens, who look on at the Westminster cliques as mere spectators, into active participants. And one way to do that is to introduce citizens' assemblies at every level of society – from parish councils to a new People's Assembly to replace the House of Lords.


The dark theatre of the mind

WE intuitively believe that what we see is what there is. Despite philosophers like Kant and Schopenhauer telling us that it is actually the brain that determines how we experience the phenomenal world, it has never felt right; it still doesn't. But how does the brain find out about the world, trapped as it is inside the dark theatre of the skull? As neuroscientist David Eagleman writes in The Brain, 'the brain has no access to the world outside'. He adds: "Sealed within the dark, silent chamber of your skull, your brain has never directly experienced the external world, and it never will."

So the brain relies on sensory organs to pick up information carried by photons, air waves, molecules, texture, and temperature, which it then turns into electrochemical signals. These in turn pass through networks of neurons. Eagleman writes: “There are a hundred billion neurons in the human brain, and each neuron sends tens or hundreds of electrical pulses to thousands of other neurons every second of your life. Everything you experience – every sight, sound, smell – rather than being a direct experience – is an electrochemical rendition in a dark theatre.” Even for those of us who are familiar with this sort of research, it remains mindboggling.

Experiencing the world as we do feels effortless. But it's not. The brain has to put in an enormous amount of effort every second of our waking day just to see. And that's not all: it then has to synchronize all the other senses, all of which it processes at different speeds. For example, light has to go through a more complicated process than auditory signals, which is why sprinters react quicker to the sound of a gun than to a light signal. And yet all these different processing speeds are gathered together to make it seem as though everything is happening at the same time. This is because your 'brain collects up all the information from the senses before it decides upon a story of what happens'. And before any of this happens the brain guesses what's out there and then adjusts its internal model depending on the extent to which its expectations are met. It is, in effect, a Bayesian inference machine.
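Bayes' rule itself is simple enough to sketch. The toy example below is purely illustrative and not drawn from Eagleman – the 'cat or bag' hypotheses and all the probabilities are invented – but it shows a prior belief being nudged by two successive noisy observations, just as the brain's internal model is said to be adjusted against its expectations:

```python
def bayes_update(prior, likelihoods, observation):
    """Return the posterior over hypotheses after one noisy observation.

    prior       -- dict: hypothesis -> P(hypothesis)
    likelihoods -- dict: hypothesis -> {observation: P(observation | hypothesis)}
    """
    unnormalized = {h: prior[h] * likelihoods[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Invented internal model: is that shape in the dark a cat or a bag?
belief = {"cat": 0.5, "bag": 0.5}               # prior: no idea either way
likelihoods = {
    "cat": {"moves": 0.8, "still": 0.2},        # cats usually move
    "bag": {"moves": 0.1, "still": 0.9},        # bags usually don't
}
for observation in ["moves", "moves"]:          # two glimpses of movement
    belief = bayes_update(belief, likelihoods, observation)
# After both glimpses the model is strongly confident it is a cat.
```

The point is the direction of travel: the guess comes first, and each observation only revises it – exactly the sense in which the sealed-in brain can be described as an inference machine rather than a window.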

But how does all this translate into the hard problem of consciousness? How does matter become intentional? For Eagleman the answer is that consciousness is an emergent property. Now we are entering choppy waters. Not everybody is on board when it comes to emergent properties, which essentially purports to explain how consciousness emerges from a sufficiently complex nervous system. Prof Raymond Tallis, writing in Philosophy Now, for example, is unimpressed: “It is, however, becoming increasingly obvious that ’emergence’ doesn’t reduce the problem of life, even less the puzzle of conscious intelligent life. Emergence looks more like a description than an explanation.”

However, Prof Tallis does not explore further work by Prof Giulio Tononi of the University of Wisconsin, who goes beyond the emergent property model to suggest, from his studies of people while asleep and awake, that consciousness requires a perfect balance between what he calls differentiation – that is, enough complexity to represent different states – and integration, which requires 'enough connectivity to have distant parts of the network in tight communication with one another'. Eagleman comments: "In this framework, the balance of differentiation and integration can be quantified, and he proposes that only systems in the right range experience consciousness." Prof Tallis might respond by saying that this is just a more detailed description rather than an explanation. But is there not a point at which a description becomes so detailed that it flips into becoming an explanation? Do at least some scientific discoveries start as descriptions that, in their detail, also involve explanation? Christian Baden writes: "The conditions and rules sustaining the explanation can themselves be described and explained, resulting in substantial overlap between description and explanation." And maybe the balance between differentiation and integration provides the explanation of the description provided by emergence theory.
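These two quantities can at least be given toy information-theoretic stand-ins. The sketch below is emphatically not Tononi's actual Φ measure – it simply uses Shannon entropy for 'differentiation' (how varied a system's states are) and mutual information between two halves of a system for 'integration' (how much the halves know about each other), with invented state sequences:

```python
from collections import Counter
from math import log2

def entropy(samples):
    """Shannon entropy (bits) of an observed sequence of states:
    a stand-in for 'differentiation' - how varied the states are."""
    n = len(samples)
    return -sum((count / n) * log2(count / n)
                for count in Counter(samples).values())

def mutual_information(pairs):
    """I(left; right) for a sequence of (left-half, right-half) states:
    a stand-in for 'integration' - how much the halves share."""
    lefts, rights = zip(*pairs)
    return entropy(lefts) + entropy(rights) - entropy(pairs)

# Two invented 'networks', each recorded as (left, right) half-states.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)] * 25  # varied but uncoupled halves
locked = [(0, 0), (1, 1)] * 50                       # halves in lockstep

# independent: high differentiation, but zero integration (MI = 0 bits)
# locked: each half fully informative about the other (MI = 1 bit)
```

On Tononi's proposal neither extreme would qualify: the first system is all differentiation and no integration, the second all integration and little differentiation; consciousness is claimed to live in the band between them.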

Underlying Prof Tallis's objection to emergence theory is his resistance to the idea that the mind can be described and explained in purely materialistic terms. At present there appears to be no knock-down argument for either materialistic or non-materialistic explanations of the mind and consciousness. It's true to say that materialists have yet to provide definitive proof that consciousness is just a matter of matter. But that is not in itself proof that it is non-material, any more than pointing out that science does not have, and may never have, all the answers about existence and the universe, or universes, is proof of the existence of a god. What the materialist has going for them is Ockham's Razor, in that materialism requires no further explanations, whereas non-materialism does.


False consciousness – or just plain contented?

ONE of the abiding rifts in left/right political philosophy is the approach towards the poorest members of society. The failure of socialism to overthrow capitalism perplexes those on the left of the political spectrum. For those on the right it’s simple: capitalism works, it delivers well-being for most people, so there is no reason to change it.

For their part, however, left-leaning, often middle-class intellectuals either come to despise the working class for its weakness or blame its quiescence on false consciousness created by the dominant ideology. By this reasoning the working classes have been duped by the ruling clique into supporting Conservatism rather than Socialism – in short, they are like turkeys voting for Christmas.

As Nick Cohen wrote in What’s Left: “Margaret Thatcher and Ronald Reagan won repeatedly because large numbers of voters from the skilled working class supported them. They were never forgiven for that because from their different points of view Fabians, liberals and Marxists had hoped the working class would take power under their leadership. When it didn’t, they despised the working class for its weakness and treachery and condemned its members for their greed and obsession with celebrity.” That is quoted in a book by Christopher Snowdon called The Spirit Level Delusion, which seeks to debunk the ideas of Wilkinson and Pickett in The Spirit Level, which purports to show that unequal societies have more social problems than more equal ones. This blog, however, is more concerned with the question of false consciousness and whether it has any explanatory power or is just wishful thinking by the left.

There is no doubt which side Snowdon is on, of course: “Working class indifference to inequality, so long as their own circumstances are improving, is seen as another example of false consciousness by those on the left politically.” But according to him it has nothing to do with false consciousness. He writes: “Decades of affluence, rising wages and home ownership, made the working class less reliant on paternal socialism and the labour movement.” That, however, does not explain why in recent years – with wages stagnant or falling in real terms, particularly in the public sector, underpinned by austerity and the hollowing out of the public sphere – there is still no sign of ordinary people taking up the socialist cause. Worse than that, interest in politics continues to decline apace. At the same time, while the language of class war has largely disappeared from public discourse, it is still very much alive among the super-rich. As investment billionaire Warren Buffett told us: “There’s class warfare, all right…but it’s my class, the rich class, that’s making war, and we’re winning.” And while the government rails against railway workers striking over pay and conditions, according to academics at York University more than £100 billion a year of public money is handed to corporations in various forms in what has been dubbed ‘corporate welfare’. So, what is going on? The simple rhetoric of the right, while seductive, just doesn’t seem to cut it.

A major problem is that our representative government is expressly designed to keep us as witless spectators and to keep us far from the democratic decision-making process. Citizens’ Assemblies might help to counter that problem, but there doesn’t seem to be much appetite for them, perhaps because most people are convinced that what we have is democracy without remainder. But add to that the infantilizing tendency of the advertising industry, which encourages us to abandon critical thinking and delayed gratification; the push to a cashless society with the same effect; and a social media that taps into our psychological vulnerabilities, and you have a much more complicated picture.

There doesn’t appear to be much appetite for Citizens’ Assemblies.

Many would argue that this is exacerbated by our atomized individualism, but, as Herbert Marcuse argues in One-Dimensional Man, there may be a case for making a distinction between atomization and individualism and for asking – individualism for whom? As Marcuse writes, there is a ‘repressive ideology of freedom, according to which human liberty can blossom forth in a life of toil, poverty, and stupidity’. And he continues: “Indeed society must first create the material prerequisites of freedom for all its members before it can be a free society; it must first create the wealth before being able to distribute it according to the freely developing needs of the individual; it must first enable its slaves to learn and think before they know what is going on and what they themselves can do to change it.”

And lest we dismiss Marcuse as just another lefty, Douglas Kellner in his introduction points out that while he does indeed raise the ‘spectre of closing off, or “atrophying”, of the very possibilities of radical social change and human emancipation’ within capitalist society he also ‘depicts trends in contemporary communist societies that he believes are similar to those in capitalist ones’ (One-Dimensional Man was written in 1964).

If anything, this atrophying of the possibility of social change has intensified, as indicated earlier in this blog. Inequality has also increased since then, and we know that around 60 per cent of people in poverty have at least one person in their family working. Of course, there are no knock-down arguments, but there is at least some plausibility in the view that there is rather more to it than the simplistic right-wing approach. However, what we do about it is another matter. It sometimes feels that the cause is lost and that, if you can’t beat them, you might as well join the ranks of the apathetic.


Russia after the revolution

“THE prohibition of oppositional parties brought after it the prohibition of factions. The prohibition of factions ended in a prohibition to think otherwise than the infallible leader. The police-manufactured monolithism of the party resulted in a bureaucratic impunity which has become the source of all kinds of wantonness and corruption.” You might be forgiven for thinking that this quote comes from a Western historian. In fact it is from Leon Trotsky’s The Revolution Betrayed.


Writing in exile in 1936, Trotsky is sniping from the sidelines. He writes: “Why now, after the cessation of intervention, after the shattering of the exploiting class, after the indubitable success of industrialization, after the collectivization of the overwhelming majority of peasants, is it impossible to permit the slightest word of criticism of the leader?” Of course, from Trotsky’s point of view, it wasn’t because of an inherent flaw in the Soviet system. For him, the rot set in with the advent of the Civil War. “The opposition parties were forbidden one after the other. This measure, obviously in conflict with the spirit of Soviet democracy, the leaders of Bolshevism regarded not as a principle, but as an episodic act of self-defence.”

Trotsky was writing at a time of savage Stalinist pogroms. A year after he wrote this the Russian poet Osip Mandelshtam died in a transit camp. He was one of many. It is, however, worth pointing out, as Alan Woods does in the introduction, how things had deteriorated since the revolution in 1917. In The State and Revolution Lenin had stipulated that there must be ‘free and democratic elections and the right of recall for all officials’ and ‘gradually, all the tasks of running the state to be carried out in turn by the workers: when everyone is a “bureaucrat” in turn, nobody is a bureaucrat’ – thus introducing an element of direct, as well as elective, democracy. Remember that this was a year before some women won the right to vote in the UK and 11 years before all women could vote.

Leon Trotsky

As Woods writes: “Contrary to the calumnies of the critics of socialism, Soviet Russia in the time of Lenin and Trotsky was the most democratic regime in history.” Woods is obviously a socialist sympathiser, but the non-Marxist historian E. H. Carr on this point agrees with Woods. In his epic The Bolshevik Revolution he approvingly quotes Lenin in the second All-Russian Congress of Soviets in 1917 as saying: “As a democratic government we cannot evade the decisions of the popular masses, even if we are not in agreement with them.” However, this appears to have been a temporary position, according to Carr, who suggests that there was a ‘dilemma of a socialist revolution struggling retrospectively to fill the empty place of bourgeois democracy and bourgeois capitalism in the Marxist scheme’.


Interestingly, however, Carr makes a startling comparison between Marxism and Adam Smith – the darling of many right-wing thinkers. The latter, writes Carr, ‘has not escaped in recent years the charge of utopianism commonly levelled at Marx and Engels and Lenin’. And he continues: “Both doctrines assume that the state will be superfluous in so far as, given the appropriate economic organisation of society, human beings will find it natural to work together for the common good.” And further ‘both doctrines are consistent with belief in an economic order determining the superstructure of political ideology and behaviour’. And let’s not forget that, like Marx, Smith also believed in a labour theory of value.

Returning to Trotsky, we find that he, in 1936, is still claiming that: “Under a nationalized economy, quality demands a democracy of producers and consumers, freedom of criticism and initiative – conditions incompatible with a totalitarian regime of fear, lies and flattery.” Further: “No new value can be created where a free conflict of ideas is impossible.” This could be a criticism levelled by the 19th-century liberal philosopher John Stuart Mill, with his concept of the marketplace of ideas.

At this point, however, we need a reality check. Despite Trotsky’s often apposite criticisms of Stalin, it is not at all clear whether things would have been any better under his rule. As Ian Thatcher writes in his biography Trotsky, there were several ‘profound weaknesses in Trotsky’s writings. Above all, his alternative programme contained no guarantees that the USSR would be any richer or more democratic under Trotsky’s guidance’. And: “Given his strong conviction of the correctness of his political viewpoints, it is questionable how free and open debate would have been under Trotsky’s leadership. In any case, it is doubtful whether he would have submitted to a majority vote against him.”

Finally, however, Thatcher reminds us: “Even should capitalism flourish, there should be good reason to consider Marxism’s, and Trotsky’s, criticisms of its injustices and flaws. If there is no such thing as perfect planning, it is also highly unlikely that there is perfect competition.” And: “The best of today’s Marxists seek to learn from the mistakes of the past, and place far more emphasis on democracy and the importance of the independent initiative of the working class rather than on the tutelage of individuals.”


An indifferent world

WHAT if the universe is completely indifferent to us and to all life on earth? There is no God or gods and no guiding rationale. It’s an idea that runs counter to the age-old search for meaning – the succour that is supposedly offered by a supreme being. But what if a truly meaningless universe is actually liberating? That’s the position taken by Albert Camus.

And in his fascinating book The Meaning of Life and Death, it is, fittingly, Camus’s position that Michael Hauskeller examines at the end. He points out that it was after the devastation of the two world wars that people began to wonder whether there was something wrong with a world that permitted such horrors, let alone with belief in an all-good, omniscient God. In his novel The Plague Camus reflects this when he writes: “Cold fathomless depths of sky glimmered overhead, and near the hilltop stars shone hard as flints.” It’s a cold, heartless world that Camus paints – no pity, no compassion. But this is the ground from which Camus starts.

For him the absurdity of our existence emerges when our yearning for meaning bumps up against the utter meaninglessness of the universe. According to the second law of thermodynamics the universe is inexorably moving from a state of relative order to ever more disorder, possibly infinitely. And all we can do is hold up this process for a few years before merging into the disorder.

Even if there is some meaning, it will be for ever beyond the limits of our knowledge. In The Myth of Sisyphus Camus writes: “I don’t know whether this world has a meaning that transcends it. But I know that I do not know that meaning and that it is impossible for me to know it.” For Camus, then, meaning does not lie in some transcendent realm beyond our understanding; if there is meaning at all, it is a function of our own understanding and is therefore accessible to us.

In this situation, according to Camus, the most important philosophical problem is suicide – as Hamlet put it, ‘to be, or not to be’. In a world in which there is no meaning, is there any point in existing? Well, Camus believes there is, because this very meaninglessness is liberating and the very foundation of human freedom.

His first move is to claim that the universe is not malign in its indifference but, rather, in its indifference it is actually benign.

And in this benign world we are set free from the shackles of meaning external to existence to live the lives we want to live and to determine how we ought to live. Camus writes: “If the absurd cancels all my chances of external freedom, it restores and magnifies, on the other hand, my freedom of action. That privation of hope and future means an increase in man’s availability.” It might be argued that describing the indifferent universe as ‘benign’ is ascribing to it human qualities that are not justified. The universe simply is, and it’s up to us to make the best of it. This doesn’t detract from Camus’s argument, however – indeed, in a way, his argument might even be enhanced by such a view.

But there remains the problem of what we do with our freedom. We may be free if, ultimately, nothing matters. But as Hauskeller points out ‘if the universe does not make any distinction between good and bad, permissible and impermissible, then it is difficult to see why we should not kill people if it suits us’. For Camus, however, this kind of nihilism misses the point of the absurd. “The mark of nihilism is indifference to life, but the absurd is born out of the clash between the indifference that we encounter in the structure of the world and our own desperate desire to live, and to live well,” writes Hauskeller. “The point is that we are not indifferent to life, certainly not to our own.” If all our ethical life comes from God or the gods or from some rational structure in the universe, then we are entirely dependent on something outside of us. But if there is no guiding principle and no promise of a life after death, only then do we realise how precious life is in the here and now.

Furthermore, humans have the capacity to fight back against the indifference of the universe, to shake our fist at it and demand justice for ourselves and for others by negating its nothingness. As Camus writes in The Rebel: “The moment we recognise the impossibility of absolute negation…the very first thing that cannot be denied is the right of others to live.”

And while there is no meaning in the universe, it is we humans who have the courage to fight back. Camus writes: “I continue to believe that this world has no ultimate meaning. But I know that something in it has meaning and that is man, because he is the only creature to insist on having one.” And, we might add, this also applies to women!

Camus’s idea that humanity finds its own meaning through rebellion against the abyss and the siren call of nihilism while maintaining solidarity with all other humans who are in the same boat is attractive. As Camus says, real rebellion ‘lures the individual from his solitude. Rebellion is the common ground on which every man bases his first values. I rebel – therefore we exist’. Rebel and live!


The weirdness of rationality!

FOR most of human history the world has been understood by humans through the prism of mythology, superstition, magic and gods. Some would argue that it still is. But the Enlightenment was supposed to change all that, or at least some thought that tempering it with a bit of reason wouldn’t be such a bad idea. As D’Alembert wrote in response to Rousseau’s attack on science and rationality, ‘even assuming that we might be ready to yield a point to the disadvantage of human knowledge, which is far from our intention here, we are even further from believing that anything would be gained by destroying it’. And, further, that ‘vice would remain with us, and we have ignorance in addition’.

An example of Slavic mythology.

But this idea of at least trying to deploy a little more rationality, even if humans are not always very good at it, has proved to be surprisingly controversial. Typically, opponents of the Enlightenment set up a strawman by characterizing it as placing the God of Reason above all else and then proceeding to knock it down. But true rationalists are acutely aware of how fragile and precarious rationality is; how easy it is to succumb to our cognitive biases and sink into our social and political comfort zones and echo chambers – and mythology.

Intriguingly, Steven Pinker in his book Rationality acknowledges that no matter how desirable rationality may be, it is not the natural human way. He writes: “We children of the Enlightenment embrace the radical creed of universal realism: we hold that all our beliefs should fall within the reality mindset.”

However, Pinker argues, those who give credence to this creed are the ‘weird ones’. And he adds: “Submitting all of one’s beliefs to the trials of reason and evidence is an unnatural skill, like literacy and numeracy, and must be instilled and cultivated. And for all the conquests of the reality mindset, the mythology mindset still occupies swathes of territory in the landscape of mainstream belief.” As one example, he writes that more than ‘two billion people believe that if one doesn’t accept Jesus as one’s saviour one will be damned to eternal torment in hell’.

The Garden of Earthly Delights by Hieronymus Bosch

And when the so-called New Atheists – Sam Harris, Daniel Dennett, Christopher Hitchens and Richard Dawkins – dared to argue robustly that belief in ‘God fell outside the sphere of testable reality’ they became targets of some quite vicious attacks, not only from religious people but from mainstream intellectuals as well. Pinker also has an interesting take on the Trump administration: “The brazen lies and inconsistencies of Trumpian post-truth can be seen as an attempt to claim political discourse for the law of mythology rather than the land of reality.”

As you may have gathered by now Pinker is here to praise reason, not to bury it. Interestingly, he does not believe in progress – at least not as a teleological force. He writes: “Progress is shorthand for a set of pushbacks and victories wrung out of an unforgiving universe, and is a phenomenon that needs to be explained.” And the explanation, according to Pinker, is rationality. “When humans set themselves the goal of improving the welfare of their fellows (as opposed to other dubious pursuits like glory or redemption), and apply their ingenuity to institutions that pool it with others, they occasionally succeed. And when the successes take note of the failures, the benefits can accumulate, and we call the big picture progress.”

Rodin’s The Thinker

Rationality also has a role to play in moral progress, according to Pinker. “My greatest surprise in making sense of moral progress is how many times in history the first domino was a reasoned argument.” Eventually, after going viral, the conclusion would embed itself in society ‘erasing the tracks of the arguments that brought it there’. For example, a logical argument was required, and provided by the French theologian Sebastian Castellio, against the religious intolerance of John Calvin and the practice of burning heretics at the stake. Today, it just seems obvious, just as it seems obvious, to most people at least, that slavery is wrong. But it was Frederick Douglass, himself born into slavery, who used the rules of logic to demolish the case for slavery.

In essence, then, while rationality isn’t a universal panacea, it does have a universal appeal that transcends our individual concerns. As Pinker writes: “Our ability to eke increments of well-being out of a pitiless cosmos and to be good to others despite our flawed nature depends on grasping the impartial principles that transcend our parochial experience.”

And interestingly, from the perspective of Salisbury Democracy Alliance, he argues that while elections ‘can bring out the worst in reasoning’, representative government ‘could be supplemented with deliberative democracy, such as panels of citizens tasked with recommending a policy’.

So, let’s hear it for that fragile capability humans have for reason. It may not be our natural habitat but it is for this very reason that it needs to be nurtured like the most exotic and rarest of plants.


Do humans need to be commanded?

“A new commandment I give to you, that you love one another; as I have loved you, that you also love one another.” So Christ is reported to have said to his disciples in John 13:34. As it happens it is also the commandment that the Venerable Alan Jeans, Archdeacon of Sarum, chose to form the basis of his sermon at the Civic Service in St Thomas’s Church to celebrate the election of the 761st Mayor of the City of Salisbury – Cllr Tom Corbin.

It is an interesting quote that raises the obvious question: does humanity require commanding in order to ‘love one another’? Or is the ability to love one another, if not always the practice, inherent in humanity?

In his sermon the Archdeacon argued that there are many kinds of love. We might say, for example, that we love our car or a painting or a piece of music.

Not the sort of love Christ had in mind

But Christ explicitly says that this commandment to love one another is a ‘new commandment’. Really? What is he saying? That prior to his commandment people didn’t know how to love one another, or if they did know they didn’t practise it enough, so they needed a commandment to enforce it? It’s a bit like the question that Socrates posed to Euthyphro 2,500 years ago: “Do the gods love holiness because it is holy, or is it holy because they love it?” Does Christ command that we love one another because it is the right thing to do, or is it the right thing to do because Christ commands it? If it is the latter then do we simply have to take Christ’s word for it? This position appears to be endorsed in John 15:7: “If you abide in me, and my words abide in you, you will ask what you desire, and it shall be done for you.” On the other hand in John 15:6: “If anyone does not abide in me he is cast out as a branch and is withered, and they gather them and throw them into the fire, and they are burned.” So, we don’t need to know whether the word of Christ is right, we simply have to follow his word, or suffer the consequences.

But does humanity need a commandment to love one another and the threat of being burned if we do not abide in Christ? It is hard not to sense a whiff of Original Sin in this need for a commandment.

The doctrine that humans inherit a tainted nature through being born, of course, stems from the expulsion of Adam and Eve from the Garden of Eden. As Paul says in Romans 5:12: “Therefore, just as through one man sin entered the world, and death through sin, and thus death spread to all men, because all sinned –” Here we get into the deltoid schisms of Protestant thinking and ideas like ‘total depravity’, in which humans’ motivations, even though they might appear to do good, are always sinful or self-regarding – similar to modern-day secular thinking found in egoistic morality. On the other hand some thinkers, like the clergyman Samuel Hoard (1599-1658), argued for ‘partial depravity’, which basically claims that humanity does have some choice in the matter and can choose salvation and God.

In more modern times these positions are starkly represented by the English philosopher Thomas Hobbes, who took a largely dim view of human nature, and Jean-Jacques Rousseau, who was rather more optimistic.

Thomas Hobbes looking suitably grumpy

For Hobbes human life in the state of nature was ‘solitary, poor, nasty, brutish, and short’. His answer was to relinquish our freedom into the hands of a ‘solitary sovereign’ – the Leviathan, the name of his magnum opus. Hobbes, incidentally, came from Warminster and there is an early edition of his book in the town’s library.

Rousseau, on the other hand, takes the opposite view. For him, we are naturally good in the state of nature and it is civilization that warps that natural goodness, although it should be said that his state of nature was a thought experiment rather than an actual state.

Rousseau looking decidedly sunnier

Nevertheless, for Rutger Bregman in Humankind Rousseau is largely correct. He argues that for most of human history we ‘inhabited a world without kings or aristocrats, presidents or CEOs’, and problems began about 10,000 years ago. “From the moment we began settling down in one place and amassing private property, our group instinct was no longer innocuous. Combined with scarcity and hierarchy it became downright toxic.”

It is fair to say that this is a pretty simplistic view of humanity, and a more nuanced approach will be explored in a future post. But for the moment it should be said that Bregman is not advocating a return to a pre-civilized society, and he acknowledges that things have become a lot better for millions of people over the last 200 years or so. But, he argues, when you ditch Original Sin and Hobbes you find underneath it all that most people are pretty decent most of the time and don’t need a commandment from Christ – or anyone else for that matter – to love one another.

We could also make the point that many evolutionists now believe that altruism forms a part of our genetic make-up and, in its conceptualized form, helps us to love one another – although of course it will always be in competition with our more selfish instincts. Even Richard Dawkins in his celebrated book The Selfish Gene writes: “However, as we shall see, there are special circumstances in which a gene can achieve its own selfish goals best by fostering a limited form of altruism at the level of the individual animal.” What Dawkins fails to say is that a gene cannot literally be selfish, nor can it pretend to be altruistic. You cannot simply conflate the individual gene and the genome. This sort of thinking ends up with logical absurdities like reciprocal altruism. If part of our make-up is indeed altruistic, then it has to be genuine altruism.

One cannot help feel that Christ’s commandment infantilizes humanity. Indeed, he refers to his disciples as ‘little children’. Is it not time that we grew out of this infantilism and took responsibility for our own lives and actions? Immanuel Kant argued that the Enlightenment represented the maturing of humanity. Perhaps it is time that we took this notion seriously.


Levels of consciousness

IT often feels that we are either conscious or unconscious. But are there, as this blog investigates, more levels of consciousness? The idea that there are varying degrees of consciousness has a long and distinguished history, ranging from Plotinus to Jung and Freud in the 20th century. Jung, for example, identified the mineral world, the plant world and the animal world as degrees of consciousness. Freud identified the oral, anal, phallic and genital stages, while many present-day psychologists probe multiple levels of consciousness. And it is in this tradition that Nathan Field lays out his fourfold hierarchy in Breakdown and Breakthrough.

Levels of consciousness

The first level is what he calls One Dimensionality, which is most apparent in very young children. Field writes: “The focus of infant awareness is located in certain physical areas: the skin, the mouth, and the inside of the body which may be comfortably full or painfully distended.” It can also be apparent in some autistic children.

Two Dimensionality is more interesting because it moves out of the Self to the acknowledgement of the Other, but at the expense of an inner life. This two dimensionality is characteristic of schizoid personalities in which ’emotions appear to be skin deep’. Field writes: “There may be a great deal of surface drama, passionate declarations, threats, violent or hysterical gestures but the observer remains strangely untouched, even alienated.”

Two dimensionality

And while there may be surface drama, it is also characterized by opposites that can switch, for example, from love to hate in an instant.

As Field points out ‘politics, news and entertainment are all deeply contaminated by two dimensionality’. In the political arena, reasoned opinions readily degenerate into convictions as political theory becomes fixed in ideology and, finally, crystalized in dogma. Doubt and complexity are difficult to sustain and harden into dead certainties. Here the ‘millions passionately devoted to fundamentalist religions and political beliefs are relieved from the torments of ambivalence and indecision’. There is, however, a positive element to two dimensionality which manifests itself in ‘unwavering loyalty, uncompromising rectitude, unquestioning obedience’, although it’s not difficult to see how these positives might morph into the dark side.

Three dimensionality, says Field, ‘represents all that civilization holds dear: rationality, balance, adulthood, fairness, flexibility, restraint, the ability to listen and to respect the integrity of another’. Field continues: “The intellectual faculty combines with our primary instincts to produce the capacity for imagination, metaphor and symbolisation, which are the basic requirements of all creative endeavour.” And while two dimensionality is characterized by polarity and conviction, three dimensionality is ‘searching, reflective, ambivalent’. As people move from two to three dimensionality they become more rounded and resilient.

A more balanced approach with three dimensionality

More controversial, perhaps, is Field’s conception of Four Dimensionality, which is characterized by awareness of the movement from Self to Other of the sort that can happen between a ‘mother and her baby, between twins, members of the same family, partners, lovers, friends and, not least, enemies’. Although it is often experienced between the Self and the Other, Field stresses that it can also manifest itself as an enriched sense of Self. And he adds: “Whether shared, or experienced in solitude, the four-dimensional state is one that many people have known and tried to convey in art, music, and, most especially, in the paradoxical utterances of mystical literature.”

Spontaneity is important in all of this but Field also points out that it can also be aided by prayer, meditation or therapy.

An important aspect of Field’s theory is that each dimension incorporates the ones below it: the fourth incorporates the third, the third the second, and the second the first. But he insists that each dimension adds something of its own.

But the really controversial aspect of Field’s thought is his interest in shamanism and his belief that it emerges out of the fourth dimension – and that Jung was a shaman: “In so far as Jung was able to assimilate his dissociative and pathological tendencies it places him, like the shaman, in the category of the ‘wounded healer’ or, more precisely, one who heals by virtue of the partial healing of his own wound, since if it had healed completely he might too easily forget how it felt to be sick and the capacity to identify with the patient would be impaired.”

While Freud saw the unconscious as being something to be controlled, Jung embraced it in the form of the collective unconscious, a vast creative force, which also tapped into his research into the medieval tradition of alchemy.

Many people might baulk at the fourth dimension and stick with the third but Field sides with Jung, insisting that the fourth ‘does in fact exist’. He concludes: “It is not a delusion, but carries with it the subjective conviction of being our true state; or at least closer to our true state than everyday consciousness.”


Engage in resistance through dialogue!

IT is often argued, with some truth, that we live in an age of wilful ignorance in which thought is undervalued and we are encouraged to live in the now.

Wilful ignorance

Delayed gratification is discouraged and replaced by an insistence on the present. Commercial institutions have fuelled this process by encouraging us to think of ourselves as free-standing, self-interested individuals who want to buy things NOW, even if it means going into debt. In the political field it means the rise of populism in the USA, Brazil, the UK, Russia, Hungary, Turkey and India, where it has gained power. And in many other countries it lurks beneath the surface.

The rise of populism

But is there another cause for this phenomenon? Well, according to Brazilian philosopher Marcia Tiburi in The Psycho-cultural underpinning of Everyday Fascism – Dialogue as Resistance, yes there is. For her, central to the problem is an absence of shame among many leaders like Bolsonaro, Trump, Johnson, Erdogan, Modi and Putin. It’s this lack of shame which enables, indeed empowers, them to lie with impunity. Tiburi writes: “The ridicule of several of the scenes involving these characters sounds to their followers like heroism. Therefore, this strange heroism of the tyrants of our time has become something ‘pop’ in a process of profound ‘political mutation’.”

The death of shame

In this world consumerism fills the vacuum with the emptiness of consumption. “We flee from analytical and cultural thinking through the consumerist emptiness of language and repetitive language. We flee from the discernment that analytical and critical thinking demand. We fall into the language of consumerism.” Now it should be said here that it would be wrong to suggest that there was a time when humans were perfectly rational but have somehow since become vacuous idiots. It’s more that certain forces are becoming better at exploiting our inherent irrationality and undermining our counterbalancing ability to think – at least some of the time!

With that important caveat in mind then, Tiburi identifies the ‘great voids’ that have emerged in recent years. One is the ‘void of thought’ which Hannah Arendt identified as characteristic of Adolf Eichmann.

Hannah Arendt

This emptiness of thought in Eichmann entailed the ‘absence of reflection, of criticism, of questioning and even discernment’. Tiburi adds: “We can say that, in our time, this is becoming more and more common. More and more people are giving up the ability to think.” And in place of this ability come ready-made ideas, or cut-and-paste ideas, as she puts it, largely distributed through social media networks.

Another great void, according to Tiburi, is an emptiness of feeling. She writes: “We live in a world that is increasingly anesthetized, in which people become incapable of feeling and increasingly insensitive.” It’s not that we don’t have emotions but, she argues, we ‘can speak of an emptiness of emotion precisely in the context in which people seek any kind of emotion.’ Further: “The inability to feel makes the field of sensitivity in us a place of despair. From joy to sadness, we want religion, sex, films, drugs, radical sports, and even food to provoke more feeling.”

Despair in emptiness

Not all is lost, however, because for Tiburi at least part of the answer lies in the encouragement of dialogue, very much like the skill we practice in Salisbury Democracy Café. Tiburi argues that: “Dialogue is not just a form of philosophy, rather philosophy in its pure state. Dialogue is the attitude that can alter the spiritual and material condition in which fascism arises.” For Tiburi dialogue is a ‘type of psycho-social resistance, which holds the power of social transformation at its most structuring level – shaping dialogue matters when we want a democratic society’ and it is also the specific ‘form of philosophy as a practice, or as activism’. Furthermore: “We need an education for democracy that is education for art and poetry, for science and critical thinking.”

The anatomy of critical thinking

And she claims that dialogue at all ‘levels is undesirable in authoritarian systems’.

It’s hard not to equate Tiburi’s thoughts with the ideas of deliberative democracy promoted by Salisbury Democracy Alliance (SDA), both in the democracy café and in its campaign for Citizens’ Juries. It’s the very point that SDA made in its highly successful stand in People in the Park last year – and will make again this year – when it argued that without the engagement of ordinary people in real dialogue in general, and Citizens’ Juries in particular, our representative form of government remains just that – representative and not fully democratic. And, as the Tory grandee Lord Hailsham once said, it is always in peril of slipping into an ‘elective dictatorship’.


The pitfalls of oratory

IS it better to suffer wrong than to do wrong? It’s an interesting question and one that is rarely, if ever, asked these days. It goes beyond mere altruism, which simply demands that we act with the aim of benefiting others without expectation of reciprocal good. This has more to do with the Bible’s claim that one should turn the other cheek when wronged, rather than seek revenge. Yet it is a question that goes back much further in history – to Plato, in fact, in his Gorgias dialogue.


In this famous dialogue Plato depicts Socrates in conversation with two professional orators – Gorgias himself and Polus – both of whom begin by arguing that the orator need do nothing other than persuade others that they are right, but ultimately baulk at the emptiness of this idea. In Gorgias we have an old and experienced orator who finally concedes that the budding orator should first be tutored in ethical standards before he embarks on oratory; in Polus, a younger, less experienced man who cannot bring himself to deny that doing wrong is worse than being wronged.

In our day it’s quite hard to see if this has any resonance. But perhaps it might parallel the politician who wants to speak the truth despite the adverse consequences this might entail, against the one who says what she thinks people want to hear. Or the political party that wants to lead electorates, even though it might suffer in the polls, against the one that shifts and changes in order to get elected, regardless of the truth.

But Socrates goes even further: “As a general rule the man who does wrong is more miserable than the man who is wronged, and the man who escapes punishment more miserable than the man who receives it.” And still further: “Whatever the punishment which the crime deserves he must offer himself to it cheerfully, whether it be flogging or imprisonment or a fine or banishment or death.”

Amazingly, Gorgias and Polus seem to be quite happy to accept this conclusion, even though there appears to be a flaw in Socrates’s argument. And that happens when he tries to draw an analogy between money-making curing poverty, medicine curing disease and justice curing ‘excess and wickedness’.


Apart from anything else, Socrates has shifted away from punishment to justice as though the former were the equivalent of the latter, which it isn’t. Sometimes justice requires something other than punishment, like rehabilitation. And of course punishment is not necessarily a cure at all and it doesn’t always even act as a deterrent. These claims go unchallenged by Gorgias and Polus, who might at least have made a case for a less stringent conclusion like, well, altruism.

Instead Socrates emerges triumphant, only then to face the rage of Callicles, who asks the largely silent Chaerophon, loyal friend of Socrates: “Tell me Chaerophon, is Socrates in earnest about this or is he joking?”


To which Chaerophon replies in one of his very few utterances: “In my opinion, Callicles, he is utterly in earnest.” We then learn that Callicles is of the opinion that conventional morality – although it is not clear that Socrates’s position is at all conventional – is merely an invention of the weak to undermine the strong. The obvious parallel to Nietzsche’s herd mentality undermining the nobility of the powerful and dynamic Ubermensch is hard to avoid. As a classical philologist Nietzsche is bound to have read Gorgias and was almost certainly influenced by Callicles’s views.


In any case, Callicles rounds on Socrates: “Nature…herself demonstrates that it is right that the better man should prevail over the weak and the stronger over the weaker.” As a matter of interest, this position breaches, long before it was formulated, David Hume’s is/ought principle, which states that one cannot infer a value proposition from a factual statement. But Callicles ploughs on: “My belief is that a natural right consists in the better and wiser man ruling over his inferiors and having the lion’s share.”

David Hume; portrait by Allan Ramsay, 1754

Ultimately, however, Socrates succeeds in extracting an important concession from Callicles – namely, that there is a distinction to be made between good and bad pleasures. This allows Socrates to condemn politicians who cravenly pander to citizens’ baser pleasures rather than attend to their wellbeing, and to argue that the correction of individuals, groups and even states is better than the unrestrained hedonism of the powerful originally advocated by Callicles. Callicles, somewhat fortuitously for Socrates, then virtually absents himself from the argument as Socrates rams home his advantage at great length. And, finally, he concludes that a politician should only be allowed to enter the public realm after they have had sufficient instruction in morality, and with the aim of improving the character of the populace.

By such high standards many of our modern-day politicians fail dismally. Perhaps today we would talk about improving the conditions of citizens, rather than improving their moral character. But it often seems that our politicians under our representative form of government are more interested in winning elections, with the welfare of their citizens coming almost as an afterthought. Now, this is not true of all politicians or all political parties. Indeed, most of the time it’s not a matter of bad or corrupt politicians but the extent to which a party has to bend its policies in order to be acceptable to a largely indifferent electorate. Indeed, representative government is expressly designed NOT to engage citizens in politics, but rather to turn them into spectators. Which is why it doesn’t really qualify as a democracy but rather as an elective dictatorship, and it will remain so until a degree of deliberative democracy, including citizens’ juries and assemblies, is introduced – something that Salisbury Democracy Alliance has been campaigning for for years.

Of course, some politicians are moderate by nature and have no need to moderate their behaviour. But it’s a real problem for less moderate politicians who might want to effect fundamental changes in society but are forced to moderate their views in order to be electable. Perhaps the solution is not to moderate one’s views but to continue to hold true to one’s position while being prepared to compromise in order to get as close as possible to one’s aims.


To humanity and beyond!

WHAT must it be like to reject all of our beliefs? Liberalism, humanism, neoliberalism, socialism, Christianity – indeed, all religion – in fact ALL the characteristics and ideas with which we define ourselves. All gone. Even the category of being human. What if we only care about ourselves and have no interest in others?

Well, that is what the maverick philosopher Max Stirner (1806-1856), in his startling position, demands that we do.

Nothingness? Even here there is something

His is an anti-moral, anti-political and anti-social philosophy – indeed, at first sight at least, he is anti everything. It does at least have some resonance with our atomistic society. And a common response to this is to build communities again, to build a political and social system that reconnects isolated individuals in order to create the sort of society that enables autonomous individuals to flourish across all demographics. Stirner, however, is utterly contemptuous of such efforts but, surprisingly, he is not a nihilist. According to Jacob Blumenfield in All Things are Nothing to Me, Stirner argues that it is ‘only after we learn how to care for ourselves can we begin to care for each other as singular equals, and not as generic representatives of groups, classes, identities, and states’.

The maverick philosopher Max Stirner

This, claims Blumenfield, is ‘Stirner’s provocation’. From this hollowing out Stirner ‘defends insurrection, advocates crime, and incites individuals to find each other in free unions or communes that can expand one’s power against the state’.

For Stirner ‘any theory which only considers the aggregate of conditions…from which something emerges will never be able to fully show how that emergent something becomes itself in all its singularity’. And he includes all theories including the ones mentioned above but also materialism, empiricism, idealism – even humanness itself. Because, as Blumenfield explains, what Stirner calls the ‘unman’ refers to the uniqueness of the individual ‘which is not explainable by humanness’.

Interestingly, it has been noted that Stirner’s ideas seem to chime with the ancient Hellenistic philosophy of Stoicism, which asks how ‘I should live’, not how ‘I should live in a community’. But, writes Blumenfield, Stirner’s position has nothing to do with egoism. And that is because the ego is a concept, not a real thing. To recycle a phrase by Gilbert Ryle when he was attacking Descartes’s mysterious immaterial self, the ego is just a ‘ghost in the machine’. “In other words,” writes Blumenfield, “the actuality of any unique ‘I’ is not identical with its expression in language or thought.” For Stirner, we must bring to the fore all our theories and preconceptions – and then dissolve them.

One of Stirner’s central claims is that freedom cannot be given by a government, state or political party – it can only be taken. Freedom is not a gift. It is, rather, appropriated by the individual. And while political liberalism is a positive development, it is not the answer because it ‘pierces through the veneer of freedom, but goes no further’. Meanwhile, socialism unmasks exploitation and inequality, but is problematic because labour is taken to be the new essence of humanity – and, like humanness, labour is only one of the individual’s properties, not its core essence.

Some thinkers use the communities of creatures like ants and bees to draw parallels with humanity and, in particular, the emergent properties of consciousness. Stirner disagrees.

The key shift is to regard individuals as owning themselves and, as owners, they ‘make themselves individuals’. Blumenfield writes: “To be an owner is to individuate oneself through the appropriation of one’s own conditions and the dissolution of everything alien to them.” But, and this is crucial, owning oneself does NOT mean self-interest or selfishness. So there is no succour here for Ayn Rand or the libertarianism of Robert Nozick. Blumenfield argues that the modern fashion for ‘finding oneself’ often means simply ‘adapting one’s soul to the needs of the market’. It is a critique of ubiquitous mindfulness in the West that it merely helps the individual to adapt to poor working conditions, rather than fighting to change them, thus normalizing those conditions. Stirner exhorts us not to ‘know oneself’ but to ‘own oneself’.

There’s no room for egoism in Stirner’s world.

One of the main criticisms of political liberalism is that it often relies on a mythical pre-civilized state of nature in which humanity is either in a brutish and chaotic state (Hobbes) or in a noble one which is destroyed by civilization (Rousseau). But Stirner will have none of this, even though he is often accused of holding such a position. According to Blumenfield, Stirner argues that ‘society precedes individualism, binding us in all sorts of relations of dependency from birth onwards’. Indeed, although Marx was severely critical of Stirner, the latter’s position is actually closer to Marx’s than to the individualism of Hobbes or Rousseau because, for him, society is the state of nature. And leaving society means not alienation but an ‘association of free individuals, building the commune’, sounding now more like the anarchist Kropotkin than Marx or the liberals. As Blumenfield writes: “Breaking social ties allows us to associate ourselves freely and create new forms of intercourse.” And in the process of breaking down the barriers between US and THEM we must, urges Stirner, unite with others to ‘abolish the conditions that constrain us’, again sounding like Marx but moving beyond his insistence that humanity is defined by work.

For Stirner, then, individualism, properly understood, just is communism, and Blumenfield bewails that, sadly, ‘but unsurprisingly’, the secret of communism has not been taken up since Stirner, ‘neither by communists nor individualists, Marxists nor anarchists’. Maybe that’s because Stirner’s vision is psychologically impossible. Is it really possible for humans to dissolve all their preconceptions without creating new conceptions; to reject humanism as being only a part of what it means to be an individual? Maybe not – and maybe Stirner serves as a reminder that communism, properly understood, may be unachievable. But he may still serve as a purge or corrective to our fondly held beliefs. And, perhaps, as an assault on the selfish egoism of Homo Economicus, while reinforcing the need to ‘abolish the conditions that constrain us’.


How the West made Putin

AS Putin sends his troops and tanks into Ukraine in an appalling piece of unprovoked aggression that beggars belief, it is, perhaps, useful to remember the role that the West had in creating the conditions that made it easier for someone like Putin to take control of Russia.

Ukrainian troops prepare to defend their country

So, let’s recall what was happening in 1991. In July that year, as the days of the USSR were numbered, Mikhail Gorbachev was still in power. He was using his policies of perestroika (restructuring) and glasnost (openness) to lead the Soviet Union towards the kind of representative government enjoyed by Scandinavian countries. The press was free, and elections had been held for the Russian Parliament, local councils, president and vice-president. Gorbachev wanted a free market economy but with a strong social security net along the lines of the Scandinavian model.

As Naomi Klein points out in The Shock Doctrine – The Rise of Disaster Capitalism, the West was at first very supportive of Gorbachev and ‘on a visit to Prague, Gorbachev made it clear that he couldn’t do it all alone’. He said: “Like mountain climbers on one rope, the world’s nations can either climb together to the summit or fall together into the abyss.” He was about to attend his first G7 meeting.

But, as Klein reports, ‘what happened at the G7 meeting was totally unexpected’. She writes: “The nearly unanimous message that Gorbachev received from his fellow heads of state was that, if he did not embrace radical economic shock therapy, they would sever the rope and let him fall.”

Mikhail Gorbachev in the 1980s

Gorbachev wrote of the event: “Their suggestions as to the tempo and methods of transition were astonishing.”

The shock doctrine was developed by the economist Milton Friedman, the proponent of unfettered capitalism, now often referred to as neoliberalism. He believed that major crises could bring about real change and he thought it was his job, and those of his followers, to strike early and wherever possible to ensure that the mass privatization that was at the heart of the Chicago School of Economics gained traction.

Milton Friedman

But let’s not forget that two years before that G7 meeting relations between the Soviet Union and the West were very different. In his address to the Supreme Soviet on 1 August 1989 Gorbachev said: “Western Europe is realizing more and more how essential it is to achieve mutual understanding and cooperation with the Soviet Union.” It was a different matter after the G7 meeting. According to Klein, Russia faced a choice: either carry on with the reform of its political set-up towards representative government or, as she puts it, ‘in order to push through a Chicago School economic programme, that peaceful and hopeful process that Gorbachev began had to be violently interrupted, then radically reversed’.

A month after the G7 summit Boris Yeltsin became the hero of the new Russia when he stood on a tank during a failed coup.

Boris Yeltsin addresses the crowd from the famed tank

Not long after that he forced the resignation of Gorbachev. However, Yeltsin was much more sympathetic towards the Chicago School way of thinking, which had its first run-out in Pinochet’s Chile in the 1970s. And a series of violent events unleashed by Yeltsin, culminating in a coup on 4 October 1993, brought him to power. Yeltsin imposed shock therapy on the fledgling representative government but could only defend it by…dissolving representative government, receiving enthusiastic support from the West.

There followed a fire sale of Russia’s public wealth, which led to the rising power of the fabulously wealthy oligarchs. But just as Yeltsin positioned himself as the saviour of representative government, so Putin positioned himself as the stabilizing, reassuring figure in 1999 and, as Klein puts it, ‘several oligarchs engineered a quiet handover from Yeltsin to Putin, no election necessary’.

Putin and the oligarchs

Putin was originally seen as a backlash against the shock therapy even as ‘tens of millions of impoverished citizens were still excluded from the fast growing economy’. However, the warning signs were there with a ‘new breed of “state oligarchs” rising around the Kremlin’. Meanwhile a ‘growing number of journalists and other critics die mysteriously, and the secret police enjoy seemingly total impunity’. Nevertheless, as Klein puts it, ‘the memory of the chaos of the nineties has made many Russians grateful for the order Putin has restored’.

Of course, the events we are seeing today might have happened anyway without the imposition of shock therapy in the 1990s, but perhaps we should at least acknowledge that without the intervention of the G7 as Russia was peacefully reforming itself under Gorbachev, the world might have been very different.


Why the authorities hate XR

NON-VIOLENT protest or civil disobedience is often thought of in a passive, negative or defensive way. We shuffle along on marches, sit down on roads blocking traffic and annoying people. As Shelley writes in The Mask of Anarchy:

“With folded arms and steady eyes,/And little fear, and less surprise,/Look upon them as they slay/Till their rage had died away.”

These words conjure up an image of passive resistance, not aggression. But then he writes towards the end of the poem:

“Rise like lions after slumber/In unvanquishable number -/Shake your chains to earth like dew/Which in sleep had fallen on you -/Ye are many – they are few.”

And that is a very different image. It gives the impression of taking matters into your own hands, of positive, even aggressive action against oppression.

In a way it’s the difference between purely passive protest and the more proactive protest of Extinction Rebellion (XR). Some people argue that people should be allowed to protest as long as they don’t rock the boat. But it is the XR protest that captures the imagination and grabs the headlines precisely because it is annoying and causes disruption, prompting governments to reach for the statute books. And the idea of ‘aggressive non-violence’ or what Albert Einstein called ‘militant pacifism’ is the key concept in The force of non-violence by Judith Butler, the Maxine Elliot Professor of Comparative Literature and Critical Theory at the University of California, Berkeley.

Protesters sitting down outside the Ministry of Justice in Westminster, London, during an Extinction Rebellion (XR) climate change protest.

In this book Butler makes two main claims. Firstly, non-violence has to be understood ‘less as a moral position adopted by individuals…than as a social and political practice undertaken in concert’. And secondly, perhaps more contentiously, ‘non-violence does not necessarily emerge from a passive or calm part of the soul’. As she points out, it is often an ‘expression of rage, indignation, and aggression’. It’s not merely that non-violence can be aggressive. Butler argues that ‘non-violent forms of resistance can and must be aggressively pursued’, while also insisting that aggression should not be conflated with violence.

Not normally the image one might associate with non-violent protest!

In fact, the book’s title is derived from Gandhi, who insisted that satyagraha should be seen as a ‘non-violent force’ that can generate ‘matchless power’. Intriguingly, however, Butler also makes the point that in practice non-violence cannot always be guaranteed because when a protestor puts her body in the way of oppression she is ‘presenting a force against force’. And she adds: “Non-violence is less a failure of action than a physical assertion of the claims of life.”

Butler draws a distinction between liberal individualism and what she calls the inherently social activity of non-violent protest: “Some representatives of the history of liberal political thought would have us believe that we emerge from a state of nature,” she writes. Indeed, it has been argued often in this blog that various theories of liberalism from Hobbes to Rawls have invented the mythical concept of the social contract in largely failed attempts to create a link from the fully formed, free-standing, contextless individual to the collective. Others, of course, and in particular right-wing libertarians, don’t even attempt to build a bridge. But as Butler writes: “I want to suggest, however, that no-one actually stands on one’s own; strictly speaking, no one feeds oneself.”

Butler takes a strongly collectivist position of the individual, similar to the Marxist view that our consciousness is largely determined by our social and material being, not the other way round. “The individual is not displaced by the collective, but it is formed and freighted by social bonds that are defined by their necessity and their ambivalence,” she writes.

Another key element of Butler’s thesis is equality, which for her resides in the matrix of violence and non-violence, or rather the point at which non-violence morphs into violence. She writes: “For non-violence to escape the war logic that distinguishes between lives worth preserving and lives considered dispensable, it must become part of the politics of equality.” And it must accept the ‘interdependency of lives’. It has to be said that this is probably the least convincing and most perplexing part of the book. In particular there is no reason to suppose that equality lurks somewhere between violence and non-violence, although she does concede that non-violence must be part of the politics of equality rather than the creator of equality, as she seems to suggest earlier.

Butler’s analysis of non-violence treads a delicate, often Byzantine and confusing pathway between violence and non-violence and in doing so delineates a nuanced defence of protestors like XR and Insulate Britain.

Some people may bridle at their tactics, which can cause considerable disruption to ordinary lives (it should also be said that it is easy to support them until such time as one is actually caught up in one of their protests!) But the protestors might respond by arguing that ‘ordinary life’ might not remain ordinary for long if humanity fails to take effective action against climate change, and, further, that a bit of disruption now is worth it if it results in such change. And if it is a measure of effective action that our government has felt it necessary to introduce legislation in the form of the Police, Crime, Sentencing and Courts Bill – which will return to the House of Commons on 28 February to consider amendments from the House of Lords – parts of which are designed to make such protests more difficult, then it has already been very effective. And Butler’s book explains why governments hate activists like those in XR – they certainly rock the boat!


From resistance to…POWER!

IT often seems that if we want change of any kind there is nothing we can do. In our representative form of government we are encouraged not to bother ourselves with politics, except to put a cross on the ballot paper every few years. After all, so they say, we elect people to do the thinking for us, don’t we? Indeed, as Salisbury Democracy Alliance (SDA) continues to campaign for a Citizens’ Jury in the city, this is a common refrain we have encountered.

But we also often forget that the so-called cradle of democracy in ancient Athens largely eschewed elections, which it feared would produce nothing more than elected oligarchies, or cliques, much as we have now – although we shouldn’t forget that the Athenians also excluded women and slaves from the democratic decision-making process. Which is why, of course, modern-day forms of Citizens’ Juries and Assemblies involve the random selection of people and ways of ensuring a cross-section of the community.

Even Tory Grandees are not averse to calling our form of government into question with Ken Clarke recently describing our current government as being close to an ‘elected dictatorship’, echoing the words of Lord Hailsham in the 1960s.

And even the founding fathers of the American republic did not regard the system of representative government as being democracy. Indeed, up until the end of the 18th century elective governments and the sort of direct government pioneered by the Athenians were thought to be incompatible. It is not clear how the word ‘democracy’ came to be attached to representative government, but one theory is that Maximilien Robespierre welded the two together when the French experiment in direct democracy after the French revolution went horribly wrong.

Maximilien Robespierre

But whatever the cause, the unification of representative and democratic politics ensured that campaigns for change tended to concentrate on extending the franchise. As Matthew Bolton writes in How to Resist: “The Chartists, the Suffragettes and others endured prison and faced death in their struggle for the chance to have a say in the governance of the country.” However, he also argues that it’s a mistake to assume that they were only ‘fighting for the chance to put a cross in a box every few years’. Rather, they were, writes Bolton, fighting for power – to have more influence. Now we have the vote, however, we seem to be content to sit back and let others run things. We have ‘mistaken politics for Parliament and have come to see democracy as something to watch on television or follow on Twitter – or worse, to switch off from completely; losing trust in politicians, losing trust in the media, losing trust in the system’.

For Bolton, however, and, for that matter, SDA, democracy means ’embedding political action into our day-to-day lives, in our communities and work places’. His book is a rallying cry for a new kind of populism – that is the ‘mass participation of people in politics’ but not ‘populism as an approach by politicians to divide and rule, but populism as democracy, for the people by the people’.

What Bolton wants is for us to take back control from the ruling cliques (certainly NOT elites) who gain power both in politics and business on a comfortable conveyor belt from private or grammar schools to a ‘decent university and a great career’, whatever the stripe of political party.

Bolton, who is deputy director of Citizens UK and lead organizer for London Citizens, believes that if you want change you need power. It is not that power in itself is corrupting – rather that it is so ‘unevenly distributed’.

The uneven distribution of power.

And one of his main claims is that in order to change the mindset of powerlessness, we need to understand that, actually, everyone has some power and that those ‘with less power tend to have more than they think, or they do not use their power strategically enough’. He is not averse to using self-interest as a tool for effecting change, and he should know because he has used it with great success, notably in his Living Wage campaign. His book is punctuated with examples of how the seemingly powerless found their power and, while their campaigns may have started with self-interest, they also often turned into the common interest.

One of his key concepts is that protest needs to be turned into action. The difference, according to Bolton, is that protest is often simply reacting to power, as is resistance (which makes one wonder why he chose How to Resist as the title of his book); whereas action means having a plan. If you have a plan you are ‘initiating the changes and someone else is going to have to react’.

You need a plan to effect change, not just protest.

Bolton’s book, while it has some theory, is packed with practical advice on how to effect change and examples of success. As such it is a refreshing booster for anyone jaded by constantly being knocked back by the established order, or having their enthusiasm sucked out of them by energy vampires.

Perhaps the last word should go to the anthropologist Margaret Mead, who is credited with saying: “Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has.” Hear, hear!


Judgement versus Reckoning

DEPENDING on what you read, Artificial Intelligence (AI) is either the ultimate threat to humanity – always supposing we survive the climate crisis, of course – or our great saviour. Some argue that AI is developing so fast that it will take over the jobs currently done by humans, leaving humanity without meaning or purpose.

Will AI take over from humans – does it matter?

Others, while agreeing with that premise, respond by saying ‘bring it on’. If the late David Graeber is to be believed, then many of the occupations we have now are nothing more than bullshit jobs that exist only because of the Protestant work ethic – work is good regardless of how pointless it is. The argument runs that we should let robots get on with the jobs they are best at, and let humans do the work that robots can’t do – jobs that require compassion and judgement, for example. The second argument is often accompanied by the assertion that we need a Universal Basic Income to compensate for the reduction in paid labour, paid for by increased productivity from the robots.

Underlying the second argument there appears to be a more profound claim being made about the difference between humans and AI. And that is certainly the view of Brian Cantwell Smith in The Promise of Artificial Intelligence – Reckoning and Judgement. He starts the book by writing: “Neither deep learning, nor other forms of second-wave AI, nor any proposals yet advanced for third-wave, will lead to genuine intelligence.” And he draws a distinction between the brute reckoning of AI and the ‘human-level intelligence and judgement, honed over millennia, which is of a different order’.

He reserves the word ‘judgement’ for the ‘normative ideal to which I argue we should hold full-blooded human intelligence – a form of dispassionate deliberative thought, grounded in ethical commitment and responsible action’. And although he acknowledges that not all human activity reaches this level, nevertheless it is an ideal to which ‘human thinking should ultimately aspire’ – an aspiration that is beyond AI.

Human intelligence is of a different order from AI

Reckoning of the kind he attributes to AI refers to the kind of ‘calculating prowess at which computer and AI systems already excel – skills of extraordinary utility and importance’. Within this matrix Smith is most concerned that we humans, in our admiration of AI, attempt to emulate it by relying on ‘reckoning systems in situations that require genuine judgement’, and that by being ‘unduly impressed by reckoning prowess, we will shift our expectations of human mental activity in a reckoning direction’. Rather, he argues that we should indeed use AI to ‘shoulder the reckoning tasks at which they excel’ while we ‘strengthen, rather than weaken, our commitment to judgement, dispassion, ethics, and the world’.

A similar point was made by Prof Stuart Russell in his Reith Lectures on Radio 4 before Christmas. And although things don’t look great on issues like climate change, it is not all doom and gloom: coordinated human responses of this kind have, for example, been successful in virtually eliminating the use of chemical and biological weapons.

Smith outlines four main areas in AI research that he thinks could counter the dangers inherent in systems that don’t exhibit these sorts of human attributes: 1 – take the body seriously; 2 – take context and surrounding situations seriously; 3 – consider the possibility that the mind is not just in the brain but extends into the environment as a form of ‘cognitive scaffolding’; 4 – don’t separate thinking from full-blooded participation and action.

Towards the end of the book, after a considerable amount of detailed analysis of AI, Smith returns to his main theme of distinguishing between reckoning and judgement, arguing that ‘ultimately, you cannot deal with the world…without judgement’. And with judgement comes accountability, which ‘can serve as the grounds not just of truth but of ethics as well’. Although he acknowledges that emotion, as well as reason, plays a significant role in our lives, he believes that the sort of judgement to which he refers – that is, ‘authentic judgement’ – demands ‘detachment and dispassion’, which can free us from the ‘very vicissitudes most characteristic of emotional states’. At the same time, however, he rejects the ‘idea that intelligence and rationality are adequately modelled by something like formal logic’.

This is a rich and rewarding book. And although Smith doesn’t refer to it, his position echoes Enlightenment thinkers who, properly understood, did not privilege Reason above all else, but argued that a little more reason in a world dominated by emotion, superstition and blind faith, might not be a bad thing.

Politically, it underpins a more communitarian approach over the individualism of liberalism. It is embeddedness in the world and in our communities that is the key feature of judgement and distinguishes it from the brute fact of reckoning.


Ultimate reality – what if anything is the truth?

“Whether you’re a scientist or not, consciousness is a mystery that matters. For each of us, our conscious experience is all there is. Without it there is nothing at all, no self, no interior and no exterior.”

So writes Anil Seth, Professor of Cognitive and Computational Neuroscience at the University of Sussex, in his book Being You. And in writing it, of course, he explodes the myth, as espoused by Tartaglia in the previous blog, that neuroscientists have to abandon the concept of consciousness. This is true of some neuroscientists and neuro-philosophers, but by no means all of them. Of course, Seth still has to make his case that consciousness can be captured in a satisfactory way by materialism, but he is in no doubt that his ‘preferred philosophical position…is physicalism’ – or materialism, as we have been calling it here. In this, the fourth and final, blog on this subject – for now at least – we explore his materialist approach and try to find a solution to the great materialist v idealist debate.

Anil Seth

In a really simple and effective way he points out that materialism and idealism have similar, indeed, mirror problems. Whereas for materialists it’s the problem of how the mind emerges out of matter, for idealists it’s ‘how matter emerges out of mind’ – although, as we have seen, the philosopher Gilbert Ryle argues that this is a pseudo problem because mind and matter should not have been split asunder by philosophers like Descartes in the first place.

Seth begins his climb up from brute matter to consciousness by claiming that there are actually several levels of consciousness which are linked to the idea that ‘every conscious experience is both informative and integrated, inhabiting the complex middle ground between order and disorder’. He simplifies this claim by writing that ‘a system is conscious to the extent that its whole generates more information than its parts‘. But Seth goes further than this and his position rests on the argument, much like Kant and Schopenhauer as we have seen in previous blogs – and indeed Tartaglia – that the world as we perceive it is a ‘construction of the brain’ or, as Seth puts it, a ‘controlled hallucination’ in order to distinguish it from the uncontrolled hallucination of our dreams.

The mystery of consciousness

In short, Seth argues that the brain is a ‘prediction machine’ and that what we ‘see, hear, and feel is nothing more than the brain’s “best guess” of the causes of its sensory inputs’. Interestingly, if true this would furnish a materialistic account of intentionality, or aboutness, so beloved by some idealists who argue that matter simply can’t accommodate this aspect of consciousness. What Seth is saying is that matter, in its particular manifestation as the brain, can.

According to Seth our perception of a cup of coffee, for example, is caused by our sensory signals, but the image itself is constructed by our brains as its best guess of what is out there. He writes: “We never experience sensory signals themselves, we only ever experience interpretations of them.” This, again, is very close to the transcendental idealism of Kant and Schopenhauer, both of whom acknowledge the existence of an ultimate reality that lies forever beyond our knowledge.

Still further, Seth argues that what the brain is doing is deploying Bayesian logic or abductive reasoning, often referred to as ‘inference to the best explanation’, the insights from which are central to ‘understanding how conscious perceptions are built from brain-based guesses’.
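Seth’s ‘best guess’ idea can be sketched as a toy Bayesian update. The hypotheses, priors and likelihoods below are invented purely for illustration – a minimal sketch of posterior inference, not anything taken from Seth’s book:

```python
# A minimal, illustrative sketch of Bayesian "inference to the best
# explanation": given a noisy sensory signal, pick the hypothesised cause
# with the highest posterior probability. All numbers here are invented.

def best_guess(priors, likelihoods, observation):
    """Return the hypothesis with the highest posterior P(h | observation)."""
    # Unnormalised posterior: P(h) * P(observation | h)
    scores = {h: priors[h] * likelihoods[h][observation] for h in priors}
    total = sum(scores.values())
    posteriors = {h: s / total for h, s in scores.items()}
    return max(posteriors, key=posteriors.get), posteriors

# Two hypothetical causes of a dark shape glimpsed on a kitchen table.
priors = {"coffee cup": 0.7, "teapot": 0.3}
likelihoods = {
    "coffee cup": {"small shape": 0.8, "large shape": 0.2},
    "teapot":     {"small shape": 0.3, "large shape": 0.7},
}

guess, posteriors = best_guess(priors, likelihoods, "small shape")
print(guess, posteriors)
```

Here the ‘coffee cup’ hypothesis wins because its prior and its fit with the sensory signal jointly outweigh the alternative – an ‘inference to the best explanation’ in miniature.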

For Seth all this has a profound impact on who we are – or think we are. For him the ‘ground-state of conscious selfhood’ is ‘formless, shapeless, control-orientated perceptual prediction about the present and future physiological condition of the body itself. This is where being you begins, and it is here that we find the most profound connections between life and mind, between our beast machine nature and conscious self’.

Curiously, both the idealism of Tartaglia and the materialism of Seth have the same basis – the Self. For Tartaglia that leads to his form of idealism, which thinks of ‘ultimate reality as something within each of our perspectives, while materialism does not’. As we have seen, however, this is a mischaracterization of materialism, or at least of the positions of materialists like Seth. Tartaglia’s position is also in danger of falling into a solipsistic abyss in which each individual flails around unconnected to any other Self. It also ignores the possibility, as Seth suggests, that our internal experience is formed by the physical world, even if we have no direct intuition of that world.

Is the world a construct of the brain?

Seth also dodges the solipsistic problem by acknowledging that even though there is no actual essence of self, part of what gives us the sense of selfhood is what he calls the ‘social self’, which is ‘all about how I perceive others perceiving me’. And he adds: “It is the part of me that arises from me being embedded in a social network.” Accordingly, the ‘social self emerges gradually during childhood and continues to evolve throughout life.’

We have come a long way since the first blog on ultimate reality was posted last year. It has felt sometimes that there is no principled way of deciding between idealism and materialism – and dualism doesn’t seem to solve the problem because it too has the seemingly irresolvable mind/body bifurcation. Maybe there simply isn’t a single ultimate reality. Maybe it’s a bit like the wave/particle dualism of photons or the duck/rabbit illusion.

Is it a duck or a rabbit?

On the other hand either idealism is true or materialism is true – but not at the same time!

But if one were forced to make a decision then it could be argued that materialism just edges it mainly because it avoids the problem of solipsism and acknowledges that Ultimate Reality resides outside of us as individuals. No doubt there are idealists out there who have answers to meet these problems and nothing that has been written here suggests that either side has a knock-down argument. But for now at least that’s all folks, and this blog will move on to other topics.


Narrowing Ultimate Reality

IN the last blog we investigated the somewhat bewildering range of positions on ultimate reality. So, it is now time to narrow things down. And to do that we will be eschewing dualism, or at least remaining agnostic about its truth, simply because of the seemingly insurmountable problems it has with how two different substances can interact with each other. No doubt its proponents have a way of resolving these problems, but that may be for another blog. For now we will be concentrating on trying to determine whether Idealism or Materialism is true – and of course if one or the other is found to be true then dualism falls by default. Helping us through this enquiry are two thinkers – James Tartaglia with his book Gods and Titans, which promotes Idealism, and Anil Seth in Being You, in which he promotes Materialism.

James Tartaglia of Keele University

Surprisingly, perhaps, Tartaglia begins with a plausible account of materialism, which is also where he started as a philosopher. What makes Materialism seem obviously true, he writes, is that we are living in a ‘physical world out there which exists independently of us’. Furthermore, with the rise of science we have an excellent description and explanation of this physical world. And while there are still some philosophers out there who believe there are immaterial things like minds and gods, or God, this is ‘obviously old-fashioned, superstitious nonsense: there are no spooky, immaterial things floating around in the physical world’.

It is true that this is a plausible account of ultimate reality and it is one that has held sway for the last 100 years or so. But it is at this moment that Tartaglia makes a startling claim: “This is just one big misunderstanding. You can pick holes in anything…but there is nothing to be said for any of the above.”

His most important attack is on what he takes to be the materialists’ attempts to ‘discredit our natural, subjective understanding’ of our experience – although, as we shall see later, this criticism does not apply to all materialists.

It’s all subjective

But Tartaglia makes the point that if there really are immaterial things like minds then it cannot be right, as the materialists suppose, to ‘think of them floating around in the physical world’. He continues: “Any philosopher who ever seriously contended that minds or experiences were non-physical was not thinking about them objectively, but rather subjectively – in terms of the subject who has them.”

Tartaglia’s positive argument for Idealism begins with: “Each of us lives through a stream of conscious experience, which is intermittently interrupted by sleep, and then, eventually, permanently ended by death.” It then gets a little more complicated because he argues that the stream of consciousness is all that our lives consist in.

“Everything we know or care about either enters into our experiences or else we believe in it in order to make sense of what does.” Science, far from describing Ultimate Reality, simply describes and explains what we experience, not experience itself. “Experience is all we can be sure has independent existence.” The key difference from materialism is that ‘idealism thinks of ultimate reality as something within each of our experiential perspectives, while materialism does not’. So, for Tartaglia, then, ‘ultimate reality is to be found within each of our individual experiential centres’.

Interestingly, Tartaglia argues that we need to see that ‘mathematical physics does not describe the world we see and touch, but it is rather a means of predicting and controlling our experiences of that world’ and we ‘need metaphysical beliefs rooted in the concrete reality of experience, rather than the abstract predictions of mathematics’.

This argument is intriguing because, as we shall see in the next blog, Seth argues the same point from a materialist’s position except that for him this predicting and controlling that the brain does just is our consciousness.

There are at least four main problems with Tartaglia’s position. One is, as we see here, that he assumes materialists have to deny our individual subjective experience and consciousness to maintain their position; as we shall see, this is not necessarily true. Secondly, he also assumes that the only alternative to idealism is the objective world as unveiled by science. But there is another reality that he ignores, and this is the noumenal or world-as-it-is-in-itself of Kant, or the Will according to Schopenhauer, which is, for them, forever beyond the knowledge of scientists or individual experience.

Arthur Schopenhauer

At the risk of drifting into politics, it could also be argued that Tartaglia’s vision of Ultimate Reality draws us away from the world into our solipsistic selves and downplays our engagement with the world.

It is also odd to suggest that if there is an immaterial mind then it has to exist outside of the physical world. Far from solving the problems of how anything immaterial can exist in a physical world, this simply pushes the problem one step back and risks splitting Tartaglia’s insistence on mono-idealism into the Cartesian dualism he rejects.

In the next, and final, blog in this series on Ultimate Reality we shall look at a modern take on materialism and see if it can solve the problems of Tartaglia’s position and counter some of his attacks on materialism.


The labyrinths of Ultimate Reality

THE first move here, following on from the last blog, is to give a brief definition of Ultimate Reality – and there are two possibilities. The first is that it is whatever the universe is in itself, regardless of our position within it. The second is that it is whatever presents itself to our understanding, regardless of our position within it. Another important distinction to make is the difference between Ultimate Reality and metaphysics – the latter being the method by which we develop our understanding of Ultimate Reality rather than Ultimate Reality itself.

In the last blog we looked at the history of materialism and idealism. So, we now begin with a closer look at what these terms mean. It could be argued with some cogency that the most basic or purest form of materialism is physicalist materialism, which takes Ultimate Reality to be whatever the discipline of physics asserts it to be. A variant of this is what is often called emergent materialism, within which certain complex entities like minds, while they have their origins in the brain, are not wholly reducible to it.

Of the first type, Rex Welshon writes in Philosophy, Neuroscience and Consciousness that ‘conscious properties must somehow be properties of physical things’. Of the second type, neuro-philosophers, writes Welshon, argue that although ‘conscious properties’ remain within the ‘framework of science’ they nevertheless retain an ‘open-minded willingness to refrain from inferring that conscious properties are also reducible to micro-physical properties of neural events’.

Turning now to idealism, one of its most important aspects is the concept of an essence, both in the abstract and the concrete. So, in this sense, according to many idealists at least, there is the universalism of humanness – a type – of which all humans are tokens. But many also argue that there is such a thing as concrete essence, which relates to a specific class of things like humans or dogs – and because this is, unlike the fixed abstract universal, dynamic and developing, it can accommodate the individual. It is likely that these two concepts can be merged, as when abstract universal human rights apply very concretely to individuals.

One of the other aspects of idealism is that the mind cannot be reduced to the brain, nor a higher level of reality, like consciousness, to the firing of neurons in the brain. In fact idealists go further, arguing that while matter can be explained by the mind, the mind cannot be explained by matter. Indeed, Arthur Schopenhauer, following on from Immanuel Kant, argues in The World as Will and Representation that the ‘world is my idea’. That is, the ordinary everyday phenomenal world is created by our brains, and the world beyond our representation of it – the world as Will, as he put it – must forever be beyond our understanding, although it can at least be inferred to be undifferentiated.

Is the world as we know it created by our brains?

In terms of Ultimate Reality many idealists either believe that it rests in the everyday, solipsistic experience of each individual, as Scottish philosopher David Hume or Bishop Berkeley believed, or, at the other extreme, there lies the pantheistic idealism of Baruch Spinoza. Most philosophical idealists, however, attempt to steer a middle course between these extremes.

The third aspect of Ultimate Reality is dualism, whose adherents claim that the mind is a non-material entity that not only cannot be reduced to the brain but doesn’t even have its origin there. This distinguishes it from emergent materialism and from those idealists who claim that Ultimate Reality is purely non-material, although it can come close to thinkers who try to find a middle way. The distinctive aspect of dualism is that its adherents believe there is indeed a material world in which resides the brain, but the brain is not part of the immaterial world of the mind. In this sense, then, Ultimate Reality is made up of these two entirely separate and unconnected substances, although it’s not clear whether the mind can be described as a substance as such.

Rene Descartes, of course, is the best known exponent of dualism in the modern era, although it was Gilbert Ryle in the 20th century who memorably and disparagingly dubbed Descartes’s immaterial mind the ‘ghost in the machine’ in his The Concept of Mind.

Karl Marx is often thought of as a pure materialist but, as Karl Popper notes in The Open Society and its Enemies, there is a sense in which he too can be described as a dualist. In the third volume of Capital, for example, he clearly equates the Kingdom of Necessity with the material world and humanity’s interaction with matter. But his idea of the Kingdom of Freedom, even though it has its origin in the Kingdom of Necessity, nevertheless transcends it when it breaks away into the immaterial world of the mind, beyond the drudgery of material machinations. Not a pure dualist, then, because of the link with matter, but Marx does seem to think that Ultimate Reality is, ultimately, dualist. Perhaps Marx could be seen as one of those thinkers adopting a middle way – which, of course, is very far from his political thought!

In the next blog we look at ways of narrowing the field down, to make it more manageable in an attempt to work towards some sort of conclusion.


What is ultimate reality?

IT’S a big question. Perhaps the biggest that humanity can ask itself. And it’s one that also feeds into our sense of meaning as we shall see. Whatever the answer is, indeed whether there is an answer at all, helps to explain and locate our place in the universe, or multi-verse.

What is ultimate reality?

Even asking the question ‘what is reality?’ says something about the universe itself because it means that creatures have evolved in it – homo sapiens – who are here to discuss the question in the first place – a phenomenon that is sometimes called the anthropic principle.

There are two main strands of thought when we talk about ultimate reality, which come under the titles ‘materialism’ and ‘idealism’. Simply put, materialism states that the world is entirely made up of material objects, without remainder. So, no immaterial or supernatural entities. Everything is contained within the material world – even consciousness. Idealism, in contrast, holds that the immaterial – like the mind and, yes, consciousness – or the spiritual, like deities, are at the core of the world and crucial to our understanding of it.

It might be helpful here to give a brief history of the two approaches. It should be pointed out, firstly, that materialism is not a new idea.

All is matter in materialism.

In Western philosophy it can be traced back to Greek thinkers of ancient Athens like Leucippus and Democritus, who were born in the 5th century BCE. Famously, Democritus asserted that ultimate reality consisted of atoms. Epicurus followed, and he argued that, although the universe was indeed made up of atoms, there was an indeterministic element to them. According to him, atoms fall to earth in parallel lines, but change is explained by a chance deviation, causing them to collide.

Materialism largely disappeared from the scene with the rise of scholasticism but was revived by the Catholic philosopher Pierre Gassendi in the 17th century and then by the thoroughly materialistic philosophy of thinkers like Thomas Hobbes (incidentally, Hobbes was born in Warminster and there is an early copy of his magnum opus Leviathan on display in the town’s library).

Charles Darwin’s theory of evolution by natural selection boosted materialistic thinking. A key figure in 20th century materialism is the Oxford philosopher Gilbert Ryle, who famously castigated Descartes for his dualism, dismissing his immaterial mind as the ‘ghost in the machine’. He also argued that the traditional problem associated with materialism – how to reduce the immaterial mind to the brain – was really only a pseudo problem because the two shouldn’t have been separated in the first place. A modern day materialist philosopher is Galen Strawson, who holds that ‘we are wholly physical beings’.

Idealism also has an ancient history in Western philosophy and can be traced back at least to thinkers like Plato with his Theory of Forms or Ideas.

Plato’s idealism

Unlike materialism, this way of thinking flourished during the medieval period, although St Thomas Aquinas in the 13th century did have a materialistic streak in him. As Denys Turner notes in his Thomas Aquinas – A Portrait, although he equated the intellect with the immaterial soul he did at least acknowledge that ‘the human intellect is deeply rooted in, not separate from, the animal and vegetative life of a human person’.

Idealism in its purest form is represented by Bishop George Berkeley in the 18th century, for whom the phenomenal world of hard objects exists only while it is being perceived by a subject equipped with sense organs. Later in the 18th century Immanuel Kant tried to find a synthesis in which he replaced Berkeley’s ‘subjective idealism’ with ‘objective idealism’.

Immanuel Kant

Kant argued in his Critique of Pure Reason that the human self creates knowledge from sense impressions, on which we impose certain universal concepts, which he called categories. This process creates the world as we know and understand it, beyond which lies the world as it is in itself – forever and necessarily beyond our knowledge.

Arthur Schopenhauer idolized Kant but also corrected and streamlined many of Kant’s arguments, including reducing his rather cumbersome categories to just three – time, space and causation.

Arthur Schopenhauer

Having given this brief history and introduction to materialism and idealism, the next blog will delve deeper into the various nuances involved. It can sometimes feel as though this is an either/or situation – one is either a materialist or an idealist. And although this is fundamentally true, there are many degrees to consider, with some arguing that there is a range of views from extreme to moderate materialism – which is also true of idealism. Then, of course, there is dualism, which has been touched upon here but not, as yet, explored.

This is going to be a long four-blog journey but one that is, hopefully, worth the ride. At this stage it cannot be stated what the conclusion will be, or even whether there can be a conclusion. So, fasten your seat-belts and get ready for an exploration into – ultimate reality!


From the marshmallow mind to Citizens’ Assemblies

MANY argue that short-termism is the curse of representative government. The Taliban, for example, famously said when troops entered Afghanistan in 2001 that while the invaders had watches, ‘we have time’. The Chinese have a similar long-term view. But in representative governments everything is geared to the short term – from our electoral cycles to the machinations of the marketing and social media giants who want us to act and buy NOW, even if it sends us into debt. We are dominated by the tyranny of the NOW, and that has consequences for the way we deal with – or fail to deal with – climate change and the future itself.

For Roman Krznaric in The Good Ancestor our ‘societal attitude is one of tempus nullius: the future is seen as “nobody’s time”, an unclaimed territory that is similarly devoid of inhabitants’. But it doesn’t have to be like this. As Daniel Kahneman noted in Thinking, Fast and Slow, while we do think short, we can also think long. This ability could be seen as Homo sapiens’s greatest asset, along with its tendency to work collectively – an asset that eluded the Neanderthals, who were probably more intelligent but less social. So, if our ability to think long-term is crucial to our success as a species, it is particularly perverse that it is being so actively undermined.

Krznaric graphically describes these two aspects of our brains as Marshmallow versus Acorn.

The marshmallow or…
…the acorn

Obviously, it is the latter that he wishes us to tap into rather more than we do. It’s a major problem because, as he writes, ‘seeking the instant thrill of a dopamine rush…has been intentionally designed into the technology’ that we use. However, this phenomenon flies in the face of our evolution because, as psychologist Daniel Gilbert says, we are ‘the ape that looks forward’.

Accordingly, what we need to do, according to Krznaric, is recapture our sense of deep time from the dominant ideology that time is money, which developed with the ‘growing merchant class in medieval Europe’. He also explores other ways of encouraging acorn thinking, including developing a legacy mindset; cathedral thinking, or the art of planning into the distant future; and developing a transcendent goal or lodestar for humanity – like, for example, the idea that ‘human beings are not separate from nature but a dependent part of the living planetary whole’, which can act as a ‘compass for humanity’.

One of the big ideas at the heart of the book – and there are a few – is what Krznaric calls Holistic Forecasting, a way of helping shift our gaze beyond our immediate concerns. He is, of course, well aware that our ability to predict the future in our enormously complicated and fast-moving world is limited. A 20-year study by political scientist Philip Tetlock, for example, showed that the predictions of experts from a range of organisations were ‘extremely inaccurate’.

However, Krznaric is convinced that that is not the end of the matter. Some prediction clearly does work – we know that the giant tech companies are able to use Big Data gleaned from our own use of the internet to predict our preferences and, ultimately, to manipulate those preferences in both the commercial and political worlds, which is not necessarily a good thing. But Krznaric reckons there is one pattern that repeatedly pops up in human history that can be beneficial for humanity as a whole – the S-curve.

The S-curve

While the S-curve, or sigmoid curve, will not tell you specifics, it does state that nothing grows for ever – and it expressly counters the dominant economic model of endless growth.

It is in the light of the S-curve that Krznaric argues for what he calls the ‘transformation path’, in which the aim is to ‘safeguard and promote conditions to allow the flourishing of life on Earth for generations to come’ in a ‘world where the old institutions of representative democracy and growth-dependent economies lose their dominant position and are replaced by the new political, economic and cultural forms’. Of course, the relevance of this kind of thinking was shown up starkly during the COP26 talks in Glasgow.

Krznaric suggests a number of ways – ecological and cultural among them – to get to where we need to be to shift from marshmallow to acorn thinking, but one of his most interesting sections is on what he calls ‘deep democracy’. In particular he calls for a ‘time rebellion’ to be the ‘vanguard rebel movement to reinvent’ – or perhaps to create for the first time – a fully functioning democracy. He splits this movement into four: 1 – Guardians of the future, who ‘represent and safeguard the interests of disenfranchised youth and future generations’; 2 – Intergenerational rights, which involve legal ‘mechanisms to guarantee the rights and well-being of future generations’; 3 – Self-governing city states; and 4 – Citizens’ Assemblies, which are, of course, something that Salisbury Democracy Alliance has been campaigning for for years.

As Krznaric writes: “The rise of citizens’ assemblies signals an extraordinary development in the history of modern democracy: a revival of the ancient Athenian model of participatory democracy.” He points out that citizens’ assemblies are an ‘exercise in slow-thinking, allowing participants the time and space to learn about and reflect on long-term issues facing society’.

He describes his book as being hopeful rather than optimistic, but it can be difficult even to be hopeful when one sees the myopia of government and the corporate world. But we have to hold on to the fact that sometimes things do change. Look at the sudden and unexpected collapse of the Berlin Wall and the Soviet Union. Look closer to home to see how the Austrian School economists Hayek and Mises were political outliers for 30 years or so until the political door opened with Thatcher and Reagan.

We have to hold on to the belief that change is possible, which is why Salisbury Democracy Alliance continues to fight for a Citizens’ Jury in Salisbury. We are the Time Rebels!


In pursuit of beauty

BEAUTY, they say, is in the eye of the beholder – although it’s probably more accurate to say it’s in the visual cortex of the beholder, but that’s a subject for a future blog. Beauty, however, performs many other functions. An elegantly stroked cover drive for four in cricket is somehow valued more than the hack over cow corner with the same result. The same is true for all sports. In maths the search always seems to be for the elegant solution to a problem: if it isn’t beautiful, then the concern is that there must be something wrong with the solution. And as physicist Brian Greene claims, the universe itself is elegant – or at least it will be if string theory turns out to be correct. In ancient Greek philosophy Plato’s Theory of Forms is beautiful in its sheer simplicity. Beauty is often seen to be good, even if some artists try to undermine that concept, but is it morally good? Well, that’s the claim made by Heather Widdows in her new book Perfect Me.

International Pakistani batsman Babar Azam executes the perfect cover drive

In this book Widdows argues that the ‘beauty ideal is increasingly presenting itself as and functioning as an ethical ideal for very many people’. There is, she claims, a ‘duty to be beautiful’ and for those who ‘fall under it the beauty ideal provides a value framework against which individuals judge themselves, and others, as being good and bad’. And she continues: “As such, the beauty ideal is functioning for some, as their overarching moral framework, to which they must conform to think well of themselves irrespective of, and in addition to, other metrics by which they judge themselves.”

This is heady stuff, and she could easily be accused of ignoring the harm that the beauty ideal can do, were it not for the fact that Widdows recognises this problem and confronts it head on. “I do not mean to underplay the extreme harms that attach to a dominant and demanding beauty ideal. The harms to individuals who engage, individuals who do not engage, and to us all are extensive and devastating,” she writes, but adds that ‘to simply tell women not to engage is unrealistic and ineffective, and, as I will argue, profoundly unethical’. However, she adds: “How we look should not be, as increasingly it is, our very selves.”

Widdows makes the point early on in the book that beauty has long been associated with morality and refers to Plato for whom ‘beauty is the only spiritual thing we have by instinct, by nature, and it is love of beauty that sets us on the moral path towards goodness and moral virtue’. In contrast in many traditional stories and fairy tales ugliness and evil are considered to be one and the same – think of the ugly sisters in Cinderella. And the contrast also plays a significant role in Sergio Leone’s The Good, the Bad and the Ugly. For some, Widdows argues, beauty is the ideal to work towards for and in itself, while for others it may also be a ‘means to other goods, and some may not value beauty at all’.

Widdows spends a considerable amount of time defining what is meant by beauty in the modern context but what it means essentially is that we ‘are good when we have resisted bad food…and eaten healthily’…while ‘not engaging in beauty activity is not merely a prudential failure, an aesthetic failure, or failure to conform to some social norm…but a moral failure’. However, Widdows does not herself ‘endorse beauty as an ethical ideal’ but rather that we ‘should recognise what is happening and part of this is an ethical turn’.

What are we to make of this argument? It may come as a shock to those of us who do not put much stock in beauty that we are not seen as being as moral as those who do. But this isn’t really what Widdows is saying. So what is she saying? This is actually quite difficult to determine because, although at best this book contains some subtle and nuanced arguments, at worst it can be quite confusing and muddled. But it seems to be that even if the beauty ideal fails as a moral ideal for some, we should at least acknowledge that for others it is one. In that context she writes: “While it is the case that beauty matters more, it matters as well as and not instead of all the other qualifications, skills, and achievements necessary for success.” And: “If we carry on regardless, ever more isolated in our quest for the perfect me, the future will be bleak indeed.”

Is beauty really an ethical ideal rather than a delusion? One of the many problems with this assertion is that beauty only seems to be an ethical ideal for those who are beautiful or who value beauty, and it is only for them that it functions as an ethical value – which is horribly circular and gets us nowhere. Indeed, it is this circularity that circles infuriatingly throughout the book. Another major problem is that one cannot help feeling that Widdows has simply committed a category error by conflating the sense of feeling good when we seek beauty with actually being good – and surely these are not the same at all.

A moral egoist is not troubled by altruism

Further, valuing one’s own beauty, or at least seeking it for oneself, is an extraordinarily self-centred, egoistic activity that excludes the other-regarding activities captured in the altruistic ideal. As such, for some at least, it doesn’t qualify as a moral ideal at all, and those who think it does are merely deluded. However, it should be added that moral egoists might disagree!


The myth of the Social Contract

Social atomization

IT is a common observation, though no less powerful for being so, that we live in an atomized society where the individual rules supreme and the collective is dead. As Margaret Thatcher once said, there is no such thing as society – or words to that effect. The key philosophical definition is provided by methodological individualism, which, as Steven Lukes writes in Debates in Contemporary Political Philosophy, asserts that ‘all attempts to explain social (or individual) phenomena are to be rejected…unless they are couched wholly in terms of facts about individuals’. It was this fundamental principle that was seized on by the likes of Hayek and the philosopher Robert Nozick, and it eventually entered the political arena as what is commonly called neoliberalism. But as Lukes points out, it is an ‘exclusivist, prescriptive doctrine about what explanations are to look like’ that ‘excludes explanations which appeal to social forces, structural features of society’ and ‘institutional features’.

This methodological individualism is anathema to communitarian philosophers (communitarianism is not the same as communism, of course) like Michael Sandel, for whom it expressly excludes people for whom a sense of belonging to a community is constitutive of who they take themselves to be. And in The German Ideology Marx wrote: “The production of ideas, of conceptions, of consciousness, is at first directly interwoven with the material activity and the material intercourse of men – the language of real life.” But note those words ‘at first’, because Marx goes on to claim that people cannot become autonomous individuals ‘as long as they are unable to obtain food and drink, housing and clothing in adequate quality and quantity’. It should also be noted that in her magnum opus The Origins of Totalitarianism Hannah Arendt argued that atomized societies, in which communal networks are shattered, are prime breeding grounds for dictators like Hitler and Stalin.

It is relatively easy to see how the Big State versus Small State debate fits into this philosophical dialogue, and it is into this explosive arena that the economist Minouche Shafik dips her toe – into what, it has to be said, are very shallow waters indeed – with her book What We Owe Each Other.

For her the fundamental concept is what she calls the ‘social contract’, apparently unaware that this device is in itself an expression of political liberalism, ranging from the thought of Thomas Hobbes through John Locke and Jean-Jacques Rousseau to John Rawls in the 20th century with his Original Position. The social contract is a device by which more socially minded liberals try to bridge the gap between the fundamental political unit of the individual on the one hand and society on the other. But, as Sandel has pointed out many times, it expressly excludes a communitarian approach to society and so consistently fails to build that bridge.

So, Shafik’s discourse is riddled with her underlying and unacknowledged liberalism. As an example she rejects the concept of Universal Basic Income mainly because the recent experiment in Finland failed to ‘help people find work by giving them support to learn new skills or start a new business’. After two years, she notes, the ‘evidence showed no impact on employment – participants were neither more nor less likely to find a job than someone on unemployment benefit’.

A common view about UBI but does the evidence back it up?

She seems to be blissfully unaware, as indeed was the Finnish government, that UBIs are not designed to be job-creation devices but to create stability in people’s lives and to help address inequality. What the experiment did show, as the more socially minded Ed Miliband points out in Go Big, is that a UBI does not necessarily result in people dropping out of the jobs market, thus countering a common concern about UBIs.

Instead Shafik argues that targeted benefits are a better option, without apparently understanding that it is precisely this system that has become so unwieldy and punitive in today’s splintered workforce. She argues that the ’empowerment of workers can be achieved through better minimum wages, benefits, unions and retraining programmes’, without, again, understanding that unions struggle to survive and recruit in an atomized society. She also seems to be unaware that the work ethic is itself a problem in a society where so much work is likely to be taken over by artificial intelligence, leaving humans to do pointless jobs – or, as David Graeber put it, bullshit jobs – just to maintain the culture of work for work’s sake.

Compare all this with the much more profound problem posed by Sandel in What Money Can’t Buy: The Moral Limits of Markets, when he argues that we have drifted from a market economy to a market society and asks ‘how can we protect the moral and civic goods that markets do not honour and money cannot buy?’ And: “Not only has the gap between rich and poor widened, the commodification of everything has sharpened the sting of inequality by making money matter more.”

Ultimately Shafik is unable to extricate herself from her establishment positions and high-ranking roles in the World Bank, International Monetary Fund and the Bank of England, and she bases her entire argument on the myth of the social contract that exists only in the minds of political philosophers. She seems to be unaware, also, of the work of Wilkinson and Pickett in The Spirit Level and The Inner Level, in which they identify inequality, rather than poverty as such, as the main cause of various social ills for everyone – rich and poor. (It should be noted here that both these books have come under severe criticism of late, which will be the subject of a future blog.) There is no mention of inequality in the index of her book – a black hole at its heart, in addition to her failure to acknowledge her own political and philosophical foundations and her undue reliance on the myth of the social contract.


Disobey – and take charge!

SOME argue that we are living in a spectator society – one in which, if people take any interest in society, politics and democracy, they do so from the side-lines. Reality TV sums all this up – and perhaps Gogglebox is its perfect manifestation with us, the viewer, watching other people watching TV. The argument goes that in our consumerist society most people are happy to acquiesce in life as a spectator sport. If true, then this is truly Kafkaesque as people willingly contribute to their own obsolescence.

The philosopher Frédéric Gros, however, will have none of this. He explores the history of disobedience, claims that philosophy is, or at least should be, disobedient, and urges us to refuse to accept the obvious or to acquiesce in anything. In short, he urges us to disobey and take political responsibility – to take back control.

In his appropriately named Disobey, Gros writes that he wants to ‘present the problem of disobedience from the ethics of politics’. He acknowledges how difficult it must be for the poverty-stricken to fight for themselves ‘while an indecent elite can earn in a few days what they never save up in a lifetime’. It is in this context that there is a tendency to believe the myth that these ‘social inequalities are natural’ – a view perpetuated by the super-rich clique because, for them, ‘disobedience amounts to anarchy’.

It is in the early stages of the book that Gros starts to get really interesting when he refers to what he calls ‘surplus obedience’ in which people ‘commit to their own submission with energy and desire’.

Stanley Milgram’s classic experiment

This willingness to obey was demonstrated in Stanley Milgram’s classic, if controversial, experiment in the 1960s during which ordinary people from all walks of life were instructed to inflict what they wrongly believed to be steadily increasing electric shocks on ‘learners’ up to levels that would have been fatal had they been real. Although Gros doesn’t refer to this infamous experiment, he does argue that ‘what must be resisted is not power in its established forms, but above all our own desire to obey, our adoration of the leader because it is precisely this desire, this adoration, that gives him his hold’. Before we gain the power to effect the change that Matthew Bolton advocates in his How to Resist, we first need to learn how to disobey. As Gros writes: “To be free essentially means wanting to be free.” And disobedience leads to what Gros calls the ‘right of resistance…a right recognised for people when the laws fail to fulfil their initial purpose: to build concord and work for the utility of all’.

For Gros, obedience to authority is not a given because the citizen always has the ability to take the responsibility implicit in disobedience and, in doing so, truly take back control. “Rather than individual positions expressed by way of voting slips politely slipped into the ballot box – which we are told is the kernel of democracy – it is a matter of returning to the living essence of the contract: we make the body of society by disobeying collectively.”

At the core of Gros’s position is what he calls the ‘non-delegable subject’, reminiscent of existentialism, which is ‘now threatened by individualism, relativism or subjectivism’. This is because the ‘non-delegable point in each of us is precisely the principle of humanity, the demand of a universal’.

The Enlightenment has come in for considerable criticism in recent years for supposedly privileging cold reason over emotion, although this blog has repeatedly stressed the over-simplification of this view. Interestingly, Gros defines the Enlightenment in the sense that Immanuel Kant did, as a kind of coming of age for humanity or ‘the capacity of emancipation, independence, autonomy’ or the ‘capacity of dissension’. From this Gros develops his theory of taking responsibility through dissent: “Enlightenment = coming of age = critical judgement = examination = care of self = thought.”

Another powerful, and related, claim of Gros’s is that it is not obedience that is the characteristic of responsibility – but disobedience. There is a problem here as well, though, to the extent that the individual who takes on this responsibility feels a ‘burden weighing on my shoulders’.

Gros responds to this problem by acknowledging that it is ‘impossible to stand for too long in the ethical fire of the subject responsible for everything, without being driven mad’. Nevertheless, this responsibility can act as an ideal or, as he puts it, a ‘necessary provocation’.

All this seems to be far from where we started – watching people watching TV – and it seems inconceivable that our love of obedience can be broken by Gros’s erudition. But as always with political philosophy it is possible to break his thought down into manageable chunks, as when, for example, he writes that ‘to think is to disobey’ and that we should stop becoming ‘traitors to ourselves’. And maybe in discovering the ‘politics of disobedience’ we may cause the rich and powerful to quake in their boots!


Land ownership and tax

Was the Roman slave market the origin of our sense of ownership?

DOES it make sense to say that anyone owns land? Ever since the times of the Roman Empire we have had a notion of ownership in terms of absolute dominion over property. But as the late David Graeber wrote in Debt: The First 5,000 Years, this idea is ‘really derived from slavery’. “One can imagine property not as a relation between people but as a relation between a person and a thing, if one’s starting point is a relation between two people, one of whom is also a thing.” This is how the Romans saw slavery and, according to Graeber, it’s also the origin of the word dominion, ‘meaning absolute private property’. In contrast to this view Graeber argues that a better definition of ownership is not ‘really a relation between a person and a thing’ but an ‘understanding or arrangement between people concerning things’ in which we refrain from interfering with one another’s things.

It is somewhat ironic in these circumstances that what we call ‘landowning’ in the UK actually in law means only that the ‘landowner’ holds land in estate. Absolute possession rests with the Crown. It then becomes relatively easy to imagine this property being socialized, even though this would be, to say the least, politically controversial.

However, there is an alternative approach that does not argue for socializing land but for taxing it. In his 2015 book Land Martin Adams argues that the ‘value of land is best shared, and that when we profit from land we profit from society’. This was also the argument used by Henry George in his seminal 1879 book Progress and Poverty. It was written as a way of undermining the Social Darwinism of thinkers like Herbert Spencer, which provided the ideological underpinning for reducing the tax burden on the rich by shifting it on to the poor and the middle classes.

George denied the theory of natural superiority, which also justified the eugenics movement, and argued that economic inequality emerged out of allowing a few people to monopolize natural opportunities and denying them to the rest of society.

For some this would lead to an argument for land nationalisation. But not George: “Recognising the common right to land does not require any shock or dispossession. It can be reached by the simple and easy method of taxing only land values,” he writes. This, he claimed, would ‘make possible a higher and nobler civilization’. Some of his observations still have a shock resonance with us today: “So long as the increased wealth that progress brings goes to building great fortunes and increasing luxury, progress is not real. When the contrast between the haves and the have-nots grows ever sharper, progress cannot be permanent.” As a matter of interest, George reverts to the labour theory of value proposed by Adam Smith and Karl Marx, as opposed to the circular argument identified by Mariana Mazzucato in The Value of Everything in which ‘finance is valued because it is valued, and its extraordinary profits is proof of that value’ – an argument that conveniently side-steps the value imparted by labour and leads to valuing wealth extraction equally with wealth creation.

It is also interesting to note that George identifies the shift from the notion of land being common property in ancient times to one of absolute or exclusive ownership in Roman law. As we have seen, his critique of land ownership does not lead him to ‘abolish titles and declare all land public property’ but to ‘abolish all taxes – except on land values’. George writes that the policy would reduce inequality by distributing one part of the proceeds to ‘individual producers – as wages and interest’ and the other to the ‘community as a whole’. Perhaps somewhat optimistically, George thinks ‘it would become possible to realize the goals of socialism without coercion’.

George addresses many of the objections that are likely to arise against his proposals, including the sort of objections raised against Universal Basic Income – particularly the claim that without poverty people would become idle and that ‘labor must be driven, driven with the lash’, while the idle rich simply need to be given more incentives through what has been called corporate welfare. George writes: “Nothing could be further from the truth. Want may be banished but desire would remain.” Humans may only be animals but we are the ‘unsatisfied animal. Every step we take kindles new desires’.

It is easy to read George with a world-weary cynicism – after all we’ve been here before and will be here again and again. We may continue to rail against the same injustices that George railed against only to be frustrated by the forces of reaction.

The forces of reaction continue to frustrate progressives.

But who can deny the force of his argument? “We cannot permit people to vote, then force them to beg. We cannot go on educating them, then refuse them the right to earn a living.” The forces of progress may seem weak at times but George reminds us that we cannot give up the fight.


The myths that justify inequality

HOW did it come to this? “In 1971 Britain was among the most equal societies on earth in terms of both household income and wealth. Today we are one of the most unequal.” So writes Robert Verkaik in Why You Won’t Get Rich. For him it is largely the result of government decisions. For, as Philip Alston, UN rapporteur on extreme poverty and human rights, wrote, the UK government had inflicted ‘great misery’ on its people with ‘punitive, mean-spirited and often callous austerity driven by social engineering rather than economic necessity’. Needless to say, in some quarters he was vilified for his observations.

How did this happen and why do we allow it to persist? Well, part of the reason must be that the machinations of the super-rich and their wealth-extracting activities are cloaked in secrecy. It takes a dedicated academic at York University to inform us that more than £150 billion is handed out in ‘corporate welfare’ every year, directly or indirectly. Dr Kevin Farnsworth, of the university’s department of social policy and social work, estimates subsidies, capital grants, tax benefits, insurance and advocacy, as well as transport, energy and procurement subsidies, to be worth about £93 billion a year. He suggests that indirect benefits, including wage subsidies, education and public health care, are worth £52 billion, while the annual legacy of the 2008 bank bailouts and other crisis measures adds a further £35 billion.

Secrecy conceals more than £150 billion in corporate welfare

In contrast, every penny is accounted for in welfare payments: according to the government’s own statistics, the net rate of loss from overpayments in 2019 to 2020 was 1.9%, or £3.6 billion, up from the 2018 to 2019 rate of 1.5% (£2.8 billion). But it is the obscurity of the source of wealth for the super-rich, combined with the political apathy, or wilful ignorance, encouraged in certain corners of the establishment, that helps to explain why the process of impoverishment continues. In our consumer society many people are content to be spectators of our government processes rather than engage with them.

Throughout the last decade or so there has also been the apparent paradox of high employment and high poverty. Verkaik argues that the reason for this is twofold – 1) a ‘decade of cuts in benefits directed by policies of austerity’. And 2) the ‘insecure nature of new kinds of low paid work’. This phenomenon was explained in detail by the late David Graeber in his Bullshit Jobs.

Meanwhile, one of the most astonishing aspects of modern life is the complete misrepresentation of the City as the paradigm of wealth creation, competence and probity, which somehow gets conflated with the view that the City enriches us all. The main problem with this view is that it is almost completely untrue. As Verkaik points out, the ‘only people getting rich in the City are the people working there’. And as one client wryly observes of one institution: “It took me three years to realise why the partners were the only ones driving the expensive cars.”

Casino Capitalism

Casino Capitalism is an apt description because most people who make money in the City are just, well, lucky. Verkaik writes: “There are plenty of studies to show that a portfolio of randomly selected stocks can perform as well as a carefully assembled one.” And because the City doesn’t actually produce anything, apart from more money, it has to keep finding ways of creating – or increasingly extracting – wealth from the rest of us. “Every penny the City makes is paid for by people working outside the financial sector,” writes Verkaik and, amazingly, the ‘value of trade in foreign exchange alone is 100 times the value of world trade in goods and services’. And so it goes on: “The stock market is no longer a means of putting money into companies but a means of getting it out.” All this is regarded as wealth creation because of a shift from the objective definition of value as expressed in terms of land or labour to a subjective definition in which ‘price is a direct measure of value’, as Mariana Mazzucato explains in The Value of Everything.

So, what is Verkaik’s answer to all this? Well, he has a three-pronged approach. The first is a ‘more efficient and progressive tax system’. The second is our old friend, a Universal Basic Income. And the third is a ‘Green New Deal to create more sustainable jobs while also contributing to the arrest of climate change’.

Interestingly, Verkaik tackles the criticism of the ruling political clique that poor people need to be made to work or else they will just get drunk and laze about, while the rich, of course, just need more incentives to work.

Are we all just lazy unless made to work?

“When Canada paid a community in Manitoba a free wage in the 1970s everybody benefited,” writes Verkaik – and other evidence suggests that most people who have received a UBI have continued to contribute to society in one way or another. And: “A Universal Basic Income allows everybody to choose how they want to get rich, whether through the capitalist system or other less directly profitable activity.”

Of course, Verkaik is not alone in prescribing these solutions to the problem of wealth and power inequality. The problem comes when you try to create a groundswell of support among citizens who are already turned off from politics and democratic decision-making processes. This is why Salisbury Democracy Alliance is so keen to establish a Citizens’ Jury in the city as a small step towards engaging more citizens. But these on their own are not enough, and a future blog will look at ways of engaging people and encouraging them to run their own campaigns for change.


When philosophers screw up!

IT’S almost a law of nature that great thinkers will be traduced by lesser thinkers. Think of Marx and Adam Smith and Schopenhauer and, well, almost every philosopher! But what happens when a great thinker is grossly misunderstood by other great thinkers? There was one extraordinary and original philosopher whose thought was so thoroughly misunderstood that it led to a schism in philosophy itself.

Ludwig Wittgenstein

That philosopher is none other than the enigmatic Ludwig Wittgenstein and his equally enigmatic book the Tractatus Logico-Philosophicus. Written in epigrammatic form, this remarkable book has probably led to more misunderstandings than any other in the philosophical canon. It is full of startling phrases like the ‘world is all that is the case’ or ‘what can be shown, cannot be said’ and, famously, right at the end ‘what we cannot speak about we must pass over in silence’.

What Wittgenstein meant by all this was that pretty much everything that matters in life, including ethics, we must remain silent about. But there was a group of philosophers who catastrophically misunderstood him and believed that what he really meant was the exact opposite – that what we can speak about is all that matters. In his Confessions of a Philosopher Bryan Magee writes that this misunderstanding is all the more remarkable because Wittgenstein himself made it clear that ‘ethics cannot be put into words. Ethics is transcendental’.

The group of thinkers that blundered into this mistake – whose philosophical ponderings are described as logical positivism – came to be known as the Vienna Circle and included such luminaries as Rudolf Carnap and A. J. Ayer. They came up with the Verification Principle, which states that only assertions that are in principle verifiable by observation or experience can have meaning. As Magee writes, assertions ‘that there could be no imaginable way of verifying must either be analytic or meaningless’ and ‘all discoverable truths about the world were discovered by the methods of science’.

According to Wolfram Eilenberger in Time of the Magicians, however: “In Wittgenstein’s view, philosophy was not akin to legal writing, and neither was it intellectual enquiry: in fact, it wasn’t a teachable or thematically definitive science. But these were the precise convictions that lay at the heart of the Vienna Circle.” Amusingly, Eilenberger describes the situation as being akin to a tug-of-war, with the Vienna Circle on the one hand asserting that the meaning of an assertion lies in the method of its verification ‘while a famously indefatigable Wittgenstein held his ground at the other end of the rope with Schopenhauer, Tolstoy, and Kierkegaard, waiting for the whole positivist troop to collapse’.

Eilenberger regards this situation as being ‘one of the strangest misunderstandings, not without its comical side, in the history of philosophy’. But there were serious consequences of this misunderstanding in that two tribes formed – analytical philosophy and continental philosophy – which are ‘dedicated to levelling mutual accusations at each other’, thus contributing to the distrust between the continental tradition and the analytical Anglo-Saxons – although it’s probably a bit of a stretch to say that it also fed into Brexit.

As it happens, Wittgenstein is also allied by Eilenberger with three other great idiosyncratic thinkers – Martin Heidegger, Walter Benjamin and Ernst Cassirer – who, he argues, from 1919 to the emergence of National Socialism remade philosophy. According to Eilenberger, Cassirer wanted us to ‘cast off your anxiety as creative cultural beings, liberate your original constraints and limitations’. Heidegger, meanwhile, urged us to cast off ‘culture as a rotten aspect of your essence, and sink on the groundlessly thrown beings that you are, each in your own way, back into the truly liberating origin of your experience: Nothing and anxiety’. And just as Wittgenstein claimed that a ‘picture is a model of reality’ so Benjamin used the ‘thought picture’ as a ‘tool in order to see the world correctly’.

If all this sounds a little esoteric, it is nevertheless an object lesson in how even the mightiest intellects can get things horribly wrong – how tribes and echo chambers can evolve in any field and how the highly educated can be just as biased as the rest of us, even if they may be able to express themselves more eloquently. Although, to be fair to Ayer, he later quipped that ‘the most important’ defect of logical positivism ‘was that nearly all of it was false’.


Out of sight out of (your) mind?

WHEN did mental illness become a stigma, something to hide away – even punish? There was a time when the intellectually challenged member of the village was tolerated. But that’s a far cry from the horror stories we read about in the 19th century and the conditions that inmates had to endure in Bedlam. Even in the 20th century we had the terror of Electric Shock Treatment, so well exposed in One Flew Over the Cuckoo’s Nest, and the barbarism of lobotomies. Thankfully, things are a little more enlightened these days. However, as Dr Peter Kinderman writes in The New Laws of Psychology, we still need a ‘wholesale revision of the way we think about psychological distress’. And he adds: “We should start by acknowledging that such distress is a normal, not abnormal, part of human life – that we humans respond to distressing circumstances by becoming distressed.”

So, it appears that there is still a long way to go. But it wasn’t always like this and we only have to recall Erasmus and his metaphor of the Ship of Fools in Praise of Folly to demonstrate this: “Then perhaps we shouldn’t overlook that folly finds favour in heaven because she alone is granted forgiveness of sins whereas the wise man receives no pardon.” (Of course folly is female and the wise male!). Then, of course, we have Dostoevsky’s ‘positively beautiful man’ who clashes with the emptiness of his society in The Idiot. And let us not forget the ingenious gentleman himself Don Quixote.

For Michel Foucault in his Madness and Civilization the last part of Erasmus’s tilt at the theologians and churchmen of his day is ‘constructed on the model of a long dance of madmen’. In this extraordinary book Foucault asks what it means to be mad and traces its history from the 1500s, when insanity was considered part of everyday life, to a point when such people came to be seen as a threat and were locked away out of sight and out of mind. “Heavens above doesn’t the happiest group of people comprise those popularly called idiots, fools, nitwits, simpletons, all splendid names according to my way of thinking?” writes Erasmus.

According to Foucault it was the classical age that resolved to ‘silence the madness whose voices the Renaissance had just liberated’. He even identifies the moment in the 17th century when confinement became the defining element of mental disorder, combined with a ‘condemnation of idleness’. It was, claims Foucault, the royal edict of 27 April 1656 that ‘led to the creation of the Hôpital Général’, which set itself the task of preventing ‘mendicancy and idleness as the source of all disorder’, thus replacing leprosy as the great Other to be shunned and locked away. In this way work took on its ‘ethical meaning: since sloth had become the absolute form of rebellion, the idle would be forced to work in the endless leisure of a labour without utility or profit’.

Hence the pointless treadmill seen in 19th century prisons, and here too is the start of equating poverty not with lack of resources but with idleness or the ‘weakening of discipline and the relaxation of morals’. We might also point to the late David Graeber and his identification of bullshit jobs in our own day – jobs that exist just to provide work for the sake of working.

So it came to pass that where once madness and unreason ‘floundered about in broad daylight’, in less than a century it had been ‘sequestered and, in the fortress of confinement, bound to Reason, to the rules of morality and their monotonous nights’. Here we have a full-frontal assault by Foucault on the Enlightenment or, as he calls it, the ‘age of reason’, which ‘confined the debauched, spendthrift fathers, prodigal sons, blasphemers…libertines’. It should be pointed out, of course, that the Enlightenment is often mis-portrayed as simply privileging Reason over all else, whereas many Enlightenment thinkers were motivated by an acknowledgement that humans were often really rather irrational and that it might be a good idea to introduce a little more Reason and a little less superstition. It’s interesting that Kant thought the Enlightenment was like a coming of age for humanity. Nevertheless, this mis-characterization does not blunt Foucault’s main argument that the Lords of Misrule have been unjustly cut off from society and that too much emphasis can be placed on Reason to the detriment of our mad creativity. The mentally distressed were not seen as having any use except as a spectacle – as late as 1815, for example, the ‘hospital of Bethlehem exhibited lunatics for a penny every Sunday’. And again: “Madness had become a thing to look at: no longer a monster inside oneself but an animal with strange mechanisms, a bestiality.”

Madness and Civilization is a paean to Unreason and the role it plays in human affairs. But Foucault is not alone. Nietzsche privileged the wild abandon of Dionysus over the cool rationality of Apollo and the former, it seems, has been dominant ever since. Nevertheless, it is possible to go to the other extreme – to over-objectify and place too much emphasis on Reason. This was a problem explored by Iain McGilchrist in his classic The Master and His Emissary (which featured in a previous blog), in which he argues that the alienation and abstraction of the left hemisphere of the brain is seen in some circles as being superior to the worldly engagement of the right hemisphere. The answer seems to be not that we should privilege one side over the other but that we should try to unite the two. As McGilchrist writes: “Ultimately, what I have tried to point to is that the apparently separate ‘functions’ in each hemisphere fit together intelligently to form in each case a single coherent whole.”


The truth about truth!

IF, as we saw a couple of blogs ago, reason has taken something of a battering, then the same is true of the very notion of ‘truth’. Therein lies part of the problem, of course. For it is self-contradictory to proclaim that there is no such thing as ‘truth’ because, of course, the proposition ‘there is no such thing as truth’ is in itself either true or not true. And if the denial of truth is itself taken to be true, then we are in danger of disappearing in a puff of our own logic, as Douglas Adams might have said.

We have seen on this blog how Francis Bacon compared truth to climbing a hilltop and Tim Harford in How to Make the World Add Up provided us with 10 rules for navigating the dense thickets of statistics that shape and, it has to be said, misshape our world. But Harford assumes that there is such a thing as truth. Enter Simon Blackburn and his book called simply Truth.

Our old friend – the Mountain of Truth

Of course, Blackburn makes the point that the ‘god of truth’ is best served by the attendant deities such as reason, justification and objectivity. But what exactly is truth? Well, in his book he takes us through the classical approaches to understanding truth and then applies them to difficult problems like ethics and aesthetics.

In the first instance he adumbrates the correspondence theory, which states that in the same way that a map, in order to be useful, should correspond with what is on the ground, so a justified true belief should, at the very least, correspond with the facts. The problem here is that this may simply be an elaborate way of saying ‘true’, so that saying ‘true’ and ‘corresponds with the facts’ is a distinction without a difference.

One may draw an analogy with, say, ‘baby swan’ and ‘cygnet’. Nevertheless, recent events have shown us that simply aligning ‘truth’ and ‘corresponding with the facts’ is important when the likes of Trump and Putin do their level best to decouple them. The theory, of course, also assumes that the sense-perception process passively receives facts from the world rather than interacting with the world and, in some sense, constructing a model of the world that isn’t straightforwardly out there. At its most extreme, Kant and Schopenhauer hold that the ‘thing-in-itself’ or the ‘Will’, the world unmediated by our sense-perception, is something other than the phenomenal world in which we live.

But is it?

However, this may not be fatal for the correspondence theory because it may be that to say that the world is divided between the noumenal and the phenomenal is simply a fact about our perception of the world. Of course, the realist and the idealist positions cannot both be true but that does not in itself collapse the correspondence theory – we simply don’t know which one is true.

The second theory is referred to as the coherence theory, in which truth is linked to rational enquiry that is a ‘coherent, interlocking structure, a reflective equilibrium in which all our beliefs about a subject matter fit together’. This is an attractive theory which requires that we are coherent and consistent in our approach to the world. This idea does, to a certain extent, dovetail with the correspondence theory because a coherent theory must at least consist of statements that correspond with the facts. Nevertheless, it is not a complete theory because we may still be concerned that a ‘roundly coherent body of belief’ might just be a ‘giant fiction’.

So, we need a further move and this is provided by the pragmatic theory, which focuses exclusively on successful outcomes. The link between truth and success is associated with American pragmatists like C. S. Peirce, William James and John Dewey (who featured in a previous blog called The return of the public). It is founded on the idea that the truth of a theory is dependent on its success.

The pragmatic theory of truth

A classic example is quantum physics, which, while not fully understood, is nevertheless one of the most successful scientific theories ever. Who cares if we don’t understand it if it is so useful?

And finally we have what is called ‘deflationism’, which states that the notion of truth may work in the background but in the end it makes ‘no difference whether we simply assert something or assert it prefacing the assertion with it is true that’. So, by this Blackburn means that we don’t actually need the category ‘truth’, just assertion ‘that X’ and so on. “Truth is only present, as deflationists say, as a device for pointing in the general direction in which the real explanation is to be found,” writes Blackburn. The problem with this position is rather similar to the one relating to the correspondence theory in that by demoting truth to a signpost to the real explanation one is creating a distinction without a difference, in that the proposition ‘real explanation that’ is the same as saying ‘it is true that’.

It is small wonder that postmodernists and populists have had such fun with the notion of truth if none of us can state exactly what truth is. It is a feature of all the theories that they don’t actually say anything about truth itself; rather, they furnish us with methods of finding truth – even if they do so imperfectly. The problem with finding the truth about truth is that it is constantly in danger of plummeting down a vicious spiral of circularity. But maybe this is a feature of truth. Just as we can see but not see ourselves seeing, or hear but not hear ourselves hearing, maybe we can find the truth without knowing what truth itself is. Just as we can find out about the world by deploying the empirical method without being able to prove the method by deploying it, at least not without fatal circularity.

All that we can say is that, in the first instance, we can say with a high degree of probability that if a proposition fails to correspond with reality, is incoherent, is unsuccessful and fails to provide a ‘real explanation’ then it is untrue. Equally, the more of these theories that a proposition does meet then we can have increasing confidence that it is true – or at least as true as we are likely to get.


The logic of freedom

The absurdity of life

FOR some the Universe is simply absurd. This realisation happens when all our searches for meaning disappear into the silent Universe, which is indifferent to our petty struggles. It’s when we suddenly understand that we are not really attempting to save the planet against the ravages of climate change but just the flora and fauna (including humans) as they happen to be configured now – the planet will continue for billions of years before it is engulfed by the expanding sun.

But for philosopher Mariam Thalos this sense of absurdity happens when we step outside of the warmth of our collective lives into the cold of the ‘uncentred perspective’. And in an intriguing move in her book A Social Theory of Freedom she argues that this ‘stepping-out experience’ should actually ‘de-absurdize the life’ one lives because ‘afterwards it should feel warmer to reenter your life’. She adds: “If you feel a de-naturing of the world, upon stepping out of your life, you should feel a re-naturing of it upon your safe return.” Unfortunately for some of us this sense of de-naturing, absurdity and alienation persists if we ‘cannot execute the reentry’. Chillingly she writes: “They are those people who, prior to stepping out, lived without a sense of solidarity with others, for you will have an incentive to reenter and resume a much more enlivened life for the sake of those who mattered before you executed your initial exit.”

It may come as something of a surprise to learn that all this talk about solidarity comes at the end of a book about human freedom. But it is essential to her case because for there to be freedom at all there has to be a Self, and it is this initial stepping-out that creates that Self in separation from Others. For her, and unlike Mary Midgley (who featured in Escaping the cage of the Self on this blog), the Self emerges out of the collective and we humans are ‘shifting constantly back and forth’ between the two – always supposing one isn’t stuck out in the cold, of course.

A solitary trail out in the cold!

Thalos describes herself as a compatibilist, which normally means that you accept the terms of determinism but believe that human freewill is compatible with it – indeed many argue that determinism is essential for freewill. They deploy what is called interventionism, which means that although we are subject to the laws of nature, we are able to intervene and effect our own free actions. Thalos, however, argues for a form of compatibilism but one whose ‘conception of freedom will skirt the problem of determinism’. Instead, Thalos argues that freewill is centred on the Self, embedded in solidarity with Others. But crucially the Self cannot be found in experience – as Gilbert Ryle memorably discovered in The Concept of Mind – because it isn’t there. For her freedom is a logic and, she insists, logic is not subject to the laws of nature. From an existential perspective, she argues that the Self is a concept – more precisely a self-conception that emerges out of the logical ‘fit’ between an agent’s conception of themselves and the facts of their circumstances. It is in this very struggle between her self-conception and the constraints she encounters in society that her freedom emerges. Thalos insists that this concept of freedom is a logical, not empirical, form, even though it seems at times as though she is carrying out a delicate high wire act that is in danger of collapsing into the empirical and, presumably therefore, deterministic world.

Ryle famously takes a derogatory line against Descartes’s ‘I’, which he brands the ‘ghost in the machine’.

The ghost in the machine

But Thalos is much more sympathetic to Descartes. “Bodies, as Descartes envisioned, are under the sovereignty of the laws of motion (that we might refer to today as causal laws or dynamical laws), but minds are not. Mind is in no way a space-filler, subject to the laws of motion. Mind is subject to the laws of thought, to laws of reason, hence the separation between mind and body,” she writes.

Where Descartes went wrong was to jump to the conclusion that the ‘I’ was out there in experience. What in fact he had stumbled upon, according to Thalos, was the ‘logic of experience’. And even if, like David Hume and Ryle, we can find no evidence of the Self in experience, it does not follow that we should dispense with the Self. The logic of experience is ‘a theory of action that speaks of ongoing activity mediated by a Self (constituted in part by a self-conception) that is in turn subject to modification by a variety of interactions between Self and Others’.

One of the problems with this position is that it is difficult to see how the purely logical form of the self-conception can interact with Others without collapsing into the empirical world of causation. It is also in danger of a horrible circularity in that saying that the Self is a self-conception is close to the trivial statement that the Self is the Self. And it sails perilously close to the infinite regress because if we say that the Self creates the Self, who creates the original Self?

Although she doesn’t refer to it herself, the solution to the last two problems may be provided by the multi-disciplinary work of Kristina Musholt in Thinking about Oneself, in which the author concludes that the Self proper is preceded by a non-reflective self which is able to develop a sense of the reflective Self through its entanglement with the Other. This idea seems to mitigate our concerns about infinite regress and circularity while establishing the foundations of a self which, on Thalos’s account, only achieves freedom when it steps out of the collective. But there remain two major problems: the first is the ever-present danger of collapsing into determinism and the other is that all this talk about the logic of the Self threatens to make freedom the preserve of the educated elite.

It’s fair to say that Thalos is sceptical about the truth of determinism but, nevertheless, is determined to escape its orbit. She attempts to achieve escape velocity by asserting that we do not ‘need to accept exclusively physicalistic, behaviouristic or biological terminology in the description of human behaviour’. Instead, she argues for the social sciences because they are not universalistic like physics and biology but are ‘much more sensitive to the presence of individual variation’.

Her attempt to escape the elitist threat involves the use of what she calls Imitative Reasoning in which role models perform the function of creating the ‘fit’ between an individual’s conception of her self and her social circumstances from which her freedom emerges.

Thalos’s book has more surprises and plot twists than Line of Duty and as a result it is difficult to navigate one’s way through the thicket of ideas. It is not clear whether she has succeeded in freeing freedom from the clutches of determinism or whether her conclusion is as disappointing as that of the long-running TV show.


The return of the public

The traditional image of a dystopian future is belied by the reaction to the pandemic

ONE of the most fascinating phenomena in modern life is the tension between the widespread apathy about what might be called traditional party politics on the one hand and an increasing engagement with community activity on the other. If the pandemic has taught us anything it is that international crises do not necessarily lead to a dystopian society of a war of all against all – to mix popular science fiction and Thomas Hobbes. Indeed, the increasing interest in communal activity and a desire to help out seems to be matched by an equal and opposite decline in party politics. People seem to be really engaged as long as you don’t call the activity politics. And yet, if things are to change for the better we need people to be engaged in both politics and community action. So, is there a way of achieving this? According to the pragmatic American philosopher John Dewey there is – and it involves deliberation.

Deliberation is important for democracy, according to Dewey

For Dewey legitimacy was as important in 1927, when he wrote The Public and its Problems, as it seems to be today. Indeed, he links the majoritarianism of representative government and deliberation as a way of understanding and justifying democracy, not simply as two ideas that may, or may not, combine. He argues that the fact that there isn’t a conflict after every election, so that society isn’t always split into friend and foe, is proof that ‘the governors and the governed’ in representative government are not ‘two classes’ but ‘two aspects of the same truth’. Of course, it may seem to us today that the present state of representative government suggests that there are indeed ‘two classes’ of the governors and the governed and that society really is split into friend and foe. Still, one may hope that this is a temporary state, not a permanent one, and one that can be tempered by deliberation.

Can deliberation bring us together?

In his introduction to the 2016 edition published by Swallow Press Melvin Rogers writes: “Forming the will of the democratic community, for Dewey, is a process of thoughtful interaction in which the preferences of citizens are both informed and transformed by public deliberation as citizens struggle to decide which policies will best satisfy and address the commitments and needs of the community.” And he adds: “It is no wonder that many see Dewey as an important spokesperson for deliberative democracy.” Dewey himself argues that the very forces that have brought about representative government have also halted the ‘social and humane ideals that demand the utilization of government as the genuine instrumentality of an inclusive and fraternally associated public’ which means that the ‘democratic public is still largely inchoate and marginalized’.

In a moment of pessimism Dewey suggests that his arguments seem ‘close to denial of the possibility of realizing the idea of a democratic public’. Many years before the rise of neoliberalism Dewey writes that ‘one of the many obstacles in the path is the seemingly ingrained notion that the first and last problem which must be solved is the relation of the individual and the social’. According to Dewey, however, ‘an individual whatever else it is or is not, is not just the spatially isolated thing our imagination inclines to take it to be’.

And this ‘demands, as we have also seen, perceptions of a joint activity and of the distinctive share of each element in producing it’. This does not mean that groups, or indeed political parties, will always exist in harmony and without conflict, or that an individual will not have conflicting selves. But what it does mean is that the division between the individual and the social is dissolved. If society can be oppressive, it is membership of specific associations that is oppressive, not our material and social being per se.

Dewey in a statement which could be the motto of Salisbury Democracy Alliance’s campaign for Citizens’ Assemblies, writes that the essential need ‘is the improvement of the methods and conditions of debate, discussion, and persuasion’. Further: “Ideas which are not communicated, shared, and reborn in expression are but soliloquy, and soliloquy is but broken and imperfect thought.”

It is difficult not to read into Dewey a plea for more deliberative democracy. His view is that the division between society and the individual is a false dichotomy that can lead to the kind of fallacies that demand that if you are not for us you are against us.

The real question, as this blog has noted before, is how the individual emerges as an embedded but critically engaged citizen – and for that we need the right conditions for such an agent to emerge, an agent that recognises its social being but also helps to shape that social being.


Reason versus reason

IT is often thought that the main threat to the kind of rationality so admired by enthusiasts for the Enlightenment is, well, irrationality – faith, alternative medicines and the New Age movement. Indeed this view seems to be cemented by the wild irrationality of Trump and his followers – although one does wonder sometimes whether Trump was actually being supremely clever by discombobulating his opponents. But that disturbing thought aside, what if the greatest threat to reason is not irrationality but misdirected reason?

Is irrationality the greatest threat to reason?

That is the view of Dan Hind in his fascinating book The Threat to Reason: How the Enlightenment was hijacked and how we can reclaim it. When you first open this book you expect an attack on the irrational, but Hind startles the reader when he announces that this approach misses the point. He calls the attack on irrationality Folk Enlightenment and he is dismissive of its adherents, including Richard Dawkins and the late Christopher Hitchens, who make a clear demarcation between the pure defenders of reason and its enemies mired in unreason. “It saturates our intellectual culture and informs many of our assumptions about public life. As a consequence political disputes about the distribution of resources are recast as metaphysical clashes between abstract nouns,” he writes. Further: “Some of those who defend the Enlightenment from its irrational enemies offer up ‘the great divide’ between faith and reason, rather than the old conflict between Left and Right, as the central organising opposition of our time.”

The great divide?

For Hind, however, the real division is between what he calls Occult and Open Enlightenment. He uses the word ‘Occult’ in honour of Francis Bacon, the father of experimental science who, nevertheless, ‘drew on the techniques and love of magic’ in much the same way as Isaac Newton, who wrote more about religion than science, drew heavily on the traditions of alchemy – more in tune with the great 16th century magus, alchemist and astrologer John Dee than modern day physics. (See this blog for two articles about Dee and his relationship with the Pembrokes at Wilton House.)

It should be said that Hind does not downplay the dangers that can arise from the misuse of alternative medicine or the relativism of postmodernism, although he points out that the latter’s ‘concern that Enlightenment and modernity can provide cover for crimes has ample justification’. However, he makes a persuasive case against the kind of Occult Enlightenment that uses reason to undermine reason itself. And he reserves his main artillery for Big Pharma, which uses science to ‘undermine the open, humane science they claim to champion’ by ‘withholding information’, presenting ‘information to the public in misleading ways’ and then ‘punishing those who inform the public’. He adds: “Given that pharmaceutical medicine is fifty times more lucrative, and considerably more lethal than the herbal and homeopathic alternatives, the institutions that control the business might be suspected of posing a greater threat to reason than their Reiki-practising competitors.” Other manifestations of Occult Enlightenment past and present include the ‘desire for total knowledge’ in the service of the British Empire and the invasion of Iraq.

According to Hind, the solution to the threat of misdirected reason is what he calls Open Enlightenment, or an enquiry into the world unencumbered by the self-interested ‘reason’ of the state and corporations. He argues that the Open Enlightenment will be met with ‘ridicule or worse’ but this will be worth it because it will allow us ‘to live at least part of the time as truth-loving individuals’ as we ‘become authors of our own Enlightenment’.

The main thrust of Hind’s argument is a powerful one and a corrective to those of us who have, perhaps, been guilty of taking a kind of perverse pleasure in attacking the easy targets of unreason, while underplaying those forces of reason that actually undermine the reason of the Open Enlightenment. In some ways it follows in the footsteps of Merchants of Doubt by Naomi Oreskes and Erik M. Conway, who exposed ‘how a loose-knit group of high-level scientists and scientific advisers, with deep connections in politics and industry, ran effective campaigns to mislead the public and deny well-established scientific knowledge over four decades’.

In a side argument Hind makes the claim that ‘there is no way that reason can cause us to believe in God, but neither can it cause us to believe that it is wrong to kill’. This appears to be based on David Hume’s assertion that reason is the slave of the passions, which raises another mighty split between those who follow Hume and others, like the American philosopher Thomas Nagel, who argues in The Possibility of Altruism that ‘just as there are rational requirements on thought, there are rational requirements on action’.

Are faith and morality equally distant from reason?

Indeed, there are some evolutionary biologists, including Dawkins, who argue that altruism is part of our genetic make-up. And it is certainly possible to define altruism in terms of necessary and sufficient conditions. There are also various normative ethical theories that attempt to conceptualize and objectify morality. None of this is to say that the passions do not often, perhaps most of the time, trump reason, but nor does it follow that reason can never be deployed when considering ethics.

However, this is all something of a distraction from Hind’s argument, which remains untouched by the truth or otherwise of his moral assertion. It therefore remains a serious challenge for all of us who in one way or another support the ideals of the Enlightenment, not to fall into the trap of Folk Enlightenment but, rather, to have the courage to create a truly Open Enlightenment.


What’s the point of privacy?

The invasion of our privacy

MUCH has been written – not least on this blog – about the perilous state of our privacy. The problem is that over the past 30 years or so humanity has been slowly infantilized as advertisers, powerful lobbyists, think tanks, the state and social media have infiltrated our brains. According to Shoshana Zuboff in The Age of Surveillance Capitalism, it was the psychologist B. F. Skinner who realised the political value of it all and ‘viewed the creative and often messy conflicts of politics, especially democratic politics, as a source of friction that threatened the rational efficiency of the community as a single, high functioning super-organism’.

The philosopher Byung-Chul Han has dubbed the whole phenomenon psycho-politics and, somewhat desperately, called on us to become idiots in the mould of Dostoevsky’s ‘positively beautiful man’ in The Idiot who clashes with the emptiness of 19th century Russian society.

But what if a) it’s too late for us to do anything about our loss of privacy and b) it may not matter too much because we give it too much significance anyway? This is the position of Firmin DeBrabander in Life After Privacy. DeBrabander argues that privacy is both a relatively recent phenomenon and, in a different form, much older than people think. For many, our modern conception of privacy is essential for representative government and requires a ‘legal and physical architecture’ to support it. Drawing on Stoic and early Christian writings, however, DeBrabander claims that the ‘virtues of privacy can be achieved by other means’. He continues: “Stoic philosophy…praises the virtue of emotional resilience and equilibrium.

Battling for emotional resilience

“The Stoics called it ‘constancy’ where one is not over excited or deflated by external events, the opinion of others, or personal interactions.” This is something that some of us at least find difficult to muster, especially when we see injustice. But importantly DeBrabander does not make the mistake of casting Stoicism as a purely individualistic philosophy recommending that we retreat into ourselves. Rather, the way we interact with our environment and with other people ‘is instrumental to how you transform your mind and behaviour’. It has to be said that for some these principles will be easy to follow but for others…well, less so. But that does not mean that there is no value in at least trying to follow them.

This view, writes DeBrabander, is very different from the Liberal turn of mind which ‘conceives citizens as atomistic individuals, responsible for their own values and destiny – who will reason and vote accordingly’. Another narrative, he claims, and one much closer to Stoicism, has it that we ‘develop the competency for autonomy through our social interactions with other persons’.

DeBrabander draws a distinction between the two as being the difference between the ‘negative’ and ‘positive’ sense of privacy. In the first it is assumed that we must ‘protect the space of individual freedom’ where we can do whatever we like as long as we don’t harm others, as John Stuart Mill put it in his Harm Principle. In contrast DeBrabander writes: “Rather, we must reconnect to the values, virtues, norms, and habits of democratic life, in order to produce citizens who can better withstand the efforts of manipulation and control.” Indeed, as far as he is concerned, this is the only realistic way of containing the machinations of the State and Big Tech because as atomistic individuals we are vulnerable to manipulation, as Hannah Arendt noted in her masterpiece The Origins of Totalitarianism.

For centuries political liberals have conceived us as being first and foremost individuals and, in one iteration, argue that under social contract theory, society is ‘formed when individuals, after independent reflection, decide to contract together’. But DeBrabander will have none of this: “I suspect, rather, that we are social through and through.” For him the public sphere is where our true humanity thrives – in what the ancient Athenians called the Agora.

The glories of the Agora – apart from the slave bit obviously!

For the Athenians the private space was for the idiotes and was the realm of privation, of the unskilled and ignorant. For sure, we need solitude to restore our reflective selves, but this is far from the isolation and loneliness engendered by classical liberalism and is a by-product of democracy not fundamental to it. As DeBrabander writes: “A democracy worthy of the name requires that people are invested in policy-making decisions, and in the elevation and pursuit of guiding ideals.” Further: “To the extent that we have privacy or anything that approaches it, like the solitude conducive to thought, it relies on public action, interaction, and sustenance.”

The conclusion of all this is not that, as Han would have it, we must become Idiots – and certainly not in the sense that it was meant by the ancient Athenians – but, rather, that we reclaim the public sphere or Agora, whence we can seek positive solitude when necessary, not have loneliness foisted on us.

It’s hard not to look to the deliberative democracy practised every month by Salisbury Democracy Alliance at its Salisbury and Bemerton Heath Democracy Cafés, or the Citizens’ Assembly it ultimately wants to create, and equate that with DeBrabander’s notion of people being ‘involved in public policy-making decisions’. At the same time one cannot help but wonder whether it’s too late, in the same way that it’s too late for our current concerns about privacy. But to quote the Ingenious Gentleman himself: “I know not whether I ought to avow myself the good one, but I dare venture to assert that I am not the bad one.” And maybe that’s just enough for us to continue tilting at windmills!


Many lives make hard work!

WHO hasn’t wondered how our lives would have gone if THAT hadn’t happened or, perhaps, something else HAD? Throughout our lives we make decisions, or decisions are made for us, and our narrative unfolds. But in the arts and in science the idea that there could have been other lives lived has gained traction. As Vimes says in Terry Pratchett’s Night Watch: “I know all about that. Like, you make a decision in this universe and you made a different decision in another one.” The idea is front and centre in Matt Haig’s popular novel The Midnight Library (One library. Infinite lives) in which the protagonist Nora Seed is given the chance to ‘live as if she had done things differently’. Yanis Varoufakis uses the idea in his Another Now. Dispatches from an Alternative Present in an extraordinary mixture of fiction and non-fiction to envisage the creation of a ‘non-capitalist world in which work, money, land, digital networks and politics have been truly democratized’.

But it is, perhaps, best expressed in Robert Frost’s enigmatic poem The Road Not Taken. As it happens, it is this work that Andrew H. Miller takes as the starting point of his book On Not Being Someone Else – Tales of our Untold Lives. For Miller it is actually his sense of singularity that makes him think of ‘unled lives’. And he believes that it is the singularity created by neoliberalism that has generated this modern sensibility. “The main engine driving this modern experience has no doubt been market capitalism, with its isolation of individuals and its accelerating generation of choices and chances, moulding behaviour in ever increasing ways,” he writes.

Apart from the deployment of Robert Frost, Miller also calls on an array of authors and artists ranging from Ian McEwan to Virginia Woolf, who often felt that her singularity was like ‘solitary confinement’ and that she was both ‘prison and prisoner, trapped in this body and these habits’. Miller writes: “At such moments, the thought of being someone else seems an escape. But who would be escaping? And where would they go?” Where indeed? It is in fact quite hard to see how dreaming about an alternative but unattainable life makes any difference. As Dr Alexandre Leskanich writes in issue 141 of Philosophy Now, on the possibility of being someone she’s not: “I know this, but I don’t know what it means.” Imagining who one might have been had things happened differently may be an interesting parlour game, but is it any more significant than that, other than, perhaps, as useful material for a book? Nietzsche uses his thought experiment of eternal recurrence to determine one’s commitment to life, which only an Ubermensch could achieve. But this, of course, is the affirmation of one life lived over and over again – not many lives.

To be sure, imagining the life not lived may help to define the life one does live, but might it not also lead to the dissolution of the Self?

The dissolution of the Self

Ruminating on the lives not lived can lead to a loosening of one’s singularity as one realises just how contingent the life one does live actually is. It might lead to the breakdown of the Self as an indivisible individual into a divisible dividual constructed out of a loose bundle of emotions and character traits which, say, our executive function is constantly trying to corral into a more or less coherently functioning unit. If so, this is not necessarily a bad thing and would, of course, chime with the Buddhist notion of the non-self. But it is hard work to live such a life.

It is curious that Miller never ventures away from literature or art into the world of science, where there is at least some credibility for alternative lives (if precious little evidence) – especially in the weird and wonderful world of quantum physics. As Pratchett points out, quantum theory posits the idea of multiple universes – indeed, some argue that the truth of quantum physics entails multiverses. The idea of many worlds existed in ancient Greek philosophy but also gained traction among 20th century physicists. Among the most vociferous is Max Tegmark, who proposes a taxonomy of four levels of multiverses in his book Our Mathematical Universe, in which each level is arranged so that subsequent levels encompass the previous. It should be said, however, that this is a highly contentious theory, with some critics arguing that such a thesis cannot be tested, bears great resemblance to theological discussions and is just as ad hoc as the creation of an unseen creator.

Undoubtedly, the multiverse hypothesis is a highly contested field but it is, nevertheless, strange that Miller never engages with it in his entertaining book. Is it anything more than just entertaining? Probably not. It is fine to speculate whether one’s odd dreams, for example, are actually glimpses into alternative universes – but it’s not clear whether such speculation is anything more than an indulgence and one that can be hard work. As Miller himself writes: “All you can do is try to see the bright present truly, and in seeing, join it.”


Can there be meaning in a silent universe?

LIVING in a silent universe (or universes) can be dispiriting. A previous article on this blog claimed that it was the reduction in a sense of a higher authority that had led to an existential crisis. If there is no God what meaning is there? In that article Frank Martela in his book A Wonderful LIFE argued that we should shift from trying to find the meaning of life to meaning in life.

The demise of God also plays a major role in A Significant Life – Human Meaning in a Silent Universe by Todd May. But May goes deeper still into the problems we face in a secular universe. He begins at ground zero, exemplified by Albert Camus, for whom the universe is indeed silent. For Camus thoughts about meaning are ‘symptoms of the absurd’. May writes: “The absurd itself is something very precise. It is the confrontation of our need for meaning with the unwillingness of the universe to yield it to us.” Camus draws on the Myth of Sisyphus in his book of the same name to bring out this sense of the absurd while urging us to gain freedom by courageously acknowledging this unavoidable absurdity.

Sisyphus, who was condemned to repeat the same meaningless task for eternity.

May, however, is not finished with meaning and turns his attention to Aristotle, for whom the flourishing of human life is an ongoing activity involving the commitment to be ‘intellectually engaged with the world’. But while May admires Aristotle he asks whether a life that is lived well and does good is also a meaningful one. Unlike Camus, for Aristotle the universe is not silent – rather it is a structured, ordered telos which humans can discover. And it is this telos, embedded in the universe, that provides Aristotle with his meaning. But as May points out – we are not Aristotle. The universe is not ‘ordered in such a way that everything has its telos’ and the cosmos is ‘not for us a rational place’.

A rational universe? Maybe not.

So, attractive though Aristotle’s conception of the flourishing human life may be, without that ordered universe it lacks the meaning May is seeking.

And even if we accept the existence of God, that cannot help us, as Socrates makes clear in the Platonic dialogue Euthyphro when he asks: “Is what is holy, holy because the gods approve it, or do they approve it because it is holy?” (This is not the first time that this question has featured in this blog). Now, we have to assume that because God is considered to be good he has to conform to what is actually good. “God cannot ground the good because God answers to it,” writes May. And, further, there ‘must be something about the universe independent of God that offers human lives a sense of meaningfulness that God himself must answer to’. As we have seen, however, the idea that some sense of the good forms part of a rational universe does not hold water either.

Rejecting God and Aristotle, May sees if he can also reject Camus’s bleak prospectus – and he begins this task by shifting to Martela’s position of attempting to find meaning in life. His first venture is what he calls the ‘narrative approach’ in which ‘humans must take the resources they are given, develop them into a flourishing life and then sustain (or in the case of great misfortune, restore) that flourishing over the course of their personal histories’. This, of course, sounds a lot like Aristotle but without the rational universe. May also acknowledges that there is another problem with this approach because not all narratives provide meaning in the way that he hopes they will. He writes about the depressed life – and we might also point to narratives that are attenuated by poverty and those for whom the past is something people would rather forget. Another problem is that for some people, like the philosopher Galen Strawson (who has also featured in a previous article on this blog), their lives are not driven by narratives and their self-experience is more episodic than diachronic.

So, in the light of this, May’s next move is to write about what he calls ‘narrative values’ like steadfastness, intellectual curiosity, intensity, integrity and so on.

Is intellectual curiosity the answer?

It still looks as though we are drifting back towards an attenuated Aristotle here but May ploughs on: “In approaching life by way of narrative values then, we find that the meaningfulness does not lie in the narrative itself. Instead, we are asking whether that narrative is characterized by or expresses a theme that would give it value.” With reference to Strawson, he adds: “It might be that although Episodic, his life is nevertheless steadfast in its attention to a small range of important philosophical concerns.” It’s important to point out that May is not referring to moral values here because, obviously, the steadfastness of Eichmann in pursuing the Final Solution is rendered morally worthless. However, he does claim that both moral and narrative values have the required degree of objectivity to mean something beyond the purely subjective, even if, at the same time, they are made up. As he argues, because values are subject to reasoning they are not arbitrary. “If this is right, then we can say both that our values are made up and that they are, in an important way, objective,” he claims. To be sure, values operate within a tradition and network of practices.

For May, our values are not ‘assured by the universe’ or in God, but nor are they the product of blind whim. For some this might not be enough, but for the rest of us, he writes, ‘although this may not be all the objectivity we would like, perhaps it is the objectivity we need’. To use an analogy from the world of art, even Malevich’s famous Black Square contains within it traces of meaning, particularly when it is seen in the context of his life’s work.


Life in the void!

WE may often find ourselves in a sort of other world: that moment when we awake and momentarily are not sure where we are, or even who we are. Or perhaps one’s memory of a place does not match reality on a return visit. This may be, of course, because things have actually changed. But often it’s because our memories play tricks on us. Is there a space – a void – somewhere between perceptions? This is a notion that occupies the maverick philosopher Slavoj Zizek in his Incontinence of the Void, in which he explores the spaces between philosophy, psychoanalysis and political economy. As he delves into the realm of pure thought he quotes Hegel thus: “The system of logic is the realm of shadows, the realm of simple essentialities freed from all sensuous concreteness. The study of this science, to dwell and labor in this shadowy realm, is the absolute culture and discipline of consciousness.”

And we are in a shadowy world – but without the logic – in Beyond Philosophy by philosophers Nancy Tuana and Charles Scott as they peer at life beyond certainties, ‘beyond formations, values, and meaning – and to the liberatory power that attunements with beyond can occasion’. They want to ‘rattle the cages of our certainties’ and ask whether we can carry out our commitments ‘without the illusion of fixed certainty’. In doing so they explore the worlds of Nietzsche and his sense of ‘beyond good and evil’, Michel Foucault’s ‘unreason’ and Gloria Anzaldua’s Nepantla – a liminal space ‘where you are not this or that but where you are changing’.

A liminal space

For Nietzsche, of course, it meant beyond ‘conformity, beyond those satisfied with their goodness, and beyond the evil created by their God. But not beyond the night sounds of the forest, the deep howls from the darkness’. For him it is the wildness of Dionysus that matters, not the cold, bloodless rationality of Apollo; the ‘joyful affirmation of life with its suffering and tragedies’ rather than the solace of life-denial. In part Nietzsche is valorising the noble warrior – the Ubermensch, although the authors wonder whether we can transfer our admiration to natural leaders or people who have an ‘exceptional energy to take charge’ over all those who are ‘low-minded, common and plebeian’, without losing Nietzsche’s dynamics.

The wildness of Dionysus

Tuana and Scott also home in on Foucault’s concept of unreason, which can refer to any ‘event, state of mind, or manner or behaviour that is beyond reasonable sense or rational authority’. But it is not opposed to reason – it’s just different from reason. The authors praise the anarchic freedom ‘which will find shelter in unreason’, which is freed from ‘normal decency’.

The deliberate destabilizing of Nietzsche and Foucault finds its apogee in the thought of Anzaldua, who eschews reform in favour of transformations ‘which occur in this in-between space, an unstable, unpredictable, precarious, always-in-transition space lacking clear boundaries’. It’s a state she calls ‘dwelling in liminalities’. More than Nietzsche or Foucault, Anzaldua links this state with ‘social, political action and lived experiences to generate subversive knowledge’.

One might be forgiven for feeling a tad queasy after all this, but Tuana and Scott attempt to bring it all together with what they call ‘liberatory philosophy’: “We are speaking of profound experiential and social transformation out of which people come to think, feel, desire, and act in ways that were not previously possible.” They are looking not simply to make reforms ‘if that means taking the same forms and expanding them or rearranging them’ but to transform in the liminal spaces. One of the problems that the authors readily acknowledge is the fear of getting lost in these spaces, but their answer, perhaps not entirely satisfactory, is to embrace this fear with a ‘willingness to be undone’. Further: “We wish to influence shifts of habits, affective dispositions, and attunements so as to catalyze transformations in ways of living.”

Liberatory philosophy?

One obvious response to all this is: how do you live your life in a world of such instability and uncertainty; how do you break out of the world of unreason, the ‘dwelling in liminalities’, in order to establish a firm foothold for social action sufficiently coherent to effect the transformation one seeks? For those of us already in a permanent liminal state, in which there is no indivisible individual, only what might be called dividuals – a bundle of character traits and emotions – and for whom the daily task is to corral all these disparate forces into some sort of coherent action, Beyond Philosophy offers only paralysis. It seems at times as though the authors are guilty of importing their previously held positions – particularly about climate change – without demonstrating why you need to plunge into the depths of liminality to have them. Without some sort of intellectual grounding there doesn’t appear to be any ‘reason’ or bulwark against Trumpism, alternative facts and amoral chaos.

And their answer to concerns like these is: “Perhaps the question is rather: How do we desire to live in the world? Apathetically? Without passion even though passion intensifies people’s living experience?” One is tempted to respond by saying that there is already much passion in the world, perhaps too much, and, to misquote D’Alembert, if we get rid of logic and reason and ethics, we would still have plenty of passion and ‘we would have ignorance in addition’.


Back to Eternity!


CULTURE wars and alternative facts have become the battleground of modern politics – or at least they have for some on the right of the political spectrum. It is often said that the problem with the Left is that it still thinks that political thinking is still about, well, politics, while the Right has shifted to culture. It’s not that they are simply on different sides – they are on different playing fields. We all know about the phenomenon of ‘alternative facts’ and the so-called ‘post-truth era’, but is this more than just a political strategy by the Right in order to gain power? According to Benjamin R. Teitelbaum there is much more.

Teitelbaum is a professor of ethnography at the University of Colorado, but he is also an award-winning expert on the radical right. His new book War for Eternity focuses on an obscure philosophical movement called Traditionalism. It would probably have remained in obscurity had it not been adopted by people in real power, or at least those who have at one time or another attracted the attention of those in power – people like Steve Bannon and advisors to Putin and Bolsonaro. Teitelbaum has been studying Traditionalism for years and argues that it underpins the intellectual justification of much of the populist right, including Nigel Farage. According to Teitelbaum, Traditionalists set themselves against modernity – that is, they are opposed to modern secularism, socialism – even capitalism – and universal values like human rights, all of which they see as illegitimate forces working to replace their preferred social, cultural and political hierarchies.

Teitelbaum writes: “Traditionalists follow Hinduism in believing that human history has always cycled through four distinct ages from a gold age to a silver to bronze and to the dark before moving back to gold and starting the cycle again.” Each age belongs to a particular type of person descending from a priestly or spiritual class (gold), down through warrior (silver), merchant (bronze) and, finally the slavery of the dark age.

The golden spiritual age

There are variations in the structure of this hierarchy but all Traditionalists believe that we are currently in the dark ages and, while Hinduism says that this cycle can take millions of years to complete, they believe that it can take place over a much shorter, human timescale. There is a certain fatalism in all of this, of course, but Traditionalists believe that one can accelerate the decay of the dark ages in order to return to the golden age all the sooner. It is in this context that phrases like ‘creative destruction’ and ‘make America great again’ gain a new resonance.

One of Traditionalism’s leading thinkers is the Italian Julius Evola, who added a layer of cultural bigotry with ‘whiter, Aryan people constituting a historical ideal atop those with darker skins – Semites, Africans, and other non-Aryans’. Chillingly, he saw tyrants like Hitler and Mussolini as a kind of destructive ‘readjustment’. And Bannon saw Trump as a destructive force, hastening the end of the dark ages (the fact that Trump saw himself as a builder may have contributed to the rift between them).

Fundamentally, Traditionalists are opposed to the very idea of progress while cyclical time gives them the intellectual cover for this view because the concept ‘recognises no past, present, or future’.

The cycle of life

Teitelbaum writes: “Those attuned to cyclic time do not attempt to progress toward a previously unrealized state of virtue, condemning the present and the past in the process.” Further: “The cycle also entails a motion from the central core, away to its edges, and back again – centripetal and centrifugal. It entails movement of departure from the illusion of time and progress, and movement of return back towards the core of eternal truth, on and on.”

Teitelbaum argues that one of the most disturbing aspects of Traditionalism – apart from questions about the truth of cyclic time as such and, in particular, its Hindu manifestation – is not its cultural bigotry, which some adherents don’t agree with anyway, but the idea that we can never make progress. The fact that we no longer hang, draw and quarter people or legalize profit from enslaving people is irrelevant. What’s important is a return to the golden age of spirituality, even if that entails a return to barbaric practices and enslavement. Indeed, the very notion of slavery has been turned on its head so that we in our Western ‘modernity’ are not actually free but enslaved by materialism and consumerism. In fact, some of their concerns about what some call the psycho-politics of Big Data and neoliberalism do have some resonance. Those on the Left, however, are more likely to seek ways of countering the wilful ignorance that goes hand-in-hand with psycho-politics in an attempt to encourage more critically aware and engaged citizens; while Traditionalists are more likely to see psycho-politics as a welcome sign of the degeneration of the dark ages on the way to spiritual renewal.

For some this choice is no real choice at all because they will find nothing intellectually attractive about Traditionalism, but in so far as it is important to know how at least some quite influential people think, then Teitelbaum has done us all a favour.


Climb every mountain!

The mountain of truth

IN this world of alternative facts and relativism it’s comforting to know that there is a hilltop far away where the light of truth still flickers – if somewhat dimly. Indeed, towards the end of the 16th century the metaphor of the hilltop of truth was used by Francis Bacon – who was to become Lord High Chancellor in 1617 – to exalt the notion of Truth: “It is a pleasure to stand upon the shore and to see ships tost upon the Sea: A pleasure to stand in the window of a Castle, and to see a Battaile, and the Adventures thereof, below: But there is no pleasure comparable to the standing upon the vantage ground of Truth: (A hill not to be commanded, and where the Ayre is alwaies cleare and serene;) And to see the Errours, and Wanderings, and Mists, and Tempests, in the vale below:”. (By the way, spellcheck had a field day in that section!)

But this beautiful description of Truth – almost equating it with beauty – has been seriously undermined by what might be called postmodern epistemic relativism and, as Ophelia Benson and Jeremy Stangroom put it in Why Truth Matters, attacks on the ‘canons of coherence, logic, rationality and relevance – which are reminiscent…of counter-enlightenment and reaction’. We can see the results in the shameless lies and, at best, dissembling of the Trumps, Johnsons and Cummings of this world. It should be added that a common tactic of the post-truth brigade is to misrepresent the Enlightenment as privileging reason over everything else and in so doing attempting to eliminate mystery from the world.

The Enlightenment

However, as D’Alembert wrote in response to Rousseau’s contempt for Enlightenment rationality: “In sum, even assuming that we might be ready to yield a point about the disadvantage of human knowledge, which is far from our intention here, we are even farther from believing that anything would be gained by destroying it. Vices would remain with us, and we would have ignorance in addition.” Without an appeal to truth and reason he who shouts loudest wins, so we are ill-advised to deliberately try to undermine reason and respect for truth-seeking because, ultimately, these are the only tools we have against the tyranny of ignorance.

It is easy to despair sometimes in the face of the constant flow of almost wilful ignorance, the siren calls of the tech giants and the effluvial stream of postmodern irony – but there are still beacons of hope out there, standing atop the mountain. The economist Tim Harford is one such beacon and his popular Radio 4 programme More or Less is a paean to Truth, attracting a loyal following. And his book How to Make the World Add Up – which was Book of the Week on Radio 4 towards the end of last year – is an appeal to the sort of rationality that is despised by people like Trump with their ‘alternative facts’.

In this book Harford provides 10 rules to deploy when we are trying to navigate our way through the dense thickets of statistics, and the first is to search your feelings. He writes: “Our emotions are powerful. We can’t make them vanish, and nor should we want to. But we can, and should, try to notice when they are clouding our judgement.”

Our emotions are powerful and we should be wary of them when they cloud our judgement

Other chapters take us through issues like the importance of taking our own experiences into account, as well as the statistics, because both can be true; avoiding jumping to conclusions too quickly; attempting to put statistics into a wider context; checking to see if there is any information missing; demanding transparency; not taking statistical bedrock for granted; and keeping an open mind. But his golden rule, if all else fails or is forgotten, is to ‘be curious’. Curiosity may be the cure for boredom and, as Harford writes, makes us ‘burn with the desire to know more’.

Building with statistics

Once we have donned the crampons of Harford’s rules, we can begin to climb Bacon’s hilltop of Truth – but what about the muddy waters of ethics? A moment’s thought, however, demonstrates that the ‘canons of coherence, logic, rationality and relevance’ must also surely apply to ethics, even if our conclusions might be less firm than in the realms of statistics and science. As Alasdair MacIntyre notes in The Nature of the Virtues, even a ‘relatively coherent tradition of thought’ does not necessarily produce any real ‘unity of concept’. On the other hand not all is lost, because he is able to come up with three conditions that would have to be met – and potentially found – for the ‘concept of virtue to be made intelligible’. He adds: “The first stage requires a background account of what I shall call a practice, the second an account of the narrative order of a single human life, and the third an account of what constitutes a moral tradition.” Thus, although we haven’t come to any conclusions, MacIntyre does provide a ‘coherent, logical, rational and relevant’ framework within which we may be able to make progress.

And indeed progress has been made in other areas of moral theory. For example, Act Utilitarianism focuses purely on those acts which are likely to generate the greatest happiness of the greatest number. Deontology, on the other hand, is usually thought to eschew consequentialism because of the odd conclusions that the latter can sometimes throw up. The development of Rule Utilitarianism, however, is an attempt to soften some of these uncomfortable conclusions and move closer to Deontology, while some advocates of Deontology acknowledge that their approach does take some account of consequences.

As we have seen, the search for truth and reason in the uplands is far from dead – but it requires effort, something that the psychopolitics examined in a previous blog tends to undermine as Big Data and neoliberalism seek to turn us all into gibbering infantilized consumers. But surely it is worth the climb – even if we only get as far as the foothills.


How to escape the caged Self

“To teach how to live without certainty, and yet without being paralysed by hesitation, is perhaps the chief thing that philosophy, in our age, can still do for those who study it.” So wrote Bertrand Russell in his History of Western Philosophy in 1946. For Russell, philosophy itself dwelt in the uncertain, uncomfortable position between the certain dogmas of theology and the relatively firm empirical ground of science. It may seem odd to write about uncertainty at a time when we seem to be more certain of our political and moral beliefs than ever. One is either a Brexiteer or a Remainer, a pro-lifer or a pro-choicer, an individualist or a communitarian, and so on and on – and ne’er the twain shall meet. And yet there is also a sense in which this apparent certainty is a cover for a deep-rooted uncertainty, one that spares us the inconvenience of having to think too much.

Bertrand Russell

Even Russell wasn’t immune from diving for cover in the form of Descartes’s homunculus or, as Gilbert Ryle put it derogatively in his The Concept of Mind, the ‘ghost in the machine’. In The Problems of Philosophy Russell wrote: “Ultimately one has to come down to a sheer assertion that one does know this or that – eg one’s own existence.” And according to Mary Midgley in her remarkable book Wisdom, Information and Wonder, this was the method recommended by Descartes. “Russell’s faith in this approach never faltered, however, and he regarded philosophers who moved away from it with a certain mystified disgust,” writes Midgley, who was one of the quartet of great female philosophers along with Iris Murdoch, Philippa Foot and Elizabeth Anscombe. Rather than questioning everything, as he claimed he was doing, Russell ‘sat very firmly on a particular set of assumptions which dictated just what he was and was not going to question, and which determined also the form of the question’.

Mary Midgley

For Midgley, our society has been captured by this solipsistic notion that there is an entity trapped inside a metaphysical cage from which we have to find a way of escaping. For Descartes and Bishop Berkeley the escape was provided by God. For Kant it was we who effectively created the objective manifold around us – and any knowledge of the world-in-itself was for ever beyond our ken. Meanwhile Schopenhauer, following on from Kant, opened his magnum opus The World as Will and Representation with the words: “The world is my representation.”

Rene Descartes meditates

In his epigrammatic work Tractatus Logico-Philosophicus Wittgenstein took a similar position. But by the time of his Philosophical Investigations he had completely flipped, acknowledging that we could not have had a concept of the Self in the first place unless we already thought of it as part of the outside world – thus, in one bound, escaping solipsism. And, as Midgley writes: “Certainly, too, we would have no language to speak of it (the world) if we did not conceive those others as able to communicate like ourselves, and living in a public world which could be communicated about.” Further, Descartes’s ‘I think, therefore I am’ is not ‘basic at all. It could not be because it is expressed in language’. And as Prof Raymond Tallis – who has endorsed the work of Salisbury Democracy Alliance (SDA) – writes in the latest edition of Philosophy Now: “When I tell you that no-one exists apart from myself, this is not a logical contradiction – rather, the very act of my asserting it to you makes sense only if what I assert is untrue.”

Prof Raymond Tallis

Meanwhile, in a passionate plea for communitarianism and against the bleak solitude of the caged Self, Midgley writes: “From the deepest roots of our experience, we are social beings directly inhabiting the world, members of a community and of a species whose faculties have all evolved to fit them for a wide physical and social context, not solitary astronauts.”

And in a section that has particular relevance to today’s divisions, tribal thinking and social media echo chambers, she writes: “Moving away from Cartesianism, we may suggest that theoretical disputes as well as practical ones are best settled, not from on high, but by peaceful negotiation between the parties involved, with background help and advice from outside observers.” It’s a sentiment that could be set as a preamble for the justification of Citizens’ Assemblies, for which SDA is campaigning.


From the collective to the individual

IN some parts of Western society individualism rules supreme and reaches its apogee in neoliberalism in which the only relation that exists between individuals is transactional. This relationship is encapsulated within the mythical figure of Homo Economicus who is supposedly driven solely by rational self-interest and becomes a consumer and spectator in society rather than a participant. Philosophically, it is expressed in its purest form as methodological individualism, which asserts that all attempts to explain social or individual phenomena are to be rejected unless they are couched wholly in terms of facts about individuals. The problem with this position, however, is that it excludes any individual for whom a sense of community is constitutive of how she perceives her being.

It should be said that other political philosophies are available and one such is provided by the economists Paul Collier and John Kay in their book Greed is Dead. They argue that extreme individualism ‘is no longer intellectually tenable’, raising the question as to whether it was ever intellectually tenable. It could be argued, of course, that when the luminaries of the Enlightenment suggested that we should use a bit more reason in our lives, they were also promoting the rights of the individual against the oppressive State. Perhaps this was necessary at the time – and the rise of universal human rights still rightly protects the individual in this sense – but perhaps now the dominance of the individual has gone too far and we need to address the imbalance because, as Collier and Kay point out, ‘human nature has given us a unique capacity for mutuality’.

Is greed dead?

The importance of mutuality was also stressed by the Russian thinker and anarchist Kniaz Petr Alekseevich Kropotkin in his 1902 work Mutual Aid: A Factor of Evolution, in which he writes ‘besides the law of Mutual Struggle there is in Nature the law of Mutual Aid, which, for the success of the struggle of life, and especially for the progressive evolution of the species, is far more important than the law of mutual contest’. The use of the word ‘progressive’ here is significant because it sets Kropotkin up in a communitarian tradition that is very different from that of Collier and Kay, as we shall see.

Kniaz Petr Alekseevich Kropotkin

Nevertheless, the authors do agree with Kropotkin that biology, ‘far from lending support to the premises of individualism, undermines them’, and they point to myriad organisations that are neither individualistic nor statist, including families, clubs and associations as well as, one might add, political parties and trade unions. Adam Smith, primarily a moral philosopher but regarded, on the strength of his book The Wealth of Nations, as the founding father of modern economics, fully acknowledged the complexity of humans and their relationships – and yet he has been trivialised by neoliberals to mean the benefits of the invisible hand of self-interested individuals, a metaphor that he used just once in his otherwise rich and rewarding book.

Adam Smith

Collier and Kay make their position clear when they write: “And we claim that agency – moral, social and economic – is not polarized between the individual and the state, but that society is made up of a rich, interacting web of group activities through which individuals find that fulfilment.” It’s probably fair to say that the authors fit firmly in what might be called the conservative tradition of communitarianism alongside Edmund Burke’s ‘little platoons’ and Hegel’s ‘civic community’, although they would probably balk at Hegel’s valorisation of the State. Indeed, their central argument is that there has been too much centralization in the State. Even our much-loved NHS comes in for severe criticism and they argue that health care is ‘well suited to decentralized provision’, although they don’t say how this would happen without health care descending into a postcode lottery.

As we have seen, this is very far from the communitarianism of Kropotkin or, indeed, his fellow anarchist Noam Chomsky, who wrote in his book On Anarchism that while he looks forward to a post-capitalist society and the ‘dismantling of state powers’ he also acknowledges that ‘certain aspects of the state system, like the one that makes sure children eat, have to be defended – in fact defended very vigorously’. And Karl Marx explored the notion of communitarianism, which he rooted in our material being in that the ‘sum total of these relationships of production constitutes the economic structure of society – the real foundations on which rises a legal and political superstructure and to which correspond definite forms of social consciousness’. Famously, in A Contribution to the Critique of Political Economy he wrote: “It is not the consciousness of men that determines their being, but, on the contrary, their social being that determines their consciousness.”

Lest we forget, and in response to the individualism of Martin Buber in a previous blog, religion can also deliver a more communitarian approach. In his book The Way of St Benedict the former Archbishop of Canterbury Rowan Williams writes: “Or, to pick up our earlier language, it is the unavoidable nearness of others that becomes an extension of ourselves. One of the things we have to grow into unselfconsciousness about is the steady environment of others.”

St Benedict

One of the most appealing aspects of communitarianism, then, is that it appeals to thinkers across the political, moral and religious spectrum. And one of the most appealing aspects of Greed is Dead, at least for Salisbury Democracy Alliance, is that their decentralizing communitarianism leads them to regard Citizens’ Assemblies as being an ‘interesting innovation in democratic practice’.

What needs to be stressed in all of this, however, is that communitarians are not advocating that the collective should crush the individual. Rather, given sympathetic conditions, the individual, properly understood, emerges out of the collective, is shaped by it and, in turn, given the opportunity, helps to shape the collective. As Collier and Kay write, it is through the collective that individuals find their fulfilment.


To ambiguity and beyond!

I and Thou is a concept introduced by the philosopher and theologian Martin Buber in his book Ich und Du, which roughly translates as I and Thou. Buber believed that these two basic word pairs are essential to understanding how one responds or communicates to another.

HOW the collective emerges out of the individual or how the individual emerges out of the community are questions that go to the heart of modern society. Of course, two possible solutions are either that there is no such thing as community or that there is no such thing as the individual. But for the purposes of this blog we will assume that both exist and attempt to work out how, if at all, the individual becomes part of the collective without losing an ethical perspective.

In I and Thou Martin Buber, as the title of his book implies, starts with the individual. For him the social means ‘the community that is built up out of relation’ but he is aware of the problem that involves the ‘collection of human units that do not know relation – modern man’s palpable condition of lack of relation’. Bearing in mind that this book was first published in 1923, one wonders what Buber would think about today’s atomized society. Be that as it may, Buber writes: “Then we find only the one flow I to Thou unending, to one boundless flow of the real life.” But then Buber argues that the ‘religious man stands as a single, isolated, separated being before God, since he has also gone beyond the state of the moral man, who is still involved in duty and obligation to the world’. The moral man is ‘still burdened with responsibility for the action of those who act’. Here Buber becomes somewhat opaque as he argues at the same time that although the individual is not ‘freed from responsibility’ he has nevertheless ‘abolished moral judgements for ever’.

Martin Buber

If this sounds familiar it may be because Buber was influenced by Soren Kierkegaard, for whom God becomes dispensable if He is drawn into the ethical sphere and will then, eventually, disappear. Interestingly, Buber references Nietzsche, and it’s hard not to draw a parallel with his notion of Beyond Good and Evil, which transfers amorality from God to a post-God world – a world in which morality has no meaning unless it is ruled by the Ubermensch, for whom morality means whatever is good or bad for him.

Friedrich Nietzsche

Another profound philosophy of the individual is existentialism, although it is often argued that Kierkegaard and Nietzsche were its precursors. Jean-Paul Sartre was acutely aware of the problem that self-creation and individual freedom posed for ethics. He always intended to address this problem after his magnum opus Being and Nothingness, but he never did. It was left to his lover Simone de Beauvoir to tackle it, which she attempted in The Ethics of Ambiguity. There she claims that it is the essential tensions that we experience in life – the chief of which is that between life and death – that lead to the place where ethics, politics and metaphysics intersect. What this means concretely is that at the very point that we become aware of our own existence we also become aware of sharing it with others so that, for example, ‘I’ cannot ‘will my own freedom without, at the same time, willing the freedom of others’.

The Ethics of Ambiguity

John Rawls in his A Theory of Justice also attempts to extrapolate from the individual to the sort of society that individual would choose if they had no idea what their position in society was. But as we have seen in a previous blog, he does this in a question-begging sort of way by excluding any kind of communitarian solution in which a communal role is, in part at least, constitutive of the individual’s identity.

It seems, then, that the very notion of ethics is precarious when one starts with the individual. Either it disappears with the appearance of God (Buber and Kierkegaard) or it disappears with the death of God (Nietzsche). Alternatively, it rests in the restless world of ambiguity (de Beauvoir) or begs the question against anyone who proposes a more communitarian approach.

In the next blog we will focus on communitarianism to see if it can do any better and move on from ambiguity.