Archive for the ‘Philosophy’ Category

I’ve just finished reading Antifragile: Things That Gain From Disorder by Nassim Nicholas Taleb. It’s a sprawling, fascinating, maddening book that is badly in need of a copy-editor. But one of Taleb’s pet hates (he has many) is copy-editors.

There is a simple and profound central idea. Think of anything at all: a person, an idea, a relationship, a business, a country, a piece of technology, an ecosystem.

Some things are fragile. When some kind of crisis occurs, an unexpected event, a systemic shock – then they break. It might mean a small bit of damage or the destruction of the entire unit. Fragile things are harmed by crises.

What is the opposite of fragile? Our instinct is to use words like robust, strong, solid, resilient, perhaps flexible or adaptable. Robust and flexible things do not break when a shock comes; they can withstand crises and shocks. That’s true. They are unharmed. But this isn’t the opposite of fragile. The opposite would involve something that positively benefits from a crisis or a shock, that comes out better rather than just the same. We genuinely don’t have a word for this, which is why Taleb invents one: antifragility.

He gives a neat illustration. If you put something fragile in the post, like a teapot, you pack it carefully and put a big sticker on the outside saying, ‘Fragile: Handle with Care’. What is the ‘opposite’ kind of package? You are tempted to say this would be a robust or strong parcel. But if you send something in the post that is more-or-less unbreakable, like a block of wood or a stone, you don’t put a sticker on the outside saying ‘Unbreakable: Don’t Be Anxious About This’, you just send it without any warning signs. The opposite kind of package, with something antifragile inside, would have a sticker saying something like this: ‘Antifragile: Handle Carelessly, Drop Me, Be Reckless With Me, Try To Damage Me’.

What would go into such a package?

Taleb shows how many things in life and society are antifragile. They actually benefit from crises and shocks, at least within certain limits. The human body is one example: it doesn’t benefit from being pampered; it grows stronger through certain shocks and stresses – within limits. Some ideas only develop through challenges and awkward confrontations. Some businesses are perfectly poised to benefit from difficult and unexpected situations, because they are able to adapt and seize new opportunities. Some relationships are able to discover new depths and different kinds of intimacy through problems and difficulties.

What is it that makes some things fragile, some robust, and some antifragile? You’ll have to read the book yourself!

The other big theme is the nature of rationality: how we try to predict the unpredictable, and when we fail and are caught off guard we try to pretend we knew what was going to happen. It’s much wiser, argues Taleb, to admit that many things, especially future crises and disasters, are completely beyond our powers of reasoning (even though they may be rational in themselves). The trick is not to be ready for a particular unexpected event, which is by its very nature unpredictable, but to be ready for something unexpected and unpredictable to happen, so that when it does happen we are able to react in a creative and intelligent way, bringing an unexpected good out of these unexpected difficult circumstances (antifragility), and to create systems that are resilient to major shocks or at least not set up so that they will shatter when the first unpredictable jolt takes place (a certain kind of flexibility and robustness).

It was the perfect bedtime book for me. Easy to read, full of stories, provocative. And it genuinely made me rethink a lot of things I had taken for granted without question before.

Fr Philip Miller has an article about Faith and Science in this month’s edition of the Pastoral Review, going over some of the basic history, theology and scientific theory.

Einstein’s blackboard

In the section on cosmology he writes about the anthropic principle: the way the universe is tuned in such a precise way as to allow the possibility of human life. I’m not sure about this. I’m not saying it’s untrue, I just haven’t done enough to think through whether I find the argument convincing or not.

What speaks to me more is the simple argument from order: that an ordered universe requires some transcendent foundation for its own order (i.e., outside space and time); and that scientific explanation presupposes that the universe can, at least in theory, be explained, and it therefore assumes that the ultimate explanation for the universe has a foundation which is outside the universe itself (at the metaphysical level – that the universe cannot contain the foundation of its own laws; and at the epistemological level – that science cannot justify the foundations of its own scientific principles).

This is how Fr Philip puts it:

The fundamental question remains, for a multiverse just as for a single universe: what is the underlying, unifying cause? The answer is that there must be a necessary being, that is, some sort of ‘God.’ Universes, being complex, law-governed entities, are not simple, and so cannot be metaphysically necessary (since ‘something’ must cause/explain the underlying unity of the complex whole).

Some of Professor Stephen Hawking’s work has been on the nature of the Big Bang, the proposed initial moment of the universe. Some of his more recent hypotheses have been to provide solutions to the complex physics of the early universe that avoid any suggestion that the Big Bang is, in effect, a creation ex nihilo. Hawking’s collaborator, physicist Neil Turok, developed the idea of the ‘instanton’ model of the Big Bang, which has, in simple terms, ‘no beginning.’ And yet, it is highly instructive to note Turok’s own words about their modelling of the universe’s initial expansion phase, termed ‘inflation’:

“Think of inflation as being the dynamite that produced the Big Bang. Our instanton is a sort of self-lighting fuse that ignites inflation. To have our ‘instanton’ you have to have gravity, matter, space and time. Take any one ingredient away and the ‘instanton’ doesn’t exist. But if you have an ‘instanton’ it will instantly turn into an inflating infinite universe.” [Turok, N., commenting online on his own work]

In other words, even in their attempt to define a universe with no beginning, they still have to assume that there is a pre-existing framework of physical laws just sitting there, which the material universe must obey. The universe clearly doesn’t invent its own laws: it requires a law-giver, and that law-giver has to be outside the universe of matter, space and time; it must be spirit, God Himself.

Which raises the child’s question, ‘But who made God?’ To which the answer is: God is not the kind of thing that needs to be made. Or, to put it in the positive: God is precisely that one ‘thing’ that is not made by another thing; God is eternal (outside time), spirit (outside space and matter), simple (outside the complexity of secondary explanations), and necessary (outside the chain of secondary causes).

What do you think?

You can read the full article here.

Another Californian self-help craze; part of the booming ‘happiness industry’. It’s called ‘Want-ology’: the science or therapeutic process of discovering what you truly want and setting you free to pursue it.

It's all about Me, by Randy Willis

Rhys Blakely interviews Want-ology’s creator, Kevin Kreitman (a woman…).

For $300 or so, a certified wantologist will quiz you for several hours, subjecting you to a process that is said to draw on psychology, neural science and cybernetics.

“We are only conscious of 3 to 10 per cent of our thought,” she says. “You think that you make decisions consciously, but it’s all underpinned by this hidden system.” When you find yourself in a rut, “it’s usually because all this unconscious stuff is tangled together like a knot”. The job of Want-ology, she says, is to untangle it.

Here is an example of the therapeutic process. A female client came to the therapist, thinking that she wanted a bigger house. The conversation went like this:

What do you want?

A bigger house.

How would you feel if you lived in a bigger house?

Peaceful.

What else makes you feel peaceful?

Walks by the ocean.

Do you ever take walks near where you live that remind you of the ocean?

Certain ones, yes.

What do you like about those walks?

I hear the sound of water and feel surrounded by green.

As Blakely explains:

Instead of moving, she turned a room in her home into a miniature sanctuary, with potted ferns and a table-top fountain. Her wantologist had steered her to a more nuanced understanding of what she really desired – inner peace.

And saved her $400,000 at the same time…

At one level, this is surely a good process. Not losing the $300, but having someone help you work out what you are really seeking, or what’s really bothering you. Our motivations can be incredibly complex, and the heart is a mysterious and sometimes deceitful thing. We think we want something or need someone, and then we realise – perhaps when it is too late – that we were just reacting to something, or acting out of impulse, or trapped in a habit, or replaying an old desire that didn’t actually exist any longer.

Usually, we do this kind of reflecting with a friend, the kind of friend who will be honest enough to say, ‘What’s really bugging you?’ or ‘What do you really want?’ And then we start untying the knots. Or we do it in prayer, in conversation with the Lord.

This is the whole thrust of Sartre’s existential psychoanalysis. Not, like Freudian analysis, to discover some unconscious and therefore unaccepted or repressed motivation. But instead to gain some clarity about the primary motive, the overarching intention, that lies within the muddle of our ordinary desires and actions. It’s not uncovering the subconscious, but making sense of what is within consciousness, seeing the pattern.

And this is not unlike Ignatian spiritual discernment, where you learn to recognise what is the deepest desire of your own heart, and what is God’s deepest desire for you, by reflecting prayerfully on those situations that bring spiritual consolation and light, and those that bring confusion and an unhealthy inner darkness.

None of this means, of course, that you should necessarily follow what you discover to be your heart’s one desire. Clarity is one thing (whether this comes through a Want-ology therapist, existential psychoanalysis, or an Ignatian retreat); but the moral wisdom to work out what you should do with this clarity is another. That’s why I wouldn’t endorse this kind of therapy without knowing what its moral framework is.

It’s good, generally, to know yourself better; as long as the therapist isn’t going the next step and encouraging you to follow your dreams uncritically, heedless of the moral or spiritual consequences, or of the mess they might make to the reality of your present life and relationships. OK, mess can sometimes be good; but not always.

[Rhys Blakely writes in the times2, the Times, March 14 2013, p4]

Highclere Castle - Downton Abbey by griffinstar7

I’ve seen half an hour of Downton Abbey and absolutely nothing of Girls, so don’t think I am recommending either of them. But Anand Giridharadas has a very thoughtful piece about how they represent the shift from the socially-determined self of early 20th century Britain to the chaos of total self-determination experienced by the single women of contemporary New York.

On the surface, all they have in common is their Sunday airtime, at least in the United States. One television show is about English aristocrats, crisp, proper, well-dressed even in bed. The other is about four young women, often lost and very often unclothed, in a setting quite different from Yorkshire: Brooklyn, New York.

But “Downton Abbey” and “Girls”, both hugely popular, sometimes seem to be talking to each other. And it is a conversation of richer importance to our politics and culture than the nudity on one show and the costumes on the other might initially suggest.

On issue after issue, Americans continue to debate the limits of individual freedom — whether to abort a fetus or own a gun or sell stocks or buy drugs. And in different ways, the two television shows address the promise and limitations of the modern, Western emphasis on — even sacralization of — the individual.

“Downton” and “Girls” serve as bookends in an era defined by a growing cult of the self. “Downton” is about the flourishing of selfhood in a rigid, early-20th-century society of roles. “Girls” is about the chaos and exhaustion of selfhood in a fluid, early-21st-century society that says you can be anything but does not show you how.

This is Downton, where people still, just about, know who they are:

Set on a manor in which the hierarchy and fixedness of the country — indeed, of the Empire — are especially concentrated, “Downton” is a world where there is a way to do everything, from cleaning spoons to dressing for dinner. Status has been and still seems immovable, and servants must act at least as convinced of their inferiority as the masters are. Novelty and that great leveler, money, are reflexively suspected.

The drama is this world’s cracking under the pressure of new ideas like individualism. Thus the family driver, believing in equality and marrying for love, runs away with the family daughter; thus the men wear black tie instead of white to dinner one night; thus a new generation of servants is less servile, more willing to question.

Mary McNamara, a television critic at The Los Angeles Times, has described “Downton” as “the tale of an oppressive social and economic system that is finally being called into question.” The drama comes from watching our world slowly, inevitably defeat theirs: “the bondage of social bylaws and expectation, the fear of new technology, the desire to cling to old ways.”

This is Girls:

The daughters of the sexual revolution are depicted without much agency: Far from being conquerors, initiators, even equals, the girls of “Girls” are reactors, giving in to an ex who changes his mind, or a gay man wanting to try something, or a financier seeking a threesome that he manages to upgrade to traditionally twosome marriage.

What begins on “Downton” as a welcome questioning of age and status roles has snowballed by the “Girls” era into grave role confusion: parents who cannot teach their children how to live because they feel guilty about parenting, or want to be friends more than guides, or still dress like teenagers and call their offspring “prude.”

Nowhere is this overshooting truer than with the roles of the sexes. If “Downton” shows a world in which women are starting to claim their own sexuality, “Girls” portrays a sexual dystopia in which those women seem to have negotiated poorly: Men now reliably get what they want, while women must often content themselves with scraps, as when the character Hannah celebrates “almost” satiation in bed as the best she is likely to get…

“Girls” is about atoms that desire in vain to form molecules; about sex lives that breed more confusion than excitement; about people with the liberty to choose every day, on various dimensions, whom to be — and who grow very tired of the choosing.

And this is one of the Girls – Marnie:

I don’t know what the next year of my life is going to be like at all. I don’t know what the next week of my life is going to be like. I don’t even know what I want. Sometimes I just wish someone would tell me, like, ‘This is how you should spend your days, and this is how the rest of your life should look.’

Personification of a Virtue by Antonio del Pollaiolo

In case you missed these, here is Alain de Botton’s list of ten virtues unveiled in his Manifesto for Atheists.

1. Resilience. Keeping going even when things are looking dark.

2. Empathy. The capacity to connect imaginatively with the sufferings and unique experiences of another person.

3. Patience. We should grow calmer and more forgiving by getting more realistic about how things actually tend to go.

4. Sacrifice. We won’t ever manage to raise a family, love someone else or save the planet if we don’t keep up with the art of sacrifice.

5. Politeness. Politeness is very linked to tolerance, the capacity to live alongside people whom one will never agree with, but at the same time, can’t avoid.

6. Humour. Like anger, humour springs from disappointment, but it’s disappointment optimally channelled.

7. Self-Awareness. To know oneself is to try not to blame others for one’s troubles and moods; to have a sense of what’s going on inside oneself, and what actually belongs to the world.

8. Forgiveness. It’s recognising that living with others isn’t possible without excusing errors.

9. Hope. Pessimism isn’t necessarily deep, nor optimism shallow.

10. Confidence. Confidence isn’t arrogance, it’s based on a constant awareness of how short life is and how little we ultimately lose from risking everything.

Why these? Why now? Robert Dex explains:

De Botton, whose work includes a stint as a writer in residence at Heathrow Airport, said he came up with the idea in response to a growing sense that being virtuous had become “a strange and depressing notion”, while wickedness and evil had a “peculiar kind of glamour”.

He said: “There’s no scientific answer to being virtuous, but the key thing is to have some kind of list on which to flex our ethical muscles. It reminds us that we all need to work at being good, just as we work at anything else that really matters.”

My own response, which I sent to the Catholic Herald last week:

I like this list of virtues. It’s not exhaustive, but it’s certainly helpful. It prods you into making a sort of ‘examination of conscience’, and reminds you that there are other ways of living and relating and reacting.

There are obvious borrowings from classical philosophy, the great world religions, English manners, and the self-help books that line the shelves at WH Smiths.

Apart from the obvious absence of ‘God’, they don’t seem to have a particularly atheist spin.

If both believers and non-believers lived by these virtues, the world would be a much happier place; there would be less shouting and more laughter; relationships would be more stable, and we’d get more done in an average day. That’s surely something to celebrate!

But Francis Phillips thinks there is an implicit Pelagianism at work here:

I understand why de Botton is preoccupied with the concept of a virtuous atheist and I do not mock him; indeed I take his yearning to counter the supposedly superior claims of Christianity very seriously. It is a noble ideal and society would indeed be happier and more civilised if more irreligious people of the “Me-generation” were to reflect on his ideas. But just as that selfless quiet heroine of the Great War, Nurse Edith Cavell, realised that patriotism was not enough, so a noble and enlightened atheism, however fine its aspirations, is not enough if individuals or society are to be regenerated or renewed.

The reason, as Catholic theology teaches us, is sin, original and personal, our own and Adam’s. We are not strong enough by ourselves to be good (as opposed to “nice”) without the grace of God. Politeness and resilience – indeed kindness and niceness – are not virtues in themselves; they are attractive characteristics of some people by nature; the rest of us have to fight against being “horrid”, like the little girl with the curl in the middle of her forehead.

It is Pelagianism (and de Botton strikes me as something of a neo-Pelagian) to think we can pull ourselves up by our bootstraps and achieve virtue on our own.

Do you like them? What’s missing?

In my recent post about Web 3.0 I used the phrase layered reality to describe the way that information from the virtual world is becoming embedded in our experience of the real world in real-time. Instead of stopping the car, looking at a physical map, memorising the directions, and then starting off again; now you see a virtual map on your sat nav that matches and enhances the physical reality in front of you. It adds another layer. The next step – part of Web 3.0 – is that the technology that delivers the layer is wearable and invisible, so that the layering is seamless. We have had mobile conversations via earpieces for years now.

The best example of this is the Google Glass. Messages and information that up to now would appear on your computer screen or mobile phone now appear on the lens of your glasses as part of your visual panorama. Fighter pilots have had information appearing on their visors for a long time, so that they can read instruments without having to take their eyes off the scene ahead. The Google Glass is just the domestic equivalent of this.

Take a look at this wonderful video demo:

Claire Beale explains more about the implications for mobile technology:

Ever since Tom Cruise showed us in Minority Report a future where reality is a multi-layered experience, gadget geeks have been waiting for technology to deliver on Hollywood’s promise.

Now virtual reality is about to become an actual reality for anyone with the right sort of mobile phone after Telefonica, the parent company of O2, signed a revolutionary deal last week with the tech company Aurasma.

Aurasma has developed a virtual reality platform that recognises images and objects in the real world and responds by layering new information on top. So if Aurasma’s technology is embedded into your mobile phone, when you point your phone at an image it can recognise, it will automatically unlock relevant interactive digital content.

For brands, this type of kit has some pretty significant implications. It means that commercial messages can now live in the ether around us, waiting to be activated by our mobiles. If your phone registers a recognised image such as a building, a poster or a promotional sticker in a store, say, it will play out videos, 3D animations or money-off coupons to entice you to buy.

See this video demo from Layar:

You don’t just see, you see as others see, you understand what others understand, it’s almost like sharing in a universal consciousness. That’s part of the wonder of this new augmented reality, and also the danger; because it all depends on trusting the source, the provider. Who controls the layers?

But the idea of layering reality is not really new; in fact ‘layered reality’ could almost be a definition of human culture. Culture is the fact that we don’t just experience reality neat; we experience it filtered through the accumulated interpretations of previous generations. The primordial example of culture as a layering of reality is language: we speak about what we see, and cover every experience with a layer of language – before, during and after the experience itself.

And writing is literally putting a layer of human interpretation on top of the physical reality before you: carving some cuneiform script into a Sumerian brick; painting a Chinese character onto a piece of parchment; printing the newspaper in the early hours of the morning. Endless layers that stretch back almost to the beginning of human consciousness.

OK, the reviewers were right, Total Recall is verging on the truly terrible. [Warning: Plot spoilers to follow] They even had the nerve to steal one of the best scenes from the first Bourne film (you could hardly call it a homage), when a man without a memory finds a code that leads him to a safe deposit box that happens to be full of passports, cash, and lots of other secret and mysterious stuff about his secret and mysterious former identity. I had to see it, of course, because I have an inability to not see (forgive the grammar) any new film involving time-travel or implanted memory. It’s a childhood thing. (See my Five Greatest Time Travel Films of All Time post).

But the great thing about even a terrible sci-fi film is that it still makes you think, whereas a terrible Western or rom-com or road movie is simply terrible full stop. In case you don’t know the story, Colin Farrell is a guy who may or may not have had his memory completely erased and replaced by another set of artificial memories, making him unsure about his true identity; and this whole ‘who am I’ identity crisis, which is most of the film, may be taking place in the ‘real world’ (whatever that is), or it may be an artificially implanted memory created by an amusement company called Rekall to ease the boredom of his mundane life – a freely chosen escapist fantasy.

This is all very familiar, but I still find it fascinating! And the final scene, despite being so predictable, sent a shiver down my spine – when we think we are in the real world, at the end of a moderately satisfying drama, but we see Farrell catching a glimpse of a poster advertising Rekall, and we wonder whether anything real has happened at all.

So it raises the obvious questions, that have been raised a hundred times in sci-fi short stories: Is there a ‘true self’? Does it matter whether our ideas and memories about the past, and especially our experiences and personal identity, are true or not? Does it change the person we are today if we discover that something we thought was true turns out to be false, or if something we never knew or imagined turns out to be true? There is a nice moment when the baddie asks Farrell: why can’t you just accept who you are in the present, without worrying about who you might have been in the past?

Part of me is attracted to this. The whole notion of human freedom, and conscience, demands that in some sense we are not completely determined by the past, however much it influences us. We can to some extent remake ourselves, re-invent ourselves, make a new start, experience a conversion.

But here is the rub: there is no such thing as the pure present. We are always moving from a past to a future, making sense of the present and future in terms of the past, even if it is a conscious repudiation of that past. But there is no such thing as ‘no past’, because even ignorance or forgetfulness colours how we experience the past, and how we understand our identity.

All of us have moments of remembering things we have forgotten, or finding out that some powerful experience didn’t happen in quite the way we remembered it. Some of us have powerful, liberating, or terrifying moments when we are brought face to face with a truth from the past that so disorientates our world that we are unsure who we are any more. Our identity is fractured and even fragmented, our understanding of ourselves is transformed. This is often the case with deep and dark family secrets, and it’s why – as I understand it – the present philosophy within social work is to let adopted children know that they are adopted, rather than hiding it from them, or springing it on them later in life.

There is something about faith here as well. Part of coming to know God is discovering, perhaps for the first time, that what you thought was your beginning, your identity, is not the whole story. You are not just a random evolutionary product, or the fruit of a human relationship, but child of God, created by him out of love, cared for within his loving providence, and destined for a life with him for all eternity. Baptism is not, like Rekall, the implanting of false memories; it is the uncovering of memories much deeper than our own, and then the creation – through the grace of the sacrament – of a new identity. And this new baptismal identity is not imposed like an ill-fitting mask or a forged passport that has no connection with our former self, it is the fulfilment of that former self, the raising up to new life of a life that was always secretly longing for it.

If you want to see a really good movie about these themes, get hold of Moon, which I saw over the summer for the first time. (Just to make a contemporary London connection, this is by director Duncan Jones, who is the son of David Bowie from his first marriage, who – David Bowie – is the subject of a retrospective at the V&A which is just opening.)

I stole the title of my previous post from Fergus Kerr’s book Immortal Longings: Versions of Transcending Humanity. It’s a collection of essays about twentieth-century philosophers whose thought, often indirectly, has touched on the human encounter with the transcendent. Kerr is interested in what lies at the very edge of human experience, in those ill-defined questions about origins and meaning and ends that don’t always get asked. It’s the border between philosophy and theology, between reason and faith.

Kerr was a great help to me when I was trying to find a title for my PhD dissertation eleven years ago. I knew I wanted to study in the general area of ‘philosophical anthropology’ – the philosophy of the human person. I had some initial ideas about focussing on the notion of the self and second nature in contemporary philosophers like Charles Taylor and Alasdair MacIntyre. But more and more I was drawn to the subject of human freedom, not as a particular capacity or skill, but as a reflection of the extraordinary fact that human nature is open-ended and only incompletely defined; and that some of the defining is – strangely – up to us. We are, to some extent at least, self-creating creatures. The rest, in terms of my academic journey, is history. Or more simply, the rest is Aquinas and Sartre.

Here is the publisher’s blurb about Kerr’s book.

Daringly extending the agenda of what is usually considered as ‘philosophy of religion,’ Fergus Kerr argues that more religion exists in modern secular philosophy than many philosophers admit.

Examining much-discussed contemporary philosophers such as Martha Nussbaum, Martin Heidegger, Iris Murdoch, Luce Irigaray, Stanley Cavell, and Charles Taylor, Kerr reads their respective stories in the light of Karl Barth’s notion that “transcending our humanity only makes us more human than ever.”

In Kerr’s view, transcendence – the “immortal longings” of his title – plays a central role in many of these philosophers’ systems of beliefs.

Kerr’s brilliant and long-awaited study shows that the theological content of modern philosophy deserves much more attention than it has received in the past.

And here are some comments from the review in the International Philosophical Quarterly.

What does one carry away from this learned and engaging book? Many specifics: insights, aperçus, and good readings of Nussbaum, Barth, and the rest. This alone would justify a close reading by anyone interested in philosophy of religion or in the religious elements in philosophy.

But there is more. One of the delights of this book is Kerr’s humane presence in the text. Through the text shines a person in a certain attunement toward these issues: an attunement which we can admire and learn from.

But finally Kerr does more than catalog a set of concerns and exemplify an orientation toward them. He has named, and lifted up for our attention, the philosophical career of the central theme of religion: what lies beyond us humans, and how do we stand with regard to it? The two conflicting intuitions – that we are at once somehow intrinsically tied to it and yet alienated from it, that we know it and yet do not – seem perennially present in human self-understanding.

To Kerr we owe thanks not only for showing us some fascinating patterns of commonality in surprising places but also for disclosing the problematic unity underlying those patterns.

It’s well worth a read.

Read Full Post »

Yes, there has been a lot of noise over the last few days. I went down to the river on Sunday afternoon, and it was ten people deep on the Chelsea Embankment; I just managed to see the royal party by standing on tip-toe, and quite a few people around me couldn’t see a thing. And walking through Victoria on Monday evening, quite by chance, I caught the post-concert fireworks just a few hundred yards away.

But my abiding sensory memory of the weekend was the early morning silence on Sunday. Battersea Bridge was closed for the flotilla, which meant that our street – which runs down to the Embankment – was also closed to traffic. It was eerie, waking up to silence. No buses, no cars, no sirens. It was as if London itself had been suspended, as I lay on my bed taking in the unusual atmosphere; as if there was less – less noise, less activity; but also more – more presence, more awareness of the place itself and not just what’s happening within it. This is what Sundays used to be like!

#76 – empty streets, by cliff_r

No, this isn’t London! Midtown Manhattan after Hurricane Irene hit the city

I’ve experienced this twice before here in Chelsea. Once was a glorious period of a few months when Battersea Bridge was completely closed for repairs after a boat crashed into one of the arches at high tide. Every morning had this same quality – as if we were living in a cul-de-sac. The other time was during the ash cloud when all the Heathrow flights were cancelled, and the very early mornings – 5 or 6 o’clock – even though I’m not up then – weren’t tarnished by the subconsciously-heard roar of planes overhead.

Another random connection: A Jesuit friend of mine telling me recently that in his community they agreed to completely disconnect the WiFi for one day each month. You might say this isn’t too radical, and perhaps once a week would really hurt. But once a month is better than not at all. And they seem to have appreciated it. Rather than being a burden, it seems to have been a liberation – you simply can’t attend to the emails – they are not ‘there’; sure – they are somewhere, but not there, now, in your computer.

We need a completely car-less day in London once a year. Does anyone know about this? There must be some kind of movement dedicated to this – a campaigning group, or a philosophy/cult – that proposes closing every road within the M25, or at least within the North and South Circular, for 24 hours. To pedestrianise the whole city just for a day. Wouldn’t that be amazing? It could be national street party day, and it could be combined with a bunch of other days that already take place, that would benefit from the no-traffic day, like the Open Gardens day. Let me know any links you know to such a proposal (I just haven’t bothered to look myself yet); and if there isn’t such a proposal, I might start a petition or another Facebook event/group. Does Paris already have an empty street day or something?

Later addition: Two wonderful comments that deserve copying into the main post here. One from David:

This is on a par with Down With Telly Zappers – never mind the elderly and the not so elderly but bed- or chair-bound for whom a zapper is a god-send. Closing down transport in London may be a bonus for some, but it would be a day’s misery for people on minimum wage or paid by the day. And what about tourists and all the people who depend on them for a living?

The other from Ttony, whose astonishing memory for 1970s Punch articles, or his clever search techniques, unearthed this:

I don’t know whether there is a campaign today, but this is what Cliff Michelmore wrote in Punch somewhere around 1971-73.

“THAT did it. I know my dream holiday. Not for me the wine dark sea, burning sands and browning bodies, the counting of calories and minks. I shall dream.

By noon on Friday next, all vehicles (except bicycles) will be removed from the precincts of London and taken at least forty miles from Charing Cross and are not to return until noon the following Monday. All aircraft are forbidden to fly within sixty miles of the aforesaid Charing Cross and no chimney has permission to smoke within the same area. There shall be no television or radio transmissions nor shall there be any newspapers, magazines or other such matter published. No cinema shall show any film other than one having a U certificate. All employees of and owners of joints, strip, gambling, clip, bingo etc. to take the weekend off.

All public buildings, including Royal palaces, Government offices to be open to the public free of charge, and at all times throughout the weekend. It is the intention of my dream Government to allow families to see London as it should be, to take a long parting glance at it before the whole lot goes up in blocks, to walk the streets without fear of being knocked senseless by senseless drivers, and to breathe air without fear of being choked to death.

That is my dream holiday, with the family, just drifting around London. I have no great love of London, in truth I find it as comfortable and warming as a damp overcoat, but this weekend of standing and staring and drifting may just halt our idiot rush to nowhere.

And back to the dream for a moment. We have already booked Sir John Betjeman as our guide and companion for the weekend – so hands off!”

Read Full Post »

Jenny McCartney “celebrates” the life of Eugene J Polley, the inventor of the TV remote control, who has recently died. Without him, there would be no such thing as channel-hopping. And who knows, if we hadn’t made the leap from watching to hopping, perhaps we wouldn’t have been psychologically or culturally ready for the next leap from hopping channels to surfing the web.

Polley was an engineer at Zenith, where he worked for 47 years. I put “celebrates” in inverted commas, because McCartney thinks he leaves a dubious legacy.

I am old enough to remember what viewing life was like before the remote control hit the UK, in the days when there were only three channels and you had to make the active decision to haul yourself up from the sofa and press a button to alter them. It was better. If someone wanted to change the channel, etiquette usually demanded that they consult the other people in the room, only moving towards the television once agreement was reached. As a result, you stuck with programmes for longer: since it took a modicum of effort to abandon them, and people are naturally lazy, even slow-burning shows were granted the necessary time to draw you in.

With the arrival of the remote control, the power passed to whoever held the magic gadget in his or her hot little hands. Automatically, the holder of the remote was created king of the living room, and everyone else became either a helpless captive, or an angry dissenter. As the number of channels steadily grew, so did the remote-holder’s temptation to flick between the channels with the compulsively restless air of one seeking an elusive televisual fulfilment that could never be found.

Channel-surfing is a guilty pleasure that should only be practised alone. There is nothing worse than sitting in the same room while someone else relentlessly channel-surfs. It makes you feel as if you are going mad. You hear – in rapid succession – a snatch of song, a scrap of dialogue, a woman trying to sell you a cut-price emerald ring, half a news headline, and an advertising jingle. The moment that something sounds like it might interest you, it disappears. Worse, when you yourself are squeezing the remote, you find that you have now developed the tiny attention span of a hyperactive gnat. Is it any surprise that, now that alternative amusements to the television have emerged, family members are challenging the remote-holder’s solitary rule and decamping to the four corners of the family home with their iPads and laptops?

I know that lamenting the invention of the remote control will – in the eyes of some – put me in the same risibly fuddy-duddy camp as those who once preferred the horse and cart to the motor car, yearned for the days when “we made our own fun”, and said that this email nonsense would never catch on. I don’t care. Listen to me, those of you who cannot imagine life without the zapper: it really was better before.

I think the phrase ‘surfing the web’ is misleading and actually disguises the fragmentary nature of the typical internet experience. If you go surfing (I went once!) you wait patiently and let a lot of inadequate waves pass underneath your board, but as soon as you spot the right wave, ‘your’ wave, you paddle with all your might to meet it properly, leap onto the board, and then ride that wave for as long as you can.

When you find a wave, in other words, you stay with it. You are so with it and trying not to fall off it that it’s inconceivable that you would be looking out of the corner of your eye for a better one. That’s the joy of surfing – the waiting, the finding, and then the 100% commitment to the wave that comes.

That’s why the phrase ‘surfing the web’ doesn’t work for me. The joy of the web, and the danger, is that you can hop off the page at any time, as soon as you see anything else vaguely interesting or distracting. You are half-surfing a particular page, but without any physical or emotional commitment. You can move away to something better or more interesting – that’s the miracle of the web, what it can throw up unexpectedly. But it means that one part of you is always looking over the horizon, into the other field, where to go next; as if non-commitment to the present moment, a kind of existential disengagement, is a psychological precondition of using the internet.

As you know, I am not against the internet. I just wonder what long-term effects it has on us and on our culture. On the internet, everything is provisional. So if we see everything else through the lens of our internet experience, then it all becomes provisional – including, perhaps, even our relationships.

Maybe that’s the word to ponder: ‘provisionality’.

Read Full Post »

It’s an old trick, and a common childhood game – to cut out an adult head from a magazine photograph and paste it onto the body of a baby. Evian use it on their latest bus-stop advertising campaign.

The first visual message, very boring, is that if you drink a litre of Evian water you will be as stunningly beautiful and alarmingly thin as this model. The second message, slightly tongue-in-cheek, together with the Live Young caption in the corner and the baby’s body T-shirt, is that you will retain the youthfulness, innocence, playfulness and perfect skin that you had when you were a little baby.

The subliminal pro-life message, paid for by Evian, is philosophical: whatever you think about the ‘personhood’ of a baby, this baby is you; you are the same human being; it’s one continuous life; looking backwards – once you were a baby and now you have become an adult; looking forwards – this is the baby who will become (if it survives) an adult.

When I look at a photo of myself at 15 years old, or 5 years, or 5 months, or when I look at an ultrasound scan image of myself at 36 weeks, or 24, or 12 – I say ‘this is me’. It’s a hugely different me, but it’s still me. I ‘identify’ (at a personal level) with this image, with this human being, because there is an ‘identity’ (at a biological and philosophical level) between me today and me back then; just as I identify with the me who existed 2 minutes ago. Identity doesn’t undermine difference – of course there are differences. It just allows you to affirm, at a deeper level, a continuity of existence, and gives you a sound reason for saying ‘that’s me’ or ‘we are the same person’.

The poster reminds you of the continuity between the adult ‘you’ and the infant ‘you’. It doesn’t take much to then make the link between the infant ‘you’ and the ‘you’ in the womb. And that reminds you of the importance of remembering that the human being in the womb is another ‘you’ and not just an ‘it’.

Read Full Post »

I’ve just come across this phrase ‘non-religion’ as an academic term.

The two concepts of nonreligion and secularity are intended to summarise all positions which are necessarily defined in reference to religion but which are considered to be other than religious. Thus, the Nonreligion and Secularity Research Network’s research agenda is inclusive of a range of perspectives and experiences, including the atheistic, agnostic, religiously indifferent or areligious, as well as most forms of secularism, humanism and, indeed, aspects of religion itself. It also addresses theoretical and empirical relationships between nonreligion, religion and secularity.

There is a new website to coordinate research in this area.

The Non-religion and Secularity Research Network (NSRN) is an international and interdisciplinary network of researchers; the network was founded in 2008 to centralise existing research on the topic of non-religion and secularity and to facilitate discussion in this area.

This website – launched in December 2011 – is our new home on the internet. To find out more about the changes to the site, please have a look around, or see the ‘About Us’ section for more information. Meanwhile, we hope you enjoy the site and welcome your feedback and suggestions for additions and improvements.

See what you think. I like the inconsistent use of the hyphen; as if there is an unresolved philosophical/sociological debate here.

Read Full Post »

I’d forgotten what a beautiful collection of paintings there is at the Courtauld Gallery. The tag-line on its website reads ‘one of the finest small museums in the world’; and I can vouch that in my small experience of small museums it comes pretty near the top. Do pay a visit if you have never been (information here). It’s housed in Somerset House on the Strand in central London.

It was the Mondrian-Nicholson exhibition that took me there on Friday. I’ve always enjoyed the Mondrian grid paintings, but I came away with a much greater admiration for Ben Nicholson.

The Mondrian paintings feel like studies, ideas, or speculative essays. They make you think about balance, harmony, relation and discord; how a particular colour and shape relates to another; and there is certainly an aesthetic response. But it feels more like thinking than seeing, as if you are somehow detached from your own experience.

[The two pictures here are not from the current exhibition.]

I think it’s the thickness of the black grid lines. It’s as if Mondrian is saying, ‘I’m telling you how the colours relate’, instead of just letting the relationships speak for themselves. I’m not criticising the project – I’m sure he knew what he was doing. I’m just responding to it.

Nicholson’s geometric abstractions, as well as using a greater variety of colours and daring to incorporate the odd circle here and there, do without the black grid lines; so the patches of colour and space touch each other and seem to grow out of each other. The paintings seem more alive, more organic. They seem to have greater presence.

There is an incredible beauty about two or three of the canvases here, and it helps you to understand the significance of the whole abstract movement in art. The relationship between abstraction and realism is like that between metaphysics and the world. In Nicholson’s geometric paintings you can see what it is for something to be there and not here, to be what it is and not what something else is, to support or oppose or surround or frustrate or liberate or oppress – but all of this now without content. It’s like a dance without the dancers.

It’s not just the art itself that becomes abstract; it’s a means of contemplating in abstraction so much that takes place within human experience and so much that is experienced of the world. One painting took my breath away, and held me there almost in suspension – Painting, Version I, 1938 – heartbroken that it is from an anonymous private collection and I may never see it again in my life. I wish I could find an image to show, but it wouldn’t capture it. You will have to go yourself.

It’s wonderful that the two rooms of this temporary exhibition lead into the small but exquisite selection of early German expressionist paintings in the Courtauld collection. You see artists like Jawlensky and Kandinsky around 1910/11 almost slipping into abstraction, seeing the possibilities of actually breaking free from representation and leaving themselves with form alone – the formality of colour, shape and space. And seeing how much could still be ‘said’ and expressed solely with the formal elements.

It’s just a short step from Kandinsky’s Improvisation on Mahogany, 1910, to the Mondrian-Nicholson paintings of the 1930s next door.

This is the wall commentary from that painting:

By 1910 Kandinsky has developed his art to the brink of abstraction… emphasising the sensation of colour, line and form, freed from their descriptive functions. Here, isolated details can be identified, such as the figure of a woman and the outlines of a walled city to the right. However, the textured patches of brilliant colour generate their own energy and harmony.

So I am now a huge Ben Nicholson fan. Does anyone know where I can see some of his other paintings?

Read Full Post »

Don’t worry, this is not going to be a xenophobic rant. I had supper with a German friend at the weekend, who has lived in France for many years, and has just spent a few weeks in London improving her English.

We got onto the difference between the French and the English, and it was interesting having her fairly objective viewpoint as someone who has lived in both countries as an outsider.

She said that the French, in the way they think and argue, are more abstract. They start with first principles and work outwards to the nitty-gritty of reality. The English are more concrete, more empirical. They start with things, stuff, examples, case-studies, and only then try to draw some more general conclusions from the specific instances.

She also put the same point in another way: that the French work by deduction, and the English by induction.

It struck me that this, if it’s true, is exemplified by our measuring systems, metric and imperial. A metre length is just an idea. It’s not based on anything ordinary or everyday or natural. Yes, there is a bar of platinum-iridium in a vault in Paris that used to be the standard measure of a metre, for reference (although the metre has since been redefined in terms of the speed of light). But the bar, the metre, was created by the French mind – a mind imposing order on the world.

The imperial system – take the foot as an example – is based on (wait for it…) the foot! The whole system of measurement is based on the length of a man’s foot (a man’s and not a woman’s…). You see the world, and measure it, and understand it, in terms of something concrete; you see and understand one aspect of reality in the perspective of another aspect of reality. In the imperial system, man is – literally – the measure of all things; not a metal bar in Paris.

It sounds like I am defending the English way. Not really. There are advantages to each way; and the abstraction certainly appeals to me. And anyway, the French won! The metre rules the world. I’m just noticing the philosophical differences in world-view that are embodied in something as benign as a unit of measure; and how that connects with a German’s perception of English-French differences.

[Update: I received some good criticism in the comments, which I wanted to copy here, about my failure to mention the origin of the metre. E.g. this from Roger: ‘Sorry, Fr Stephen, as a physicist I can’t let you get away with that one – the metre was originally intended to be one ten-millionth of the distance from the Earth’s equator to the North Pole. If it’s “just an idea” it’s a very practical one!’ To which I replied: ‘Thanks Roger. OK – the metre, like the foot, starts in the concrete world. I’d still say the way it was arrived at reflects a different mentality, a more abstract kind of reasoning (taking a distance that can only be established by careful scientific investigation and then dividing it by ten million to establish a length that is more connected with everyday human life) – that reflects something about the difference between a more deductive mindset and a more empirical one.’ The metre, despite the geographical origin, is definitely ‘a product of the mind’; the foot is ‘a product of experience’ – I think.]
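The contrast between the two mentalities can be put side by side in a few lines of code, purely for illustration. The round ten-million-metre quadrant figure is the target the 1790s French surveyors worked towards, and the 0.3048 m foot is the 1959 international definition; neither number comes from the post itself.

```python
# Illustrative sketch of the two definitions discussed above.
# The figures are standard reference values, used here only for comparison.

EARTH_QUADRANT_M = 10_000_000  # equator-to-pole distance targeted by the 1790s survey (~10,000 km)

# The metre: a deduced unit - take a planetary distance, divide by ten million.
metre = EARTH_QUADRANT_M / 10_000_000
assert metre == 1.0

# The foot: an empirical unit - start from the body and measure outward.
FOOT_IN_METRES = 0.3048  # the international foot, fixed by definition in 1959

print(f"1 metre = {metre} m (one ten-millionth of the quadrant)")
print(f"1 foot  = {FOOT_IN_METRES} m, i.e. about {1 / FOOT_IN_METRES:.2f} feet per metre")
```

The deductive route starts from the planet and divides down to the human scale; the empirical route starts from the human body and multiplies up – which is the whole point of the French-English contrast above.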

Read Full Post »

What if there were another you? I don’t mean just an identical twin or a clone with the exact same genes. I mean someone who was like you in every way, the same body and mind and heart, the same past and experiences and memories, the same thoughts and feelings, the same decisions taken and the same mistakes made, standing in front of you now – but not you.

This is the idea at the heart of the film Another Earth, which jumps straight into my Top Ten films of the year. [Major plot spoilers follow – sorry!]

Another planet appears – just a dot in the night sky. As it comes closer it becomes apparent that this planet is the same size as ours, that it even has the same structure of continents and oceans as ours. Then, in a magical sci-fi moment, as the woman responsible for ‘first contact’ with the new planet speaks on a microphone, she realises that the woman talking to her on the other end is herself. [It’s on the trailer here – I’ve ruined it for you!]

So the synchronicity between the two planets and between each corresponding person is absolute, apart from the fact that it inevitably gets broken by the appearance of the other planet – so the woman is not hearing the same words ‘she’ is speaking on the other planet, but actually having a non-symmetrical conversation with her other-self.

First of all, you are simply in sci-fi territory. I love these films. And in fact this film is really a re-make of another film from the ’70s (I can’t remember its name – brownie points for anyone who can help) where the US sent a spaceship to another planet on the other side of the sun, only to discover that the planet was the same as the earth – apart from everything being a mirror image of this earth. So our astronaut lands on the other planet, and another astronaut from that planet lands on our earth, with everyone thinking that our astronaut has come back early – until he sees that all the writing here is in reverse. Anyway – this is classic sci-fi.

But very quickly it becomes philosophical. Looking at this other earth in the sky above, marvelling that we can behold such a world, you realise that this is exactly what we do whenever we reflect on our experience, or use our imaginations, or question what is going on in our own minds. The remarkable thing about human beings is that we can ‘step back’ from our own experience (inner and outer) and view it; that we can ‘see ourselves’. The strangeness of the film brings to light the strangeness of ordinary human life.

We take this ability to reflect for granted, but it really is the key factor that seems to distinguish us from other animals. No-one today would deny that animals can be incredibly sophisticated and intelligent; and on many measures of intelligence they would beat us. But this power of self-reflection seems to be one of our defining characteristics; and it surely connects, in ways that aren’t always clear, with human freedom – the freedom we have to think and imagine and act in ways that go far beyond the instinctual programming we receive as bodily creatures.

So the wonder that Rhoda Williams feels staring up at this other planet is no more than the wonder we should feel whenever we step back and reflect on ourselves.

Then there is a theological angle too. To cut a long story short: Rhoda unintentionally kills the family of musical conductor John Burroughs in a driving accident, soon after the planet is discovered. He is haunted by the loss of his family, and then receives a ticket to travel to the other planet – a ticket that Rhoda has for herself, but she decides to give it to him. Why would he go? Because if the synchronicity between the two worlds was broken when they started to impact on each other, then perhaps the accident did not happen on the other planet, and ‘his’ family is still alive up there.

I call this a theological idea, because it’s about the possibility of redemption, of putting right something that has gone irredeemably wrong in the past. That in some sense this action might not have happened, or it might be possible to go back and undo the harm that has been done. This is crazy of course – in normal thinking. But if it’s crazy, why do we spend so much time imagining/hoping that somehow we could put right what has gone wrong? I don’t think our almost compulsive inability to stop regretting the mistakes we have made is simply a dysfunctional habit that we can’t let go of; it’s a yearning for forgiveness and redemption, for someone to go back in time and allow us to change things, an echo of a possibility of renewal that we can’t justify at a rational or philosophical level – because the past is completely out of reach. It’s about hope.

Or the film is about conscience – the possibility of imagining an action now, as if it were happening, and asking if we really want this parallel imaginative world to unfold into reality, or if we would regret it. So the work of conscience, and of all conscious deliberation, brings us up against another parallel world that is exactly the same as ours – only we have the power to decide whether it shall come into existence or not.

At the very end of the film, in her backyard, Rhoda meets ‘herself’ – we presume she has come from the other planet, with her own ticket, which she didn’t need to give away, because the accident there didn’t happen. All we see is her catching the gaze of the other woman before her, and recognising her to be herself – but not. Then the film ends immediately. It’s incredibly moving. As if a lifelong search, unacknowledged, is finally over; as if, miraculously, I step away and see myself for who I am, and see myself seeing myself. And that, miraculously, is in fact what happens every time we know ourselves through self-reflection, through self-consciousness. Human beings are not just conscious. We are self-conscious. That’s the idea that the film opens up so well.

Read Full Post »

There are different kinds of near-death experiences. There are ‘end of life’ experiences, when people on their deathbed report being drawn towards a certain kind of light, or being sent back into the world of the living, or seeing their own bodies lying there from an out-of-body perspective. I’ve never gone through one of these.

There are ‘near miss’ experiences when something happens that could have been catastrophic, but wasn’t. I can think of twice when I have driven at a reasonable speed straight through a red light and only realised when it was too late. Once was years ago at a major crossroads in St Albans – I don’t know why it happened; I must have just been distracted. I could have killed myself and many others with me. I was terrified afterwards with a kind of retrospective shock; the full force of ‘what if?’

The second time was only a few weeks ago when I went through a red light at a pedestrian crossing here in Chelsea. It wasn’t my fault. I saw it as I was going through it, and I couldn’t work out why I hadn’t spotted it before. I was so perturbed that I drove back, to discover that one of those beautiful hanging flower baskets had been hung by the council on a lamp-post just a couple of feet before the light. I guess when it was hung it had no flowers, but they had since grown and completely obscured the red traffic light. It was genuine concern that drove me, like a good citizen, to call the police, and the local council, and whoever else the next person referred me to. But no-one could deal with it before Monday morning (this was Friday night) or felt that it was urgent enough to find a way of sorting it out. I gave up. I should have gone and cut the flowers myself; but then I’d have probably got arrested.

Anyway, the third kind of near-death experience is the much more everyday ‘intimation of one’s own mortality’ that catches us now and then, often for small and unexpected reasons. I had one of these last week. There was a Mass at Westminster Cathedral offered for the deceased clergy of the Diocese. As I processed in with the other concelebrants, we walked past the Book of Remembrance that was open on the relevant day: a single page for each day of the year, with the names of the clergy beautifully inscribed on the page for the day of their death. And it struck me with great force as I walked past: my name will be in there one day. Probably in quite a few years; but possibly in just a few months or weeks or days (who knows?). But however long it takes, there my name will be – in that very book.

I know this isn’t an unusual experience. It was just very concrete. Every so often I think about death; but I don’t usually have such a simple reminder of how thin the line is between now and then – just a few moments away; just a few letters on the page.

I know these everyday reminders of death are more common in rural communities (or at least slightly less urban ones), where you as an individual have a particular link with a particular graveyard. I’m not saying that you meditate on it every day; but it must be similarly sobering just to think, ‘This is the place where my body will lie one day’; that death is not just an abstract idea but a concrete destiny.

It reminds me of a village I visited just outside Salzburg. I’m used to seeing old village churches in England with the graveyard at the side of the church somewhere. But here the parish church was literally surrounded by the graves of the parishioners. There was a row directly around the external wall of the church; then a path around this row; and then more graves extending out to the boundary wall. So as soon as you walked into the grounds of the church you walked past the graves of your parents, your ancestors, your fellow parishioners, the townsfolk; and you knew that you would lie there one day. My friend said this was typical in small Austrian villages. It wasn’t at all oppressive; it was as if the church itself (and everything that happened within its walls) was living within this larger communion; as if you congregated with your neighbours and friends and family to pray each Sunday, and this congregating just continued after death.

There aren’t many Catholic churches with graveyards at their side in Britain today. The nearby parish in Fulham is probably one of the few. I wish we had a few more, and that we were more connected in these concrete ways with those who have gone before us.

Read Full Post »

I have all sorts of philosophical anxieties about disconnecting ‘official time’ from the ‘real time’ that we experience through the rising of the sun and the arc of the stars – I’ll try to post about these anxieties another day.

But there is a huge historical irony in the fact that Greenwich Mean Time will most likely be replaced by Coordinated Universal Time, which is determined by the International Bureau of Weights and Measures (BIPM) in Paris, a city that lost its own right to determine the world’s time to London many years ago.

In case you missed the details of the recent recommendations of the International Telecommunications Union (ITU), Tony Todd reports:

Greenwich Mean Time (GMT) may be consigned to history as increasingly complex communications technologies require a more accurate system of measuring the time.

International clocks are set according to Greenwich Mean Time, a system that measures time against the rotation of the earth according to the movement of the sun over a meridian (north-south) line that goes through the Greenwich district of London.

The problem for the scientific community is that the earth’s rotation is not constant: it slows down by about a second every year.

US Navy scientist Ronald Beard chaired the working group at the ITU in Geneva that last week recommended GMT be scrapped as the global time standard.

He told FRANCE 24 on Tuesday: “GMT has been recognised as flawed by scientists since the 1920s, and since the introduction of Coordinated Universal Time (UTC) [measured by highly accurate atomic clocks] in 1972 it has effectively been obsolete.”

UTC solved the problem of earth’s uneven rotation by adding the occasional “leap second” at the end of certain years to keep GMT accurate.

But this piecemeal system is no longer suited to the increasingly sophisticated communications technology and the needs of the scientific community.

“With the development of satellite navigation systems, the internet and mobile phones, timekeeping needs to be accurate to within a thousandth of a second,” said Beard. “It is now more important than ever that this should be done on a continual timescale.”

In effect, what the ITU is proposing is that atomic clocks should govern world time. Instead of using the GMT system and adding leap seconds, time should be allowed to be measured without interruption.

Beard explained that large-scale changes could be made (very occasionally) so that, for example, in 40,000 years time people would not be eating their lunch in the sunshine at “midnight”.
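The leap-second bookkeeping the article describes is simple enough to sketch. Here is a minimal Python illustration of how the offset between atomic time (TAI) and UTC grows each time a leap second is inserted; the table below lists only three of the real insertion dates for brevity (the authoritative list is published by the IERS), so the number it prints understates the true present-day offset.

```python
from datetime import datetime

# A few real leap-second insertion dates (UTC gained one second at the end
# of each of these days). This is a deliberately partial table for
# illustration; the authoritative list is maintained by the IERS.
LEAP_SECOND_DATES = [
    datetime(1972, 6, 30),
    datetime(1972, 12, 31),
    datetime(2016, 12, 31),  # the most recent leap second to date
]

# When the UTC system began in 1972, atomic time (TAI) was already
# 10 seconds ahead of UTC.
INITIAL_TAI_UTC_OFFSET = 10

def tai_minus_utc(when: datetime) -> int:
    """Seconds by which atomic time leads UTC at a given moment."""
    return INITIAL_TAI_UTC_OFFSET + sum(
        1 for d in LEAP_SECOND_DATES if d < when
    )

print(tai_minus_utc(datetime(2020, 1, 1)))  # 13 with this partial table
```

With the full table the 2020 value is 37 seconds. The ITU proposal amounts to simply never adding another entry: atomic time runs uninterrupted, and civil time is allowed to drift slowly away from the sun.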

Do you notice that phrase: “Time should be allowed to be measured without interruption”? As if the passage of time itself (the spinning of the earth, the passing of days, the passage of seasons) somehow gets in the way of ‘official’ time – the time on the dial of an atomic clock.

OK, I admit it. As an Englishman I reel at the thought that the ultimate reference point for everything that happens, and in effect for the whole of human history, should be a memorandum issued by a committee in Paris, rather than a line carved into the ground in London.

Read Full Post »

Geothermal map of the Netherlands!

Many Dutch Christians are letting go of traditional beliefs, but holding onto the idea that there is ‘something’ out there, something just above the surface of reality, something more. Robert Pigott explains:

Professor Hijme Stoffels of the VU University Amsterdam says it is in such concepts as love that people base their diffuse ideas of religion.

“In our society it’s called ‘somethingism’,” he says. “There must be ‘something’ between heaven and earth, but to call it ‘God’, and even ‘a personal God’, for the majority of Dutch is a bridge too far.

“Christian churches are in a market situation. They can offer their ideas to a majority of the population which is interested in spirituality or some kind of religion.”

To compete in this market of ideas, some Christian groups seem ready virtually to reinvent Christianity.

They want the Netherlands to be a laboratory for Christianity, experimenting with radical new ways of understanding the faith.

Much of this is led by the Dutch clergy, many of whom are professed agnostics or atheists.

The Rev Klaas Hendrikse can offer his congregation little hope of life after death, and he’s not the sort of man to sugar the pill.

An imposing figure in black robes and white clerical collar, Mr Hendrikse presides over the Sunday service at the Exodus Church in Gorinchem, central Holland.

It is part of the mainstream Protestant Church in the Netherlands (PKN), and the service is conventional enough, with hymns, readings from the Bible, and the Lord’s Prayer. But the message from Mr Hendrikse’s sermon seems bleak – “Make the most of life on earth, because it will probably be the only one you get”.

“Personally I have no talent for believing in life after death,” Mr Hendrikse says. “No, for me our life, our task, is before death.”

Nor does Klaas Hendrikse believe that God exists at all as a supernatural thing.

“When it happens, it happens down to earth, between you and me, between people, that’s where it can happen. God is not a being at all… it’s a word for experience, or human experience.”

Mr Hendrikse describes the Bible’s account of Jesus’s life as a mythological story about a man who may never have existed, even if it is a valuable source of wisdom about how to lead a good life.

His book Believing in a Non-Existent God led to calls from more traditionalist Christians for him to be removed. However, a special church meeting decided his views were too widely shared among church thinkers for him to be singled out.

A study by the Free University of Amsterdam found that one-in-six clergy in the PKN and six other smaller denominations was either agnostic or atheist.

None of this is new. When I was studying theology as an undergraduate in the 1980s (before going to seminary) various versions of this ‘agnostic Christianity’ were on offer. I wonder whether the attraction of this kind of worldview is rising or declining in our present culture in Britain.

Read Full Post »

I’m dying to see James Marsh’s new film Project Nim, not only because he directed one of my favourite documentaries of recent years (Man on Wire), but because it’s about the question of whether or not human beings have a unique ability to communicate with language.

Marsh documents the attempt by Herb Terrace, a psychology professor at Columbia University in New York, to discover whether chimpanzees can learn a human language.

Mick Brown explains:

Terrace’s idea was to give rise to one of the most idiosyncratic scientific experiments of the era, to take a newborn chimpanzee and raise it as if it were a human being, while teaching it to communicate using American Sign Language (ASL). For a period in the 1970s Terrace’s chimpanzee, named Nim, became a celebrity, featuring in newspapers and magazines and appearing on television chat shows – the tribune, as a New York magazine cover story had it, of a ‘scientific revolution with religious consequences that occurs once every few hundred years’.

Herb Terrace was not the first person to hit on the idea of communicating with an ape through sign language. In 1661 Samuel Pepys described in his diaries encountering ‘a great baboon’ brought from ‘Guiny’ that was ‘so much like a man in most things… I do believe that it already understands much English, and I am of the mind it might be taught to speak or make signs.’ In the 1960s a husband and wife team, Allen and Beatrix Gardner, had raised a chimp named Washoe, claiming to have taught it more than 300 signs.

Terrace’s own experiment was forged in a spirit of heated debate about language and behaviour that was raging through academia in the 1960s and 70s. A disciple of the behaviourist BF Skinner, Terrace wanted to disprove the theory of Skinner’s great rival, the linguist Noam Chomsky, that humans are uniquely ‘hard-wired’ to develop language. Even the choice of his chimp’s name, Nim Chimpsky, was designed to cock a snook at Chomsky.

In search of a surrogate mother for his chimp, Terrace turned to one of his former graduate psychology students – and a former lover – Stephanie LaFarge. ‘Herb wanted to do something equivalent to Galileo and Freud in creating a paradigm shift for human beings,’ LaFarge says. ‘That’s who he is: very arrogant and very ambitious.’

Things didn’t work out as planned – you can read the article or see the film to find out why. But here are the conclusions that Terrace came to about the possibility of chimpanzee-human language:

Terrace remains unrepentant about the experiment and its findings. He is presently working on a new book, with the provisional title of Why a Chimp Can’t Learn Language. Chimps, he believes, as Nim demonstrated, are highly intelligent but they do not have what is called ‘a theory of mind’.

‘No chimpanzee – no animal – has ever engaged in conversation. It’s always been “gimme, gimme, gimme”. They’re very astute readers of body language, as Nim showed. But a chimp does not have any reason to think of its own mind, or that somebody else has a mind.’

Not only would a chimpanzee not be able to construct a meaningful sentence of ‘man bites dog’, Terrace says, but ‘he would have no interest in communicating that. A chimp is never going to say, “This is a beautiful sunset”, or “That’s a lovely suit you’re wearing.”’ In short, they will forever remain a closed book.

Terrace ends up agreeing with Chomsky and concludes that there is something unique about the mental and linguistic abilities of human beings.

Read Full Post »

Charles Guignon has written a lovely book called On Being Authentic. He draws on a number of philosophers and historians, and on examples from contemporary culture, to tell the story of where our modern notions of ‘being authentic’ and ‘being true to oneself’ really come from.

Broadly speaking, according to Guignon, we have seen three types of ‘self’ in the West. In pre-modern times, in the classical and medieval worlds, we had ‘the extended self’. Here, what makes me ‘me’ is that I belong to something bigger than me, something that comes before me, and extends beyond me. I don’t choose or define this larger whole – it defines me. As Guignon writes:

My identity is tied into the wider context of the world, with the specific gods and spirits that inhabit that world, with my tribe, kinship system and family, and with those who have come before and those who are yet to come. Such an experience of the self carries with it a strong sense of belongingness, a feeling that one is part of a larger whole [p18].

It reflects the interwovenness of all reality. I am part of an overarching whole, a cosmic scheme. The meaning of my life is very clear, and it is not at all up to me. There is lots of identity and belonging; but very little freedom.

In modern times, over the last four or five centuries, the idea of individuality and subjectivity has become more prominent. I am a subject with my own experiences, feelings, desires and opinions. I relate to the outside world of course, but that relationship is partly determined by my own decisions about how to construe that relationship.

The key term here is ‘autonomy’, so that the modern self is not so much ‘extended’ as ‘nuclear’ or ‘punctiliar’ – meaning I am the centre, the nucleus, of my own world, and not just the periphery of a socially constructed world. I still have an identity, but it’s one that I have helped to create through my personal choices.

In a post-modern culture, according to Guignon’s summary, the very notion of the stable self or subject has been called into question. Human identity is fluid and contextual. We now have different selves and limited powers of choice. There is no stable centre to the self but multiple centres with different perspectives. We have different masks, different roles, different potentialities. Some we are responsible for and in control of, some not. We absorb the values and visions of others without acknowledging the process.

The nuclear or punctiliar self of modernity gives rise to the fragmented or decentred self of post-modernity.  There is at once a radical freedom, even to go beyond who you are and recreate yourself; and a radical impotence, because you never have the secure foundation of a self from which to move or make a decision.

This is all very familiar to philosophers, but Guignon is a good teacher, and he writes with great insight and wit. And what I find so interesting about today’s Western culture, at least in Britain, is that it is one huge pile-up of conflicting notions of the self. It’s not actually post-modern. It’s pre-modern and modern and post-modern all at the same time (and maybe some people would say that this is the very definition of post-modernism!). We are longing to belong, and to be true to our inner selves, and to set off in radically new directions – all at the same time. No wonder we are confused!

Read Full Post »

I gave a talk at the weekend about providence. Is it true that God has a plan for us? Is it true that he guides all that happens within creation, and all that happens within our own individual lives? I wasn’t so much looking at the theology or philosophy of how God ‘acts’ in the world, but rather at the instinctive ways we tend to view things when we are struggling to make sense of events.

I think there are three ‘default’ positions about providence, all incorrect; and we usually fall into one of them even without realising it.

First, there is the idea that God is simply not involved in the ordinary events of life. Everything is random. There is consequently no meaning or purpose in anything that happens. There is no plan. This is an atheist, materialist position; but it’s subconsciously held by many Christians – at least at the level of their psychological reactions to things. It’s pretty bleak.

Second, there is the implicit assumption that as a rule things are random and meaningless and out of God’s control, even though he’s there, in the background. He leaves things to unfold in their own way; and every now and then he steps in to ‘intervene’. I don’t mean through miracles (although they could fit in here); I mean the idea that God only acts on special occasions, when he takes a special interest in something; and that he is fairly detached and indifferent the rest of the time.

I think this view is quite common in the Christian life. We battle on with life as if we are in a Godless world – the structure of our life is to all intents and purposes pagan. Every now and then we pray for something specific; every now and then we have an ‘experience’ of God helping us, or doing something particularly important or unexpected, and we are grateful for that and our ‘faith’ is deepened. But in a strange way this gratitude reinforces the hidden assumption that God is actually not present and not actively concerned for us all the rest of the time.

The third faulty view of providence goes to the other extreme. In this case we believe that God is indeed in control of all history and all events. We believe that everything has huge meaning, that everything reflects God’s loving and providential purposes – which it does. But for this reason we want to over-interpret the significance of every single event. Why is the train three minutes late? Why is the car in front of me green and not blue? What’s the significance of me spilling my coffee or waking before my alarm goes off or bumping into you in the street yesterday? This kind of reflection can become a form of superstition; a kind of obsessive-compulsive disorder.

It’s true that all these small events are part of God’s providential purposes; and it’s also true that sometimes these small events can have a huge significance for someone. Small and apparently ‘chance’ events lead someone to meet their husband or wife for the first time, or to discover their vocation, or to take a different direction in life.

But here is the theological/spiritual point: not all events are of equal significance; and we won’t necessarily know which event has a particular significance for us at any moment, or what its significance is.

So this is the fourth way, and I think the correct one, of interpreting providence: Everything is in God’s loving hands. He is over all and in all and present to all. Everything does have a meaning, a place in his plan. But we can leave God to do the interpreting and understanding. We won’t always understand, but it makes a huge difference knowing that he understands, that he knows what he is doing. Our response is to trust and to hope; and actively to entrust all that we do and all that we experience to him.

Sometimes, for his reasons, we get a glimpse of why something matters and what it means in the broader picture; and this is very consoling. Sometimes, especially in moments of decision or crisis, we need to come to some clarity about whether something is important for us personally, or for the Church, or for society – and this is why discernment is so important in the Christian life. So trusting in providence does not mean becoming passive or indifferent or fatalistic, or ignoring the call to take responsibility or to work for radical change. It doesn’t mean God takes away our freedom. But our fundamental knowledge that God knows what he is doing and is doing everything for our good takes away the existential anxiety that afflicts the pagan heart, and the obsessive curiosity that afflicts the superstitious mind.

Read Full Post »

Charles Moore writes about the purpose of think-tanks. This is the passage that really struck me:

Very few people are any good at policies. There are people who are good at ideas, and there are people who are good at administration, but you need to translate the ideas into forms that can be implemented.

This is certainly my experience. It’s easy to sit round a table at a meeting, swapping great new ideas about how things should be, but it’s much harder to work out how those ideas can make a real impact on the practical planning that needs to be done. Or the admin crowds out the possibility of new ideas emerging, and as a new project starts, or a new year approaches, we simply copy and paste the various templates we have on file from our previous work because ‘everything seemed to go OK last time…’

I couldn’t find a picture of a think-tank, so here are some classic Manhattan rooftop water tanks instead

Here is the main passage about think-tanks:

Do think-tanks make any difference to anything? I ask because I stepped down this week after six years as chairman of the centre-right think-tank Policy Exchange. In a moving ceremony in the garden of Nick Clegg’s old school (Westminster), David Cameron marked the handing over of the reins from myself to the brilliant and witty Daniel Finkelstein of the Times. He spoke about the importance of the battle of ideas.

He is right. Many of the nicest English people deplore ideology in politics, but the problem is that, if nice people have no ideology, others do not follow their example. Nasty ideology has the field to itself. This is very marked in the sphere of Islamism, in which Policy Exchange does excellent work. One reason that extremists can, almost literally, get away with murder, is that moderates do not have the facts and the contacts with officialdom to counter.

Another value of think-tanks is that very few people are any good at policies. There are people who are good at ideas, and there are people who are good at administration, but you need to translate the ideas into forms that can be implemented. For instance, you encourage the idea of ‘free schools’, but, in order for them not to have perverse effects, you need to give them an incentive to include pupils from poor or bad backgrounds in their number. In this spirit, Policy Exchange invented the ‘pupil premium’.

The knack is to be practical while at the same time being faithful to the original idea. Only think-tanks seem to manage this. They are tiny, but they matter. The few, not the many!

I think I’ll start a think-tank. Great idea! But then I think of the administrative energy required to get one going, and my mind drifts off to another earth-shattering idea…

Read Full Post »

I’d always taken it for granted that palliative care is a good thing when it is available, but I hadn’t gone the extra step to think about whether someone has a right to receive it, or whether it would be a duty for an individual or hospital or state to provide it.

Prof John Keown addressed these issues last month in a meeting at the House of Lords put on by the Anscombe Bioethics Centre. His argument was fairly simple. There are many different ethical systems, and they would lead you to conflicting conclusions about many moral issues. But despite this, there would be a consensus about the importance of the relief of unnecessary human suffering and the provision of holistic support for those with serious health issues. And Keown concluded that it would be unethical to fail to meet the need for palliative care when it can reasonably be met, e.g. in countries like the UK with good healthcare resources.

Here is a definition, from NICE, quoted on the National Council for Palliative Care website:

Palliative care is the active holistic care of patients with advanced progressive illness. Management of pain and other symptoms and provision of psychological, social and spiritual support is paramount. The goal of palliative care is achievement of the best quality of life for patients and their families. Many aspects of palliative care are also applicable earlier in the course of the illness in conjunction with other treatments.

Is it also a human right? Keown argued that there is a duty to provide palliative care because of the internationally recognised right to healthcare. So the lack of access to palliative care should be seen as a global human rights issue. This might seem a bit extreme, but he pointed out that there is already a right to avoid ‘degrading treatment’ inscribed in the European Convention on Human Rights, Article 3. And he went on to explore the different ways in which civil and criminal law in the UK already implicitly recognise the duty of providing palliative care.

At the end of his talk Keown speculated about how much palliative care could be improved if the provisions that presently apply to animals in this country (through the 2006 Animal Welfare Act) could be extended to human beings. This summary is from the Freshfields Animal Rescue site:

Owners have a “Duty of care” to the animals they keep, which is a legal phrase meaning that owners have an obligation to do something. Prior to the Animal Welfare Act 2006, people only had a duty to ensure that an animal didn’t suffer unnecessarily. The new Act keeps this duty but also imposes a broader duty of care on anyone responsible for an animal to take reasonable steps to ensure that the animal’s needs are met. This means that a person has to look after the animal’s welfare as well as ensure that it does not suffer.

The Act defines “animal” as referring to any living vertebrate animal, although there is provision to extend this if future scientific evidence shows that other kinds of animals are also capable of experiencing pain and suffering.

Read Full Post »

I was really disturbed by some of the reactions to the recent report into the 2009 Air France crash, which suggested that it would be far better for someone if they had no warning at all about their impending death.

You probably remember hearing about the tragedy: all 228 people aboard were killed when an Air France flight from Rio de Janeiro to Paris crashed into the Atlantic in June 2009. A preliminary report was written two years later, on the basis of information from the aircraft’s black boxes, which were only recovered last month. There is no clear conclusion about what caused the crash – it was partly to do with faulty instrument readings. The fall took three and a half minutes.

This is the bit that disturbed me, as reported by Elaine Ganley and Jill Lawless:

Some families of victims who said they were given information in a meeting with the agency said it was possible their loved ones went to their deaths unaware of what was happening because there was apparently no contact between the cockpit and cabin crew in the 3½ minutes.

“It seems they did not feel more movements and turbulence than you generally feel in storms,” said Jean-Baptiste Audousset, president of a victims’ solidarity association. “So, we think that until impact they did not realize the situation, which for the family is what they want to hear — they did not suffer.”

It’s true that they may not have had to live through the horror of knowing they were falling to their deaths; and I do understand how a relative can find some consolation in knowing this. But surely there are other considerations involved here as well? It must be frightening to know that you are about to die, and I have sat with many people as they face this knowledge and try to come to terms with it – but would you really prefer not to know?

I’m not just writing as a Christian believer now. Yes, as a person of faith, I would rather have a few minutes to pray, to thank God for my life, to say sorry for anything I have done wrong, to offer my life to the Lord, and generally to prepare for my death. But even if I had no faith in God or in a life after death, my impending death would still be a hugely significant horizon, and those last few minutes of life would surely take on an unimaginable significance. I wouldn’t wish for myself that I were left in ignorance. I’d want to know, in order to try to make sense of it, or simply to make the most of it, or at least not to waste it. And I wouldn’t wish for my loved ones to be denied the possibility of knowing that their end was near.

I’m not romanticising death. I’m certainly not pretending that the fear isn’t very real, especially if the knowledge comes quickly and unexpectedly. I’d just rather know. Fear, sometimes, is what helps us to appreciate the significance of some great truth that lies before us; and there aren’t many truths as significant as death.

A film that played with these themes very creatively was Last Night from 1998 (not the new film with Keira Knightley).

Everyone knows that the world is going to end this evening at midnight, and we see how various characters in Toronto react. Their decisions about how to spend the last few hours of their life generally reflect the concerns and priorities of the life they have already lived, the life they have made. Their fundamental intentions are clarified and crystallised in these last moments.

On the other hand, knowing that time is so short, it gives them a chance to make something different of their life. Not so much a moral conversion (although that is also possible), but a reorientation, a new level of authenticity, a sort of redemption – even if the choices some of them made were thoroughly depressing. It’s well worth seeing.

Read Full Post »

The Observer had a piece earlier this month about Britain’s relationship with its intelligentsia, and asked whether we ‘do’ public intellectuals in the way that the French seem to.

Jean-Paul Sartre: the archetypal French intellectual

You can read the views of ten influential thinkers on the topic here. And here are the opening definitions of four of them.

Susie Orbach:

Being able to provoke a different point of view to the standard current ideological or political perspective as played out in conventional newspaper or radio reportage is what a public intellectual does. But it’s not merely about being oppositional, because that’s too negative. Public intellectuals attempt to widen and deepen the public discourse, by adding further analysis and coming at issues in surprising or unexpected ways.

There’s a trend towards soundbites and simplification. We all desire clarity but a way to reach it means understanding at several layers, folding in different kinds of knowledges; in other words complexity. There is a craving for that thoughtfulness which public intellectuals are able to provide.

Will Self:

What the British seem to like are television historians and naturalists, not public intellectuals. You can’t help feeling that’s because one supplies narrative and the other supplies facts, and the British are traditionally empiricists so they/we have a resistance to theory and to theoreticians playing too prominent a role in public life.

Mary Beard:

I think the British have always had this view that France is full of public intellectuals and we are hopeless. I don’t agree. To start with, it’s an awful phrase. Have you ever met anybody who avowed to be a public intellectual? We don’t go in for pontificating to the nation, but if you ask whether we have a vibrant form of political, social and cultural debate in which people who are academic, intellectual, clever – and not just media stars – engage, we have loads of it.

Lionel Shriver:

I guess I understand a public intellectual to be somebody who moves public discourse forward. Someone who either says something new or says something that everybody knows to be true but is afraid to express.

Read Full Post »

Just a follow-up from yesterday’s post about community: Robin Dunbar also writes about the kinds of friendships we form and the number of friends we typically have.

Don’t start over-analysing this and getting depressed about how many friends you don’t have – it’s not a competition or a test of psychological well-being!

On average, we have five intimate friends, 15 good friends (including the five intimate ones), 50 friends and 150 acquaintances. While it is not altogether clear why our relationships are constrained in this way, one possibility is time. A relationship’s quality seems to depend on how much time we devote to it, and since time is limited, we necessarily have to distribute what time we do have for social engagement unevenly. We focus most of it on our inner core of five intimates. Alternatively, it might just be a memory problem: we have a job keeping track of who’s doing what, and can only really keep serious tabs on the inner core of five.

The point about how difficult (and probably unwise) it is to have a large number of ‘intimate friends’ is not different from what Aristotle says about ‘perfect friendship’ in Book 8 of the Nicomachean Ethics.

But it is natural that such friendships should be infrequent; for such people are rare. Further, such friendship requires time and familiarity; as the proverb says, people cannot know each other till they have ‘eaten salt together’; nor can they admit each other to friendship or be friends till each has been found lovable and been trusted by each. Those who quickly show the marks of friendship to each other wish to be friends, but are not friends unless they both are lovable and know the fact; for a wish for friendship may arise quickly, but friendship does not.

Dunbar then connects the question of friendship with yesterday’s question about the ideal size for a community.

But there is one more serious problem lurking behind all this. In traditional small-scale societies, everyone shares the same 150 friends. This was true even in Europe until well into the 20th century, and probably still is true today of isolated rural communities. You might well fall out with them from time to time, but, like the Hutterites, you are bound together by mutual obligation and densely interwoven relationships. And of these, shared kinship was perhaps the most pervasive and important: offend Jim down the road, and you bring granny down on your back because Jim is her second-cousin-once-removed, and she’s got her own sister, Jim’s grandmother, on to her about it.

In the modern world of economic mobility, this simple balance has been upset: we grow up here, go to university there, and move on to several elsewheres in a succession of job moves. The consequence is that our social networks become fragmented and distributed: we end up with small pockets of friends scattered around the country, most of whom don’t know each other and, perhaps more importantly, don’t know the family part of our networks. You can offend Jim, and almost no one will care. And if they do, you can afford to move on and leave that whole subset of friends behind. Networks are no longer self-policing.

Because modern geographical communities no longer have the social coherence they had up until the 1950s, it is perhaps inevitable that people become less willing to remonstrate with miscreants because others are unlikely to back them up. Bearing these factors in mind, is it any wonder that some inner-city communities fall victim to gang violence? Our real problem for the future is how to overcome this social fragmentation by recreating a sense of community in our increasingly urbanised and mobile world.

Read Full Post »

An article about bioethics in the Times gives a frightening example of the way language can be distorted to misrepresent the truth and skew an ethical argument (last Friday, 11 March, page 3). It makes you wonder whether it’s just lazy journalism, or whether the Times has some particular interest in slanting the ethical debate in these areas.

 

Painting by David S. Goodsell - mitochondria at top right

The article is about a new ‘therapy’ designed to cure mitochondrial failure, which can cause fatal conditions affecting about 100 children in Britain each year. These are the facts, reported in a ‘How it works’ box at the side, and sifted from the body of the article: two embryos are created, both fertilised with the father’s sperm – one from the mother’s egg, the other from a donor’s egg. Two pronuclei are taken from the ‘mother/father’ embryo, which is then discarded. These are then placed in the ‘donor/father’ embryo (from which its own pronuclei have been removed), which has healthy mitochondria. This newly ‘created’ embryo is implanted in the mother’s womb and allowed to gestate.

So let’s be clear: an embryo is harvested (I can’t find a better word) of its pronuclei, then discarded, and another embryo is given new pronuclei and allowed to grow. It’s embryos we are talking about. Leave aside for the moment what you think about the personhood of embryos, or their dignity or worth, or whether they have a soul, etc. The scientific point that no biologist would deny is that an embryo is a human life in its very earliest stages: a new creature, at the beginning of its life, biologically and genetically distinct from the life of its parents.

Mark Henderson, Science Editor at the Times, does explain all this. But he peppers the article with ambiguous phrases about what is actually happening. First, in the main article, he writes that ‘the treatment involves merging DNA from two fertilised eggs, one from the mother, the other from a donor’ [my italics here and below]. This is strictly true, but it’s a strange way of referring to embryos. It would be much more natural to talk about two embryos rather than two fertilised eggs, and the suspicion is that this is a way of drawing attention away from the reality that embryos are being harvested and discarded.

Second, in a Commentary box also written by Henderson, he writes, ‘The notion of creating a baby with a small genetic contribution from a third parent is bound to strike some people as controversial’. This is misleading. The mitochondrial DNA in the new embryo will have been indirectly inherited from the donor – in this limited sense the donor makes a ‘contribution’; but it is actually taken from the embryo that has been created from the donor’s egg and the father’s sperm. The ‘small genetic contribution’ is not taken from a third parent (which sounds like a benign piece of information), it is taken from a newly created human embryo.

Notice how Henderson is comfortable calling the finally created healthy embryo a ‘baby’, but never refers to the discarded embryo that has had its two pronuclei removed as a baby.

Henderson goes on to say in his Commentary that the new procedure adds a fresh dimension to issues of surrogacy and egg donation ‘because a third person will also contribute a small amount of DNA to the baby’. I presume he is trying to say that the third person contributing the DNA is the donor. Once again, it’s true that the mitochondrial DNA is indirectly inherited from the donor, but the ‘contribution’ is made directly by the embryo not the donor.

Then, in the caption underneath the photograph of a baby’s foot held in an adult’s hand, we read that ‘The technique replaces faulty mitochondria from the mother with a healthy form from a second egg’. This is completely untrue. The healthy mitochondria do not come from an egg, they come from a newly created embryo, which has its pronuclei replaced with the pronuclei from another embryo.

The ‘How it works’ box is both honest and dishonest at the same time: the text says ‘These [pronuclei] are injected into a healthy embryo’; yet the caption right beside it, under the illustration, says ‘Egg with healthy mitochondria’. Perhaps Henderson was not responsible for these captions and boxes.

You may think I’m being obsessive about language. It just frightens me how language can be manipulated in a reputable newspaper to distort the truth and mask both the scientific and ethical reality of one of the most serious issues facing our culture. It makes you wonder whether the Times is seeking to promote a controversial scientific procedure rather than just report it and let the facts speak for themselves.

Here is the full Commentary [subscription required]:

The notion of creating a baby with a small genetic contribution from a third parent is bound to strike some people as controversial.

Yet Professor Turnbull’s team, which has developed the new IVF technique, is driven by the noblest of ethical motives: the desire to help families affected by a devastating burden of disease.

If the procedure is approved by Andrew Lansley, it stands to help women like Sharon Bernardi, from Sunderland, who has seen six children die in infancy because they inherited mitochondrial disorder.

When Professor Turnbull published promising results a year ago, she posed for photographs with her son Edward, then 20, who had a mitochondrial condition called Leigh’s disease.

Mr Bernardi died last week. As scientists began to consider whether the therapy should be used on patients, his death serves to illustrate the terrible impact these disorders can have — and the need for prevention.

When weighing the advice they will give to Mr Lansley, the expert panel he has convened will consider the safety and effectiveness of Professor Turnbull’s procedure.

They will want to see evidence that human embryos created this way appear to be normal, as well as the results of animal studies.

The medical benefits will need to outweigh the risks that are always involved when techniques like this move from laboratory and animal experiments into human reproduction. There are also ethical issues to be considered.

The principle that more than two parents can contribute biologically to the birth of a child is already recognised in Britain, as egg donation and surrogacy are legal. The new procedure adds a fresh dimension, however, because a third person will also contribute a small amount of DNA to the baby.

Embryo-rights groups will oppose the technique, because it involves merging two embryos, one of which is destroyed. It will also concern some people who object to manipulating DNA in irreversible ways, even if there is a medical benefit, or who feel it is wrong to subject a potential child to a procedure to which it cannot consent.

Mr Lansley could approve the work himself, but given its controversial nature he is more likely to give MPs a free vote. This would provide the first test of this Parliament’s attitude towards bio-ethics. David Cameron, whose disabled son Ivan died in 2009, is understood to be privately supportive.

Read Full Post »

Many discussions about freedom try to push you to an extreme position: you are either completely determined and in denial about this, or radically free to determine what you will do and who you will become. [WARNING: minor plot-spoilers coming up]

The film The Adjustment Bureau, based on a short story by Philip K. Dick, has a nice take on this. The visible, historical world – our ordinary reality – is watched over by members of the Adjustment Bureau. Their job is to make sure that the Plan unfolds as it should – a Plan for human civilisation as a whole, and for each individual. But instead of pulling every string, like Ed Harris sitting in his control room in The Truman Show, they let things take their own course, and step in every now and then to make minor ‘adjustments’, carefully planned interventions that nudge our lives in one direction or another, without causing too many ‘ripples’ that might cause us to think we are in the hands of a higher power. We experience these adjustments as accidents or chance events, but they are the workings of an invisible fate giving shape to our lives. The plot turns on a wonderful scene when one member of the Bureau misses his cue, and someone doesn’t spill a cup of coffee as they are meant to, so that the Plan unravels.

The film illustrates a simple truth: that the whole course of our lives depends on chance events and unplanned encounters. It takes up these themes from those wonderful films Wings of Desire and Run Lola Run. We think we are, to a certain extent, in control of our lives; yet we are not in control of the insignificant happenings that have most significance for our lives. Is it Fate? Providence? Chance?

It’s a light-hearted thriller-cum-comedy-romance, beautifully executed, with one or two weighty ideas from Dick. It has the feel of a Magritte painting come to life. If you like sci-fi, Matt Damon, Emily Blunt, or casual musings about human freedom, you’ll enjoy it. And if you like all four, as I do, you’ll have a ball.

Read Full Post »

I was teaching philosophical ethics yesterday and came across these quotations I’d saved up about the possibility of human freedom.

Isaiah Berlin

The first, from Thomas Nagel, simply describes what a hard-core version of determinism looks like:

Some people have thought that it is never possible for us to do anything different from what we actually do, in the absolute sense. They acknowledge that what we do depends on our choices, decisions, and wants, and that we make different choices in different circumstances: we’re not like the earth rotating on its axis with monotonous regularity. But the claim is that, in each case, the circumstances that exist before we act determine our actions and make them inevitable. The sum total of a person’s experiences, desires and knowledge, his or her hereditary constitution, the social circumstances and the nature of the choice facing them, together with other factors that we may not know about, all combine to make a particular acting in the circumstances inevitable. This view is called determinism… [quoted in Alban McCoy, An Intelligent Person’s Guide to Christian Ethics, 34-35]

The second quotation, from Isaiah Berlin, is about how freedom is in fact a presupposition of ordinary personal and social life, whether we like to admit it philosophically or not:

The whole of our common morality, in which we speak of obligation and duty, right and wrong, moral praise and blame – the way in which people are praised or condemned, rewarded or punished, for behaving in a way in which they were not forced to behave, when they could have behaved otherwise – this network of beliefs and practices, on which all current morality seems to me to depend, presupposes the notion of responsibility, and responsibility entails the ability to choose between black and white, right and wrong, pleasure and duty; as well as, in a wider sense, between forms of life, forms of government, and the whole constellations of moral values in terms of which most people, however much they may or may not be aware of it, do in fact live. [Liberty, 324]

If you want to follow all this up, you can read Alban McCoy’s very helpful chapter about determinism here on Google Books.

Read Full Post »

