Feeds:
Posts
Comments

Archive for the ‘Science/Technology’ Category

A weekend for young Catholic nurses and doctors to reflect on issues of healthcare and faith. See post at Jericho Tree.

Read Full Post »

There have been a few articles recently about the advantages of the one-child family and growing up sibling-free.

Where the Wild Things Are graffiti in Streatham by linniekin

Colin Brazier, Sky News presenter and father of six, puts the other side. The title of his piece is ‘Why having big families is good for you (and cheaper)’. Here are some highlights.

Some of the most startling literature comes from medical research. It has long been known that siblings – by sharing germs at a young age and mutually priming immune systems – provide some protection against atopic conditions such as hay fever and eczema. But the latest breakthroughs suggest growing up with a brother or sister can also guard against food allergies, multiple sclerosis and some cancers. For reasons that have yet to be fully fathomed, these benefits do not apply to children simply by dint of spending time sharing bugs with other youngsters – as they would, for instance, in day care.

The other “epidemics” of modern childhood, obesity and depression, are also potentially reduced by exposure to siblings. A clutch of major studies from all over the world shows that the more siblings a child has, the thinner they will be. Put simply, siblings help children burn off fat. One American study honed its analysis down to an amazingly precise deduction: with each extra brother or sister, a child will be, on average, 14 per cent less obese. Reductio ad absurdum? We can scoff at such a definitive conclusion, until we realise that no one in medical academia has suggested that having a sibling ever made anyone fatter.

None of this is rocket science. When we compare like with like, regardless of family background, children with siblings tend to enjoy better mental health. Obviously, again, this is to generalise massively. The world is full of jolly singletons. But dig into some of the big data sets out there and unignorable patterns emerge. On experiences for which nation states hold a big corpus of statistics – events such as divorce and death, for example – strong correlations exist.

Correlation is not always causation, but it stands to reason that when parents split up or die, a child will benefit from having a sibling to turn to. That solidarity runs throughout the lifespan. After all, a sibling is for life, not just for childhood.

Indeed, policymakers with an eye to areas beyond elderly care may need to wake up to the shifting sands of family composition. In the late 20th century, the received wisdom among sociologists was that it mattered not a jot to society at large whether more people were sticking to one child. Now that assumption is being questioned. Is the valuable role played by siblings in elderly care factored into the welfare debate? Will an economy with fewer creative middle children be as competitive? How easy will the state find waging war when more parents are reluctant to see their only child march to the front?

More broadly, the last decade has seen a major evolution in academic thinking about siblings. They have ousted parents as being the key driver behind personality development. And where, 30 years ago, academics such as Toni Falbo argued that to be born an only child was to have won the lottery of life, now research is running in the opposite direction.

A slew of reports by serious scholars, such as Prof Judy Dunn of King’s College London, have chipped away at the idea that family size is the product of a consequence-free decision. Researchers have shown that “siblinged” children will have stronger soft skills and keener emotional intelligence than single children. They will be better at gratification deferment (because they have learnt to wait their turn) and hit motor milestones such as walking and talking more rapidly than those without sibling stimulation.

Some of the most recent evidence even suggests that a child with a brother and/or sister will have more evolved language skills and do better at exams. This information is truly revolutionary. For decades, the assumption of academic ideas such as the Dilution Theory has been that less is more.

Have too many children and, as a parent, you will not be able to leverage your resources on to a solitary stellar-achieving child. Indeed, for parents who cannot stop themselves hovering above and over-scheduling their hurried offspring, a sibling for their one-and-only can be the antidote to pushy parenting.

I don’t think this is about a binary ‘right or wrong’, with the consequent stigmatising of one size of family over another. There are many different reasons why some families are larger and some smaller. But it’s good to be aware that some of the alarmist articles about the costs of raising children are extremely one-sided.

Read Full Post »

I managed to get a ticket for the very last day of the Ice Age Art exhibition at the British Museum on Sunday.

At one level, the works are extraordinary. To stand in front of a 40,000-year-old Lion Man carved in ivory; to see a flute from the same period made from the bone of a griffon vulture, with six carefully spaced holes waiting to be fingered; to pass from one exhibition case to the next, a succession of statues, figurines, etchings, carvings, tools, weapons, most of them with some form of figurative imagery, thousands and thousands of years old. And to think that for some reason it was in this period in Europe that figurative art first developed.

At another level, it’s extraordinarily ordinary. These are images and carvings that could have been created yesterday, in the local art college, or even the local school. They clearly have a huge and unknown symbolic value, but as examples of figurative art they are simply very graceful and well-kept examples of the human urge to represent what is real.

This is what the human mind does. It produces images of what is out there in the real world (an etching of a lion jumping). It forms imaginary creations by playing with these images mentally and combining and recreating them (the head of a lion on the body of a man). It makes tools (a carefully carved stone core), weapons (a small pouch to launch an arrow), and musical instruments (the vulture bone flute). The mind or imagination works symbolically, and this is what allows us to transform the world, because the symbols don’t just stay in the mind – they change how we relate to the world and what we do in and with it.

It’s the lack of distance between then and now that is so extraordinary. If we could meet these ancestors of ours, and have just a few weeks of contact, perhaps just a few days, we would have learnt their language, and they ours, and we would be communicating as neighbours, as brothers and sisters. And yes, we would be working out whether they were friends or enemies, and the whole of human history would unfold once more…

Read Full Post »

I’ve just come across this Catholics in Healthcare blog, edited by Jim McManus.

As well as the regular posts, it has a very useful page of practical resources, and another page of theological resources.

Here is the ABOUT page:

Celebrating and supporting the Catholic contribution to health, social care and social action

Catholics are busy and engaged in Health and Social Care. We see the work of caring for others as a core part of being Catholic, from being informal carers and volunteers to pursuing careers in nursing, medicine, social care, research and policy.

There are well over 1,000 Catholic agencies and organizations in the UK providing some form of health and social care, from volunteer groups in parishes to local and national Catholic Charities, Religious Orders which specialise in nursing, health and social care, and official agencies of the Catholic Church at local level, such as Diocesan agencies. The Catholic health and social care presence is large and diverse.

This blog

This blog is created by, about and for Catholic Christians working in Health and Social Care. The Blog will update you on the work of the Healthcare Group of the Catholic Bishops’ Conference of England and Wales, as well as providing you with access to other resources and support.

Our Editor and contacting us

The editor of the Blog is Jim McManus, a member of the Healthcare Reference Group of the Bishops’ Conference.

Read Full Post »

Most of us in the seminary are wearing fluorescent green electronic devices clipped to our belts. You might think they were tagging devices, but we find it easier and cheaper to track seminarians by hacking into their mobile phone signals. (Joke! I can imagine some crazy person reading this post too quickly and saying to a friend, ‘Did you know they tag the students at Allen Hall?!’).

In fact, we have splashed out on a job lot of pedometers. We are divided into teams of five, and the aim is to see which team can ‘walk to Rome’ first. I’ve just looked this journey up on Google Maps, and it comes out as 1,089 miles and 356 hours on foot.
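Out of idle curiosity, those Google Maps figures imply a walking pace of just over three miles per hour. Here is the arithmetic as a quick Python sketch; the assumption that each of the five team members logs five miles a day, and that every member's miles count towards the team total, is mine, not an official rule:

```python
# Google Maps' figures for the walk, as quoted above.
distance_miles = 1089
time_hours = 356

# The walking pace Google Maps is assuming.
pace_mph = distance_miles / time_hours   # just over 3 mph

# Days for a five-person team to 'reach Rome' if each member
# walked five miles a day and every mile counted (my assumption
# about the rules, not the college's).
days_needed = distance_miles / (5 * 5)

print(round(pace_mph, 1), round(days_needed))
```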

Pedometer by Shopping Diva

This is a much classier version than the ones we have

It’s not communal virtue; it’s self-improvement: trying to get our activity levels slightly higher, to improve our all-round health and well-being, with the time-honoured incentive of a competition to urge us on.

I know this sounds daft, but in the first two days I walked three miles without going anywhere. What I mean is that I spent the whole time in the building here; and the only time I went out was to give a talk in a parish in west London, and I drove there. So without going anywhere, without walking along a street, I clocked up three miles – just going back and forwards from office to dining room to chapel to photocopier etc. It’s not a big house, and it shows how far you can walk just going about your ordinary business.

I did about ten miles in the first few days. Then…disaster struck. Coming out of the chapel, and straightening myself out after Mass, I caught the blasted pedometer with my right hand, it crashed to the floor, AND IT RE-SET ITSELF TO ZERO!! Ten miles down the drain; ten miles for nothing. I rushed to the college ‘Walking to Rome’ arbitrator, and she said she would give me the benefit of the doubt and add these on at the end. But I understand that now everyone is talking about their pedometers crashing and re-setting, when they had 50, 100, 200, 500 miles on them…

It has made me curious about how much I do walk, and walking in general; and I suppose that’s half the point. I chatted to a friend today and she said that when the pedometer craze broke over the UK years ago (we are very behind here), it was suggested that 10,000 steps was a healthy and realistic distance to aim at each day if you are trying to take this walking thing seriously. That’s about 5 miles.
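For the curious, the 10,000-steps figure converts to roughly five miles with nothing more than an assumed stride length. A minimal Python sketch, assuming an average stride of about 2.5 feet (strides vary, so treat the answer as a ballpark):

```python
FEET_PER_MILE = 5280

def steps_to_miles(steps, stride_ft=2.5):
    """Convert a pedometer step count to miles, given an average stride."""
    return steps * stride_ft / FEET_PER_MILE

# 10,000 steps at a 2.5 ft stride comes out at just under 5 miles.
print(round(steps_to_miles(10_000), 1))
```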

You can tell I am getting pulled in, because now I want to buy a decent pedometer to replace the unreliable one I’ve got. I’ll try to remember to update you. I’m sure you are fascinated by my personal step-count. Maybe I could do a weekly post about this…

Read Full Post »

Fr Philip Miller has an article about Faith and Science in this month’s edition of the Pastoral Review, going over some of the basic history, theology and scientific theory.

Einstein’s blackboard

In the section on cosmology he writes about the anthropic principle: the way the universe is tuned in such a precise way as to allow the possibility of human life. I’m not sure about this. I’m not saying it’s untrue, I just haven’t done enough to think through whether I find the argument convincing or not.

What speaks to me more is the simple argument from order: an ordered universe requires some transcendent foundation for its order, outside space and time. Scientific explanation presupposes that the universe can, at least in theory, be explained, and so it assumes that the ultimate explanation for the universe has a foundation outside the universe itself: at the metaphysical level, because the universe cannot contain the foundation of its own laws; and at the epistemological level, because science cannot justify the foundations of its own scientific principles.

This is how Fr Philip puts it:

The fundamental question remains, for a multiverse just as for a single universe: what is the underlying, unifying cause? The answer is that there must be a necessary being, that is, some sort of ‘God.’ Universes, being complex, law-governed entities, are not simple, and so cannot be metaphysically necessary (since ‘something’ must cause/explain the underlying unity of the complex whole).

Some of Professor Stephen Hawking’s work has been on the nature of the Big Bang, the proposed initial moment of the universe. Some of his more recent hypotheses have been to provide solutions to the complex physics of the early universe that avoid any suggestion that the Big Bang is, in effect, a creation ex nihilo. Hawking’s collaborator, physicist Neil Turok, developed the idea of the ‘instanton’ model of the Big Bang, which has, in simple terms, ‘no beginning.’ And yet, it is highly instructive to note Turok’s own words about their modelling of the universe’s initial expansion phase, termed ‘inflation’:

“Think of inflation as being the dynamite that produced the Big Bang. Our instanton is a sort of self-lighting fuse that ignites inflation. To have our ‘instanton’ you have to have gravity, matter, space and time. Take any one ingredient away and the ‘instanton’ doesn’t exist. But if you have an ‘instanton’ it will instantly turn into an inflating infinite universe.” [Turok, N., commenting online on his own work]

In other words, even in their attempt to define a universe with no beginning, they still have to assume that there is a pre-existing framework of physical laws just sitting there, which the material universe must obey. The universe clearly doesn’t invent its own laws: it requires a law-giver, and that law-giver has to be outside the universe of matter, space and time; it must be spirit, God Himself.

Which raises the child’s question, ‘But who made God?’ To which the answer is: God is not the kind of thing that needs to be made. Or, to put it in the positive: God is precisely that one ‘thing’ that is not made by another thing; God is eternal (outside time), spirit (outside space and matter), simple (outside the complexity of secondary explanations), and necessary (outside the chain of secondary causes).

What do you think?

You can read the full article here.

Read Full Post »

Evgeny Morozov writes about recent advances in ‘predictive policing’. This is not the telepathy of Minority Report. It’s designing algorithms to analyse the ‘big data’ that is now available to police forces, so that hitherto unrecognised patterns and probabilities can help you guess the places where crime is more likely to take place, and the people who are more likely to be criminals.

This is a section from his latest book, To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist.

The police have a very bright future ahead of them – and not just because they can now look up potential suspects on Google. As they embrace the latest technologies, their work is bound to become easier and more effective, raising thorny questions about privacy, civil liberties, and due process.

For one, policing is in a good position to profit from “big data“. As the costs of recording devices keep falling, it’s now possible to spot and react to crimes in real time. Consider a city like Oakland in California. Like many other American cities, today it is covered with hundreds of hidden microphones and sensors, part of a system known as ShotSpotter, which not only alerts the police to the sound of gunshots but also triangulates their location. On verifying that the noises are actual gunshots, a human operator then informs the police.
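As an aside, the ‘triangulation’ mentioned here can be illustrated with a toy calculation. The Python sketch below locates a simulated gunshot from the differences in arrival times at three microphones, via a crude grid search; the sensor layout, speed of sound and search method are my illustrative assumptions, not ShotSpotter’s actual algorithm:

```python
import math

SPEED_OF_SOUND = 343.0                     # metres per second
sensors = [(0, 0), (1000, 0), (0, 1000)]   # microphone positions, metres
true_source = (400, 300)                   # where the simulated 'shot' happens

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Arrival time at each sensor, relative to the (unknown) firing time.
arrivals = [dist(true_source, s) / SPEED_OF_SOUND for s in sensors]

def residual(p):
    # Pairwise arrival-time *differences* don't depend on the unknown
    # firing time, so they can be checked against a candidate point p.
    err = 0.0
    for i in range(len(sensors)):
        for j in range(i + 1, len(sensors)):
            predicted = (dist(p, sensors[i]) - dist(p, sensors[j])) / SPEED_OF_SOUND
            err += (predicted - (arrivals[i] - arrivals[j])) ** 2
    return err

# Coarse grid search over the coverage area for the best-fitting point.
best = min(((x, y) for x in range(0, 1001, 10) for y in range(0, 1001, 10)),
           key=residual)
print(best)
```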

It’s not hard to imagine ways to improve a system like ShotSpotter. Gunshot-detection systems are, in principle, reactive; they might help to thwart or quickly respond to crime, but they won’t root it out. The decreasing costs of computing, considerable advances in sensor technology, and the ability to tap into vast online databases allow us to move from identifying crime as it happens – which is what the ShotSpotter does now – to predicting it before it happens.

Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. This is a practice known as “predictive policing”, and even though it’s just a few years old, many tout it as a revolution in how police work is done. It’s the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether. It all seems too easy and logical; who wouldn’t want to prevent crime before it happens?

Police in America are particularly excited about what predictive policing – one of Time magazine’s best inventions of 2011 – has to offer; Europeans are slowly catching up as well, with Britain in the lead. Take the Los Angeles Police Department (LAPD), which is using software called PredPol. The software analyses years of previously published statistics about property crimes such as burglary and automobile theft, breaks the patrol map into 500 sq ft zones, calculates the historical distribution and frequency of actual crimes across them, and then tells officers which zones to police more vigorously.

It’s much better – and potentially cheaper – to prevent a crime before it happens than to come late and investigate it. So while patrolling officers might not catch a criminal in action, their presence in the right place at the right time still helps to deter criminal activity. Occasionally, though, the police might indeed disrupt an ongoing crime. In June 2012 the Associated Press reported on an LAPD captain who wasn’t so sure that sending officers into a grid zone on the edge of his coverage area – following PredPol’s recommendation – was such a good idea. His officers, as the captain expected, found nothing; however, when they returned several nights later, they caught someone breaking a window. Score one for PredPol?
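The zone-counting idea in the excerpt above can be sketched very roughly in Python. Everything here – the zone size, the made-up crime records, the number of zones flagged – is illustrative; PredPol’s real model is considerably more elaborate:

```python
from collections import Counter

ZONE_FT = 500   # edge length of each grid zone, in feet

def zone_of(x_ft, y_ft):
    """Map a crime location on the patrol map to its grid zone."""
    return (x_ft // ZONE_FT, y_ft // ZONE_FT)

# Historical property-crime locations, in feet (made-up data).
crimes = [(120, 80), (450, 300), (130, 90), (1800, 2400), (140, 70)]

counts = Counter(zone_of(x, y) for x, y in crimes)

# Flag the zones with the most recorded crimes for extra patrols.
hot_zones = [zone for zone, _ in counts.most_common(2)]
print(hot_zones)
```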

Click here if you want to read more, especially about the privacy issues, the dangers of reductive or inaccurate algorithms, and widening the scope of the personal data that might be available for analysis:

An apt illustration of how such a system can be abused comes from The Silicon Jungle, ostensibly a work of fiction written by a Google data-mining engineer and published by Princeton University Press – not usually a fiction publisher – in 2010. The novel is set in the data-mining operation of Ubatoo – a search engine that bears a striking resemblance to Google – where a summer intern develops Terrorist-o-Meter, a sort of universal score of terrorism aptitude that the company could assign to all its users. Those unhappy with their scores would, of course, get a chance to correct them – by submitting even more details about themselves. This might seem like a crazy idea but – in perhaps another allusion to Google – Ubatoo’s corporate culture is so obsessed with innovation that its interns are allowed to roam free, so the project goes ahead.

To build Terrorist-o-Meter, the intern takes a list of “interesting” books that indicate a potential interest in subversive activities and looks up the names of the customers who have bought them from one of Ubatoo’s online shops. Then he finds the websites that those customers frequent and uses the URLs to find even more people – and so on until he hits the magic number of 5,000. The intern soon finds himself pursued by both an al-Qaida-like terrorist group that wants those 5,000 names to boost its recruitment campaign, as well as various defence and intelligence agencies that can’t wait to preemptively ship those 5,000 people to Guantánamo…

Given enough data and the right algorithms, all of us are bound to look suspicious. What happens, then, when Facebook turns us – before we have committed any crimes – over to the police? Will we, like characters in a Kafka novel, struggle to understand what our crime really is and spend the rest of our lives clearing our names? Will Facebook perhaps also offer us a way to pay a fee to have our reputations restored? What if its algorithms are wrong?

The promise of predictive policing might be real, but so are its dangers. The solutionist impulse needs to be restrained. Police need to subject their algorithms to external scrutiny and address their biases. Social networking sites need to establish clear standards for how much predictive self-policing they’ll actually do and how far they will go in profiling their users and sharing this data with police. While Facebook might be more effective than police in predicting crime, it cannot be allowed to take on these policing functions without also adhering to the same rules and regulations that spell out what police can and cannot do in a democracy. We cannot circumvent legal procedures and subvert democratic norms in the name of efficiency alone.

Read Full Post »

I learnt a new word for the new year: Disintermediation. It means cutting out the middle man through the use of new digital technology and business models.

Piggy in the middle

Here is John Naughton’s explanation:

But disintermediation is now the mot du jour. It means wiping out the intermediary, and that is what the internet does. Remember travel agents? Record shops? Bookshops? Book publishers?

For a long time, publishers maintained that, while the internet was certainly destroying the business models of other industries, book publishing was such a special business that it wouldn’t happen to them. After all, in the end, every author needs a publisher – doesn’t s/he? Only sad people go in for self-publication.

Er, not necessarily. The arrival and widespread acceptance of ebooks, together with on-demand printing and Amazon’s ebook publishing engine have transformed self-publishing from a dream to a reality. If you’ve written something and it’s in Microsoft Word format, then upload it to Amazon’s publishing engine, upload an image for the cover, choose a price and in about four hours it’ll be for sale on the web.

So it’s an important idea, which we have all bought into, even if we haven’t reflected on it very much.

But surely, as a dictionary aside, there is a better word for this? You can see the root: they have taken the word ‘intermediary’ and ‘dissed’ it to create the negative. But the word ‘immediate’ already means ‘with nothing in between, with nothing in the middle’. So I propose the word immediation instead. Let’s see if this takes off and gets me into the Best of 2013 lists at the end of the year.

Read Full Post »

I preached about prophecy this morning at Mass. I was provoked (I won’t say inspired) by the whole non-Mayan non-apocalypse non-event that was Friday 21 December 2012. It shows how even an urban myth that becomes an uber-trending news story can stimulate some helpful reflection.

the end is not for a while by voteprime

Part of the attraction of the ‘crazy religious people waiting for the end of time’ story is that it seems to pit crazy religious people against un-crazy scientific people. But one of my small points this morning was that the desire to believe in prophecy, at least in its slightly over-simplified meaning of ‘telling you something that is going to happen in the future’, is actually one with the scientific instinct. It’s a longing to believe that everything makes sense, that everything happens for a reason, that the future is (through some very mysterious processes of futurology) pre-determined and knowable.

The belief that the world as a whole and every detail within it is meaningful, and that in theory this meaning can be discovered, is a belief that shapes both the worst excesses of superstition and the best endeavours of science. We don’t want to believe that everything is simply chaos; and in fact we have good reasons to think (if our epistemology is sound) that there is a fundamental order to the universe, and that our minds can gradually discover that order.

This hunger for order drives the scientist and the Mayan apocalypse seeker. It also drives the conspiracy theorist, as portrayed so well by Don DeLillo in his novel Underworld, who can’t conceive that a world-changing event like the assassination of JFK or the death of Princess Diana could have been caused by something as banal as a lone gunman or a tragic accident.

Yes, there are crazy prophecies; and there are non-prophecies (it seems that not even the Mayans really believed that this one was coming). But there are true prophecies as well, where God has spoken into history, and promised or predicted (perhaps they mean the same thing from the perspective of eternity) that something would happen in the future.

We see two of them in the scriptures today. First, seven hundred years before the birth of Christ, the prophet Micah promising that a leader would be born in Bethlehem; one who would shepherd God’s people, unite and strengthen them, and bring them lasting security and peace. And second, the Angel Gabriel appearing to Mary, telling her that her cousin Elizabeth was with child in her old age. No wonder she went to visit Elizabeth with such haste; partly to share her joy at the Incarnation, but partly to see with her own eyes a truth she could only hold in faith up to that point.

Prophecy used to be such an important part of the Judeo-Christian imagination. It reminded us that all things – including the course of history – are in God’s providential hands; it showed us his power and his wisdom; it was a sign of his care for us and of our own dignity – that he would speak to us and involve us in the unfolding of his plans; and it was above all a powerful indication of his faithfulness to us, and our need and our duty to trust him because of the objective signs that he has given us in history, as well as the personal signs he has given in our own life story.

I think we have lost our confidence in all this, for all sorts of reasons: historical criticism of the Bible; a loss of the sense of the supernatural; the shift from a historical religion to a personal spirituality, from an objectively founded faith to one based on inner subjective experience; and many others.

Some of the scepticism about prophecy is justified, and it reflects a whole different world view. But some of it is not – it is an unscientific narrowing of the human mind: to think that there is no fundamental order to the universe or to human existence; that God the creator is unable to guide his creation or direct the events of history; that he cannot in his infinite wisdom know what he ‘is’ doing or what he ‘will’ do; or that he cannot share his knowledge of what he will do through revelation in general and through the prophetic word in particular.

This is our faith as Christians, that these things are possible for God. And it’s not just a credulous, superstitious faith; it’s based on our rational understanding of what it means for there to be a universe at all, and our conclusion that some transcendent power and wisdom must lie behind this creation, a power that we have discovered – in the Old Testament and ultimately in Jesus Christ – to be personal and loving.

Prophecy still matters. The fact that God has spoken through the prophets and fulfilled his promises is one of the factors that allows us to believe with more confidence. It may not provide a proof that what we believe is true, but it is a good stimulus to belief, and an ongoing support.

This is how the First Vatican Council put it, a teaching that is as relevant today as it was in the nineteenth century (Dei Filius, Chapter 3):

4. Nevertheless, in order that the submission of our faith should be in accordance with reason, it was God’s will that there should be linked to the internal assistance of the Holy Spirit external indications of his revelation, that is to say divine acts, and first and foremost miracles and prophecies, which clearly demonstrating as they do the omnipotence and infinite knowledge of God, are the most certain signs of revelation and are suited to the understanding of all.

5. Hence Moses and the prophets, and especially Christ our lord himself, worked many absolutely clear miracles and delivered prophecies; while of the apostles we read: And they went forth and preached everywhere, while the Lord worked with them and confirmed the message by the signs that attended it [18]. Again it is written: We have the prophetic word made more sure; you will do well to pay attention to this as to a lamp shining in a dark place [19].

6. Now, although the assent of faith is by no means a blind movement of the mind, yet no one can accept the gospel preaching in the way that is necessary for achieving salvation without the inspiration and illumination of the Holy Spirit, who gives to all facility in accepting and believing the truth [20].

7. And so faith in itself, even though it may not work through charity, is a gift of God, and its operation is a work belonging to the order of salvation, in that a person yields true obedience to God himself when he accepts and collaborates with his grace which he could have rejected.

Read Full Post »

If you liked yesterday’s post about making time for creative projects, see the website it’s from: 99u.com – “Insights on making ideas happen”. It’s got a really good mix of posts about management, creativity, using time well, productivity, self-help, etc.

This is from the About section:

99U is Behance’s research and education arm. Taking its name from Thomas Edison’s famous quote that “Genius is 1% inspiration, 99% perspiration,” the 99U includes a Webby award-winning web magazine, an annual conference, and the best-selling book Making Ideas Happen. Through articles, tips, videos, and events, we educate creative professionals on best practices for moving beyond idea generation into idea execution.

And this is the blurb for the book:

Making Ideas Happen is the national bestseller from Behance and 99U founder Scott Belsky. Based on hundreds of interviews and years of research, the book chronicles the methods of exceptionally productive creative leaders and teams – companies like Google, IDEO, and Disney, and individuals like author Chris Anderson and Zappos CEO Tony Hsieh – that make their ideas happen, time and time again.

See especially the TIPS section here.

Read Full Post »

There are so many reports in the press and adverts on the tube for IVF that you’d think it was the only form of fertility treatment on offer to couples who are struggling to conceive a child.

A friend of mine, Leonora Paasche Butau, has been studying bioethics, theology of the body, and fertility management for the last few years. I recently read this report from her on the ICN website about the Pope Paul VI Institute for the Study of Human Reproduction, and the pioneering alternatives to IVF that they have been developing.

The Pope Paul VI Institute is the brainchild of the bold and courageous Dr Thomas Hilgers and his wife, Sue Hilgers, who founded the institute in 1985 as a response to the encyclical letter Humanae Vitae. Pope Paul VI, in this encyclical letter, expressed the Catholic Church’s longstanding tradition on marital life and love, and called on “men of science” to direct their research to reproductive healthcare which fully respects life and the dignity of marriage and women. Dr Hilgers, as a young medical student in 1968, felt that the Church was speaking directly to him through this letter, and by December of that same year he had started his first research project to better understand natural fertility regulation and women’s health care.

The results of years of study and research have been phenomenal. The Pope Paul VI Institute has developed a new and superior approach to women’s reproductive health care which embodies the best principles of medicine and builds up the culture of life in a world which finds its solutions in contraception, sterilisation and abortion.

The Institute’s 30+ years of research have seen the development of the highly successful Creighton Model Fertility Care System (CrMS) and NaProTechnology (Natural Procreative Technology), which have reached 14 countries around the world.

NaProTechnology allows a couple to observe certain biological markers to determine when they are naturally fertile and infertile, so that they can either avoid or achieve pregnancy. In addition, it is a very effective tool for identifying and treating underlying causes of infertility, with success rates up to three times higher than In Vitro Fertilisation (IVF). The current philosophy of reproductive medicine, it seems, does not seek to treat underlying diseases, meaning that millions of women suffer from infertility without ever knowing the reason. Although IVF is by far the most common approach to the treatment of infertility, women who undergo it are still left with the underlying diseases that caused the infertility in the first place.

As well as being used to treat infertility, NaProTechnology helps to obtain proper diagnosis and effective treatment for a range of other health and gynaecological problems and abnormalities such as recurrent miscarriage, premenstrual syndrome, postpartum depression and abnormal bleeding ‒ offering great hope to women.

Another of the unique contributions of NaProTechnology is the empowerment of women that comes with the knowledge and self-awareness of their bodies and their reproductive cycles.

Dr Anne Carus, a NaPro specialist doctor from Life Fertility Care in Leamington Spa, states: “With NaProTechnology, couples’ cycle charting empowers them through education. We find couples value the active contribution that they are able to make to the diagnostic and treatment process. NaProTechnology provides individualised medical support. Our annual audit indicates that 89% of our clients would have found it helpful to receive information about NaProTechnology from their GP practice. Couples find it difficult to find real support for natural conception within the NHS.”

The research of Dr Thomas Hilgers – at a time when it is difficult for many obstetrician-gynecologists to practice their profession without prescribing oral contraceptives, carrying out sterilisations or referring patients for procedures such as IVF ‒ is testament to his faith in Christ and commitment to responding to the challenges of Humanae Vitae.

For more information see the website of the Institute here. See the articles here from the UK Life Fertility Care site. And for more general issues about fertility and for practical help in the UK see the Life Fertility Care site itself.

Read Full Post »

This is a couple of weeks old now, but it didn’t get as much traction in the news as I expected. Isn’t it an absolutely astonishing historical landmark, that over one billion people are now voluntarily connected on a social networking site?

Yes, there are more people in China, in India and in the Catholic Church; but these ‘groupings’ (I can’t find a good generic term that covers a nation-state and the Catholic Church) have taken a few years to get going, and a large number of their members were born into them.

Facebook doubled its size from half a billion users to one billion in just three years and two months!
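For scale: assuming (purely for the sake of a back-of-the-envelope sketch) a steady compound growth rate over those 38 months, the doubling implies roughly 24% growth per year:

```python
# Back-of-the-envelope: user base doubles in 3 years and 2 months.
# On a naive constant-compound-growth assumption, the implied
# annual growth factor is 2 raised to (12 / months_to_double).
months_to_double = 3 * 12 + 2            # 38 months
annual_factor = 2 ** (12 / months_to_double)
print(f"implied annual growth: {annual_factor - 1:.1%}")
```

The constant-growth assumption is of course too simple for a real network, but it gives a feel for the pace.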

See this report by Jemima Kiss.

And watch this very clever promotional video, entitled “The Things that Connect Us”, directed by Alejandro González Iñárritu, whose film credits include Amores Perros and 21 Grams. Notice the beautiful bridge images, very close to my blogging heart.

And remember Susan Maushart’s warning in her book The Winter of Our Disconnect (p6):

So… how connected, I found myself wondering, is connected enough? Like many other parents, I’d noticed that the more we seemed to communicate as individuals, the less we seemed to cohere as a family… I started considering a scenario E. M. Forster never anticipated: the possibility that the more we connect, the further we may drift, the more fragmented we may become.

Read Full Post »

Remember all the fuss about embryonic stem cells? About how the only way to offer hope to millions of people suffering from a plethora of diseases and medical conditions was to harvest stem cells from embryonic human life? About how the destruction of the human embryo was a sad but necessary price to pay for the incalculable advances that could be achieved? Remember the accusations that were hurled against those who opposed this utilitarian reasoning on ethical grounds, and dared to suggest that there might be an alternative and ethically acceptable route to medical progress?

It has just been announced that Sir John Gurdon of Cambridge University shares this year’s Nobel prize for physiology or medicine with Japanese scientist Shinya Yamanaka. Why? Because they have been at the forefront of research proving that adult cells can be reprogrammed and grown into different bodily tissues.

Sir John Gurdon on the right

Ian Sample reports. This is the ethical perspective from the end of the article:

For Julian Savulescu, Uehiro professor of practical ethics at Oxford University, the researchers’ work deserved particular praise because reprogrammed cells overcome the moral concerns that surrounded research on embryonic stem cells.

“This is not only a giant leap for science, it is a giant leap for mankind. Yamanaka and Gurdon have shown how science can be done ethically. Yamanaka has taken people’s ethical concerns seriously about embryo research and modified the trajectory of research into a path that is acceptable for all. He deserves not only a Nobel prize for medicine, but a Nobel prize for ethics.”

And here is some of the scientific background:

The groundbreaking work has given scientists fresh insights into how cells and organisms develop, and may pave the way for radical advances in medicine that allow damaged or diseased tissues to be regenerated in the lab, or even inside patients’ bodies…

Prior to the duo’s research, many scientists believed adult cells were committed irreversibly to their specialist role, for example, as skin, brain or beating heart cells. Gurdon showed that essentially all cells contained the same genes, and so held all the information needed to make any tissue.

Building on Gurdon’s work, Yamanaka developed a chemical cocktail to reprogram adult cells into more youthful states, from which they could grow into many other tissue types.

In a statement, the Nobel Assembly at Stockholm’s Karolinska Institute in Sweden, said the scientists had “revolutionised our understanding of how cells and organisms develop”…

Gurdon’s breakthrough came in 1962 at Oxford University, when he plucked the nucleus from an adult intestine cell and placed it in a frog’s egg that had had its own nucleus removed. The modified egg grew into a healthy tadpole, suggesting the mature cell had all the genetic information needed to make every cell in a frog. Previously, scientists had wondered whether different cells held different gene sets.

Yamanaka, who was born in the year of Gurdon’s discovery, reported in 2006 how mature cells from mice could be reprogrammed into immature stem cells, which can develop into many different types of cell in the body. The cells are known as iPS cells, or induced pluripotent stem cells.

Some researchers in the field hope to turn patients’ skin cells into healthy replacement tissues for diseased or aged organs…

Interesting that one of the scientists who missed out this year was James Thomson. He was a pioneer in human embryonic stem cell research, being the first to isolate the cells in the lab in 1998. More recently, Thomson has shown that mature human body cells can be reprogrammed into stem cells.

Read Full Post »

In my recent post about Web 3.0 I used the phrase layered reality to describe the way that information from the virtual world is becoming embedded in our experience of the real world in real-time. Instead of stopping the car, looking at a physical map, memorising the directions, and then starting off again; now you see a virtual map on your sat nav that matches and enhances the physical reality in front of you. It adds another layer. The next step – part of Web 3.0 – is that the technology that delivers the layer is wearable and invisible, so that the layering is seamless. We have had mobile conversations via earpieces for years now.

The best example of this is the Google Glass. Messages and information that up to now would appear on your computer screen or mobile phone now appear on the lens of your glasses as part of your visual panorama. Fighter pilots have had information appearing on their visors for a long time, so that they can read instruments without having to take their eyes off the scene ahead. The Google Glass is just the domestic equivalent of this.

Take a look at this wonderful video demo:

Claire Beale explains more about the implications for mobile technology:

Ever since Tom Cruise showed us in Minority Report a future where reality is a multi-layered experience, gadget geeks have been waiting for technology to deliver on Hollywood’s promise.

Now virtual reality is about to become an actual reality for anyone with the right sort of mobile phone after Telefonica, the parent company of O2, signed a revolutionary deal last week with the tech company Aurasma.

Aurasma has developed a virtual reality platform that recognises images and objects in the real world and responds by layering new information on top. So if Aurasma’s technology is embedded into your mobile phone, when you point your phone at an image it can recognise, it will automatically unlock relevant interactive digital content.

For brands, this type of kit has some pretty significant implications. It means that commercial messages can now live in the ether around us, waiting to be activated by our mobiles. If your phone registers a recognised image such as a building, a poster or a promotional sticker in a store, say, it will play out videos, 3D animations or money-off coupons to entice you to buy.
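As a purely illustrative aside, the recognise-then-overlay loop described above can be sketched in a few lines. Everything here is invented for illustration (a coarse brightness fingerprint stands in for the robust feature matching a real platform like Aurasma would use, and none of these function names are anyone’s actual API):

```python
# Illustrative only: a toy "recognise an image, unlock its overlay" loop.
# Real AR platforms use robust feature matching; here a simple
# perceptual fingerprint (a coarse brighter/darker grid) stands in for it.

def fingerprint(pixels, size=4):
    """Reduce an image (2D list of grey values) to a size x size grid of
    0/1 cells: 1 where a cell is brighter than the image average."""
    h, w = len(pixels), len(pixels[0])
    avg = sum(sum(row) for row in pixels) / (h * w)
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            cells.append(1 if sum(block) / len(block) > avg else 0)
    return tuple(cells)

# A "campaign database": fingerprints of known posters -> overlay content.
known = {}

def register(name, pixels, overlay):
    known[fingerprint(pixels)] = (name, overlay)

def recognise(pixels):
    """Return the overlay for a camera frame, or None if nothing matches."""
    match = known.get(fingerprint(pixels))
    return match[1] if match else None
```

Point a phone at a registered poster and the matching fingerprint "unlocks" the video, animation or coupon; an unknown scene returns nothing.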

See this video demo from Layar:

You don’t just see: you see as others see, you understand what others understand; it’s almost like sharing in a universal consciousness. That’s part of the wonder of this new augmented reality, and also its danger, because it all depends on trusting the source, the provider. Who controls the layers?

But the idea of layering reality is not really new, in fact ‘layered reality’ could almost be a definition of human culture. Culture is the fact that we don’t just experience reality neat, we experience it filtered through the accumulated interpretations of previous generations. The primordial example of culture as a layering of reality is language: we speak about what we see, and cover every experience with a layer of language – before, during and after the experience itself.

And writing is literally putting a layer of human interpretation on top of the physical reality before you: carving some cuneiform script into a Sumerian brick; painting a Chinese character onto a piece of parchment; printing the newspaper in the early hours of the morning. Endless layers that stretch back almost to the beginning of human consciousness.

Read Full Post »

I’ve just given a study day about the internet and new media, and it forced me to get my head around some of the jargon and the ideas. Here is my summary of what these terms mean and where the digital world is going.

Web 1.0: The first generation of internet technology. You call up pages of text and images with incredible speed and facility. It’s no different from strolling through a library, only much quicker. The operative verb is I LOOK. I look at pages on the screen just as I look at pages in a book. All content is provided for you – it’s a form of publishing. It may be updated in a way that is impossible when a solid book is sitting on your shelf, but you can’t change the content yourself.

Web 2.0: The second generation of internet technology allows for user-generated content. You don’t just look at the pages, you alter them. You write your own blog; you comment on someone else’s article in the comment boxes; you edit an entry on Wikipedia. And then, by extension, with basically the same technology, you share your thoughts on a social networking site, which means you are commenting not on a static site, but on something that is itself in flux. You have moved from action to interaction; from connection to interconnection. If Web 1.0 is like a digital library, Web 2.0 is like a digital ‘Letter to the Editor’, a digital conference call, a digital group discussion. The verb here is I PARTICIPATE.

Web 3.0: People disagree about the meaning of Web 3.0, about where the web is now going. I like John Smart‘s idea of an emerging Metaverse, where there is a convergence of the virtual and physical world. In the world of Web 2.0, of user-generated content and social networking, you stand in the physical/natural/real world and use the new media to help you around that world – the new media are tools. You talk to friends, you share ideas, you buy things that have been suggested and reviewed by others. But in Web 3.0 the new media become an essential part of the world in which you are living, they help to create the world, and you live within them.

The border between Web 2.0 and Web 3.0 is not tidy here, because Web 3.0 is partly about Web 2.0 becoming all-pervasive and continuous, so that your connection with the web and your social network is an essential part of every experience – it doesn’t get switched off. The mobile earpiece is always open to the chatter of others; the texts and status updates of your friends are projected into the corner of your Google Glasses (like those speedometers that are projected onto the car windscreen) so that they accompany what you are doing at every moment – the connection between real and virtual, between here and there, is seamless; the attention you give to every shop or product or street or person is digitally noted, through the head and eye movement sensors built into your glasses and the GPS in your phone, and simultaneously you are fed (into the corner of your glasses, or into your earpiece) layers of information about what is in front of you – reviews of the product, reminders of what you need to buy from the shop, warnings about the crime rate on this street, a note about the birthday and the names of the children of the person you are about to pass, etc. This is augmented reality or enhanced reality or layered reality.

It’s no different, in essence, from going for a stroll in the mid-70s with your first Walkman – creating for the first time your own soundtrack as you wander through the real world; or having the natural landscape around you altered by neon lights and billboards. But it is this experience a thousand times over, so that it is no longer possible to live in a non-virtual world, because every aspect of the real world is already augmented by some aspect of virtual reality. The verb here is I EXIST. I don’t just look at the virtual world, or use it to participate in real relationships; now I exist within this world.

Web 4.0: Some people say this is the Semantic Web (‘semantics’ is the science of meaning), when various programmes, machines, and the web itself become ‘intelligent’, and start to create new meanings that were not programmed into them, and interact with us in ways that were not predicted or predictable beforehand. It doesn’t actually require some strict definition of ‘artificial intelligence’ or ‘consciousness’ for the computers; it just means that they start doing new things themselves – whatever the philosophers judge is or is not going on in their ‘minds’.

Another aspect of Web 4.0, or another definition, concerns plugging us directly into the web: when the boundary between us and the virtual world disappears. This is when the virtual world becomes physically/biologically part of us, or when we become physically/biologically part of the virtual world. When, in other words, the data is not communicated by phones or earpieces or glasses, but is implanted into us, so that the virtual data is part of our consciousness directly, and not just part of our visual or aural experience (the films Total Recall, eXistenZ, and the Matrix); and/or, when we control the real and virtual world by some kind of brain or neural interface, so that – in both cases – there really is a seamless integration of the real and the virtual, the personal/biological and the digital.

If this seems like science fiction, remember that it is already happening in smaller ways. See previous posts on Transhumanism, and the MindSpeller project at Leuven which can read the minds of stroke victims, and this MIT review of brain-computer interfaces. In this version of Web 4.0 the verb is not I exist (within a seamless real/virtual world), it is rather I AM this world and this world is me.

Watch this fascinating video of someone’s brainwaves controlling a robotic arm:

And this which has someone controlling first a signal on a screen, and then another robotic arm:

So this is someone making things happen in the real world just by thinking! (Which, come to think of it, is actually the miracle that takes place whenever we do anything consciously!)
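At its very simplest, the kind of brain–computer interface shown in these videos maps a measured signal onto a command. Here is a minimal, invented sketch of that idea; a real BCI classifies multi-channel EEG with trained models, whereas this just smooths one simulated signal and thresholds it (all values and command names are made up):

```python
# Illustrative only: a toy brain-computer interface decoding loop.
# One simulated signal is smoothed with a moving average, then each
# smoothed sample is mapped to a command by a fixed threshold.

def moving_average(signal, window=3):
    """Smooth a noisy signal with a simple trailing moving average."""
    out = []
    for i in range(len(signal)):
        chunk = signal[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def decode(signal, threshold=0.5):
    """Map each smoothed sample to a robotic-arm command."""
    return ["raise arm" if s > threshold else "rest"
            for s in moving_average(signal)]

# A simulated burst of 'motor imagery' in the middle of the recording:
eeg = [0.1, 0.2, 0.1, 0.9, 1.0, 0.8, 0.2, 0.1]
print(decode(eeg))
```

The smoothing step is there because raw brain signals are noisy; without it a single spike would twitch the arm.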

Any comments? Are you already living in Web 3.0 or 3.5? Do you like the idea of your children growing up in Web 4.0? What will Web 5.0 be?

Read Full Post »

I was searching for information about the ‘population explosion’ and came across the Spiked campaign entitled “No to Neo-Malthusianism: Why We Oppose Population Control”. There is a string of articles exposing the prejudices and undermining the arguments of contemporary neo-Malthusians, many of them occasioned by the celebration at Spiked of the birth of Baby Seven Billion last year.

This article here by Brendan O’Neill is already three years old, but it’s a good summary of the alarmist arguments put forward by those who fear for the future of the planet and the future of humanity because of the population growth. And then, as you expect from O’Neill, a trenchant critique of their position.

First of all the facts (as they were in November 2009):

In the year 200 AD, there were approximately 180 million human beings on the planet Earth. And at that time a Christian philosopher called Tertullian argued: ‘We are burdensome to the world, the resources are scarcely adequate for us… already nature does not sustain us.’ In other words, there were too many people for the planet to cope with and we were bleeding Mother Nature dry.

Well today, nearly 180 million people live in the Eastern Half of the United States alone, in the 26 states that lie to the east of the Mississippi River. And far from facing hunger or destitution, many of these people – especially the 1.7 million who live on the tiny island of Manhattan – have quite nice lives.

In the early 1800s, there were approximately 980 million human beings on the planet Earth. One of them was the population scaremonger Thomas Malthus, who argued that if too many more people were born then ‘premature death would visit mankind’ – there would be food shortages, ‘epidemics, pestilence and plagues’, which would ‘sweep off tens of thousands [of people]’.

Well today, more than the entire world population of Malthus’s era now lives in China alone: there are 1.3 billion human beings in China. And far from facing pestilence, plagues and starvation, the living standards of many Chinese have improved immensely over the past few decades. In 1949 life expectancy in China was 36.5 years; today it is 73.4 years. In 1978 China had 193 cities; today it has 655 cities. Over the past 30 years, China has raised a further 235 million of its citizens out of absolute poverty – a remarkable historic leap forward for humanity.

Then the general critique:

What this potted history of population scaremongering ought to demonstrate is this: Malthusians are always wrong about everything.

The extent of their wrongness cannot be overstated. They have continually claimed that too many people will lead to increased hunger and destitution, yet the precise opposite has happened: world population has risen exponentially over the past 40 years and in the same period a great many people’s living standards and life expectancies have improved enormously. Even in the Third World there has been improvement – not nearly enough, of course, but improvement nonetheless. The lesson of history seems to be that more and more people are a good thing; more and more minds to think and hands to create have made new cities, more resources, more things, and seem to have given rise to healthier and wealthier societies.

Yet despite this evidence, the population scaremongers always draw exactly the opposite conclusion. Never has there been a political movement that has got things so spectacularly wrong time and time again yet which keeps on rearing its ugly head and saying: ‘This time it’s definitely going to happen! This time overpopulation is definitely going to cause social and political breakdown!’

There is a reason Malthusians are always wrong. It isn’t because they’re stupid… well, it might be a little bit because they’re stupid. But more fundamentally it is because, while they present their views as fact-based and scientific, in reality they are driven by a deeply held misanthropy that continually overlooks mankind’s ability to overcome problems and create new worlds.

Then the analysis:

The first mistake Malthusians always make is to underestimate how society can change to embrace more and more people. They make the schoolboy scientific error of imagining that population is the only variable, the only thing that grows and grows, while everything else – including society, progress and discovery – stays roughly the same. That is why Malthus was wrong: he thought an overpopulated planet would run out of food because he could not foresee how the industrial revolution would massively transform society and have an historic impact on how we produce and transport food and many other things. Population is not the only variable – mankind’s vision, growth, his ability to rethink and tackle problems: they are variables, too.

The second mistake Malthusians always make is to imagine that resources are fixed, finite things that will inevitably run out. They don’t recognise that what we consider to be a resource changes over time, depending on how advanced society is. That is why the Christian Tertullian was wrong in 200 AD when he said ‘the resources are scarcely adequate for us’. Because back then pretty much the only resources were animals, plants and various metals. Tertullian could not imagine that, in the future, the oceans, oil and uranium would become resources, too. The nature of resources changes as society changes – what we consider to be a resource today might not be one in the future, because other, better, more easily-exploited resources will hopefully be discovered or created. Today’s cult of the finite, the discussion of the planet as a larder of scarce resources that human beings are using up, really speaks to finite thinking, to a lack of future-oriented imagination.

And the third and main mistake Malthusians always make is to underestimate the genius of mankind. Population scaremongering springs from a fundamentally warped view of human beings as simply consumers, simply the users of resources, simply the destroyers of things, as a kind of ‘plague’ on poor Mother Nature, when in fact human beings are first and foremost producers, the discoverers and creators of resources, the makers of things and the makers of history. Malthusians insultingly refer to newborn babies as ‘another mouth to feed’, when in the real world another human being is another mind that can think, another pair of hands that can work, and another person who has needs and desires that ought to be met.

So the population panic is rooted in bad sociology, bad science, and bad anthropology. And this is leaving aside the question of whether the world’s population will, in fact, keep increasing, or whether we are more likely to face a crisis of an imploding population over the next hundred years (e.g. see this article by David Brooks).

Read Full Post »

I can’t believe it – this is my 500th post! (I’m not counting, but by chance I saw the ‘499’ pop up on the last one). 500 scintillating insights; 500 pieces of finely wrought prose, where ‘every phrase and every sentence is right’ (almost Eliot); 500 breathtakingly beautiful bridges and unexpectedly daring tangents.

OK, maybe the prose is moving from finely wrought to overwrought; I could also have said: 500 half-formed ideas at the end of the day.

Let’s celebrate with some decent writing, about writing itself – with one of my favourite passages from TS Eliot’s Little Gidding:

What we call the beginning is often the end
And to make an end is to make a beginning.
The end is where we start from. And every phrase
And sentence that is right (where every word is at home,
Taking its place to support the others,
The word neither diffident nor ostentatious,
An easy commerce of the old and the new,
The common word exact without vulgarity,
The formal word precise but not pedantic,
The complete consort dancing together)
Every phrase and every sentence is an end and a beginning,
Every poem an epitaph. And any action
Is a step to the block, to the fire, down the sea’s throat
Or to an illegible stone: and that is where we start…

And how to celebrate and reflect for this 500th post? Well, we certainly need a magnificent bridge. The banner image you have been looking at for the last three years, at the top of each page, is a shot over New York with Hell Gate Bridge in the background. Here it is in a much better shot:

And in order to allow a little bit of self-analysis for this 500-post celebration, here is the ‘tag cloud’ from these 500 posts. Remember, this doesn’t analyse the words I have used in the writing itself, but the number of times I have chosen to tag a particular post with one of these labels. Anything that has come up twelve times makes the cloud, so the tags with the smallest fonts below represent 12 posts each, and the largest numbers of posts (as you can see below) are about: internet (35), love (37), faith (38) and freedom (44). You can send in your psychoanalytical conclusions on a postcard.
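The cut-off-and-scale logic behind such a tag cloud is simple to sketch. The four big counts below are the ones quoted above; the smaller counts and the font-size range are invented for illustration:

```python
# Sketch of the tag-cloud logic described above: tags used on at
# least 12 posts make the cloud, and font size scales with count.
counts = {"internet": 35, "love": 37, "faith": 38, "freedom": 44,
          "poetry": 12, "film": 9}   # "poetry"/"film" counts invented

MIN_POSTS = 12
SMALLEST, LARGEST = 10, 28           # font sizes in pt (assumed)

def cloud(counts):
    """Map each eligible tag to a font size, scaled linearly
    between the smallest and largest eligible counts."""
    eligible = {t: n for t, n in counts.items() if n >= MIN_POSTS}
    lo, hi = min(eligible.values()), max(eligible.values())
    span = (hi - lo) or 1            # avoid dividing by zero
    return {t: SMALLEST + (LARGEST - SMALLEST) * (n - lo) / span
            for t, n in eligible.items()}

for tag, size in sorted(cloud(counts).items(), key=lambda kv: -kv[1]):
    print(f"{tag}: {size:.0f}pt")
```

So ‘freedom’ (44 posts) gets the largest font, a 12-post tag the smallest, and ‘film’ never appears at all.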

If you want to actually search for these tagged topics, see the proper and updated tag cloud in the right-hand column.

Thanks for your support over these nearly three years, your loyal and devoted reading (or your random ending up here through an accidental search or a false tap on the iPad), your occasional comments. Thanks to all those whose beautiful images I have borrowed (legally I hope, and with due accreditation, usually via creative commons). Apologies that I haven’t always had the time to enter into dialogue properly with all the comments, as they deserve.

I’ve nearly always enjoyed the thinking and writing (and choosing pictures). I’ve sometimes felt the obligation to keep going for consistency’s sake – but soon I’ve been glad that I have. I’ve always wished I had more time to ponder and shape the ideas, and the words themselves.

It’s a strange thing, ‘airing your thoughts’. Strange for being both personal and public; the inner life and the life outside; the quiet of the computer screen as you compose the blog, and the clatter of each post landing on several hundred other screens and phones around the world.

I won’t say ‘Here’s to the next 500 posts’, because I’d hate to make that kind of commitment. But I’ll keep going for the moment.

Read Full Post »

As you know, I’m a ‘late adopter’ when it comes to new technology. I hear about things late; I wait around cautiously to see where something is going; I tell myself how happy I have been for so many years of adult life without this dazzling piece of equipment; I hang on until the price drops a bit further; then – sometimes – I take the plunge. So it was with the Kindle, which I bought about six months ago.

What’s remarkable is how quickly it has become a normal, boring and almost indispensable part of daily life. In many ways it’s incredibly retro, even more so now that the Google Nexus 7 is out – dropping the price and raising the stakes for a decent 7-inch tablet. And I betray my own retro-ness in remembering the tipping point that got me pressing the BUY button: it was when I became convinced that the electronic ink pages really were as easy to read as a paper book.

Why do I like it? More to the point, why is it so normal that I have already forgotten it was ever a buying issue? Three main reasons.

(1) Legibility: I was worried it would strain the eyes, and it doesn’t. I can sit in bed and read the Kindle for 2 hours not noticing that I am reading an electronic screen rather than a book (not that I read in bed that long very often…). In fact it is even easier because you can change the font size.

(2) Portability: It goes in the inside pocket of a light jacket, so instead of taking a shoulder bag or a man bag out with me for the sake of carrying a book, I just take the Kindle. So it’s easier than carrying just one book, let alone a whole library of books and journals.

(3) Versatility: I mean the range of stuff that I am reading, and that slips into my pocket so easily. I knew I would use the Divine Office (from Universalis), and the ubiquitous e-Books – a mixture of freebies and paid for. But I’m also downloading journals and websites. And one of the most helpful features is the way you can email documents to your Kindle that then appear as short texts. There are documents, talks, websites, sermons, etc, that I keep thinking I’ll read one day, but never want to read on the computer screen. So I email them to the Kindle, and read them on the bus or tube. I’m actually catching up on piles of interesting reading without having to make an effort.

I’m sorry this sounds like an advert. I’m just delighted when something does what it says, and does what you want it to do, and also does much more.

My fear now is that my present version of the Kindle will be replaced by a higher spec, and the very reason I like it – its simplicity – will disappear. I know they have the touch-screen versions, which I dislike, because I’d rather a simple click to turn the page than having to tap the screen; that’s why I bought the Kindle rather than the Kobo [correction: apparently there are clickable Kobos as well!]. My fear is that the ‘Retro’ Kindle (my version), like the magnificent, groundbreaking and never-bettered Palm, will be overtaken by smart technology. Strange how technology can regress as well as go forward, or at least lose the simplicity and sophistication of its primary purpose in the search for secondary thrills. I said the Kindle was dazzling, but it’s actually the dullness that I like…

Read Full Post »

Yes, there has been a lot of noise over the last few days. I went down to the river on Sunday afternoon, and it was ten people deep on the Chelsea Embankment; I just managed to see the royal party by standing on tip-toe, and quite a few people around me couldn’t see a thing. And walking through Victoria on Monday evening, quite by chance, I caught the post-concert fireworks just a few hundred yards away.

But my abiding sensory memory of the weekend was the early morning silence on Sunday. Battersea Bridge was closed for the flotilla, which meant that our street – which runs down to the Embankment – was also closed to traffic. It was eerie, waking up to silence. No buses, no cars, no sirens. It was as if London itself had been suspended, as I lay on my bed taking in the unusual atmosphere; as if there was less – less noise, less activity; but also more – more presence, more awareness of the place itself and not just what’s happening within it. This is what Sundays used to be like!

#76 - empty streets  by cliff_r

No, this isn’t London! Midtown Manhattan after Hurricane Irene hit the city

I’ve experienced this twice before here in Chelsea. Once was a glorious period of a few months when Battersea Bridge was completely closed for repairs after a boat crashed into one of the arches at high tide. Every morning had this same quality – as if we were living in a cul-de-sac. The other time was during the ash cloud when all the Heathrow flights were cancelled, and the very early mornings – 5 or 6 o’clock – even though I’m not up then – weren’t tarnished by the subconsciously-heard roar of planes overhead.

Another random connection: A Jesuit friend of mine told me recently that in his community they agreed to disconnect the WiFi completely for one day each month. You might say this isn’t too radical, and perhaps once a week would really hurt. But once a month is better than not at all. And they seem to have appreciated it. Rather than being a burden, it seems to have been a liberation – you simply can’t attend to the emails – they are not ‘there’; sure – they are somewhere, but not there, now, in your computer.

We need a completely car-less day in London once a year. Does anyone know about this? There must be some kind of movement dedicated to this – a campaigning group, or a philosophy/cult – that proposes closing every road within the M25, or at least within the North and South Circular, for 24 hours. To pedestrianise the whole city just for a day. Wouldn’t that be amazing? It could be national street party day, and it could be combined with a bunch of other days that already take place and would benefit from the no-traffic day, like the Open Gardens day. Let me know of any links to such a proposal (I just haven’t bothered to look myself yet); and if there isn’t one, I might start a petition or another Facebook event/group. Does Paris already have an empty street day or something?

Later addition: Two wonderful comments that deserve copying into the main post here. One from David:

This is on a par with Down With Telly Zappers – never mind the elderly and the not so elderly but bed- or chair-bound for whom a zapper is a god-send. Closing down transport in London may be a bonus for some, but it would be a day’s misery for people on minimum wage or paid by the day. And what about tourists and all the people who depend on them for a living?

The other from Ttony, whose astonishing memory for 1970s Punch articles, or his clever search techniques, unearthed this:

I don’t know whether there is a campaign today, but this is what Cliff Michelmore wrote in Punch somewhere around 1971-73.

“THAT did it. I know my dream holiday. Not for me the wine dark sea, burning sands and browning bodies, the counting of calories and minks. I shall dream.

By noon on Friday next, all vehicles (except bicycles) will be removed from the precincts of London and taken at least forty miles from Charing Cross and are not to return until noon the following Monday. All aircraft are forbidden to fly within sixty miles of the aforesaid Charing Cross and no chimney has permission to smoke within the same area. There shall be no television or radio transmissions nor shall there be any newspapers, magazines or other such matter published. No cinema shall show any film other than one having a U certificate. All employees of and owners of joints, strip, gambling, clip, bingo etc. to take the weekend off.

All public buildings, including Royal palaces, Government offices to be open to the public free of charge, and at all times throughout the weekend. It is the intention of my dream Government to allow families to see London as it should be, to take a long parting glance at it before the whole lot goes up in blocks, to walk the streets without fear of being knocked senseless by senseless drivers, and to breathe air without fear of being choked to death.

That is my dream holiday, with the family, just drifting around London. I have no great love of London, in truth I find it as comfortable and warming as a damp overcoat, but this weekend of standing and staring and drifting may just halt our idiot rush to nowhere.

And back to the dream for a moment. We have already booked Sir John Betjeman as our guide and companion for the weekend – so hands off!”

Read Full Post »

Jenny McCartney “celebrates” the life of Eugene J Polley, the inventor of the TV remote control, who has recently died. Without him, there would be no such thing as channel-hopping. And who knows, if we hadn’t made the leap from watching to hopping, perhaps we wouldn’t have been psychologically or culturally ready for the next leap from hopping channels to surfing the web.

Polley was an engineer at Zenith, where he worked for 47 years. I put “celebrates” in inverted commas, because McCartney thinks he leaves a dubious legacy.

I am old enough to remember what viewing life was like before the remote control hit the UK, in the days when there were only three channels and you had to make the active decision to haul yourself up from the sofa and press a button to alter them. It was better. If someone wanted to change the channel, etiquette usually demanded that they consult the other people in the room, only moving towards the television once agreement was reached. As a result, you stuck with programmes for longer: since it took a modicum of effort to abandon them, and people are naturally lazy, even slow-burning shows were granted the necessary time to draw you in.

With the arrival of the remote control, the power passed to whoever held the magic gadget in his or her hot little hands. Automatically, the holder of the remote was created king of the living room, and everyone else became either a helpless captive, or an angry dissenter. As the number of channels steadily grew, so did the remote-holder’s temptation to flick between the channels with the compulsively restless air of one seeking an elusive televisual fulfilment that could never be found.

Channel-surfing is a guilty pleasure that should only be practised alone. There is nothing worse than sitting in the same room while someone else relentlessly channel-surfs. It makes you feel as if you are going mad. You hear – in rapid succession – a snatch of song, a scrap of dialogue, a woman trying to sell you a cut-price emerald ring, half a news headline, and an advertising jingle. The moment that something sounds like it might interest you, it disappears. Worse, when you yourself are squeezing the remote, you find that you have now developed the tiny attention span of a hyperactive gnat. Is it any surprise that, now that alternative amusements to the television have emerged, family members are challenging the remote-holder’s solitary rule and decamping to the four corners of the family home with their iPads and laptops?

I know that lamenting the invention of the remote control will – in the eyes of some – put me in the same risibly fuddy-duddy camp as those who once preferred the horse and cart to the motor car, yearned for the days when “we made our own fun”, and said that this email nonsense would never catch on. I don’t care. Listen to me, those of you who cannot imagine life without the zapper: it really was better before.

I think the phrase ‘surfing the web’ is misleading and actually disguises the fragmentary nature of the typical internet experience. If you go surfing (I went once!) you wait patiently and let a lot of inadequate waves pass underneath your board, but as soon as you spot the right wave, ‘your’ wave, you paddle with all your might to meet it properly, leap onto the board, and then ride that wave for as long as you can.

When you find a wave, in other words, you stay with it. You are so with it and trying not to fall off it that it’s inconceivable that you would be looking out of the corner of your eye for a better one. That’s the joy of surfing – the waiting, the finding, and then the 100% commitment to the wave that comes.

That’s why the phrase ‘surfing the web’ doesn’t work for me. The joy of the web, and the danger, is that you can hop off the page at any time, as soon as you see anything else vaguely interesting or distracting. You are half-surfing a particular page, but without any physical or emotional commitment. You can move away to something better or more interesting – that’s the miracle of the web, what it can throw up unexpectedly. But it means that one part of you is always looking over the horizon, into the other field, where to go next; as if non-commitment to the present moment, a kind of existential disengagement, is a psychological precondition of using the internet.

As you know, I am not against the internet. I just wonder what long-term effects it has on us and on our culture. On the internet, everything is provisional. So if we see everything else through the lens of our internet experience, then it all becomes provisional – including, perhaps, even our relationships.

Maybe that’s the word to ponder: ‘provisionality’.

Read Full Post »

Wow! It is absolutely breathtaking, and well worth a detour if you are passing nearby on the tube, or even a dedicated trip! The new Kings Cross concourse, stuck on the side of the station in the most unlikely manner, somehow works; and of course it’s all in the roof. I wandered round with neck craned upwards like a child seeing stars for the first time. It’s awe-inspiring, and intimate, and gloriously silly and funny at the same time.

Here are some pictures:

Here is a more sober but equally positive reflection from Rowan Moore:

With the new western concourse at King’s Cross station, designed by John McAslan and Partners, the big metal roof is coming home. It is sited between two famous examples of the genre, King’s Cross station of 1852 and the later, more daring, St Pancras station, of 1868, and it is part of the £500m creation of a “transport super-hub”, completed in time for the Olympics, when hundreds of thousands will pass through here on their way to the Javelin train from St Pancras to Stratford.

It is a large semi-circular addition to the flank of the old station, with a basic if essential purpose: to allow enough space for increasingly large numbers of passengers to move freely and smoothly as they emerge from the underground or enter from the street, buy tickets and catch their trains. It is a departures space only, as in airports, with arriving passengers exiting through the original front door of the station. It replaces the existing concourse, a low, crowded 1970s structure of dim design, that has never been loved for the way it blots the view of the plain, handsome twin-arched front of the original station. This structure will disappear later this year, allowing the creation of a new forecourt.

The concourse distributes people in one direction to the main line platforms, in another to suburban lines, and also allows a more leisurely route up some escalators, along a balcony where you can dally in various restaurants and on to a footbridge across the tracks of the old station, from which you can descend to your platform. It smooths out knots and anomalies in the previous arrangements and triples the space available for circulation. It also has space for shopping, without which no contemporary public work would be complete.

Meanwhile, the original glass roof has been cleaned up and had its glass restored, while unnecessary clutter in the space below has been removed, making it more bright and airy than it has looked at any time since it opened, 160 years ago. The effect is dazzling, of seeing this familiar, eternally grubby place transformed. It is as if you had just popped a perception-enhancing pill or been granted an extra faculty of sight.

But the main event of the new work is the half-cylinder of the new concourse and its roof, which has a span of 52 metres. Its structure, engineered by Arup, rises up a great steel stalk in the centre and then spreads into a tree-like canopy of intersecting branches, before descending into a ring of supports at the circumference. In so doing, it avoids the need to drop columns into the ticket hall of the underground station underneath the main space. Beneath the canopy, a sinuous pavilion in glass and tile takes care of the retail.

“It is the greatest station building, ever,” declares architect John McAslan, who is not shy of speaking things as he sees them, and it is certainly impressive. Its main effect is a mighty oomph as you enter, from whatever direction, caused by the abundance of space and the unity of the structure. It is big and single-minded and has a generosity to which we have grown unused.

Read Full Post »

The answer to all these questions (which I know have been troubling you for many years) is: sort of.

I’m sure you spotted this years ago, but I have only just discovered the ‘Traffic’ box on the right-hand side of Google Maps, where you can tick the Public Transport option, and – hey presto – see exactly where the tube lines run in relation to street-level reality. I’ve seen these ‘real geography’ (there must be a technical term for this) maps before, and I know that the very first tube maps – like the present Paris Metro maps – were more or less real, without the present simplification, and so with the kinks and the corners and the vast expanses between suburban stations left in. But I haven’t played around and explored the detail in this way.

What it doesn’t show is the zillions of miles you have to unknowingly walk when changing between lines that are theoretically at the same station – e.g. Green Park, Kings Cross, etc. At least Paddington, Bank, etc, have the honesty to have multiple white ‘station dots’ (more technical vocabulary needed please) linked with the white lines to announce that they are not really the same tube station but no-one has had the nerve to admit it yet.

There must be some site or app that brings to light these dark secrets of the Underground system. Do post one in the comments if you can find it.

Read Full Post »

OK, you are not narcissistic (see Saturday’s post about Facebook and narcissism). You are at ease in your own virtual skin; you love yourself just the right amount but not too much; and your Facebook updates are an uncomplicated and unselfconscious way of sharing your life with others. You are terrifyingly undysfunctional!

But it still raises the question: how much do you use the internet each week? That’s not a loaded question, just a factual enquiry.

Paul Revoir reports that adults in Britain now spend on average over 15 hours online each week. That’s five hours more than six years ago.

Eight out of ten adults go online through an array of different devices, an increase of 20 per cent on 2005, a survey by media regulator Ofcom reveals.

A combination of older generations getting online, the continuing rise of social networking sites and new technologies such as smartphones are being credited for the rise.

Research showed that 59 per cent of adult internet users have a profile on a social networking site. Of those, two-thirds visit the sites every day, up from a third in 2007.

The report suggests that while the take-up of the internet has slowed among younger generations, as most are now already online, growth is being driven by older age groups such as 45 to 54-year-olds, part of the ‘silver surfer’ phenomenon.

Internet access for this group has shot up by 10 percentage points in a year to 87 per cent.

Experts said older people were increasingly buying smartphones. The research found the overall estimated weekly internet use had increased from an average of 14.2 hours in 2010 to 15.1 hours last year.

Despite the array of portable devices available to access the internet, home usage also increased, from 9.4 to 10.5 hours.

The report did reveal that the most elderly members of society were being left behind in the online revolution.

Nearly nine in ten over-75s do not use the internet on any device, and these are thought to make up a large part of the more than 20 per cent of the population with no internet access.

What about you?

Read Full Post »

It’s here! The new Routemaster bus took to the streets this week.

I blogged about this two years ago, as a matter of existential concern for Londoners:

Perfect freedom is being able to step off the back of a London bus whenever you want, whatever the reason, and walk into the sunset without a bus-stop in sight.

Here are some pictures:

And here is the new all-important platform at the back:

And a few thoughts from the BBC:

The mayor called the bus “stunning” and “tailored to the London passenger”.

Following the new driver-and-conductor vehicle was a “protest” bus covered in slogans attacking the rise in public transport fares in London.

Mayor Boris Johnson has been criticised by Labour, the Lib Dems and the Green Party over the cost of the buses.

Mr Johnson announced plans for the new buses, which run on a hybrid diesel-electric motor, in his 2008 election manifesto.

In total, eight buses with an open “hop-on, hop-off” platform at the rear, costing £11.37m, will run on route 38. They will be staffed with conductors and will not run at night or during the weekends.

The last of the popular, open-platform Routemasters was withdrawn from regular service in December 2005, although some still run on tourist routes.

It costs a fortune:

In an open letter to the mayor, Labour MP for Tottenham David Lammy said each new bus costs £1.4m compared with the conventional double-decker bus which costs about £190,000.

The original Routemaster buses were withdrawn from regular service in 2005

“Riding this bus is surely the most expensive bus ticket in history,” he said.

“With 62 seats at a cost of £1.4m, the cost per seat is £22,580. At £22,695, you can buy a brand new 3 series BMW.”
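Lammy’s cost-per-seat arithmetic is easy to check. A minimal sketch (the £1.4m, £190,000 and 62-seat figures all come from the quoted letter; nothing else is assumed):

```python
# Back-of-the-envelope check of the cost-per-seat claim quoted above.
new_bus_cost = 1_400_000      # reported cost of each new Routemaster, in pounds
conventional_cost = 190_000   # reported cost of a conventional double-decker
seats = 62                    # reported seat count

cost_per_seat = new_bus_cost / seats
print(round(cost_per_seat))   # 22581 - matching the quoted £22,580 once truncated

ratio = new_bus_cost / conventional_cost
print(f"{ratio:.1f}")         # 7.4 - each new bus costs over seven conventional ones
```

So the quoted £22,580 figure holds up (it simply truncates rather than rounds the odd pence).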

But Mr Johnson defended the new bus, saying: “When ordered in greater numbers it will make a significant economic contribution to the manufacturing industries, while also helping deliver a cleaner, greener and more pleasant city.”

“It’s not just a pretty face,” he added.

“The green innards of this red bus mean that it is twice as fuel efficient as a diesel bus and the most environment-friendly of its kind.”

TfL’s surface transport director Leon Daniels said: “This vehicle really has set a new standard.

“It utilises the latest cutting edge engine technology to deliver phenomenal fuel economy and emission performance.”

It’s on my agenda, together with the new fourth plinth, for when I am in central London next.

Read Full Post »

We’ve just finished our half-term break, and for various random reasons I spent the week North of the Watford Gap, an exhilarating experience for a southerner.

Due praise, before anything else, to the Victorian engineers and railway men whose vision and graft allowed me to travel from London to Elgin (near Inverness) on – in effect – an unbroken piece of track, via Lancaster, Manchester, Leeds, Edinburgh, Leuchars (for St Andrews), Dundee and Aberdeen.

OK, I didn't travel on a steam train - but this captures some of the romance...

You could tell I was in that trainspotter’s twilight zone by the wad of rail tickets stuffed into my wallet. There was a magic moment in Lancaster when I was sorting through them to find the time of the next train to Manchester, and one of my friends who would be on the ‘danger zone’ end of the geekiness scale when it comes to all things public transport couldn’t resist swanning up beside me to note how many journeys I had timetabled for one holiday trip. I impressed myself that I managed to impress him.

Anyway, it wasn’t for love of trains that I set off, but – more or less – for love of the faith. Last Saturday, as I wrote about earlier, was the ordination of John Millar, one of our seminarians, at Lancaster Cathedral; with a great crowd of friends, family, parishioners, priests and fellow seminarians.

That afternoon I got to Leeds, via Manchester, for the evening event of the ‘Love@Leeds’ Youth 2000 retreat for young adults. It was the first time a Youth 2000 retreat had been held in the city, and by all accounts it was a huge success. Notre Dame Catholic Sixth Form College proved to be a great venue. The school hall provided a dignified place for the worship and services (the chapel would have been far too small), and the dining room was a place not just to eat but to socialise and talk the night away.

For the Reconciliation Service (with individual confessions) and Exposition of the Blessed Sacrament that evening there were over 200 young people there, mainly of university age; and I’d guess that a good 150 stayed over for the talks and Mass the following day.

After a couple of days to myself in Edinburgh (I’d never been before) I went to St Andrews as a guest of the Catholic Chaplaincy. I did all the touristy stuff, and went down on one knee to pat the 18th green (it’s all public). I’m not big into golf, but I wanted to experience the moment and have something to tell my golfing friends.

It was great to be in the chaplaincy there, and to meet the students and Fr Andrew the chaplain and parish priest. It has been a powerhouse for vocations over the years, as well as being just a friendly and solid formative environment for young Catholics; and I have known many priests who studied at St Andrews and identify it as the place where their vocation really crystallised.

My talk was entitled, ‘Is there a difference between human happiness and Christian joy?’ I’ll try to post about my reflections sometime soon.

Then, after a huge cooked breakfast in my B&B, I got the train to Aberdeen, had time for a brief look at the Catholic Cathedral, where Abbot Hugh Gilbert has recently been installed as bishop; and ended my journey at Pluscarden Abbey, where Bishop Hugh was from, to catch up with two old friends who are now ‘juniors’ in the monastery. It was my first visit, and I want to post about that later as well, to give it some proper space on the blog.

So that’s my week! Praise to the rail network, which was cheap, and mostly on time. And praise, above all, to the vitality of Catholic life in this country – which is the main reason for posting. An ordination of a man in his young twenties in Lancaster, giving his life to the Lord and to the service of God’s people. A powerful retreat for university students in the heart of Leeds, who chose to be there to deepen their faith when there are so many other pulls on their time and attention. A Catholic chaplaincy, forming its students, sustaining them, as it has done for many years. And a thriving Benedictine monastery in a place of breathtaking beauty that is simply doing what it has always done, and for that reason attracting young men to join it.

Thank God for these wonderful signs of faith in Britain!

Read Full Post »

Don’t worry, this is not going to be a xenophobic rant. I had supper with a German friend at the weekend, who has lived in France for many years, and has just spent a few weeks in London improving her English.

We got onto the difference between the French and the English, and it was interesting having her fairly objective viewpoint as someone who has lived in both countries as an outsider.

She said that the French, in the way they think and argue, are more abstract. They start with first principles and work outwards to the nitty-gritty of reality. The English are more concrete, more empirical. They start with things, stuff, examples, case-studies, and only then try to draw some more general conclusions from the specific instances.

She also put the same point in another way: that the French work by deduction, and the English by induction.

It struck me that this, if it’s true, is exemplified by our measuring systems, metric and imperial. A metre length is just an idea. It’s not based on anything ordinary or everyday or natural. Yes, there is a bar of platinum-iridium in a vault in Paris that used to be the standard measure of a metre, for reference (although the metre is now defined by reference to the speed of light). But the bar, the metre, was created by the French mind – a mind imposing order on the world.

The imperial system – take the foot as an example – is based on (wait for it…) the foot! The whole system of measurement is based on the length of a man’s foot (a man’s and not a woman’s…). You see the world, and measure it, and understand it, in terms of something concrete; you see and understand one aspect of reality in the perspective of another aspect of reality. In the imperial system, man is – literally – the measure of all things; not a metal bar in Paris.

It sounds like I am defending the English way. Not really. There are advantages to each way; and the abstraction certainly appeals to me. And anyway, the French won! The metre rules the world. I’m just noticing the philosophical differences in world-view that are embodied in something as mundane as a unit of measure; and how that connects with a German’s perception of English-French differences.

[Update: I received some good criticism in the comments, which I wanted to copy here, about my failure to mention the origin of the metre. E.g. this from Roger: 'Sorry, Fr Stephen, as a physicist I can’t let you get away with that one – the metre was originally intended to be one ten-millionth of the distance from the Earth’s equator to the North Pole. If it’s “just an idea” it’s a very practical one!' To which I replied: 'Thanks Roger. OK – the metre, like the foot, starts in the concrete world. I’d still say the way it was arrived at reflects a different mentality, a more abstract kind of reasoning (taking a distance that can only be established by careful scientific investigation and then dividing it by ten million to establish a length that is more connected with everyday human life) – that reflects something about the difference between a more deductive mindset and a more empirical one.' The metre, despite the geographical origin, is definitely 'a product of the mind'; the foot is 'a product of experience' - I think.]
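Roger’s correction can itself be checked with simple arithmetic. A minimal sketch (the modern quadrant length of roughly 10,002 km is an assumption taken from standard geodesy, not from the post or the comments):

```python
# The 1790s definition: one ten-millionth of the distance from the
# equator to the North Pole along the meridian through Paris.
# quadrant_m is the modern measured value (~10,002 km) - an assumption
# from reference geodesy, used here only to see how close the
# original surveyors came.
quadrant_m = 10_002_000                    # metres, equator to pole
original_metre = quadrant_m / 10_000_000   # the intended definition
print(original_metre)                      # 1.0002

error_percent = (original_metre - 1) * 100
print(f"{error_percent:.2f}%")             # 0.02% - the 1790s survey was remarkably close
```

In other words, the platinum bar differs from the “ideal” ten-millionth of a quadrant by only about a fifth of a millimetre – which rather supports Roger’s point that the idea was a very practical one.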

Read Full Post »

What if there were another you? I don’t mean just an identical twin or a clone with the exact same genes. I mean someone who was like you in every way, the same body and mind and heart, the same past and experiences and memories, the same thoughts and feelings, the same decisions taken and the same mistakes made, standing in front of you now – but not you.

This is the idea at the heart of the film Another Earth, which jumps straight into my Top Ten films of the year. [Major plot spoilers follow - sorry!]

Another planet appears – just a dot in the night sky. As it comes closer it becomes apparent that this planet is the same size as ours, that it even has the same structure of continents and oceans as ours. Then, in a magical sci-fi moment, as the woman responsible for ‘first contact’ with the new planet speaks on a microphone, she realises that the woman talking to her on the other end is herself. [It's on the trailer here - I've ruined it for you!]

So the synchronicity between the two planets and between each corresponding person is absolute, apart from the fact that it inevitably gets broken by the appearance of the other planet – so the woman is not hearing the same words ‘she’ is speaking on the other planet, but actually having a non-symmetrical conversation with her other-self.

First of all, you are simply in sci-fi territory. I love these films. And in fact this film is really a re-make of another film from the ’70s (I can’t remember its name – brownie points for anyone who can help) where the US sent a spaceship to another planet on the other side of the sun, only to discover that the planet was the same as the earth – apart from everything being a mirror image of this earth. So our astronaut lands on the other planet, and another astronaut from that planet lands on our earth, with everyone thinking that our astronaut has come back early – until he sees that all the writing here is in reverse. Anyway – this is classic sci-fi.

But very quickly it becomes philosophical. Looking at this other earth in the sky above, marvelling that we can behold such a world, you realise that this is exactly what we do whenever we reflect on our experience, or use our imaginations, or question what is going on in our own minds. The remarkable thing about human beings is that we can ‘step back’ from our own experience (inner and outer) and view it; that we can ‘see ourselves’. The strangeness of the film brings to light the strangeness of ordinary human life.

We take this ability to reflect for granted, but it really is the key factor that seems to distinguish us from other animals. No-one today would deny that animals can be incredibly sophisticated and intelligent; and on many measures of intelligence they would beat us. But this power of self-reflection seems to be one of our defining characteristics; and it surely connects, in ways that aren’t always clear, with human freedom – the freedom we have to think and imagine and act in ways that go far beyond the instinctual programming we receive as bodily creatures.

So the wonder that Rhoda Williams feels staring up at this other planet is no more than the wonder we should feel whenever we step back and reflect on ourselves.

Then there is a theological angle too. To cut a long story short: Rhoda unintentionally kills the family of musical conductor John Burroughs in a driving accident, soon after the planet is discovered. He is haunted by the loss of his family, and then receives a ticket to travel to the other planet – a ticket that Rhoda has for herself, but she decides to give it to him. Why would he go? Because if the synchronicity between the two worlds was broken when they started to impact on each other, then perhaps the accident did not happen on the other planet, and ‘his’ family is still alive up there.

I call this a theological idea, because it’s about the possibility of redemption, of putting right something that has gone irredeemably wrong in the past. That in some sense this action might not have happened, or it might be possible to go back and undo the harm that has been done. This is crazy of course – in normal thinking. But if it’s crazy, why do we spend so much time imagining/hoping that somehow we could put right what has gone wrong? I don’t think our almost compulsive inability to stop regretting the mistakes we have made is simply a dysfunctional habit that we can’t let go of; it’s a yearning for forgiveness and redemption, for someone to go back in time and allow us to change things, an echo of a possibility of renewal that we can’t justify at a rational or philosophical level – because the past is completely out of reach. It’s about hope.

Or the film is about conscience – the possibility of imagining an action now, as if it were happening, and asking if we really want this parallel imaginative world to unfold into reality, or if we would regret it. So the work of conscience, and of all conscious deliberation, brings us up against another parallel world that is exactly the same as ours – only we have the power to decide whether it shall come into existence or not.

At the very end of the film, in her backyard, Rhoda meets ‘herself’ – we presume she has come from the other planet, with her own ticket, which she didn’t need to give away, because the accident there didn’t happen. All we see is her catching the gaze of the other woman before her, and recognising her to be herself – but not. Then the film ends immediately. It’s incredibly moving. As if a lifelong search, unacknowledged, is finally over; as if, miraculously, I step away and see myself for who I am, and see myself seeing myself. And that, miraculously, is in fact what happens every time we know ourselves through self-reflection, through self-consciousness. Human beings are not just conscious. We are self-conscious. That’s the idea that the film opens up so well.

Read Full Post »

You know about my love of prehistoric cave paintings. The famous images at Chauvet were painted over 30,000 years ago – quite a distance in time. This makes it all the more astonishing that painting kits used about 100,000 years ago have been discovered in a cave in South Africa, evidence not just of the production of art and the presence of a symbolic imagination, but also of an ability to mix chemicals and store materials.

Etologic horse study from cave at Chauvet

This is the abstract describing the research in Science.

The conceptual ability to source, combine, and store substances that enhance technology or social practices represents a benchmark in the evolution of complex human cognition. Excavations in 2008 at Blombos Cave, South Africa, revealed a processing workshop where a liquefied ochre-rich mixture was produced and stored in two Haliotis midae (abalone) shells 100,000 years ago. Ochre, bone, charcoal, grindstones, and hammerstones form a composite part of this production toolkit. The application of the mixture is unknown, but possibilities include decoration and skin protection.

Ian Sample comments:

Two sets of implements for preparing red and yellow ochres to decorate animal skins, body parts or perhaps cave walls were excavated at the Blombos cave on the Southern Cape near the Indian Ocean.

The stone and bone tools for crushing, mixing and applying the pigments were uncovered alongside the shells of giant sea snails that had been used as primitive mixing pots. The snails are indigenous to South African waters.

“This is the first known instance for deliberate planning, production and curation of a compound,” Christopher Henshilwood at the University of Bergen told Science, adding that the finding also marked the first known use of containers. “It’s early chemistry. It casts a whole new light on early Homo sapiens and tells us they were probably a lot more intelligent than we think, and capable of carrying out quite sophisticated acts at least 40,000 to 50,000 years before any other known example of this kind of basic chemistry,” he added.

“You could use this type of mixture to prepare animal skins, to put on as body paint, or to paint on the walls of the cave, but it is difficult to be sure how it was used,” said Francesco d’Errico, a study co-author at the University of Bordeaux. “The discovery is a paradox because we now know much better how the pigment was made than what it is used for.”

So we were there, we Homo sapiens, 100,000 years ago – imagining, thinking, planning, cooperating, collecting, mixing, experimenting, storing, painting; and whatever else this painting led into…

Read Full Post »

I have all sorts of philosophical anxieties about disconnecting ‘official time’ from the ‘real time’ that we experience through the rising of the sun and the arc of the stars – I’ll try to post about these anxieties another day.

But there is a huge historical irony in the fact that Greenwich Mean Time will most likely be replaced by Coordinated Universal Time, which is determined by the International Bureau of Weights and Measures (BIPM) in Paris, a city that lost its own right to determine the world’s time to London many years ago.

In case you missed the details of the recent recommendations of the International Telecommunications Union (ITU), Tony Todd reports:

Greenwich Mean Time (GMT) may be consigned to history as increasingly complex communications technologies require a more accurate system of measuring the time.

International clocks are set according to Greenwich Mean Time, a system that measures time against the rotation of the earth according to the movement of the sun over a meridian (north-south) line that goes through the Greenwich district of London.

The problem for the scientific community is that the earth's rotation is not constant: it drifts out of step with atomic time by roughly a second every year or so.

US Navy scientist Ronald Beard chaired the working group at the ITU in Geneva that last week recommended GMT be scrapped as the global time standard.

He told FRANCE 24 on Tuesday: “GMT has been recognised as flawed by scientists since the 1920s, and since the introduction of Coordinated Universal Time (UTC) [measured by highly accurate atomic clocks] in 1972 it has effectively been obsolete.”

UTC solved the problem of the earth's uneven rotation by adding the occasional “leap second” at the end of certain months, keeping atomic time in step with the earth's rotation.

But this piecemeal system is no longer suited to increasingly sophisticated communications technology or the needs of the scientific community.

“With the development of satellite navigation systems, the internet and mobile phones, timekeeping needs to be accurate to within a thousandth of a second,” said Beard. “It is now more important than ever that this should be done on a continual timescale.”

In effect, what the ITU is proposing is that atomic clocks should govern world time. Instead of using the GMT system and adding leap seconds, time should be allowed to be measured without interruption.

Beard explained that large-scale changes could be made (very occasionally) so that, for example, in 40,000 years' time people would not be eating their lunch in the sunshine at “midnight”.
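Beard's 40,000-year example is easy to sanity-check with some back-of-envelope arithmetic. Assuming – as a rough figure of my own, not one given in the report – that civil time drifts from solar time by about one second a year once leap seconds stop (roughly the historical rate at which they have been inserted), a few lines of Python show where that leads:

```python
# Rough check of the "lunch at midnight" example: if leap seconds
# were abandoned, how far would clock time drift from solar time?

DRIFT_PER_YEAR_S = 1.0   # assumed average drift, seconds per year
YEARS = 40_000

drift_s = DRIFT_PER_YEAR_S * YEARS
drift_hours = drift_s / 3600

print(f"Accumulated drift: {drift_s:.0f} s ≈ {drift_hours:.1f} hours")
# ≈ 11 hours – about half a day, so clock "noon" would indeed
# fall close to solar midnight.
```

The drift rate itself is not constant (it depends on tides, the earth's core, even large earthquakes), so this is only an order-of-magnitude sketch – but it shows why the figure of tens of thousands of years, rather than centuries, is the right scale.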

Do you notice that phrase: “Time should be allowed to be measured without interruption”? As if the passage of time itself (the spinning of the earth, the passing of days, the passage of seasons) somehow gets in the way of ‘official’ time – the time on the dial of an atomic clock.

OK, I admit it. As an Englishman I reel at the thought that the ultimate reference point for everything that happens, and in effect for the whole of human history, should be a memorandum issued by a committee in Paris, rather than a line carved into the ground in London.

Read Full Post »
