
Posts Tagged ‘evolution’

There was an extraordinary moment in the evolution of human consciousness and the sociology of cinema etiquette last week. Perhaps it was the first time it had ever taken place – and I was there as a witness! Like being there in 1903 when the Wright brothers flew their way into history; or sitting in the space capsule as Neil Armstrong stepped down onto the surface of the moon.

[Image: The Wright brothers' first heavier-than-air flight, 1903]

So I’m sitting in the Cineworld Fulham Road last week as the trailers play before the new Star Trek film (disappointing: 6/10). The guy next to me takes out his mobile phone, checks for messages, leaves it on, and then – this is the Close Encounters of the Third Kind moment – he places it in the moulded plastic fizzy-drinks holder attached to the front of the armrest between us. No self-consciousness; no shame. The bottom of the phone comes forward, towards him; the back leans against the upper edge of the drinks holder; so the phone sits at a perfect 37-degree tilt from the vertical for him to see. And he’s watching the film while glancing up and down at his incoming messages – like a driver with a TomTom at the edge of his vision.

I was too awestruck at the audacity of this technological leap to be shocked. It’s the kind of unforeseen improvisation that delights and appalls me at the same time. I bet you big money that within two years there will be dedicated and beautifully designed mobile phone holders on the arm of every cinema seat, but this time just above the fizzy-drinks holder. What would my friend have done if he had had a 6 litre carton of coke as well? [Just for the record: This is my idea, and I hold the patent…]

Is this the end of civilisation or the beginning? Is this common in London or New York or Shanghai and I’ve just never witnessed it before?

Read Full Post »

Are you, at least in relation to most of the human population, WEIRD (Western, Educated, Industrialised, Rich, and Democratic)? Then it’s likely that culturally and politically you are a left-leaning liberal whose highest values are autonomy, self-realisation, social justice and fairness. And you are probably suspicious when people appeal to religion, human nature or the well-being of any non-inclusive group to justify their values and political agenda.

David Goodhart reviews The Righteous Mind by Jonathan Haidt.

Haidt is a liberal who wants his political tribe to understand humans better. His main insight is simple but powerful: liberals understand only two main moral dimensions, whereas conservatives understand all five. (Over the course of the book he decides to add a sixth, liberty/oppression, but for simplicity’s sake I am sticking to his original five.)

Liberals care about harm and suffering (appealing to our capacities for sympathy and nurturing) and fairness and injustice. All human cultures care about these two things but they also care about three other things: loyalty to the in-group, authority and the sacred.

As Haidt puts it: “It’s as though conservatives can hear five octaves of music, but liberals respond to just two, within which they have become particularly discerning.” This does not mean that liberals are necessarily wrong but it does mean that they have more trouble understanding conservatives than vice versa.

The sacred is especially difficult for liberals to understand. This isn’t necessarily about religion but about the idea that humans have a nobler, more spiritual side and that life has a higher purpose than pleasure or profit. If your only moral concepts are suffering and injustice then it is hard to understand reservations about everything from swearing in public to gay marriage—after all, who is harmed?

Haidt and his colleagues have not just plucked these moral senses from the air. He explains the evolutionary roots of the different senses from a close reading of the literature but has also then tested them in internet surveys and face to face interviews in many different places around the world.

Morality “binds and blinds,” which is why it has made it possible for human beings, alone in the animal kingdom, to produce large co-operative groups, tribes and nations beyond the glue of kinship. Haidt’s central metaphor is that we are 90 per cent chimp and 10 per cent bee—we are driven by the “selfish gene” but, under special circumstances, we also have the ability to become like cells in a larger body, or like bees in a hive, working for the good of the group. These experiences are often among the most cherished of our lives.

One of my most politically liberal friends read this book and declared his world view to be transformed. Not that he was no longer a liberal, but that, in his words, “I couldn’t be so rude about the other side, because I understand where they’re coming from.” This will be music to Haidt’s ears as the book was written partly as an antidote to the more polarised American politics of the past 20 years, marked by the arrival of Bill Clinton and the liberal baby boomers onto the political stage.

The American culture wars began earlier, back in the 1960s, with young liberals angry at the suffering in Vietnam and the injustice still experienced by African-Americans. But when some of them adopted a style that was anti-American, anti-authority and anti-puritanical, conservatives saw their most sacred values desecrated and they counter-attacked.

Some conflicts are unavoidable and Haidt is not suggesting that liberals should stop being liberal—rather, that they will be more successful if instead of telling conservatives that their moral intuitions are wrong, they seek to shift them in a liberal direction by accommodating, as far as possible, their anxieties.

I’m not sure about this. It suggests that those on the right – politically and culturally – have a bigger, better, clearer and richer view of the complexity of human life and motivation, and that those with a liberal mentality focus on too narrow a range of social values. But if a more naturally conservative thinker fails, say, to be troubled by income disparity or the possession of first-strike nuclear weapons, doesn’t this reveal a moral blind-spot or a failure to recognise certain fundamental social values? Or at least, wouldn’t someone on the left think that?

It also suggests that those on the left are less likely to be religious – and we disproved this in a recent post.

Read Full Post »

I’m halfway through Paul Davies’s book The Eerie Silence, about the Search for Extraterrestrial Intelligence (SETI) project and the wider scientific and philosophical issues involved. One of the ways of investigating the probability of extraterrestrial life is to look at the vexed question of the probability of life on earth, and chapter 2 of the book is entitled, “Life: Freak side-show or cosmic imperative?”

Was there a high probability that any life, let alone intelligent life, would develop on earth? The answer is: we haven’t a clue. And that’s because we still have almost no understanding about how life developed on this planet in the first place; and we don’t even know if it started here anyway – it may have started on Mars and migrated on materials that got dispersed into the solar system and then fell to earth.

We simply don’t know how life began. As Charles Darwin said:

We might as well speculate about the origin of matter.

This lack of knowledge isn’t reflected in the ‘cosmic imperative’ mood of the scientific and journalistic moment. Many thinking people, in other words, believe that given the vastness of the universe the emergence of life must be almost inevitable. Alan Boss of the Carnegie Institution in Washington declared in 2009:

If you have a habitable world and let it evolve for a few billion years then inevitably some sort of life will form on it… It would be impossible to stop life growing on these habitable planets… There could be one hundred billion trillion Earth-like planets in space, making it inevitable that extraterrestrial life exists [25-26].

The flaw in this probability argument is obvious even to a non-scientist like myself. Boss uses the word ‘evolve’: if you let a habitable world ‘evolve’ then life is bound to emerge. That would be true if we had any evidence that a ‘world’ evolves. But we don’t. Life evolves, once it is started – we know that. But we can’t use an assumption about the progress of evolution within life as an argument that life itself, at its beginnings, is the result of a pre-life evolutionary process. We have no idea what such a process might involve, or any evidence that it took place, or any indication of what the probability of it taking place might be.

George Whitesides, Professor of Chemistry at Harvard University, gives the alternative view, which Paul Davies himself accepts. First of all he seems sceptical:

How remarkable is life? The answer is: very. Those of us who deal in networks of chemical reactions know of nothing like it… How could a chemical sludge become a rose, even with billions of years to try? … We (or at least I) do not understand. It is not impossible, but it seems very, very improbable [31].

But it’s not so much scepticism as a humble awareness of the impossibility of speaking about a high probability of life emerging when we know so little about what would or would not make it probable in the first place.

How likely is it that a newly formed planet, with surface conditions that support liquid water, will give rise to life? We have, at this time, no clue, and no convincing way of estimating. From what we know, the answer falls somewhere between ‘impossibly unlikely’ and ‘absolutely inevitable’. We cannot calculate the odds of the spontaneous emergence of cellular life on a plausible prebiotic earth in any satisfying and convincing way [31].

All we know is that it has happened at least once.

Read Full Post »

The perfect size for a community (whether a village, a religious congregation, or a military unit) is… 150. How do we know?

[Image: An Amish school]

Primates live in groups, which allows them to solve problems together and reduce the risks of being caught by predators. You stick together; you stand united against a common enemy. But all the time an implicit calculation is being made to work out whether the benefits of cooperation outweigh the costs.

Robin Dunbar explains:

The psychological demands of living in large groups mean that, in primates, species-typical group size correlates rather closely with the species’ brain size. On the primate model, our oversized brain would predict a group size of around 150, the number now known as Dunbar’s Number. We find it in the typical community size of hunter-gatherer societies, in the average village size in county after county in the Domesday book, as well as in 18th-century England; it is the average parish size among the Hutterites and the Amish (fundamentalist Christians who live a communal life in the Dakotas and Pennsylvania, respectively). It is also the average personal network size – the number of people with whom you have a personalised relationship, one that is reciprocal (I’d be willing to help you out, and I know that you’d help me) as well as having a history (we both know how we came to know each other).

The Hutterites illustrate rather clearly just what’s involved. They deliberately split their communities once they exceed 150 individuals because, they maintain, you cannot run a community of more than 150 people by peer pressure alone: instead, you need a police force.

The same thinking also applies to business, management, and the military:

We see the same principle at work in the management philosophy of the Gore-Tex company, known for its breathable, waterproof fabrics. Instead of expanding factory size as its business grew, the late “Bill” Gore kept this factory size to 150 and simply built a new, completely self-contained factory next door. The result is a work community where everyone knows everyone else, and there is no need for formal line-management systems or name badges; everyone is committed to each other and to the communal vision. Has this been the secret to its unusual success as a business?

Perhaps the best example, however, remains the military. All modern armies have a similar organisational structure, mostly developed over the last 300 years by trial and error on the battlefield. The core to this is the company – typically around 120-180 in size – almost exactly Dunbar’s Number. As anyone who has been in the army will tell you, company is family, far more so than battalion or regiment.

Although wild claims have been made about the number of friends people have on Facebook, the vast majority of us have only 120-130. Yes, you can have 500 or 1,000 friends if you want to sign people up, but this seems to have more to do with competition than with real friendship.

It makes you think about the communities you are involved in.

Read Full Post »

What I really mean is: atheists are going out of existence because they are not breeding enough. Leaving aside the question of whether there is any truth in religious belief, this raises interesting questions about the apparent benefits of religion – at least for your genetic survival.

This is from a recent article by Jonathan Leake:

Atheists, watch out. Religious people have evolved to produce more children than non-believers, researchers claim, while societies dominated by non-believers are doomed to die out.

A study of 82 countries has found that those whose inhabitants worship at least once a week have 2.5 children each, while those who never do so have just 1.7 — below the number needed to replace themselves.

The academic who led the study argues that evolution, credited by atheist biologists such as Richard Dawkins as the process solely responsible for creating humanity, favours the faithful because they are encouraged to breed as a religious duty.

Michael Blume, a social science researcher at Jena University in Germany, said that over evolutionary timescales of hundreds or thousands of years, atheists have had fewer children and the societies they belong to are likely to disappear.

“It is a great irony, but evolution appears to discriminate against atheists and favour those with religious beliefs,” said Blume.

His arguments are in direct contradiction of evolutionary biologists such as Dawkins, who has argued that religions are like “viruses of the mind” which infect people and impose great costs in terms of money, time and health risks.

Blume’s work suggests the opposite: evolution favours believers so strongly that over time a tendency to be religious has become embedded in our genes. [Sunday Times, 02.01.11, p3]
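The compounding effect of those fertility figures is easy to underestimate. Here is a toy back-of-the-envelope projection (my own illustration, not Blume’s model): it assumes fertility rates stay constant across generations, that every child survives to reproduce, and that half of all children are girls. Under those simplifying assumptions the replacement level is exactly 2 children per woman; the article’s 1.7-below-replacement point reflects the more realistic 2.1 figure that allows for mortality.

```python
def project(women, children_per_woman, generations):
    """Return the number of women after the given number of generations,
    assuming half of all children are daughters who all survive to reproduce."""
    for _ in range(generations):
        women = women * children_per_woman / 2  # count daughters only
    return women

# Start both groups at 1,000 women and run four generations (roughly a century).
religious = project(1000, 2.5, 4)  # weekly worshippers: 2.5 children each
secular = project(1000, 1.7, 4)    # non-worshippers: 1.7 children each

print(round(religious))  # 2441 – more than doubled
print(round(secular))    # 522 – roughly halved
```

Even with these crude assumptions, the gap after only four generations is nearly fivefold, which is the arithmetic behind Blume’s claim about societies of non-believers shrinking over evolutionary timescales.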

Why is religion such a benefit? Because a religious tradition is better at allowing values, trust and cooperation to develop.

As well as the promotion of child-bearing by religious authorities, other important factors such as strong shared religious beliefs allow people to fit into a community, accept shared tasks and rules of behaviour. This ability to work together further raises the survival chances of children.

You can read Blume’s academic article “The Reproductive Benefits of Religious Affiliation” here. And in his blog, he quotes from the end of the article:

Evolutionary Theorists brought up far more scientific arguments – but committed believers in supernatural agents brought up far more children. There is a certain irony in here: creationist parents unconsciously defend the reproductive success of their children and communities against evolutionist teachings, whereas some naturalists are trying to get rid of our evolved abilities of religiosity by quoting biology. But from an evolutionary as well as philosophic perspective, it may seem rather odd to try to defeat nature with naturalistic arguments.

Read Full Post »

Yesterday, entranced by Mark Zuckerberg’s Facebook moment, I was searching for the next Really Big Idea. But someone sent me a link to this interview with Steven Johnson, who writes: ‘Eureka moments are very, very rare’. Johnson is the author of the book Where Good Ideas Come From: The Natural History of Innovation. He talks to Oliver Burkeman about how collaboration, rather than a sudden flash of genius, is usually at the root of our most innovative ideas.

“It’s very, very rare to find cases where somebody on their own, working alone, in a moment of sudden clarity has a great breakthrough that changes the world. And yet there seems to be this bizarre desire to tell the story that way.”

At the core of his alternative history is the notion of the “adjacent possible”, one of those ideas that seems, at first, like common sense, then gradually reveals itself as an entirely new way of looking at almost everything. Coined by the biologist Stuart Kauffman, it refers to the fact that at any given time – in science and technology, but perhaps also in culture and politics – only certain kinds of next steps are feasible. “The history of cultural progress,” Johnson writes, “is, almost without exception, a story of one door leading to another door, exploring the palace one room at a time.”

Think of playing chess: at any point in the game, several ingenious moves may be possible, but countless others won’t be. Likewise with inventions: the printing press was only possible – and perhaps only thinkable – once moveable type, paper and ink all existed. YouTube, when it was launched in 2005, was a brilliant idea; had it been launched in 1995, before broadband and cheap video cameras were widespread, it would have been a terrible one. Or take culture: to 1950s viewers, Johnson argues, complex TV shows such as Lost or The Wire would have been borderline incomprehensible, like some kind of avant-garde art, because certain ways of engaging with the medium hadn’t yet been learned. And all this applies, too, to the most basic innovation: life itself. At some point, back in the primordial soup, a bunch of fatty acids gave rise to a cell membrane, which made possible the simplest organisms, and so on. What those acids couldn’t do was spontaneously form into a fish, or a mouse: it wasn’t part of their adjacent possible.

What does all this mean in practical terms?

The best way to encourage (or to have) new ideas isn’t to fetishise the “spark of genius”, to retreat to a mountain cabin in order to “be creative”, or to blabber interminably about “blue-sky”, “out-of-the-box” thinking. Rather, it’s to expand the range of your possible next moves – the perimeter of your potential – by exposing yourself to as much serendipity, as much argument and conversation, as many rival and related ideas as possible; to borrow, to repurpose, to recombine. This is one way of explaining the creativity generated by cities, by Europe’s 17th-century coffee-houses, and by the internet. Good ideas happen in networks; in one rather brain-bending sense, you could even say that “good ideas are networks”. Or as Johnson also puts it: “Chance favours the connected mind.”

Another surprising truth about big ideas: even when they seem to be individual flashes of genius, they don’t happen in a flash – though the people who have them often subsequently claim that they did. Charles Darwin always said that the theory of natural selection occurred to him on 28 September 1838 while he was reading Thomas Malthus’s essay on population; suddenly, the mechanism of evolution seemed blindingly straightforward. (“How incredibly stupid not to think of that,” Darwin’s great supporter Thomas Huxley was supposed to have said on first hearing the news.) Yet Darwin’s own notebooks reveal that the theory was forming clearly in his mind more than a year beforehand: it wasn’t a flash of insight, but what Johnson calls a “slow hunch”. And on the morning after his alleged eureka moment, was Darwin feverishly contemplating the implications of his breakthrough? Nope: he busied himself with some largely unconnected ruminations on the sexual curiosity of primates.

A certain kind of businessperson, I suspect, will buy Where Good Ideas Come From in order to learn how to come up with a killer business idea, bring it to market, and clean up financially. They may find themselves slightly alarmed, therefore, by a sequence of striking graphics in which Johnson demonstrates that the vast majority of major innovations since 1800 have come from outside the free market – from universities and other environments where profit wasn’t the overwhelming motivation. The urge to hoard, protect and directly profit from good ideas can work against the sharing-and-recombining ethos that the adjacent possible demands. And it’s often the case that those who do attain vast wealth have done so by finding ways to exploit the creativity of the non-market world. Facebook’s Mark Zuckerberg is so rich today only because Tim Berners-Lee developed the web as a non-profit venture. (And a non-profit venture, incidentally, that had no eureka moment either. Johnson quotes Berners-Lee as saying that interviewers are always frustrated when he admits he never experienced one.)

I think this means I can come down from my mountain cabin, withdraw all my patent applications, return the billions of dollars my investors have sent me, and start talking to people again. It seems as if I am going to be poorer but much better connected.

Read Full Post »

I’ve just seen the Facebook film, The Social Network. It works. It shouldn’t, because we all know the story: guy invents Facebook, transforms human self-understanding, and makes a few billion in the process. But it does. Partly because the lesser known sub-plot is turned into the main narrative arc: did he steal the idea and dump on his friends? And partly because the heart of the story, the genesis of Facebook, is such a significant moment for our culture (and perhaps for human history), that it would mesmerise a cinema audience no matter how badly filmed.

It’s Stanley Kubrick trying to film the emergence of human consciousness at the beginning of 2001: A Space Odyssey.

It’s more a screenplay than a film. I had to concentrate so hard on the dialogue and the ideas that I hardly took in the visuals. This is classic Aaron Sorkin, whose West Wing scripts have more words per minute and ideas per episode than anything else on TV in recent years.

I’m also a fan of Ben Mezrich, who wrote the book on which the screenplay is based. I read his Bringing Down the House a few years ago, a great holiday read about how a team of MIT geeks took their card-counting skills to Vegas and beat the casinos. And it’s true.

Anyway. Go and see the film. It’s a great story and a great cast, directed with unobtrusive style by David Fincher. And I don’t think I’m exaggerating when I say that it captures one of those rare historical moments, that we have actually lived through, when our understanding of what it is to be human shifts quite significantly.

It’s too easy to talk about geography (“First we lived on farms, then we lived in cities; now we live on the internet”). We could have ‘lived on the internet’, even with the interactivity of Web 2.0, without it changing our understanding of ourselves. The same people, but with more information and quicker methods of exchanging it. Facebook has turned us inside out. We used to learn and think and search in order to be more authentically or more happily ourselves. We learnt in order to live. Now we create semi-virtual selves which can exist in a semi-virtual world where others are learning and thinking and searching. We live in order to connect.

But even this doesn’t capture it properly, because people have been connecting for millennia, and at least since EM Forster’s Howards End. With Facebook we don’t just want to connect, we want to actually become that connectivity. We want to become the sum total of those friends, messages, events, applications, requests, reminders, notifications and feeds. Personhood has changed.

Two thousand years ago, through the incarnation, the Word became flesh. In our time, through the internet, the flesh became Facebook.

Time to switch off the computer.

Read Full Post »
