Every few months we hear about the impending death of television, how everyone has shifted to the internet, to social media, to Web 2.0, to Web 3.0… Yes, there are some shifts, but here in the UK we are watching far, far more TV than just a few years ago.
We watch an average of 4 hours 2 minutes of TV a day, up from an average of 3 hours 36 minutes a day in 2006.
Four hours a day! That is the average for the UK in 2013, and it seems like a lot to me.
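To put those figures in perspective, here is a quick back-of-the-envelope sum (a rough sketch in Python, using only the numbers quoted above; the yearly figure simply assumes the daily average holds for all 365 days):

```python
# Back-of-the-envelope sums using the viewing figures quoted above:
# 4 hours 2 minutes a day in 2013, versus 3 hours 36 minutes in 2006.

minutes_2006 = 3 * 60 + 36    # 216 minutes a day
minutes_2013 = 4 * 60 + 2     # 242 minutes a day

extra_per_day = minutes_2013 - minutes_2006              # 26 minutes
percent_increase = 100 * extra_per_day / minutes_2006    # about 12 per cent
extra_hours_per_year = extra_per_day * 365 / 60          # roughly 158 hours

print(f"Extra viewing per day: {extra_per_day} minutes")
print(f"Increase since 2006: {percent_increase:.0f} per cent")
print(f"Extra viewing per year: {extra_hours_per_year:.0f} hours")
```

That works out at about 26 extra minutes a day – roughly a 12 per cent increase, or around 158 extra hours of viewing over a year.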
Here are some of the technological shifts:
We have fewer TVs: The average household now has 1.83 TV sets, down from an average of 2.3 sets in 2003.
But we’re watching more television on more devices: we now watch an average of 4 hours 2 minutes of TV a day, up from 3 hours 36 minutes in 2006. A TV Licence covers you to watch on any TV, mobile device or tablet in your home or on the move. In 2012, fewer than one per cent of us watched only time-shifted TV.
Premium TV features are on the rise: more than a third of the TV market value in 2012 came from sales of 3D TVs, and sales of jumbo screens (43 inches or more) increased 10 per cent in the past 12 months.
Social networks allow us to engage with each other in real time like never before: 40 per cent of all tweets sent between 6.30pm and 10pm are about television shows.
So despite there being more devices and platforms, we are still gathering round the ‘hearth’ of a premium TV at the centre of the home. And instead of being completely absorbed in the entertainment experience, we are tweeting about what we are watching in real time – probably no more than an extension of the chatter that would have taken place round the TV in previous generations.
I’ve just given a study day about the internet and new media, and it forced me to get my head around some of the jargon and the ideas. Here is my summary of what these terms mean and where the digital world is going.
Web 1.0: The first generation of internet technology. You call up pages of text and images with incredible speed and facility. It’s no different from strolling through a library, only much quicker. The operative verb is I LOOK. I look at pages on the screen just as I look at pages in a book. All content is provided for you – it’s a form of publishing. It may be updated in a way that is impossible when a solid book is sitting on your shelf, but you can’t change the content yourself.
Web 2.0: The second generation of internet technology allows for user-generated content. You don’t just look at the pages, you alter them. You write your own blog; you comment on someone else’s article in the comment boxes; you edit an entry on Wikipedia. And then, by extension, with basically the same technology, you share your thoughts on a social networking site, which means you are commenting not on a static site, but on something that is itself in flux. You have moved from action to interaction; from connection to interconnection. If Web 1.0 is like a digital library, Web 2.0 is like a digital ‘Letter to the Editor’, a digital conference call, a digital group discussion. The verb here is I PARTICIPATE.
Web 3.0: People disagree about the meaning of Web 3.0, about where the web is now going. I like John Smart’s idea of an emerging Metaverse, where there is a convergence of the virtual and physical worlds. In the world of Web 2.0, of user-generated content and social networking, you stand in the physical/natural/real world and use the new media to help you around that world – the new media are tools. You talk to friends, you share ideas, you buy things that have been suggested and reviewed by others. But in Web 3.0 the new media become an essential part of the world in which you are living; they help to create that world, and you live within them.
The border between Web 2.0 and Web 3.0 is not tidy here, because Web 3.0 is partly about Web 2.0 becoming all-pervasive and continuous, so that your connection with the web and your social network is an essential part of every experience – it doesn’t get switched off. The mobile earpiece is always open to the chatter of others. The texts and status updates of your friends are projected into the corner of your Google Glasses (like those speedometers that are projected onto the car windscreen), so that they accompany what you are doing at every moment; the connection between real and virtual, between here and there, is seamless. The attention you give to every shop or product or street or person is digitally noted, through the head and eye movement sensors built into your glasses and the GPS in your phone, and simultaneously you are fed (into the corner of your glasses, or into your earpiece) layers of information about what is in front of you: reviews of the product, reminders of what you need to buy from the shop, warnings about the crime rate on this street, a note about the birthday and the names of the children of the person you are about to pass, and so on. This is augmented reality or enhanced reality or layered reality.
It’s no different, in essence, from going for a stroll in the late 70s with your first Walkman – creating for the first time your own soundtrack as you wander through the real world; or from having the natural landscape around you altered by neon lights and billboards. But it is this experience a thousand times over, so that it is no longer possible to live in a non-virtual world, because every aspect of the real world is already augmented by some aspect of virtual reality. The verb here is I EXIST. I don’t just look at the virtual world, or use it to participate in real relationships; now I exist within this world.
Web 4.0: Some people say this is the Semantic Web (‘semantics’ is the science of meaning), when various programmes, machines, and the web itself become ‘intelligent’, start to create new meanings that were not programmed into them, and interact with us in ways that were not predicted or predictable beforehand. It doesn’t actually require some strict definition of ‘artificial intelligence’ or ‘consciousness’ for the computers; it just means that they start doing new things themselves – whatever the philosophers judge is or is not going on in their ‘minds’.
Another aspect of Web 4.0, or another definition, concerns plugging us directly into the web: when the boundary between us and the virtual world disappears. This is when the virtual world becomes physically/biologically part of us, or when we become physically/biologically part of the virtual world. When, in other words, the data is not communicated by phones or earpieces or glasses but is implanted into us, so that the virtual data is part of our consciousness directly, and not just part of our visual or aural experience (think of the films Total Recall, eXistenZ and The Matrix); and/or when we control the real and virtual worlds by some kind of brain or neural interface, so that – in both cases – there really is a seamless integration of the real and the virtual, the personal/biological and the digital.
If this seems like science fiction, remember that it is already happening in smaller ways. See previous posts on Transhumanism and on the MindSpeller project at Leuven, which can read the minds of stroke victims, and this MIT review of brain-computer interfaces. In this version of Web 4.0 the verb is not I EXIST (within a seamless real/virtual world); it is rather I AM this world and this world is me.
Watch this fascinating video of someone’s brainwaves controlling a robotic arm:
And this one, which shows someone controlling first a signal on a screen, and then another robotic arm:
So this is someone making things happen in the real world just by thinking! (Which, come to think of it, is actually the miracle that takes place whenever we do anything consciously!)
Any comments? Are you already living in Web 3.0 or 3.5? Do you like the idea of your children growing up in Web 4.0? What will Web 5.0 be?
Sadly I couldn’t afford to fly out to the Web 2.0 Summit in San Francisco this week. One of the social networking themes discussed was the question of whether there are advantages to sharing less rather than more.
Facebook has pioneered the concept of ‘frictionless sharing’ (a term I just learnt): when your personal information, your consumer choices, your likes and dislikes, your moods, your geographical position, etc, are all shared automatically and seamlessly with your online friends. But this ignores the psychological and sociological evidence that a significant part of friendship and social bonding is choosing what not to share, what not to reveal.
There’s a nice quote from Vic Gundotra, who is head of the Google+ project, which tries to be a classier and more selective Facebook:
There is a reason why every thought in your head does not come out of your mouth. The core attribute of the human is to curate how others perceive you and what you say. Even something as simple as music – I don’t want all my music shared with everybody. I’m embarrassed I like that one Britney Spears track. I want people to know I like U2. That’s cooler than saying I like Britney Spears. If that’s how I feel about music, how will I feel about things I read? [Quoted in an article by Murad Ahmed, The Times today, p26]
Less is more, not from a sort of reactionary puritanism, but because the way we create ourselves and communicate who we are is always, at some level, through making decisions about what to reveal and what to withhold. This is how we give shape to the person we are, and allow others to come to know us. I especially like the idea that we ‘curate’ ourselves.
I’ve just seen the Facebook film, The Social Network. It works. It shouldn’t, because we all know the story: guy invents Facebook, transforms human self-understanding, and makes a few billion in the process. But it does. Partly because the lesser-known sub-plot is turned into the main narrative arc: did he steal the idea and dump on his friends? And partly because the heart of the story, the genesis of Facebook, is such a significant moment for our culture (and perhaps for human history) that it would mesmerise a cinema audience no matter how badly it was filmed.
It’s Stanley Kubrick trying to film the emergence of human consciousness at the beginning of 2001: A Space Odyssey.
It’s more a screenplay than a film. I had to concentrate so hard on the dialogue and the ideas that I hardly took in the visuals. This is classic Aaron Sorkin, whose West Wing scripts have more words per minute and ideas per episode than anything else on TV in recent years.
I’m also a fan of Ben Mezrich, who wrote the book on which the screenplay is based. I read his Bringing Down the House a few years ago – a great holiday read about how a team of MIT geeks took their card-counting skills to Vegas and beat the casinos. And it’s a true story.
Anyway. Go and see the film. It’s a great story and a great cast, directed with unobtrusive style by David Fincher. And I don’t think I’m exaggerating when I say that it captures one of those rare historical moments, that we have actually lived through, when our understanding of what it is to be human shifts quite significantly.
It’s too easy to talk about geography (“First we lived on farms, then we lived in cities; now we live on the internet”). We could have ‘lived on the internet’, even with the interactivity of Web 2.0, without it changing our understanding of ourselves. The same people, but with more information and quicker methods of exchanging it. Facebook has turned us inside out. We used to learn and think and search in order to be more authentically or more happily ourselves. We learnt in order to live. Now we create semi-virtual selves which can exist in a semi-virtual world where others are learning and thinking and searching. We live in order to connect.
But even this doesn’t capture it properly, because people have been connecting for millennia, and at least since EM Forster’s Howards End. With Facebook we don’t just want to connect, we want to actually become that connectivity. We want to become the sum total of those friends, messages, events, applications, requests, reminders, notifications and feeds. Personhood has changed.
Two thousand years ago, through the incarnation, the Word became flesh. In our time, through the internet, the flesh became Facebook.
Everyone thinks they are too busy. Most people admit that communications technologies and the internet have increased the pressures on them to perform and respond and race ahead. Not many people have any suggestions about how to find spiritual peace or mindfulness in this wired world.
David Gelles writes about the recent “Wisdom 2.0 Summit”, which tried to bridge the gap between our hunger for enlightenment and the frenetic reality of our working lives:
This was the first Wisdom 2.0 summit, which convened a few hundred spiritually minded technologists – everyone from Buddhist nuns to yogic computer scientists – for two days of panels and presentations on consciousness and computers. The goal: to share tips on how to stay sane amid the tweets, blips, drops and pings of modern life. The temple: the Computer History Museum in Mountain View, California, a stone’s throw from the Googleplex. Attendees took in panels on “Living Consciously and Effectively in a Connected World” and “Awareness and Wisdom in the Age of Technology”.
“The problem with the kind of jobs we have is that there is no knob to dial down,” said Gopi Kallayil, an Indian-born marketing manager for Google who studied yoga at an ashram when he was younger. He spoke for many of the participants who professed a deep frustration with their inability to find serenity in an increasingly wired modern world. “If you get on the bandwagon you have to operate at a certain pace or not at all.”
Mr Kallayil and others discussed how to squeeze a bit of quietude into the day. Suggestions included meditating before meetings.
Yet a nagging duality overshadowed every conversation. It seemed that almost everyone believed that our constant web surfing, no matter how noble its intent, is not conducive to the spiritual life. “As much as we’re connected, it seems like we’re very disconnected,” said Soren Gordhamer, the conference organiser. “These technologies are awesome, but what does it mean to use them consciously?”
Greg Pass, Twitter’s technology officer, teaches Tai Chi in the company’s office and exhorts incoming employees to ‘pay attention’ and live in the present moment. Leah Pearlman took a six-month sabbatical from her Facebook job and now composes all her Friday emails in haiku form.
Ms Pearlman was an example of someone who has successfully integrated a bit of wisdom into work. But a more fundamental question lingered: if spiritual success is more important than worldly gains, why toil away in offices at all? “When the time comes and we’re on our deathbed and we’re saying goodbye to our body and bank account and Facebook account and Twitter account, what’s really going to matter?” asked Mr Gordhamer. If there is an answer, it probably will not be found through a Google search.
And despite technology’s distractions, there was no sense that the crowd was set to abandon Facebook or Twitter. Even those devoted to the spiritual path were committed to keeping their status updated. “I’m extremely grateful for the world of the computer,” said Roshi Joan Halifax, a Zen Buddhist nun. “When I was introduced to the computer, I thought I had gone to heaven.”
Our answer here in the seminary is to have 45 minutes in the chapel each morning – of silence, personal meditation, and communal prayer. It doesn’t necessarily mean that we are all able to maintain our inner peace throughout the next sixteen hours, but it certainly helps.
This coming week, the internet turns 40. On 29th October 1969 Leonard Kleinrock and some colleagues crowded round a computer terminal at UCLA and logged into another one several hundred miles away at the Stanford Research Institute. It was the particular type of remote connection that proved significant. It was only partially successful: the system crashed two letters into the first word – which was meant to be ‘LOGIN’ – and so the first utterance sent across the net was the biblical ‘Lo…’
To choose a moment like this is somewhat arbitrary. There are many other technological shifts of huge significance that could be noted. But this is the one Oliver Burkeman opts for in his fascinating article about the history and implications of the internet. Arpanet, as this first system was called, was funded by US government money that had been released by Eisenhower in the panic after Sputnik. So it was, indirectly, a result of the space race.
Burkeman takes us through the first academic net, early email, the world wide web, search, the generativity of Web 2.0, and then speculates about where it will all be in four years’ time. He doesn’t dare to look further ahead than that, because change (not just growth) has been exponential, and you would be a fool to imagine you could see much further. It’s fun to reminisce, but it also provokes deeper thoughts about how radically the world has changed, together with our ideas about knowledge, community, the self, etc.
One nice quotation is from a science fiction story by Murray Leinster, written in 1946. Everyone has a tabletop box called a ‘logic’ that links them to the rest of the world. Look at how prescient it is:
You got a logic in your house. It looks like a vision receiver used to, only it’s got keys instead of dials and you punch the keys for what you wanna get . . . you punch ‘Sally Hancock’s Phone’ an’ the screen blinks an’ sputters an’ you’re hooked up with the logic in her house an’ if somebody answers you got a vision-phone connection. But besides that, if you punch for the weather forecast [or] who was mistress of the White House durin’ Garfield’s administration . . . that comes on the screen too. The relays in the tank do it. The tank is a big buildin’ full of all the facts in creation . . . hooked in with all the other tanks all over the country . . . The only thing it won’t do is tell you exactly what your wife meant when she said, ‘Oh, you think so, do you?’ in that peculiar kinda voice.
Another article by Tom Meltzer and Sarah Phillips gives a nostalgia trip through various internet firsts: the first browser, smiley, search engine, item sold on eBay, YouTube video, etc. My favourite entry is the well-known first webcam, which was trained on a coffee machine in Cambridge University’s computer lab so that people at the end of the corridor could get live updates on whether it was worth making the journey away from their desks.