Digital reflections

I was interviewed by the site Create Hub recently, around the idea of “digital reflections” and our everyday relationships with technology. An excerpt is below; click through for the full discussion.

Q: You recently gave a talk on “Digital Reflections” at Southbank Centre. What was the talk about?

A: I was looking at some of our daily relationships with technology – and how these relationships can shape how we think and feel. Many of us have an incredibly intimate relationship with our phones, for example. They are the first objects we touch when we wake in the morning, the last objects we touch when we go to sleep at night; they are always with us, bringing with them many of the things we care about most. Much of the time, this is great. But I worry that if we have an unexamined relationship with tools like our phones, we risk developing a distorted sense of ourselves; of being excessively influenced by our onscreen reflections and projections.

I struggle with this myself. I get anxious if people don’t reply to my emails or texts fast enough; I feel like I’m missing out, or like my life is inadequate, when I scroll through other people’s timelines; I risk turning every moment of every day into the same kind of time, because I always have the same options available onscreen with me. I risk living in a kind of technological bubble – and being seduced by how cosy and connected it feels in there. And so I try not to react by violently opposing technology, but instead to put it in perspective; to use and experience it differently; to build different kinds of time and space into my life.

Click here to read the full interview

What will our descendants deplore about us?

I have a new essay on the BBC Future website today, exploring a question that I took to a selection of the world’s brightest minds: from James Lovelock to Peter Singer, via Tim Harford and Greg Bear. The opening is below, and you can read the whole thing on the BBC Future website.

Earlier this year, I had a discussion that made me ask a disconcerting question: how will I be viewed after I die? I like to think of myself as someone who is ethical, productive and essentially decent. But perhaps I won’t always be perceived that way. Perhaps none of us will.

No matter how benevolent the intention, what we assume is good, right or acceptable in society may change. From slavery to sexism, there’s plenty we find distasteful about the past. Yet while each generation congratulates itself for moving on from the darker days of its parents and ancestors, that can be a kind of myopia.

I was swapping ideas about this with Tom Standage, author and digital editor of The Economist. Our starting point was those popular television shows from the 1970s that contained views or language so outmoded they probably couldn’t be aired today. But, as he put it to me: “how easy it is to be self-congratulatory about how much less prejudiced we are than previous generations”. This form of hindsight can be dangerously smug. It can become both a way of praising ourselves for progress rather than looking for it to continue, and of distracting ourselves from uncomfortable aspects of the present.

Far more interesting, we felt, is this question: how will our generation be looked back on? What will our own descendants deplore about us that we take for granted?

Click here to continue reading

What is Apple’s command key all about?

Over at Medium, I’ve just posted my latest piece of techy-etymological exploration, looking this time at the unlikely origins of Apple’s command key – ⌘ – in pre-medieval Scandinavia.

Sometimes known as the St John’s Arms, it’s a knot-like heraldic symbol that dates back at least 1,500 years in Scandinavia, where it was used to ward off evil spirits and bad luck. A picture stone discovered in a burial site in Havor, Gotland, prominently features the emblem and dates from 400-600 AD. It has also been found carved on everything from houses and cutlery to a pair of 1,000-year-old Finnish skis, promising protection and safe travel.

It’s still found today on maps and signs in northern and eastern Europe, representing places of historical interest. More famously, though, it lurks on the keyboard of almost every Apple computer ever made – and, for everyone else, at Unicode code point U+2318, under the designation “place of interest sign.”
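As a quick aside of my own (an illustration, not part of the piece itself), the character is easy to summon directly from that code point; this minimal Python sketch uses the standard library’s unicodedata module:

    import unicodedata

    command_key = chr(0x2318)                 # Unicode code point U+2318
    print(command_key)                        # ⌘
    print(unicodedata.name(command_key))      # PLACE OF INTEREST SIGN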

Simply click through here to read the rest of the piece.

Automated ethics

My latest essay for Aeon magazine asks when it’s ethical to hand our decisions over to machines, and when external automation becomes a step too far. The first few paras are below: read the rest on the magazine’s site.

For the French philosopher Paul Virilio, technological development is inextricable from the idea of the accident. As he put it, each accident is ‘an inverted miracle… When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution.’ Accidents mark the spots where anticipation met reality and came off worse. Yet each is also a spark of secular revelation: an opportunity to exceed the past, to make tomorrow’s worst better than today’s, and on occasion to promise ‘never again’.

This, at least, is the plan. ‘Never again’ is a tricky promise to keep: in the long term, it’s not a question of if things go wrong, but when. The ethical concerns of innovation thus tend to focus on harm’s minimisation and mitigation, not the absence of harm altogether. A double-hulled steamship poses less risk per passenger mile than a medieval trading vessel; a well-run factory is safer than a sweatshop. Plane crashes might cause many fatalities, but refinements such as a checklist, computer and co-pilot insure against all but the wildest of unforeseen circumstances.

Similar refinements are the subject of one of the liveliest debates in practical ethics today: the case for self-driving cars. Modern motor vehicles are safer and more reliable than they have ever been – yet more than 1 million people are killed in car accidents around the world each year, and more than 50 million are injured. Why? Largely because one perilous element in the mechanics of driving remains unperfected by progress: the human being.

Continue reading the essay here

Big data and artificial idiocy

I wrote a piece earlier this year for the Guardian about the perils and delights of big data, and the special stupidity it can breed. The first few paras are below; click through for the whole piece.

Massive, inconceivable numbers are commonplace in conversations about computers. The exabyte, a one followed by 18 zeroes worth of bytes; the petaflop, one quadrillion calculations performed in a single second. Beneath the surface of our lives churns an ocean of information, from whose depths answers and optimisations ascend like munificent kraken.
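For a sense of scale, here’s a minimal Python aside of my own, spelling out the standard definitions mentioned above (not anything from the Guardian piece):

    exabyte_bytes = 10**18       # an exabyte: a one followed by 18 zeroes worth of bytes
    petaflop_ops = 10**15        # a petaflop: one quadrillion calculations in a second

    print(f"{exabyte_bytes:,}")  # 1,000,000,000,000,000,000
    print(f"{petaflop_ops:,}")   # 1,000,000,000,000,000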

This is the much-hyped realm of “big data”: unprecedented quantities of information generated at unprecedented speed, in unprecedented variety.

From particle physics to predictive search and aggregated social media sentiments, we reap its benefits across a broadening gamut of fields. We agonise about over-sharing while the numbers themselves tick upwards. Mostly, though, we fail to address a handful of questions more fundamental even than privacy. What are machines good at; what are they less good at; and when are their answers worse than useless?

Click here to read the whole piece on the Guardian site

Technology’s greatest myth

I wrote this at the end of last year as my final column for BBC Future, aiming to make 2014 a year for longer essays and projects (and paying attention to my young son). It’s a reflection on a couple of years of fortnightly writing about technology, ideas, and tech’s larger place in our sense of the world.

Lecturing in late 1968, the American sociologist Harvey Sacks addressed one of the central failures of technocratic dreams. We have always hoped, Sacks argued, that “if only we introduced some fantastic new communication machine the world will be transformed.” Instead, though, even our best and brightest devices must be accommodated within existing practices and assumptions in a “world that has whatever organisation it already has.”

As an example, Sacks considered the telephone. When it was introduced into American homes during the last quarter of the 19th Century, instantaneous conversation across hundreds or even thousands of miles seemed close to a miracle. For Scientific American, editorializing in 1880, this heralded “nothing less than a new organization of society — a state of things in which every individual, however secluded, will have at call every other individual in the community, to the saving of no end of social and business complications…”

Yet the story that unfolded was not so much “a new organization of society” as the pouring of existing human behaviour into fresh moulds: our goodness, hope and charity; our greed, pride and lust. New technology didn’t bring an overnight revolution. Instead, there was strenuous effort to fit novelty into existing norms. Continue reading

On video games: difficulty is the point, not the problem

Here’s a piece exploring the difficulties of discussing games compared to other media. It was written first for the book Early Modernity and Video Games (Cambridge Scholars Publishing, February 2014), then republished with Wired and Ars Technica – and, now, here.

Difficulty is built into video games in a different way than in any other medium.

A movie may be difficult, conceptually or in terms of subject matter; it may be hard to understand or to enjoy. Yet all you have to do to access its entirety is to sit and watch from beginning to end.

Written words can be still more difficult. For these, you need a formidable mastery of language, concepts and context; you must convert text into sense. Still, the raw materials are all there for you to work with. You do not have to pass a tricky test in order to get beyond the first chapter, or find yourself repeatedly sent back to the start of a book if you fail. You do not have to practise turning a page at precise moments in order to progress.

Yet this is what the difficulty of many video games embodies: a journey that the majority of players will not complete, filled with trials, tribulations and inexorable repetitions. Continue reading

Clouds, autocomplete and tweets

I’ve been experimenting with re-publishing a few of my recent columns on Medium. If you’ve arrived here in hope of reading them, simply follow (and share!) the links below.

I am the algorithm – on language, thought and ten more years of Twitter

Is autocomplete evil? – how a machine’s whispers in my ears are changing the way I think

The tyranny of the cloud – why you should think twice before you hit “upload”

Why computers will become invisible – our extraordinary intimacy with unseen technology

The meaning of Medium

I’ve been experimenting recently with the young writing space Medium (my own words there are gathered here). Its potted pitch, courtesy of Twitter co-founder Ev Williams, is “a better place to read and write” – and it certainly delivers in terms of interface and ease of reading.

Behind the scenes, composing on Medium takes What You See Is What You Get to an elegant extreme, with as few options as sensibly possible left to the author: you can determine your text, title, subtitle, illustrations, bold, italics, quotations, links, and that’s about it. Almost every other aspect of formatting is automatic, with acres of white space in the best modern taste atop responsive design fit for any device.

All this, Williams explains, is about creating “a beautiful space for reading and writing — and little else. The words are central. They can be accompanied by images to help illustrate your point. But there are no gratuitous sidebars, plug-ins, or widgets. There is nothing to set up or customize.”

For a writer, it’s a little addictive. I’m typing this into the back end of my own website, which runs on WordPress. The interface is functional, the options and sidebars useful and not too overwhelming. I have a fairly good idea of what my post will look like once I hit “publish.” After Medium, though, it all feels a little weighty: a series of small barriers between these thoughts and their audience that, once lowered, loom disconcertingly large when they return. Continue reading

The attention economy: what price do we pay?

I’ve written my third essay this year for Aeon magazine, exploring the idea of the attention economy – and what it may cost us to pay for content with our time, clicks, tweets and endlessly aggregated attention.

How many other things are you doing right now while you’re reading this piece? Are you also checking your email, glancing at your Twitter feed, and updating your Facebook page? What David Foster Wallace labelled ‘Total Noise’ five years ago — ‘the seething static of every particular thing and experience, and one’s total freedom of infinite choice about what to choose to attend to’ — is today just part of the texture of living on a planet that will, by next year, boast one mobile phone for each of its seven billion inhabitants. We are all amateur attention economists, hoarding and bartering our moments — or watching them slip away down the cracks of a thousand YouTube clips.

If you’re using a free online service, the adage goes, you are the product. It’s an arresting line, but one that deserves putting more precisely: it’s not you, but your behavioural data and the quantifiable facts of your engagement that are constantly blended for sale, with the aggregate of every single interaction (yours included) becoming a mechanism for ever-more-finely tuning the business of attracting and retaining users.

Read the entire essay online at Aeon here