An essay on the emerging culture of gaming, first published in Prospect Magazine, June 2008
Mogwai is cutting down the time he spends playing World of Warcraft. Twenty hours a week or less now, compared to a peak of over 70. It’s not that he has lost interest—just that he’s no longer working his way up the greasy pole. He’s got to the top. He heads his own guild, has 20,000 gold pieces in the bank and wields the Twin Blades of Azzinoth: weapons so powerful and difficult to acquire that other players often (virtually) follow Mogwai around just to look at them. In his own words, he’s “e-famous.” He was recently offered $8,000 for his Warcraft account, a sum he only briefly considered accepting. Given that he has clocked up over 4,500 hours of play, the prospective buyers were hardly making it worth his while. Plus, more sentimentally, he feels his character is not his alone to sell: “The strange thing about this character is that he doesn’t just belong to me. Every item he has he got through the hard work of 20 or more other people. Selling him would be a slap in their faces.” As in many modern online games, co-operation is the only way to progress, with the most challenging encounters manageable only with the collaboration of other experienced players. Hence the need for leaders, guilds—in-game collectives, sometimes containing hundreds of players—and online friendships measured in years. “When I started, I didn’t care about the other people. Now they are the only reason I continue.”
When Mogwai isn’t online, he’s called Adam Brouwer, and works as a civil servant for the British government modelling crisis scenarios of hypothetical veterinary disease outbreaks. I point out to him a recent article in the Harvard Business Review, billed under the line “The best sign that someone’s qualified to run an internet startup may not be an MBA degree, but level 70 guild leader status.” Is there anything to this? “Absolutely,” he says, “but if you tried to argue that within the traditional business market you would get laughed out of the interview.” How, then, does he explain his willingness to invest so much in something that has little value for his career? He disputes this claim. “In Warcraft I’ve developed confidence; a lack of fear about entering difficult situations; I’ve enhanced my presentation skills and debating. Then there are more subtle things: judging people’s intentions from conversations, learning to tell people what they want to hear. I am certainly more manipulative, more Machiavellian. I love being in charge of a group of people, leading them to succeed in a task.”
It’s an eloquent self-justification—even if some, including Adam’s partner of the last ten years, might say he protests too much. You find this kind of frank introspection again and again on the thousands of independent websites maintained by World of Warcraft’s more than 10m players. Yet this way of thinking about video games can be found almost nowhere within the mainstream media, which still tend to treat games as an odd mix of the slightly menacing and the alien: more like exotic organisms dredged from the deep sea than complex human creations.
This lack has become increasingly jarring, as video games and the culture that surrounds them have become very big news indeed. In March, the British government released the Byron report—one of the first large-scale investigations into the effects of electronic media on children. Its conclusions set out a clear, rational basis for exploring the regulation of video games. Since then, however, the debate has descended into the same old squabbling between partisan factions. In one corner are the preachers of mental and moral decline; in the other the high priests of innovation and life 2.0. In between are the ever-increasing legions of gamers, busily buying and playing while nonsense is talked over their heads.
The video games industry, meanwhile, continues to grow at a dizzying pace. Print has been around for a good 500 years; cinema and recorded music for around 100; radio broadcasts for 75; television for 50. Video games have barely three serious decades on the clock, yet already they are in the overtaking lane. In Britain, according to the Entertainment & Leisure Software Publishers Association, 2007 was a record-breaking year, with sales of “interactive entertainment software” totalling £1.7bn—26 per cent more than in 2006. In contrast, British box office takings for the entire film industry were just £904m in 2007—an increase of 8 per cent on 2006—while DVD and video sales stood at £2.2bn (just 0.5 per cent up on 2006), and physical music sales fell from £1.8bn to £1.4bn. At this rate, games software, currently our second most valuable retail entertainment market, will become Britain’s most valuable by 2011. Even books—the British consumer book market was worth £2.4bn in 2006—may not stay ahead for ever.
In raw economic terms, Britain is doing rather well out of this revolution. We are the world’s fourth biggest producer of video games, after the US, Japan and Canada (which only recently overtook Britain thanks to a generous new tax regime for games companies). Here is a creative, highly skilled and rapidly growing industry at which we appear to excel. 2008, moreover, is already almost certain to top last year’s sales records thanks to the April release of the hugely hyped Grand Theft Auto IV (GTA IV), the brainchild of Edinburgh-based company Rockstar North. Worldwide, GTA IV grossed sales of over $500m in its first week, outperforming every other entertainment release in history, including the Harry Potter books and Pirates of the Caribbean films.
The media analysis that accompanied GTA IV’s triumph was of a markedly higher quality than it would have been even a few years ago. But truly joined-up thinking about the relationships between games, society and culture is still rare. The Observer, for example, let three of its higher-brow cultural critics loose on GTA IV—a quasi-anthropological exercise typified by the author and critic Bidisha’s comment that “whoever scripted these incidentals should call HBO and pitch a show, leaving the rest of the team to design more hit-and-runs,” seemingly unaware that for a budding screenwriter, scripting incidentals for games like GTA IV might represent at least as desirable a career path as pitching shows to US pay television services. On the other side, the Labour MP Keith Vaz responded to news of a reported stabbing in a queue to purchase the game in Croydon by telling the Times that GTA IV “is a violent and nasty game and it doesn’t surprise me that some of those who play it behave in this way.” Given that the GTA series has to date sold over 70m units worldwide, the fact that “some” players may be violent is hardly a revelation.
That the media stumbles and gropes when discussing video games is an old story. More dismayingly, however, similar failings are to be found in a new book by Susan Greenfield, renowned neuroscientist and head of the Royal Institution. Under the title ID: The Quest for Identity in the 21st Century (Sceptre), Greenfield sets herself the task of exploring one of the “great questions” facing us: what will interactive electronic media mean for personal identity and society over the next hundred years? Never one to avoid speculation, Greenfield ranges widely. Her central concerns are, however, easy enough to summarise: every individual’s mind is the product of a brain that has been personalised by the sum total of their experiences; with an increasing quantity of our experiences from very early childhood taking place “on screen” rather than in the world, we thus face a potentially profound shift in the nature of these minds. Specifically, Greenfield goes on to suggest, the fast-paced, second-hand experiences created by video games and the internet may inculcate a worldview that is less empathetic, more risk-taking and less contemplative than what we tend to think of as healthy. We may see people’s mental lives transposed to “the fast-paced, immediate world of screen experience: a world arguably trapped in early childhood, where the infant doesn’t yet think metaphorically.”
Greenfield’s prose—a near-continuous train wreck of redundancies, mixed metaphors and self-contradictions (“this literally incredible concept,” “an ever-changing visual image”)—is perhaps the worst enemy of her attempts to persuade. This is unfortunate, because however much technophiles may snort, she is articulating widely held fears that have a basis in fact. Unlike even their immediate antecedents, the latest electronic media are at once domestic, mobile and work-related, blurring the boundaries between these spaces, and video games are at their forefront, both in terms of the time users lavish on them and their ceaseless technological innovation. A generational rift has opened that is in many ways more profound than the equivalent shifts associated with radio or television: more alienating for those unfamiliar with new technologies, more immersive for those who are. How do lawmakers regulate something that is too fluid to be fully comprehended or controlled; how do teachers persuade students of the value of an education when what they learn at play often seems more relevant to their future than anything they hear in a classroom?
Greenfield asks plenty of compelling questions, but her answers are disappointing. “If your life story is to be unique, if you are to be truly individual, then you will need to see yourself as different from everyone else,” she postulates. True enough, but about as sociologically potent as the observation that we are made of atoms. If such criticism seems petty, it’s important to note that Greenfield’s is an especially pernicious brand of imprecision, in that it is likely only to harden opinion on both sides of the debate: those outside the gaming culture will retain their fears and prejudice, while gamers will become even more convinced that all objections to their pursuit are ill-informed and irrelevant.
Yet beneath the hysterical rhetoric of many objectors, there are eminently reasonable concerns. Spending time playing video games means not spending time on more traditional leisure activities, such as sport, reading or conventional socialising. And, seen from the outside, the benefits of playing thousands of hours of video games can be hard to pinpoint (improvements to hand-eye co-ordination notwithstanding). What would Adam’s life be like if he had been aged 15, or ten, when he started playing World of Warcraft, rather than 25? Might his life even now be better, and richer, if he didn’t spend quite so much time online?
The physiological and psychological mechanisms that electronic games harness are certainly powerful. Not all the horror stories are rumour. In September 2007, a 30-year-old Chinese man in Guangzhou died after playing an online game for three straight days—the fourth death since 2005 directly attributable to excessive game-playing. In 2007, the American Medical Association looked into defining video game addiction as a formal diagnosis, and although it ultimately rejected this idea, there are already clinics treating such addiction in China, South Korea and Amsterdam. Like almost all activities we find somehow compulsive, gaming induces our bodies to produce elevated levels of the neurotransmitter dopamine within a part of the brain known as the nucleus accumbens. Certain kinds of games are, however, especially adept at elevating levels of dopamine over long periods of time via their combination of structured tasks and varied, regular rewards. Gaming is undeniably a problem for some. While there are no agreed-upon statistics, a recent study at Stanford University suggests that men are more likely than women to respond compulsively to games, while a 2007 poll of 1,178 US children and teenagers concluded that 8.5 per cent of youth gamers (aged 8 to 18) could be classified as pathological or clinically “addicted” to playing video games.
Seeking further perspective, I spent an afternoon with a very different kind of gaming insider: Adam Martin, a lead programmer for NCE Studio, the British outpost of one of the world’s largest online games developers, NCsoft. Adam came to his post via a computer science degree at Cambridge and several IT start-ups, and is currently filling his spare hours writing games on his laptop to help him master Korean—the language of a country with the most highly evolved electronic gaming culture in the world. “Computer games teach,” he tells me. “And people don’t even notice they’re being taught. They’re having too much fun. I think the next big change will come from the use of video games in education.” But isn’t the kind of learning that goes on in games rather narrow? “A large part of the addictiveness of games does come from the fact that as you play you are noticeably getting ‘better,’ learning or improving your reflexes, or mastering a set of challenges. But humanity’s larger understanding of the world comes primarily through interaction and experimentation, through answering the question ‘what if?’ Games excel at teaching this too.”
Adam is a believer, in a quietly evangelical manner—“The thing that bugs me is this,” he says as we circle around the implications of the Byron report, “how many of the people dooming and glooming about children and games have bothered asking a child what they think, or why they play?” And the Byron report itself? “It seems a sensible approach to the issues, albeit not very well informed when it comes to gaming.” So there’s no reason to panic? “Not when probably half the British population have now played an electronic game.”
One view that chimes with these observations can be found in a book Greenfield herself invokes at several points, Steven Johnson’s Everything Bad is Good for You (Allen Lane). Since the book’s publication in 2005, Johnson’s argument in favour of what he labels the “Sleeper curve”—the steadily increasing intellectual sophistication of modern popular culture—has become something of a shibboleth for futurologists. To some, such as Malcolm Gladwell writing in the New Yorker, the book was a delightful piece of “brain candy”; to others, like the Guardian’s Steven Poole, it was “an example of a particular philistine current in computer-age thinking.” Johnson’s thesis is not that electronic games constitute a great, popular art to be set alongside the works of Dickens or Shakespeare, but that the mean level of mass culture has been demanding steadily more cognitive engagement from consumers over the last half century. He singles out video games as entertainments that captivate because they are so satisfying to the human brain’s desire to learn. It’s almost a mirror image of Greenfield’s vision. Where she sees an identity-dismantling intoxication, Johnson finds “a cocktail of reward and exploration” born of a desire to play that is active, highly personal, sociable and creative. Games, he points out, generate satisfaction via the complexity and integrity of their virtual worlds, not by their robotic predictability. Testing the nature and limits of such in-game “physics” has more in common with the scientific method than with a futile addiction, while the complexity of the problems children encounter within games exceeds that of any of the puzzles of logic and reasoning they might find at school.
A similar proposition was put to Greenfield at the launch of her book at the Royal Institution in May. Invoking Niels Bohr’s famous retort—“you’re not thinking, you’re just being logical!”—she argued that there are ways of thinking that playing video games simply cannot teach. She had a point. We should never forget, for instance, the unique ability of books to engage and expand the human imagination, and to gift us the means of more fully expressing our situations in the world. Similarly, the study and performance of music teach a unique beauty. But the patterns of player behaviour found within many popular video games consist of far more than a banal flattening of ideas and personalities, or a dryly “logical” sequence of rote moves. Take the activities of Adam/Mogwai and his fellow guild members during one of their three weekly “raiding missions” within World of Warcraft. First, a team of 25 players with a spectrum of abilities and equipment have to meet at a pre-arranged time and place within the game world, under an agreed leader. All players must remain in vocal communication, via microphones and headsets, at all times. The raid itself might take up to ten hours, and is to be conducted according to a painfully researched strategy. Essentially an assault on a heavily fortified dungeon, it will entail mass attacks on a succession of powerful computer-controlled “boss” creatures, each with unique abilities, demanding a unique attack strategy. Players with missile abilities will attack from a distance, healers will keep other players alive, while melee specialists will engage at close quarters, all to a strict timetable. The rewards gained from each encounter—usually advanced equipment—will be allocated according to an in-guild system, depending upon rank, experience, need, contribution and the whim of the guild leader. Those failing to pull their weight could face being summarily ejected.
Still more elaborate is the science fiction game Eve Online, which involves players ganging together to build spaceships. One of the first ships of the largest class took a consortium of around 22 guilds—just under 4,000 players in total—eight months to complete, a task that involved complexities of training, materials, role allocation and management that would put many companies to shame.
The complexity of games like Warcraft and Eve is not the only aspect of modern gaming to defy stereotype. Consider demographics: where once gaming was the preserve of adolescent males, players increasingly come from all age groups and both sexes. According to the Entertainment Software Association of America, the world’s largest gaming association, the average American video game player is now 35 years old and has been playing games for 12 years, while the average frequent buyer of games is 40. Moreover, 40 per cent of all players are women, with women over 18 representing a far greater portion of the game-playing population (33 per cent) than boys aged 17 or younger (18 per cent). Much of the recent growth in the value of the gaming industry has been driven by the increased diversity and affluence of its consumer base; the hard core of adolescent males is no longer central. In Britain, Ofcom’s annual Communications Market report for 2007 noted that, despite the electronic games market continuing to grow in value, significantly fewer children were playing console and computer games than two years previously (61 per cent of children aged 5-15 did so regularly in 2005, compared to 53 per cent in 2007).
Perhaps most intriguingly, the video games industry is now growing in ways that have more in common with the old-fashioned world of charades and Monopoly than with a cyber-future of sedentary, isolated sociopaths. GTA IV itself has a superb collaborative mode for online gamers, while the games that have been shifting most units in the last two years belong to a burgeoning new genre known as “social-casual”: games in which friends and relations gather round a console to compete at activities that range from playing notes on a fake electric guitar (Guitar Hero) to singing karaoke and swapping videos of their performances, X-Factor style (SingStar), or playing tennis with motion-sensitive controllers (Wii Sports). The agenda is increasingly being set by the concerns of mainstream consumers—what they consider acceptable for their children, what they want to play at parties and across generations.
These trends embody a familiar but important truth: games are human products, and lie within our control. This doesn’t mean we yet control or understand them fully, but it should remind us that there is nothing inevitable or incomprehensible about them. No matter how deeply it may be felt, atavistic fear is an inappropriate response to technology of any kind. In any case, even the “worst” games are often consumed in ways that defy critics’ fears. Take GTA IV, my pre-ordered copy of which joined the 608,999 other units sold in Britain on 29th April. The game is full of pastiche violence; of slyly explicit dialogue and ceaseless minor homages to cinema, television and music. It has an 18 certificate, and I won’t be inviting any nine-year-olds to join me in investigating its world. But the play experience is an open-ended delight of exploration and wonder: “Liberty City,” a lovingly detailed parallel New York, within which you can pass hours driving around in various vehicles, watching the sun rise and set, trying to attract the attentions of cops and then shake them off, and—in one especially memorable moment—driving a stolen ambulance off a roadbridge on to a raised section of trainline, then manoeuvring it underground and through the “Manhattan” railway network. All this is best done in company, and most of the pleasure I’ve taken from the game has involved sitting on a sofa with friends, dissecting the city and dissing each other’s driving skills with gleeful abandon. Left to my own devices, I tired soon enough of the main storyline and the tasks that have to be undertaken for it to progress. But I remain enthralled by the freedom of moving through the game’s virtual city. I even whiled away a happy half hour watching the in-game television channels and laughing at the antics of such characters as the “Republican Space Rangers.” It’s quite a thing, too, to be moved by the beams of an unreal sun setting behind a not-quite-Manhattan skyline.
My GTA experiences are obviously not cultural engagements to be set alongside attending performances of Othello, Otello or, for that matter, watching my Sopranos DVDs. But games have begun to defy some of the accusations traditionally flung at them by custodians of the higher arts: that they cannot move you deeply, or expose you to the moral frissons and complexities of a great narrative. To explore the growing field of those games that set out to do exactly these things, I talked to Justin Villiers, a scriptwriter and director who six months ago moved from the world of television and film to that of video games. “Video game titles are becoming increasingly sophisticated,” he told me. “They need to match voices and dialogue with new, more realistic graphics.” But won’t his career move be seen by many as a step down? “Games match films for scale of production. Hundreds of people work on the big ones. The console has come out of the bedroom and into the living room. And there is now a real desire to craft stories with genuine arcs, to develop complex characters and to craft whole and believable worlds. There are already games out there you could describe as art.” Such as? “There was a game back in 2001 called Ico, on the PlayStation 2. You play a little boy with horns, in a world that’s visually based on Giorgio de Chirico. The story is so simple and touching, the mise en scène is beautiful, and the characters move with such grace. That’s art. Still, in narrative terms, much of the games market is still saturated with terrible stuff: super-enhanced soldiers fighting some evil covenant or other. They’re running out of ideas for that kind of thing, which is partly why the industry is welcoming dedicated writers, directors and artists.”
Justin is represented by Sidelines, Britain’s first specialist game writers’ agency. Launched in February, it is unlikely to remain alone for long. GTA IV had a budget of around $100m and credits two lead writers, 16 minor writers, over 250 voice actors and 40 artists, plus another few hundred animators, designers, testers and researchers. Does Justin think that writing for games will ever compete with writing for films, television or even books? “It’s a different set of skills. For the writing, you have to think in non-linear ways, and of course all choices must be interactive. The player must always be in your mind and cannot be left passive. It’s fun. If you don’t have fun, the player won’t. For the game I’m dealing with now, I’m writing lines that would usually be the preserve of the most flamboyant Hollywood epics.” And does he worry about gaming as a negative influence? “Entertainment has no obligation to inform, instruct or make us better people. Having said that, games can both instruct and inform. The problem for regulators is not content. Kids have been acting out their parents’ tales of brutal war for hundreds of years. The potential problem with games is their addictiveness and the consequences of this addiction.” And to deal with this? “We must make parents aware that it’s not good to leave your kid playing games for 24 hours, just as it’s not good to leave them a 24-hour supply of fast food.” Quite. Although neither the finest fast food nor the best-crafted television can, it must be said, deliver eight hours of total immersion like a World of Warcraft raid.
So far, the dire predictions many have made about the “death” of traditional narratives and imaginative thought at the hands of video games have at best equivocal evidence to support them. Television and cinema may be suffering, economically, at the hands of interactive media. But literacy standards and book sales have failed to nosedive, and both books and radio are happily expanding into an age that increasingly looks like it will be anything but lived on-screen. Young people still enjoy sport, going out and listening to music. They like playing games with their friends, and using the internet to keep in touch and arrange meetings rather than to isolate themselves. And most research—including a recent $1.5m study funded by the US government—suggests that even pre-teens are not in the habit of blurring game and real worlds. This finding chimes with an obvious truth: that a large proportion of “problem behaviours” in relation to any medium or substance exist for resolutely old-fashioned reasons—lack of education, parental attention, security, support and experience.
The sheer pace and scale of the changes we face, however, leave little room for complacency. A month after the release of the Byron report, the Guardian published an article by Richard Bartle, a British writer and game researcher. Its thesis was brief and triumphant. “15 years from now, the prime minister of the day will have grown up playing computer games, just as 15 years ago we had the first prime minister to have grown up watching television, and 30 years ago to have grown up listening to the radio. Times change: accept it; embrace it.” Just as, today, we have no living memories of a time before the existence of radio, we will soon live in a world in which no one living experienced growing up in a society without computers. It is for this reason that we must try to examine what we stand to lose and gain, before it is too late. Susan Greenfield and others are right that there is no necessary correlation between technological and moral progress, and that unintended consequences have proliferated from all those leaps humanity has made over the last hundred and even thousands of years. In the past, such losses have barely registered in our daily lives, because those who could tell us about them were long dead. But today, with epochal change taking place on the scale of generations, our past and our future are almost simultaneous—and the joyful, absorbing complexity that games can deliver is also their greatest threat.
Within the virtual worlds we have begun to construct, players can experience the kind of deep, lasting satisfactions that only come from the performance of a complex, sociable and challenging task. Yet such satisfactions will always remain, in a crucial sense, unreal. Whatever skills it teaches and friendships it creates, an eight-hour World of Warcraft session is nevertheless solipsistic like few other activities. Is a descent into precision-engineered narcissism on the cards? I believe not: the ways we are already making and playing games show that to be human is to demand more than this. But the doomsayers are right in one important respect. If we do not learn to balance the new worlds we are building with our living culture, we may lose something of ourselves.