First-person games exist to satisfy the desire to experience another world firsthand and to embody a character other than our own. This genre speaks to that innate human longing more directly than any other, and some games realize it more fully than others.
First-person shooters have evolved considerably over the past thirty years, with every leap in quality tied to some furthering of the player’s sense-of-presence within the world. Sense-of-presence can be defined as the experience of feeling present in a body within a virtual reality, and the approaches, philosophies, and methodologies for attaining this ideal are far too extensive to cover in any single article. For now, we can briefly analyze the past to identify the major innovations in first-person games of significance to the field of immersive virtual reality, and project one possible trajectory for the future development of sense-of-presence in first-person games. The push for photorealism, alongside cinematic design and game mechanics, is the industry’s current avenue for supporting a higher sense-of-presence. But this longstanding drive for higher-quality graphics is reaching a point of diminishing returns: each new graphical advance now contributes less and less to the sense-of-presence within a game’s world. Rather than continue to chase ever more minute graphical enhancements, we should consider an alternative approach rooted in hardware. I offer sensorially immersive head-mounted displays as the strongest potential solution to the limits we currently find in first-person virtual experiences.
For the sake of simplicity, I use the term “first-person shooter” synonymously with “first-person game” and “first-person experience,” as the differences between these terms are irrelevant to this blog post. Rather than dwell on the “gameness” of any given first-person virtual experience, I will focus on the quality of the experience in terms of presence within the virtual world. In addition, “virtual reality” is an amorphous term that has come to mean almost anything that is not natural reality. The more specific term “immersive virtual reality” at least rules out non-immersive virtual realities such as computer interfaces, yet still includes virtual worlds that are socially immersive, such as Second Life and MMORPGs. To distinguish VR hardware from this inflation of language, I use “sensorially immersive virtual reality” to mean virtual reality hardware that directly enhances a player’s sensory experience within a virtual world, whether through visual, auditory, olfactory, gustatory, or tactile means. A head-mounted display or CAVE system, for instance, enhances the visual sense of immersion.
To grasp how sensorially immersive virtual reality hardware might change first-person experiences in the future, we must briefly trace the evolution of first-person gaming through the lens of presence within virtual environments. Maze War (1974) arrived long before its time, when graphics technology was still a long way off from representing anything that resembled the real world. Yet this first FPS emerged from the then-novel idea that players could experience action through the eyes of a game’s protagonist, as if the player herself were moving through the virtual world.
After Maze War, first-person shooters were here to stay. Graphics technology continued to improve, but it wasn’t until the late 1980s that a graphics engine could represent more than wireframe models. Driller (1988) introduced the color-filled polygon, a major revolution in 3D graphics provided by the Freescape Engine. It also allowed the player not only to look left and right, in the ninety-degree increments found in Maze War, but to look up and down as well, for the first time letting players explore a 3D environment in all its dimensions. To put things in perspective, however, the game could still barely chug along faster than two frames per second.
Then Wolfenstein 3D (1992) brought a suite of modern features that are now taken for granted in first-person gaming, such as switching between multiple weapons, health and ammo pick-ups, and fluid horizontal orientational movement. As the FPS continued to evolve from this template, games such as Doom (1993) brought us even more robust environments, with texture mapping and non-grid spatial arrangements.
However, these worlds were still divided into simplified levels or stages, and health and ammo still generally resembled floating objects placed arbitrarily throughout the world. A logically realistic world could not yet be realized at this stage in game engine technology. Then, in 1995, LucasArts’ Dark Forces came along, the first in the genre to make full use of 3D environments. Players could look up and down, as well as jump and crouch.
It sounds standard now, but until that point first-person shooters took place on flat planes. The mouse had yet to be utilized as a looking device until Dark Forces implemented it, enhancing the feeling of control – or sense of presence – as the player explored the world through the character’s eyes. In addition, the developers started a trend with what they called “active environments,” in which the world acted independently of the player. In Dark Forces, spaceships came and went, machinery went about its business, and platforms shifted autonomously.
While this “active environment” design added an extra dimension to the experience, it wasn’t until Half-Life (1998) that world design really took off, giving the player the feeling of exploring a real, living, dynamic environment. This is yet another revolution that today we take for granted: for the first time, game-related items had a logical purpose within the world. Ammo came from dead soldiers, health was retrieved from first aid kits and abandoned medical centers, and enemies had a logical and narrative reason for being wherever the player found them.
Gone were the days of floating ammo and glowing health placed arbitrarily throughout the levels. Even the concept of “levels” in the conventional sense disappeared, transforming into something much more fluid, with a natural narrative progression. Ron Dulin, in GameSpot’s original review, made a curious point about the game:
“Suffice it to say that Half-Life isn’t a great game because of its story; it’s a great game because of how it presents that story… There are scripted events in the game. There are opening and closing scenes. But they all occur naturally within the game environment. It may sound simple, but it goes a long way toward helping create a believable world.”
Today, virtual worlds are reaching a pinnacle in their quality: narrative design, scripted sequences, photorealistic models and textures, increasingly accurate physics systems, convincing dynamic lighting, real-time reflections, and the list goes on. And yet, as these worlds approach perfection, the input and output devices we use to interact with them remain technologically stagnant. If we are to continue progressing, we must turn our attention from the worlds themselves and re-evaluate the hardware we use to perceive them. It took numerous innovations, over the course of several decades, to bring first-person games from their initial wireframe models to today’s fully realized dynamic universes. By taking careful note of these past innovations and extrapolating their trajectory, we can identify potential hardware equivalents that can bring first-person gaming to its next evolutionary phase, and ultimately realize the dream of sensorially immersive virtual reality.
Although the FPS was revolutionized by 360-degree horizontal orientational movement in Wolfenstein 3D, today we are still forced to explore this freedom through a field-of-view between thirty and forty-five degrees. Computer monitors and televisions are not the optimal way of experiencing a game in first-person; such a narrow FOV is wildly unrealistic compared to human eyesight. Considering the progress we have made in graphics technology over the past ten years, it is surprising that this shortcoming hasn’t been seriously addressed by console developers. We need to throw out our monitors and adopt a better output device with a wider field-of-view. Currently, Head-Mounted Displays and CAVE systems are emerging as excellent solutions for giving players peripheral vision and realistic human eyesight within first-person games.
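To make the mismatch concrete, here is a minimal, self-contained C++ sketch of the underlying geometry. The figures used (a forty-five-degree horizontal FOV at the upper end of the range cited above, and roughly two hundred degrees for human binocular vision) are illustrative assumptions, not measurements:

```cpp
#include <cmath>
#include <cstdio>

// Vertical FOV implied by a horizontal FOV at a given aspect ratio
// (width/height), using the standard perspective-projection relationship.
double verticalFovDeg(double horizontalFovDeg, double aspect) {
    const double kPi = 3.14159265358979323846;
    double h = horizontalFovDeg * kPi / 180.0;  // degrees to radians
    return 2.0 * std::atan(std::tan(h / 2.0) / aspect) * 180.0 / kPi;
}

int main() {
    double monitorHFov = 45.0;   // upper end of the range cited above
    double humanHFov   = 200.0;  // rough horizontal span of binocular vision
    std::printf("monitor: %.0f deg horizontal, %.1f deg vertical at 16:9\n",
                monitorHFov, verticalFovDeg(monitorHFov, 16.0 / 9.0));
    std::printf("human eyesight: roughly %.0f deg horizontal\n", humanHFov);
    std::printf("the screen covers about %.0f%% of our natural horizontal view\n",
                100.0 * monitorHFov / humanHFov);
    return 0;
}
```

However the exact numbers shake out, the point stands: the window through which we view these worlds spans only a fraction of what our eyes naturally cover.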
Turning to another significant innovation of the past: the mouse was introduced as an input device in Dark Forces, revolutionizing how the FPS was played. However, like the computer monitor, the mouse – and its cousin, the analog stick – is now an aging technology for use with videogames. In real life, where we look with our eyes is separate from where we point with a gun, and where we point with a gun is separate from where we run with our legs. The point-of-view, the crosshair, and the directional movement are all separate things, and yet in first-person games they are all compacted into a single function, controlled with the mouse.
Despite this shortcoming, there have been recent attempts to improve upon basic mouse functionality. The Razer Hydra controller, for instance, can enhance the FPS by allowing the crosshair to be operated somewhat separately from the player’s point-of-view. But although the device uses magnetic motion tracking as the input for moving the player’s crosshair, the crosshair is still bound to the point-of-view, forcing the player’s view to follow wherever the crosshair wanders. I again offer Head-Mounted Displays as a solution, this time for the purpose of letting players look independently both of where they point their weapon and of the direction they are moving.
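As a rough sketch of what this decoupling might look like in code (the names and structure here are invented for illustration, not drawn from any real engine or device API), consider treating the point-of-view, the crosshair, and movement as three independent input channels:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };

// Three separate orientation channels, as they are in real life.
struct PlayerInput {
    Vec3 gazeDir;  // where the eyes look (e.g. head-mounted display tracking)
    Vec3 aimDir;   // where the weapon points (e.g. a tracked hand controller)
    Vec3 moveDir;  // where the legs carry us (e.g. keyboard or analog stick)
};

// A conventional mouse collapses gaze and aim into a single vector.
PlayerInput fromMouse(Vec3 mouseLook, Vec3 move) {
    return { mouseLook, mouseLook, move };  // gaze == aim, by necessity
}

// Decoupled hardware lets each channel be driven independently.
PlayerInput fromTrackedHardware(Vec3 head, Vec3 hand, Vec3 move) {
    return { head, hand, move };            // gaze, aim, and movement all free
}

int main() {
    Vec3 forward{0.0f, 0.0f, 1.0f}, left{-1.0f, 0.0f, 0.0f}, up{0.0f, 1.0f, 0.0f};
    PlayerInput mouse   = fromMouse(forward, forward);
    PlayerInput tracked = fromTrackedHardware(up, left, forward);
    std::printf("mouse aim z: %.0f, tracked aim x: %.0f\n",
                mouse.aimDir.z, tracked.aimDir.x);
    return 0;
}
```

Under this framing, the mouse is simply the degenerate case in which two of the three channels are forced to agree; tracked head- and hand-mounted hardware is what finally lets them diverge.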
This article is continued in the next post, The Conquest of Presence in First-Person Shooters Part 2: The Coming Revolution