Paper 2
In this day and age, nothing stays "the new thing" for long. Phones change constantly, iPods last less than a year before the next model arrives, and video games keep reinventing their features. One feature in particular is the controller, the device that makes the user interface interactive. Over the past few decades, games have changed and enhanced these controllers, and there is no doubt that within the next decade many more changes will come. In particular, at the rate this business is growing, games will not have handheld controllers at all. Game systems will be able to throw away the chunky, space-wasting controllers we have now and will instead be controlled with the body and even with the mind.
The Magnavox Odyssey, released in 1972, was the first home gaming console ever produced. Its games offered only one control method: moving the characters around on the screen. For this, the controller featured just three items: a horizontal movement dial, a vertical movement dial, and a reset button. It was a primitive controller for a primitive game system. Five years after the Odyssey's release, the Atari launched to the public and began the popularization of video games. Atari offered four different controllers for its system, but each of them (except the keyboard controller) had only two features: a joystick or dial for movement, and one button.
It wasn't until 1983 that video games adopted what we now treat as standard. The NES was the first system to use a D-pad and two buttons to control its games. The buttons added complexity and depth to the games while, at the same time, making the act of playing simpler.
Games continued using this D-pad-and-buttons setup until 1994, when the first PlayStation came out. Its controller design, which combined a D-pad, trigger buttons, and the usual face buttons (and, with 1997's DualShock revision, two smaller analog joysticks), was something unseen before. The complex layout gave gamers a scare at first, but that fear quickly turned into familiarity, and gamers soon stopped noticing the difference.
Then in 2006, the newest Nintendo system did something never before seen in the market's history. The Wii incorporated a controller based on the user's movements, along with the classic button/joystick mashup. This new technology branched gaming in a new direction, incorporating human movement into the user's ability to control the virtual environment on the screen before them. The Kinect for the Xbox 360 and the PlayStation Move were products of Nintendo's newest controller design. The outlier among them was the Kinect, which used no handheld controller at all: the user became the controller. Players must stay in a designated area for the system to detect their actions, but it added a new level of movement-based control to how games function.
After reading this history, one thing becomes apparent: every controller advancement has come roughly ten to twelve years after the previous one. Since the latest advancement was movement control, gamers should have some new feature to feast their eyes upon by the year 2018. Movement is already the new standard, and the Kinect has technically gotten rid of controllers, but gaming needs more, and will get more. The Kinect is starting this trend of a "no-handheld-controller" system. The problem with it now, though, is its control method: everything is controlled with grandiose movements. To look in different directions, players need to move drastically just to change their viewpoint. It is an infant technology that has not been fully figured out yet, and it shows. Spaces of gameplay are extremely limited, and there are not many worlds that are free to roam on the Kinect, since it lacks these finer controls.
To improve this control method, game designers and developers will integrate already existing technology into the Kinect, or build a newer, better system. One technology developers will implement is eye tracking, which has already been studied extensively. It would let the gamer's eyes control the eyes of the on-screen character, making movements and actions more fluid and closer to the gamer's intent. The new control styles, like the revolutionary PlayStation controller before them, will be confusing and difficult at first, but people will get used to them. Society learns to adapt to anything video games bring forth.
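To make the idea concrete, here is a minimal sketch of how gaze data might drive a game camera. It assumes a hypothetical eye tracker that reports a normalized screen position; the function and parameter names are illustrative, not taken from any real SDK.

```python
# Hypothetical sketch: turning eye-tracker gaze data into camera movement.
# Assumes the tracker reports where the player is looking as a normalized
# screen coordinate, with (0, 0) at the top-left and (1, 1) at the bottom-right.

def gaze_to_camera(gaze_x, gaze_y, fov_deg=90.0, dead_zone=0.1):
    """Convert a normalized gaze point into yaw/pitch offsets in degrees.

    A small dead zone around the screen center keeps the camera steady
    when the player is simply reading the middle of the screen.
    """
    # Re-center so (0, 0) is the middle of the screen, range -0.5..0.5
    dx = gaze_x - 0.5
    dy = gaze_y - 0.5

    # Ignore tiny deviations near the center
    if abs(dx) < dead_zone:
        dx = 0.0
    if abs(dy) < dead_zone:
        dy = 0.0

    # Scale the offset to a fraction of the field of view
    yaw = dx * fov_deg
    pitch = -dy * fov_deg  # screen y grows downward; pitch grows upward
    return yaw, pitch
```

Looking at the center of the screen leaves the camera still, while a glance toward the edge swings the view in that direction, so the character's gaze follows the player's.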
This eye tracking will make video games with no handhelds much easier and more efficient to use, but it won’t stop there. As long as users have to perform certain actions (pushing buttons, waving a hand, etc.) in order to control things, there will always be dissatisfaction and unwanted results. Gamers need to be able to make sure their actions are resulting in what they planned. To accomplish this, developers will give video games the ability to read minds.
Yes, it sounds scary: machines inside people's heads, prodding into each and every thought to judge what task to perform next. That is not exactly what I mean, though. Every gamer has had the experience of the on-screen action being different from what they intended. Just a couple of days before writing this, it happened to me in Gears of War 3. Instead of diving in one direction, I grabbed cover on a piece of wall. It ended poorly for my alien avatar, a mangled, bloody heap of limbs, and it was not what I wanted to happen. I wanted to dive, not take cover. If only the game knew my thoughts were on the former and not the latter.
This is what I am talking about. Games will not prod into all of the gamer's thoughts. Rather, by the year 2021, games will be able to recognize what action the user intends, especially in situations like that one. The technology will first be tested alongside handheld controllers, just to make sure everything works tightly, but games will then venture away from those physical extensions and pair this new technology with the movement controls we have now, and will have, in the future.
This added technology will immerse the gamer much deeper in the game space. Immersion, the feeling of being part of a world you are not, is based on believability. Graphics, sound, and even story get the most attention for improving it, but once this new controller style is nailed down, that is where all the talk about immersion will be. Feeling as if one is the character is the most important part, and there is no better way to achieve it than making the user control the character with the exact same movements they wish to perform in the game space. If the user wants to tap an NPC on the shoulder and then look away as if they had nothing to do with it, they will be able to do that, by controlling the character with those exact muscle movements. It is like a virtual Halloween costume: the gamer assumes the role of something imaginary, and the costume's movements correspond to the real person underneath. Unlike these days, where the game character defines the movement, the movement of the user will actually define the character.
The only way this is possible, though, is by adding these thought- and movement-based control systems. Technology moves fast. Devices are created left and right, each quickly replacing the former. Games will, without a doubt, continue to follow that pattern with their control devices, and it will come down to the gamer controlling everything with nothing but their biological selves.

Post Author: ChrisOndracek