Beyond the Mouse: 5 Ways We’ll Interface With Future Computers

Since the dawn of personal computing, the mouse has served as the link between human and machine. As computers have become ever more powerful and portable, this basic point-and-click interface has remained tried, true and little changed.

But now, new ideas and products are offering revolutionary ways for people to interact with their computers. In the tradition of squeezing the most out of machines in the least amount of time, the mouse and even the keyboard might someday come to be relics of a slower, bygone era. Here are five emerging technologies likely to shake up how we get computers to follow our commands.

Multi-touch

Rather than pointing with a mouse or laptop touchpad and then double-clicking on an icon or dragging a scroll bar, for instance, "multi-touch" lets users input complex commands with simple finger gestures. A well-known example is the "pinching" of an Apple iPhone screen with two fingers to zoom, or a two-fingered "swipe" to go through Web pages.
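
For the technically curious, the logic behind a gesture like the pinch is surprisingly simple. The sketch below is a rough illustration (not Apple's actual code): it tracks two touch points and turns the change in distance between them into a zoom factor.

```python
import math

def distance(p1, p2):
    """Straight-line distance between two touch points given as (x, y)."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1])

def pinch_zoom_factor(start_touches, current_touches):
    """
    Turn a two-finger pinch into a zoom factor.

    Each argument is a list of two (x, y) touch points: where the fingers
    were when the gesture began, and where they are now. A factor above 1
    means the fingers spread apart (zoom in); below 1 means they pinched
    together (zoom out).
    """
    start_span = distance(*start_touches)
    if start_span == 0:
        return 1.0  # fingers started at the same point; nothing to scale
    return distance(*current_touches) / start_span

# Fingers start 100 pixels apart and spread to 150 pixels apart -> 1.5x zoom.
print(pinch_zoom_factor([(100, 200), (200, 200)],
                        [(75, 200), (225, 200)]))
```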

Many other cell phone companies have followed the multi-touch lead of Apple, which has made extensive use of it in its iPhone, iPod touch, MacBook and the soon-to-be-released iPad. And the top surface of Apple's new Magic Mouse is actually a gesture-recognizing touch pad.

One advantage of bringing multi-touch to regular computers would be increasing the pace at which commands can be entered: Multiple fingers trump the single coordinate of an onscreen mouse pointer.

But two key hurdles stand in the way. First, people cannot comfortably reach out and touch computer screens for long periods of time. Second, users block the very screen they're trying to view when they multi-touch.

One proposed way around these problems is 10/GUI, a multi-touch interface conceived by graphic designer R. Clayton Miller. Users rest their hands on what looks like a large laptop touchpad (the keyboard sits just above this pad), putting all ten fingers to use in navigating the screen and performing actions.

On the computer screen, 10 small, see-through circles appear, representing the user's fingers. Pressing and moving with a certain number of fingers lets the user access application menus, scroll through pages, and so on.
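
To give a rough sense of how such a scheme might work under the hood, here is a toy sketch in which the number of fingers pressed selects the action. The specific mapping is invented for illustration and is not taken from Miller's design.

```python
# A toy mapping from how many fingers are pressed on the pad to what the
# accompanying movement does. The actions below are invented for
# illustration, not taken from Miller's 10/GUI specification.
ACTIONS_BY_FINGER_COUNT = {
    1: "move pointer",
    2: "scroll page",
    3: "drag window",
    4: "switch application",
}

def interpret_gesture(touch_points):
    """touch_points: list of (x, y) positions currently pressed on the pad."""
    return ACTIONS_BY_FINGER_COUNT.get(len(touch_points), "ignore")

# Two fingers down -> "scroll page"; anything unmapped -> "ignore".
print(interpret_gesture([(10, 20), (30, 25)]))
```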

Gesture sensing

Beyond motion sensing, which a rolling mouse already does handily, or the iPhone's pinch, gesture sensing can track movement in three dimensions.
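
As a rough illustration of what that means in software, the toy sketch below classifies a hand movement from how its tracked 3-D position changes between two readings. The axis convention and thresholds are assumptions made purely for the example.

```python
# Hypothetical sketch of 3-D gesture sensing: classify a hand movement
# from how its tracked (x, y, z) position changes between two readings.
# Axis convention (z shrinks toward the screen) and the threshold are
# assumptions for illustration only.

def classify_motion(start, end, threshold=0.1):
    """
    start, end: (x, y, z) hand positions in meters, e.g. from a depth camera.
    Returns the dominant direction of movement, or "hold" if the hand
    barely moved.
    """
    dx, dy, dz = (e - s for s, e in zip(start, end))
    moves = {"right": dx, "left": -dx,
             "up": dy, "down": -dy,
             "push": -dz, "pull": dz}  # push = toward the screen (smaller z)
    direction, amount = max(moves.items(), key=lambda kv: kv[1])
    return direction if amount > threshold else "hold"

# Hand slides 30 cm to the right -> "right".
print(classify_motion((0.0, 0.0, 1.0), (0.3, 0.05, 1.0)))
```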

In recent years, Nintendo's Wii gaming console has introduced gesture sensing to the masses. A plethora of other manufacturers have recently put out gesture-sensing products, though mostly for gamers.

One company likely to target the average desktop user down the road is Los Angeles-based Oblong Industries, Inc., maker of a product called g-speak that serves as an "operational environment." A user wearing special gloves stands in front of a giant wall-mounted screen and a tabletop monitor. Using a range of gestures akin to a traffic cop's, as well as finger pistol-shooting, the user can move images and data from one screen to the other. (A technology very similar to Oblong's was featured in Steven Spielberg's 2002 film “Minority Report.”)

Christian Rishel, chief strategy officer at Oblong, said this interface lets people sift through massive data sets quickly, "when you're flooded with data and you need to find the right thing at the right time."

Early adopters of the expensive interface include the military and oil companies, Rishel said, but he thinks that within five to 10 years some form of this technology will be built into all computers.

Rishel thinks that taking human-computer interaction beyond the computer's two-dimensional screen will make the time we spend with our machines more physical, rewarding and effective.