Philippe Masset

Engineer at Buffer.

Tomorrow's Human-Computer Interfaces

March 2013

Interface: A point where two systems, subjects, organizations, etc. meet and interact.

Interfaces' humble beginnings

By definition, when we want to interact with something or someone other than ourselves, we have to use an interface.

Between humans, regardless of which senses we use to communicate (hearing, sight, touch), the interface is, most of the time, our own body.

We talk, smile, make gestures, shake hands.
This is our body communicating with another body.

When writing was invented around 5,200 years ago, we were still communicating with each other, but we had introduced the first widespread communication interface: stone.
Egyptians used papyrus, and later the whole world switched to paper, but you get the idea.

Well before that, 2.6 million years ago, we started using stone tools.
These tools were extensions of ourselves, and interfaces themselves.

This is as far back as the history of interfaces goes: more than two million years ago.

Human-Computer Interfaces

Fast forward to modern times, with the invention of the computer.

Computers are the ultimate extensions of ourselves.
We can do virtually anything with them, from modeling entire buildings to exploring our whole planet, land and sea.

So it shouldn't come as a surprise that interfaces between computers and ourselves, or Human-Computer Interfaces, never stop evolving.

Input interfaces

Depending on the definition of "computer", we first started programming them using punched cards.
Write a program on paper, keypunch it, and feed the punched cards into the computer.
If you were lucky, the expected data would be printed out on paper.

An Apple keyboard

We then started typing on keyboards in the 1970s, when they were first used with Command-Line Interfaces.
To this day, they remain the most popular means of data entry into a computer.

With the advent of Graphical User Interfaces came pointing devices, and most notably, the mouse.

Today's smartphones come packed with sensors, such as accelerometers and gyroscopes, which can be used for motion detection.
Specialized cameras (stereo, depth-aware) also serve as image-based gesture detectors.

And with speech recognition starting to be reliable enough for day-to-day use, it only makes sense to start using microphones to send text and commands to a computer.

So, even with a handful of input interfaces available to us, keyboards and mice have remained our chief means of input for more than 40 years.


Output interfaces

Essentially everything we get out of computers today comes by way of screens.
They've been the most used, and almost the only, computer output interface since we got tired of printing data out for the user to see.

Although there have been improvements to what we see on these screens (think GUIs) and how we see it (higher density, higher resolution, color, 3D), they remain screens: flat surfaces you have to look at.

But screens are also input interfaces. Touchscreens are now commonplace: smartphones, tablets, laptops, cars...

We finally found an input interface good enough to (partially) replace the keyboard-and-mouse combination.
But since it's far from perfect, don't expect these forty-year-old technologies to disappear just yet; we'll have to find something better first.

Mediated reality

Google Glass

When computer-generated images are shown to someone instead of, or in addition to, the real world, a whole lot of possibilities open up.

Today, anybody can achieve this with a smartphone; but since you have to look at the outside world through your phone, it isn't really immersive or convenient.

But more specialized devices let users do a variety of impressive things, literally empowering them: infrared vision, night vision, brightness reduction, informative text or media overlays...

That's what head-mounted displays (HMDs), virtual retinal displays or bionic contact lenses do, in the same way a simple HUD would; but closer to the body.

Some HMDs look like opaque ski masks: your whole field of view is covered with screens that show you what's outside in real time, but only after the images have been modified.

Other HMDs are more discreet, and even if some aren't transparent, they only occupy part of the user's field of view. These basically look like glasses.

And this is what we'll use tomorrow: neat, clever, unobtrusive headgear packed with tons of computational power and networking capabilities.

Brain-computer interfaces

But we can try to look even further into what's next.
Did you know that research on brain-computer interfaces (BCIs) had already started in the 1970s?

Although BCIs aren't made for entertainment or even widespread use today, they've achieved quite a number of feats:

  • Bringing vision back to individuals with acquired blindness
  • Allowing paralyzed people to move artificial limbs just by thinking about it
  • Advancing the detection of imagined words, in which the US military is also interested
  • Reconstructing what a person is seeing

Some BCIs need high quality signals from the brain, and are thus placed right inside of it. Other BCIs are less invasive, and can just be strapped to the head.

But no matter how freaky that all sounds, it'll surely be part of our lives someday.
Or probably not ours, but those of people a few hundred years from now, when self-improvement brain chips are available to the public. Because that's what everybody secretly wishes for, right?


Anyway, as we won't use BCIs in the near future, let's focus on what we will: mediated reality glasses.

Because they're much better interfaces than today's smartphones.
You just talk and watch; no hands, no watching a small screen, no carrying something in your pocket. No bullshit.

So, yeah. Tomorrow – that's when we'll all have a pair of these world-mediating glasses.


"interface". Oxford Dictionaries. April 2010. Oxford Dictionaries. April 2010. Oxford University Press. 01 March 2013 <>.