Designing for Natural Interfaces: Body – Lecture notes
For years, human beings have been fascinated with the idea of removing the barriers between our bodies and technology. In films, action heroes don special suits to gain extraordinary strength; in sci-fi novels, characters type on holograms that appear in the air in front of them or open doors with just a wave of a hand. But over the past few years, research into augmenting the human body has flourished; as a result, surpassing our biological abilities – or at least further leveraging them within our interactions with technology – is no longer just the stuff of fantasy. We’re still a long way from leaping over buildings like superheroes, no doubt; but increasingly, the groundwork is being laid for the clunkiness of technology to disappear, ultimately enabling a more fluid connection between our bodies and our devices.
From Finger to Face
Initial iterations of products that leverage the human body on an intimate scale are already on the market. For example, you may have already played around with face-controlled apps like Nose Zone, in which you point your face in different directions to control a laser projected from your nose. “Using our face as an input device restores the dexterity we lost when we ditched the physical keyboard,” the creators offer as rationale; but shortening the path from intention to action by removing steps like tapping a button has potential well beyond gaming. On an ethically promising note, face-controlled software could help users with disabilities interact with tech in more seamless ways: just take a look at Støj, where designer Andreas Refsgaard came up with a system dubbed ‘Eye Conductor’, which enables people with disabilities to make music digitally, using only eye movements to trigger sounds. And conveniently, albeit potentially alarmingly, in countries like China you can already pay for your midnight snack at KFC simply by smiling.
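To make the mechanic concrete, here’s a minimal sketch of how face-controlled input might work under the hood, assuming a hypothetical tracker that reports head yaw and pitch in degrees; the function name, the comfortable-range values and the screen size are illustrative assumptions, not how Nose Zone is actually implemented.

```python
def face_to_cursor(yaw_deg, pitch_deg, width=1920, height=1080,
                   max_yaw=30.0, max_pitch=20.0):
    """Map head yaw/pitch (degrees) to 2D screen coordinates.

    Angles are clamped to a comfortable range so that small head
    movements sweep the whole screen -- the basic trick behind
    using the face as a pointing device.
    """
    def clamp(value, lo, hi):
        return max(lo, min(hi, value))

    # Normalise each angle to [0, 1] across its comfortable range.
    x = (clamp(yaw_deg, -max_yaw, max_yaw) + max_yaw) / (2 * max_yaw)
    y = (clamp(pitch_deg, -max_pitch, max_pitch) + max_pitch) / (2 * max_pitch)

    # Scale to pixel coordinates.
    return round(x * (width - 1)), round(y * (height - 1))
```

Looking straight ahead (yaw and pitch of zero) lands the cursor in the centre of the screen, and turning past the comfortable range simply pins it to the edge, so the wearer never “loses” the pointer.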
Video — Leap Motion
The Original Input Device
When considering these developments, it’s helpful to keep “Naturalism” in mind—the concept experimental computing tech companies like Leap Motion embrace in their approach to developing technology. Leap Motion describe Naturalism as “the ability to interact with the virtual world in a way that’s intuitive without having to learn how to use a complicated interface or abstracted set of controls.” They also believe that while AR and VR are seen as futuristic technologies, “they actually have the potential to be the simplest and most natural” by redirecting focus back to an input device we’ve used throughout our entire history on this planet: our hands.
Case in point: Leap Motion’s Project North Star, which combines extremely precise real-time hand tracking with a new AR headset. It’s basically like a virtual smartwatch: a gadget that looks like a real-world display or interface, but exists in virtual space and morphs with context. In fact, Leap Motion’s user testing showed that people of all ages could interact with virtual representations of familiar objects with no teaching whatsoever and over 99 percent accuracy. That means that if you give an elderly man a virtual book, he’ll just flip it open like he would a real one – a far cry from what happens when you try to teach your grandparent how to use a smartphone for the first time. Considering that the proportion of seniors who use smartphones is 42 percent lower than that of millennials, eradicating the hardware hurdles of the iPhone or Android could very well open up technology to wider age groups and demographics.
Watch Me Move
A similar logic is behind recent developments in smartwatch texting by players like Dartmouth College, as well as Google’s Project Soli, a sensing technology which uses radar to detect touchless gesture interaction. With a keyboard organized like a wheel on the face of a watch, the idea behind smartwatch texting is that you can type with a quick flick of the wrist, tap your fingers together to move on to the next word, and erase by rubbing your fingers together. Dartmouth College’s smartwatch, in particular, is based on a predictive text system – similar to the T9 system used in the flip phones of the early 2000s. To further maximize comfort and intuitiveness, its keyboard is organized like a wheel with the letters divided into groups. Although the researchers admit the system takes a while to get used to, some test users were able to type 15 words a minute after five days of practice (compared to an average typing speed of 30 words per minute on an iPhone). Those behind these intuitive smartwatches believe that with practice, they’ll ultimately lead to faster typing, which could help us spend less time staring at screens.
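The T9-style idea of typing on letter groups rather than individual keys can be sketched in a few lines. The grouping and the tiny dictionary below are illustrative assumptions – the real Dartmouth layout and vocabulary will differ – but the disambiguation principle is the same: one press selects a group, and a dictionary resolves the group sequence into candidate words.

```python
# Illustrative letter groups around a hypothetical wheel keyboard
# (NOT the actual Dartmouth layout).
GROUPS = ["abcd", "efgh", "ijkl", "mnop", "qrst", "uvwx", "yz"]

# Reverse lookup: each letter -> the index of the group containing it.
LETTER_TO_GROUP = {ch: i for i, group in enumerate(GROUPS) for ch in group}

def word_to_keys(word):
    """Encode a word as the sequence of group presses used to type it."""
    return tuple(LETTER_TO_GROUP[ch] for ch in word.lower())

def predict(keys, dictionary):
    """Return every dictionary word whose group sequence matches `keys`.

    Several words can share one sequence (like T9's 'ambiguous keys'),
    so the caller would rank or cycle through the candidates.
    """
    keys = tuple(keys)
    return [word for word in dictionary if word_to_keys(word) == keys]
```

With this grouping, “cat” and “act” collapse to the same press sequence, which is exactly the ambiguity a predictive system must resolve – typically by ranking candidates by frequency, as T9 did.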
At first glance, even-smarter smartwatches and face-controlled apps seem like no big deal. In the context of a world in which computers can understand what’s happening around them and driverless cars could one day be an everyday reality, using your hands to move stuff around in AR seems almost, well… irrelevant. But that changes once you start considering how even seemingly frivolous creations like face-controlled apps reflect a larger shift towards making the body and technology more intimate and intertwined – a mentality perhaps more poignantly fleshed out in the work being done on robotic arms and powered clothing.
Last year, a man who’d lost his arm to cancer became the first person on earth to live with a mind-controlled robotic arm. The arm has 26 joints, 17 of which can be moved independently of each other; and although it’s still a prototype, nowhere near ready for the market, he says it’s changed his life. Meanwhile, American company Seismic have created a powered suit which gives its wearer ‘discreet strength’ by mimicking the biomechanics of the human body, and startup SuitX have created a motorized medical exoskeleton they’re gearing up to bring to market. The companies hope these suits could help people regain mobility after accidents, or help elderly people gain the strength to move with more ease (although cost is currently a considerable barrier – SuitX’s full-body suit goes for $10,000). Then there’s the recent development of carbon-fibre blades to keep in mind: the idea is that these would be used to augment human legs, enabling people such as athletes or the elderly to move faster.
The Brain Game
In case you’re wondering, the work being done around augmenting the human body isn’t limited to our legs and arms, either: in fact, according to financial analysts, the market for augmenting the human brain with neural devices is forecast to reach $27 billion within the next six years. The U.S. Pentagon’s research and development agency is currently funding nine brain-controlled interface projects it aims to bring to the American FDA for clinical trials in three to five years. China recently announced it’s developed a car operated by brain signals, transmitted via special headgear worn by the driver. Facebook’s working on a brain-computer interface that’ll let you type with your mind without invasive implants, and is even working on a way for humans to hear through their skin by mimicking the cochlea in the ear. And tech entrepreneur Bryan Johnson is even building a ‘neuroprosthesis’: a device he says will allow us to learn faster, remember more, do telepathy, connect to group minds and even download skills like martial arts. (Welcome to the Matrix, perhaps?)
Building Our Bodies
Of course, the ethical questions are rampant – especially since so much of this technology is in its infancy. Will powered suits eventually be made accessible to many people – or will the young and already able-bodied use them to get a boost? How ethical is it for surgeons to give some athletes carbon-fibre blades for legs, and with them the ability to outperform others? And what’s the point of augmenting the human body and mind, anyway? Indeed, some suggest the motivation falls under something called the transhumanist movement: a belief that we should merge with machines to remake ourselves ‘in the image of our own higher ideals’, eradicate ageing and essentially keep up with artificial intelligence.
These ethical questions will have to be accounted for, to be sure—but by whom? By tech firms and governments, yes, but also by designers. As we’re building systems and products that intertwine the human body and technology more intimately than ever before, let’s make sure we’re also building the future we want to see.