I think Google Glass makes for a wonderful case study to introduce yet another design theory buzzword to wider recognition: Social Affordance. The problems and pushback that Google faces with their newest toy can largely be attributed to their lack of understanding of this concept: Technology is never an end unto itself. Rather, it is the use that, well, users get out of a given technology.
Google does not inform its design decisions with UX insights into social affordances, which shows with Glass. Apple has a track record showing they are good at that sort of thing – and I believe some form of wrist-worn device is much better suited to usher in an era of ambient computation than a set of glasses.
Technology is not the issue.
Some of the most successful products arise out of the accidental realization of potential in a technology that its creators never envisioned. One striking example is that of the SMS, or text. A friend of mine at Vodafone told me the story of this one engineer who realized that the spare capacity in the signaling channels of cellular networks could be turned into a feature. And lo and behold, the users gobbled it up.
Users adopted this new feature because it empowered them to communicate through their phone without actually calling on the phone. The social affordance is that people can communicate even in situations where they cannot speak. Be it students texting under their desks, people in crowded places, or people wishing to avoid a direct conversation, they all found a common use for it and established new social conventions around the use of mobile phones, too.
With Google Glass we have a new feature, that of ever-present ambient computing. But we, as potential users, don’t yet have a clear vision of how that feature would fit into our daily lives. More importantly, we don’t quite know how it could fit into our social lives. A permanent HUD, especially one that is coupled with a camera, is inherently disruptive. Even if Glass wearers tune out the block of interface in their vision over time, the ever-present camera won’t be ignored so easily. The social affordances of Google Glass are too much of a disruptive element in social interaction to be gobbled up the way texts were.
With Glass, Google ignores the cultural construct that is the private vs public distinction.
When we talk about ambient computing, we need to address two things: attention management of distracting information streams and feedback mechanisms to control the computing layer. We need some sort of feedback to control our computing device. Something that tells us when a task is completed, if more input is required and what the results are. But this feedback is competing against the sensory input of the world that we navigate. And make no mistake, the most pressing sensory input actually stems from the social layer. Few things are more distracting than clearly stated disapproval of our peers.
If whatever we are doing with our devices becomes too much of a disruptive element to our social responsibilities, that means we cannot use those devices in social settings. There is a limit to the alienation that both we and people in our surroundings can put up with before harsh penalties kick in: You face more than just a disapproving look in the cinema if you start talking on the phone during the film. In case this is not clear enough for you to conclude on your own: You will be kicked out when you fail to adhere to the social conventions of watching a movie in a public theater.
This social layer to the use of technology is where appeals to technological affordances themselves fall flat: Yes, there are other technologies that allow you to film others. But they don’t come with the same social affordances. The excuse that the device you are wearing has other purposes besides filming will not eliminate the fact that you are clearly communicating “Hey, I’m wearing this gadget that empowers me to covertly film everything and everyone, at any time.” That is the message people will see first and foremost when confronted with such a device.
Ignoring the consequences of social affordances is bad UX. “Ambient” needs mitigation if it is not to be disruptive.
Believe you me: Outside the bubble of techno evangelists most people still don’t care about implications of data mining and this whole internet thing. They do care if you point a camera at them. As soon as Google Glass is in the evening news, and people can tell that your funny glasses are actually a camera, that is when they will care about you constantly flouting social rules.
It is at that very point of critical mass that the technology is put to the test: Is the product so useful that its merits outweigh its strain on existing social etiquette? Is there actually a benefit with mass appeal in the product? Or, as has been said before: Are you sure that the features you build are actually relevant to your audience? Relevant enough to overcome obstacles to adoption?
These questions can, in part, actually be answered ahead of time. You can at least identify the obstacles your product may be facing, if you do due diligence and research the social affordances within existing cultural norms before trying to change them. Lo and behold, there is actually a return on investment with proper UX design. It pays to understand how humans interact with not just their tools, but each other.
Ambient Computing and how to make it accessible to people is the relevant concept.
The success of a product is not defined by its features. At the end of the day users only care about how they benefit from using it. And we must understand that the benefit must be very obvious to them. Especially if a new technology threatens their habits. Humans are change-averse.
I find it a bit ironic. For all the data they gather about their users, Google sucks at understanding human behavior. To counter another point that has been raised in the online discussion: Google being evil is not the problem average users have with entrusting their data to them. People can put up with hypocrisy and even oppressive behavior as long as their needs are being served. They won’t put up with a company that does not understand their needs and hence fails to serve them.
It is mind-boggling that a company that faced the kind of pushback it did in Germany, where its very lack of insight into the distinction of public vs private came back to bite it, seems unable to learn from the experience. Mind you, Google is not alone in this. There is a tendency among US based companies to fall prey to the hegemonic bias, a tendency even more prevalent in Silicon Valley. Tech is run by people who are so enamored with their privileged position of “knowing better” that they are completely blinded to their own privileges in the first place. They are oblivious to different cultural realities.
Exacerbating the hegemonic bias is that social in general is not Google’s forte. Just look at their efforts with Wave or Google+. Understanding social affordances was never part of the design process in Google’s previous endeavors. All their tangible products, from self-driving cars to glasses, are driven by technological solutions that neglect social relevance for a blind belief in technology.
While Google Glass has lots of technological potential, potential to be a landmark that ushers in new interaction models between humans and computers, navigating the obstacles of disruptive innovation requires a perspective beyond belief in technology.
Navigating user expectations is Apple’s game.
There is a several-billion-pound gorilla in the room when we talk about user experience and market disruption. It’s highly likely that there will be some sort of wearable computing device coming out of Cupertino, too. But I’d wager that it will be quite different from Google’s attempt at it. Because Apple is actually paying attention to how technology fits into the lives of a mainstream audience. Heck, they are marketing themselves not on the technological prowess of their inventions, but on the experience they deliver.
Apple is so focused on the user experience of non-geeks that they sometimes alienate a tech oriented clientele. That approach seems to be working out rather well for them. In fact, Apple’s ability to couple engineering ingenuity with an understanding of how people interact with technology led them to completely change the mobile phone paradigm into one of mobile computing.
While the mobile phone paradigm may have settled into a new status quo, it is safe to assume that mobile computing has not. Gathering personalized and ambient data and repurposing it through computation is the game that everyone wants in on. Add a layer of ubiquitous connectedness and sharing, and start connecting the dots which existing technologies could be brought together to afford users with meaningful applications. Meaningful in this case has a strong connotation with catering to preconceived expectations about using technology, mind you.
An iWatch makes sense.
There are always different options available to solve similar problems. But now that I have argued why I don’t think that Google has presented us with a solution that a mainstream audience would currently embrace, I think it’s fair I argue for one that I think does take social affordance into consideration.
Think of texting again (SMS in Europe): In various youth cultures, definitely in Japan, but even at Google’s very own doorstep in the US, people don’t use their phones for calling. A significant demographic texts rather than speaks on the phone. Obviously, speaking through a set of glasses instead of a handheld device does not make talking any less awkward for them, much less speaking to a computing device worn on their head in public.
So for reasons beyond the technological problems with speech recognition, there is a case to be made that talking to our gadget should not be the main interface option. Tactile or kinetic interfaces would tie in nicely with the “wearable” aspect of ambient computing. They would also be very discreet. But human cognitive propensity is so heavily skewed towards visual processing that no main gateway into mobile computing could ever forgo a visual interface. Hence, using touch (and speech to a lesser extent) for input, coupled with visual feedback, is still the go-to fundamental for any breakthrough in ambient computing.
It follows that the visual interface component should allow for ever-present availability, while maintaining a bit of discreetness. What better experience to leverage than one that is already an established mode of interaction: the watch. As an added bonus, it sits close to able-bodied people’s preferred mode of manipulating their surroundings, the hand, and can track lots of data points originating in that interaction. Let me stress this fact about a wrist-worn device: It can record highly relevant data through proximity and kinetic sensors, and it does not even need a camera to do it.
It’s hard to comprehend nowadays just how much of a disruption the introduction of timepieces was back in the day. Try to imagine a world where you did not set appointments by the minute, where your day was not compartmentalized into arbitrary segments, but governed by the necessity of tasks and the rhythm of nature. And yet, the mainstream application that personalized time-telling wound up in is that of the wrist watch.
My main argument about why I believe we will see wrist-worn ambient computing devices soon is not one of technological affordance. It is that a watch is an established mode of interacting with information, a treasure trove of mental models about fitting technology into our daily routines. The user experience must drive design decisions, especially if we are actively trying to create a disruptive technology. Leveraging existing expectations about how to interact with technology greatly enhances our chances to create a product with mass appeal. Creating something much more discreet than a camera on your nose, something that feeds a potentially distracting information stream into our social interactions as quietly as background noise, seems like a sensible approach to me. Many companies seem to agree, with lots of rumors of smart watches going around.
But cracking ambient computing will require more than bolting an existing product onto another. A smart phone on your wrist does not yet bring a tangible improvement to what we currently have. Taking the affordances of both apart and applying those that are useful to a disruptive device will be the kicker.