How would you quantify a statement like

We believe [terrorists are coming]

against

It is probable that [again, something terrible].

This is one of those topics that really tickle my fancy: Data Visualization, Semantics and Intercultural Communication, all rolled into one. Even though I disagree that setting standards in phrasing is the right course of action, I very much enjoy the research from Psychology of Intelligence Analysis prompting it.

If you have a means to convey information less ambiguously, do it! Words don’t mean the same to everyone. Neither do images, mind you, but there is a decent chance that some very low-level visualizations are less fraught with semantic ambiguity. Semantic ambiguity is definitely a challenge for quantificational statements in natural languages, and as I have recently experienced, it can be a pain to overcome in intercultural communication.

My problems were merely about communicating workload estimates across a diverse team. Imagine what different interpretations of threat levels expressed in natural language can do to intelligence sharing between different cultures. Read the article on words of estimative probability to get an idea.
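To make the ambiguity concrete, here is a minimal sketch of what “quantifying” such phrases could look like: it maps a handful of estimative words to numeric probability ranges and draws them as bars instead of prose. The ranges below are my own rough rendering of Sherman Kent’s proposal, not a standard endorsed by the article.

```python
# A rough sketch, not a standard: render words of estimative probability as
# numeric ranges instead of prose. The ranges approximate Sherman Kent's
# proposal and should be adjusted to whatever scale your team agrees on.
import matplotlib.pyplot as plt

ESTIMATIVE_RANGES = {
    "almost certain":       (0.87, 0.99),
    "probable":             (0.63, 0.87),
    "chances about even":   (0.40, 0.60),
    "probably not":         (0.20, 0.40),
    "almost certainly not": (0.02, 0.13),
}

fig, ax = plt.subplots(figsize=(6, 2.5))
for i, (phrase, (low, high)) in enumerate(ESTIMATIVE_RANGES.items()):
    # One horizontal bar per phrase: the span of probabilities it may stand for.
    ax.barh(i, width=high - low, left=low, height=0.6)
ax.set_yticks(range(len(ESTIMATIVE_RANGES)))
ax.set_yticklabels(ESTIMATIVE_RANGES.keys())
ax.set_xlim(0, 1)
ax.set_xlabel("probability")
plt.tight_layout()
plt.show()
```

Two readers can still disagree about where “probable” should sit, but at least the disagreement becomes visible and negotiable instead of hiding inside a word.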

Bond Reporting Standards | Bella consults

Some interesting words from Dr. Drang about how critical a properly implemented feedback loop is for human-computer interaction. Just days after I lauded Apple for being quite savvy about this whole human-computer interaction thing, he presents a case where they fall short. Rightly so.

I don’t use Fantastical, the tool that he discusses, but I do endorse his comments about usability through instant, incremental feedback.

[The animations are] not just eye candy. The animations are providing instant feedback on how Fantastical is parsing your words and, more important, they’re teaching you Fantastical’s syntax. This is tremendously useful because, despite the wonderful flexibility of NLP, there’s always a syntax and you need to learn it if you’re going to use the product. This lack of instant, incremental feedback is what makes Siri impenetrable to some people; you have to give Siri an entire command and wait to see how she interprets it.

Incidentally, instant incremental feedback is ever present as a repair strategy in human-human interaction. A puzzled expression on the face of someone in your audience prompts you to rephrase what you just said, for example. These sorts of natural interactions are what artificially designed interfaces need to imitate in order to make interacting with them feel natural, too. Read the post on All This to see feedback discussed in the context of an actual product.
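To illustrate the principle (and only the principle – this is a toy stand-in, not Fantastical’s parser or Siri’s), here is a sketch of what instant, incremental feedback looks like in code: after every keystroke the interface echoes back which fragments of the input it currently recognizes.

```python
# Toy illustration of instant, incremental feedback: after every keystroke the
# "interface" reports which fragments it currently recognizes as date/time
# tokens. A deliberately naive stand-in, not how any real NLP input parser works.
import re

TOKEN_PATTERNS = {
    "weekday": r"\b(mon|tue|wed|thu|fri|sat|sun)[a-z]*\b",
    "time":    r"\b\d{1,2}(:\d{2})?\s*(am|pm)?\b",
}

def feedback(partial_input: str) -> str:
    """Return a one-line interpretation of whatever has been typed so far."""
    recognized = []
    for label, pattern in TOKEN_PATTERNS.items():
        match = re.search(pattern, partial_input, re.IGNORECASE)
        if match:
            recognized.append(f"{label}='{match.group(0)}'")
    return ", ".join(recognized) if recognized else "(nothing parsed yet)"

# Simulate typing character by character and print the running interpretation,
# the way a well-designed input field would update its preview.
typed = ""
for char in "Lunch with Anna friday 1pm":
    typed += char
    print(f"{typed!r:30} -> {feedback(typed)}")
```

The point is not the parsing itself, but that the user never has to submit a complete command blind: every partial input gets a visible interpretation they can correct on the spot.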

what's really great about Fantastical

On Social Affordances, Google Glass & the iWatch

I think1 Google Glass makes for a wonderful case study to introduce yet another design theory buzzword to wider recognition: Social Affordance. The problems and pushback that Google faces with their newest toy can largely be attributed to their lack of understanding of this concept: Technology is never an end unto itself. Rather, it is the use that, well, users get out of a given technology.

tl;dr: Google does not inform its design decisions with UX insights into social affordances, which shows with Glass. Apple has a track record showing they are good at that sort of thing – and I believe some form of wrist-worn device is much better suited to usher in an era of ambient computation than a set of glasses.

Technology is not the issue.

Some of the most successful products arise out of the accidental realization of potential in a technology that its creators never envisioned. One striking example is that of the SMS, or text. A friend of mine at Vodafone told me the story of this one engineer who realized that the unused data stream resources in cellular networks could be turned into a feature. And lo and behold, the users gobbled it up2.

Users adopted this new feature because it empowered them to communicate through their phone without actually calling on the phone. The social affordance is that people can communicate even in situations where they cannot3 speak. Be it students texting under their desks, people in crowded places, or people wishing to avoid a direct conversation, they all found a common use for it and established new social conventions around the use of mobile phones, too.

With Google Glass we have a new feature, that of ever-present ambient computing. But we, as potential users, don’t yet have a clear vision of how that feature would fit into our daily lives. More importantly, we don’t quite know how it could fit into our social lives. A permanent HUD, especially one that is coupled with a camera, is inherently disruptive. Even if Glass wearers tune out the block of interface in their vision over time,4 the ever-present camera won’t be ignored so easily. The social affordances of Google Glass are too disruptive an element in social interaction for it to be gobbled up the way texts were.

With Glass, Google ignores the cultural construct that is the private vs public distinction.

When we talk about ambient computing, we need to get two things right: managing the attention drain of distracting information streams, and the feedback mechanisms that control the computing layer. We need some sort of feedback to control our computing device – something that tells us when a task is completed, if more input is required, and what the results are. But this feedback is competing against the sensory input of the world that we navigate. And make no mistake, the most pressing sensory input actually stems from the social layer. Few things are more distracting than clearly stated disapproval from our peers.

If whatever we are doing with our devices becomes too much of a disruptive element to our social responsibilities, we cannot use those devices in social settings. There is a limit to the alienation that both we and the people around us can put up with before harsh penalties kick in: You face more than just a disapproving look in the cinema if you start talking on the phone during the film. In case that is not clear enough for you to conclude on your own: You will be kicked out when you fail to adhere to the social conventions of watching a movie in a public theater.

This social layer to the use of technology is where appeals to technological affordances themselves fall flat: Yes, there are other technologies that allow you to film others. But they don’t come with the same social affordances. The excuse that the device you are wearing has other purposes besides filming will not eliminate the fact that you are clearly communicating “Hey, I’m wearing this gadget that empowers me to covertly film everything and everyone, at any time.” That is the message people will see first and foremost when confronted with such a device.

Ignoring the consequences of social affordances is bad UX. “Ambient” needs mitigation to not be disruptive.

Believe you me: Outside the bubble of techno-evangelists, most people still don’t care about the implications of data mining and this whole internet thing. They do care if you point a camera at them. As soon as Google Glass is in the evening news,5 and people can tell that your funny glasses are actually a camera, that is when they will care about you constantly flouting social rules.

That point of critical mass is when the technology is put to the test: Is it so useful a product that its merits outweigh its strain on existing social etiquette? Is there actually a benefit with mass appeal in the product? Or, as has been said before: Are you sure that the features you build are actually relevant to your audience? Relevant enough to overcome obstacles to adoption?

These questions can, in part, actually be answered ahead of time. You can at least identify the obstacles your product may be facing, if you do due diligence and research the social affordances within existing cultural norms before you try to change them. Lo and behold, there is actually a return on investment with proper UX design. It pays to understand how humans interact not just with their tools, but with each other.6

Ambient Computing and how to make it accessible to people is the relevant concept.

The success of a product is not defined by its features. At the end of the day, users only care about how they benefit from using it.7 And we must understand that the benefit has to be very obvious to them, especially if a new technology threatens their habits. Humans are change-averse.

I find it a bit ironic: For all the data they gather about their users, Google suck at understanding human behavior. To counter another point that has been raised in the online discussion: Google being evil is not the problem average users have with entrusting their data to them. People can put up with hypocrisy and even oppressive behavior as long as their needs are being served. They won’t put up with a company that does not understand their needs and hence fails to serve them.

It is mind-boggling that a company that faced the kind of pushback Google did in Germany, where their very lack of insight into the distinction between public and private came back to bite them, seems unable to learn from the experience. Mind you, Google is not alone in this. There is a tendency among US-based companies to fall prey to hegemonic bias, a tendency even more prevalent in Silicon Valley. Tech is run by people who are so enamored with their privileged position of “knowing better” that they are completely blind to their own privileges in the first place. They are oblivious to different cultural realities.

Exacerbating the hegemonic bias is the fact that social in general is not Google’s forte. Just look at their efforts with Wave or Google+. Understanding social affordances was never part of the design process in Google’s previous endeavors. All their tangible products, from self-driving cars to glasses, are driven by technological solutions that neglect social relevance for a blind belief in technology.

While Google Glass has lots of technological potential – potential to be a landmark that ushers in new interaction models between humans and computers – navigating the obstacles of disruptive innovation requires a perspective beyond a belief in technology.

Navigating user expectations is Apple’s game.

There is a several-billion-pound gorilla in the room when we talk about user experience and market disruption. It’s highly likely that some sort of wearable computing device will be coming out of Cupertino, too. But I’d wager that it will be quite different from Google’s attempt. Because Apple is actually paying attention to how technology fits into the lives of a mainstream audience. Heck, they market themselves not on the technological prowess of their inventions, but on the experience they deliver.

Apple is so focused on the user experience of non-geeks that they sometimes alienate a tech-oriented clientele.8 That approach seems to be working out rather well for them. In fact, Apple’s ability to couple engineering ingenuity with an understanding of how people interact with technology led them to completely change the mobile phone paradigm into one of mobile computing.

While the mobile phone paradigm may have settled into a new status quo, it is safe to assume that mobile computing has not.9 Gathering personalized and ambient data and repurposing it through computation is the game that everyone wants in on. Add a layer of ubiquitous connectedness and sharing, and start connecting the dots as to which existing technologies could be brought together to afford users meaningful applications. Meaningful, in this case, strongly connotes catering to preconceived expectations about using technology, mind you.

An iWatch makes sense.

There are always different options available to solve similar problems. But now that I have argued why I don’t think Google has presented us with a solution that a mainstream audience would currently embrace, it’s only fair that I argue for one that I think does take social affordances into consideration.

Think of texting again (SMS in Europe): In various youth cultures – definitely in Japan, but even at Google’s very own doorstep in the US – people don’t use their phones for calling. A significant demographic texts rather than speaks on the phone. Obviously, speaking through their glasses instead of a handheld device does not make talking on the phone any less awkward for them, much less speaking to a computing device on their head in public.

So, for reasons beyond the technological problems with speech recognition, there is a case to be made that talking to our gadget should not be the main interface option. Tactile or kinetic interfaces would tie in nicely with the “wearable” aspect of ambient computing. They would also be very discreet. But human cognitive propensity is too heavily skewed towards visual processing for a main gateway into mobile computing to ever forego a visual interface. Hence, using touch (and speech to a lesser extent) for input, coupled with visual feedback, is still the go-to fundamental for any breakthrough in ambient computing.

It follows that the visual interface component should allow for ever-present availability, while maintaining a bit of discreetness. What better experience to leverage than one that is already an established mode of interaction: The watch. As an added bonus, it sits close to able-bodied people’s preferred means of manipulating their surroundings – the hand – and can track lots of data points originating in that interaction. Let me stress this fact about a wrist-worn device: It can record highly relevant data through proximity and kinetic sensors, and it does not even need a camera to do it.

It’s hard to comprehend nowadays just how much of a disruption the introduction of timepieces was back in the day. Try to imagine a world where you did not set appointments by the minute, where your day was not compartmentalized into arbitrary segments but governed by the necessity of tasks and the rhythm of nature. And yet, the mainstream application that personalized time-telling wound up in is the wristwatch.

My main argument for why I believe we will see wrist-worn ambient computing devices soon is not one of technological affordance. It is that a watch is an established mode of interacting with information, a treasure trove of mental models about fitting technology into our daily routines. The user experience must drive design decisions, especially if we are actively trying to create a disruptive technology. Leveraging existing expectations about how to interact with technology greatly enhances our chances of creating a product with mass appeal. As a way of introducing a background noise – a potentially distracting information stream – into our social interactions, something much more discreet than a camera on your nose seems like a sensible approach to me. Many companies seem to agree, with lots of rumors of smart watches going around.

But cracking ambient computing will require more than bolting one existing product onto another. A smart phone on your wrist does not yet bring a tangible improvement over what we currently have.10 Taking the affordances of both apart and applying those that are useful to a disruptive device will be the kicker.


  1. I had an interesting exchange with @iA, @jeroenvangeel and @rafweverberg on Twitter some time ago that prompted me to write down a few thoughts of mine. This blog post elaborates on the ideas I touched upon in 140-character segments. ↩︎

  2. Some may even go so far as to say that this one realization singlehandedly changed the business model of network providers, without a bit of new code being written. ↩︎

  3. Or simply don’t want to. ↩︎

  4. Even without ever having worn one, I’m pretty confident that habituation will kick in to that effect, so users won’t be distracted by the hovering icons. ↩︎

  5. Still the yardstick of mainstream in all the countries that tech people would consider “markets” for their expensive gadgets. ↩︎

  6. There is surely an abundance of points about McLuhan to be made here. Since this post is already running quite long, I ask that you make them yourselves or, if you feel so inclined, nudge me to address them later. ↩︎

  7. Incidentally: bragging rights about specs have very limited appeal outside of a zealous tech audience. ↩︎

  8. Yes, skeuomorphism is more than a fad. It can be a functionally motivated design choice, and has been just that in many of Apple’s pushes for mass appeal. ↩︎

  9. I would even go so far as to say that whatever paradigm shift happens in the mobile computing space will have profound repercussions on the mobile phone space. Will smart phones continue to drive the computational work and distribute it to satellite devices, or will their computing aspect be replaced by something new? Either way, if you want to skate to where the puck is going to be, don’t bank on smart phones continuing to serve the functions they do now. ↩︎

  10. Really, it took me this long to get to a Knight Rider reference? Anyway, the Hoff is popular in Germany for a) his wicked car and the remote on his wrist (think about it – satellite computing, not a smart watch) and b) the unbelievable outfit he wore on a historic night. An ensemble that could have brought down the Berlin Wall even without him performing in it. ↩︎

Happy New Year everyone!

I told you I’d write more about presentation theory and I finally made good on it. Better than most New Year’s resolutions after a week, I’d say.

Anyway, it has taken me some time to get up to speed working full time with BrightCarbon, but I am starting to get the hang of things. Which bodes well for those of you who would like to see more stuff like this: Cognitive-science-informed writing about presentation methods. This one is framed in a way that’s quite geeky, I’ll admit.

Then again, I am of the firm persuasion that Presentation Design has a lot to learn from Game Design. Game designers excel at applying psychology to drive human interaction with information. The methods they use to solve the communication issues between man, woman and machine are a treasure trove for other design professions. See for yourself:

Worldbuilding in Presentation Design

Contingency Plans and Slideuments

Sorry if the following is a bit of a non sequitur, but I need to write down an idea that came to me just now. I was thinking of how to stage an apparent technical breakdown in a training situation for high-stakes presentations, to drive home a point about always providing a fallback for when technology lets you down – and to create something that is truly memorable. That led me to think about how to create a very specific fallback, but also an augmentation for my slideument model.

This is where you’ll have to suspend disbelief for a bit, because I have not yet published this model in English. My slideument model is tailored for an office-meeting kind of presentation, much in the mold of what Edward Tufte proposes: Don’t use PowerPoint; instead, bring a printed document with the data and information on it that the participants proceed to discuss in the meeting.

Except that I propose you bring not only a document to hand out to the audience, but also a slide deck that is nothing but close-ups of that same document. In the PowerPoint version of the document you can animate content in sections that are deliberately left blank in the printed version. The audience may then take notes or doodle in those spaces in their own printed copy, leaving them with both a task that facilitates information uptake and a deliverable to take home – one perfectly suited to their own take on the subject discussed in the meeting, because they annotated it themselves while watching the presentation.


I have to admit that when it comes to Hypertext Theory I am old school: The web is not a hypertext medium.

Be that as it may, one very interesting article I found at the often commendable Content Magazine illustrates how a theory like that of hypertext can (and should) inform our design decisions when we create meaningful structures. You’ll see that Information Architecture is not something the advent of the Internet brought about; rather, it relies on much older concepts that we can put into our toolboxes wherever we attempt to work with information.

And if you have paid really close attention to new web projects like Media1 or Anil Dash’s proposal to abandon the web page model altogether,2 well, you may be surprised to find out how much of the sentiment expressed in these ideas has already been around in theory for quite some time.


  1. Here’s a nice analysis from the Nieman Lab. ↩︎

  2. Kudos to Anil for making the topic popular, but really, the streams vs static pages debate could use some input from information science or other disciplines that have discussed this in the past. Please? ↩︎

No Longer The Sense Of An Ending