This is the sort of post I wish I had the time and talent to write. Now, the least I can do is link to it and urge you to read it.

With a little bit of introspection and reflection on the functional constraints and social affordances of media, we can grasp new forms of communication. Vine is just the example Om Malik uses to elaborate on how conventions and consumer expectations emerge from a particular medium.1

So please enjoy a quick introduction to very recent media history, and some context to help us understand how technology and a peer group of users shape what a medium is. Screen sizes, download speeds, user habits: they all have a systematic relation to the way we can package moving pictures into an experience. Putting the pieces together helps us understand, perhaps even tentatively predict, media evolution.

And if, after reading the piece, you feel up to it, imagine how a change in user base, like a succession of generations, may influence the development of a tool itself. We don’t read our grandmother’s newspapers, if we read newspapers at all. Do you think that the YouTubes or Facebooks 15 years in the future will cater to today’s expectations? Surely they will adapt to new users demanding that their experience matter, lest they go the way of Myspace.


  1. Incidentally, Medium, the platform, is another example where it is hard to pinpoint what it is that makes the medium. Its properties are fluid; it is never quite clear to either its authors or its readers whether it is a distribution platform or a channel with its own voice and brand. And yet, through networked use of the technology, of the medium, properties emerge that bring value to both authors and readers. 

Vine gets better with age: How screens, speed and networks are changing the future of online video — Tech News and Analysis

Wonderful, just wonderful. Science informs how information architecture can expand its use across experience channels. Which is basically applied business model design.

Yep, this article from the Journal of Information Architecture presents us with a look at the retail experience both online (across various devices) and offline, and how to create semantic “anchors” that help customers make sense of the experience in both worlds.1

Successful cross-channel user experiences rely upon a strong informational layer that creates understanding amongst users of a service. This pervasive information layer helps users form conceptual models about how the overall experience works (irrespective of the channel in which they reside).2


  1. Which is, incidentally, eerily close to my USP. You know, the whole meaning business. 

  2. The nice thing about scientific articles is that I can just copy & paste the abstract. Much more convenient than having to dig for a quotable catch phrase. 

Sense-making in Cross-channel Design

Building Facebook Home with Quartz Composer (by David O Brien)

This is just the first in a series of videos that Dave O Brien created in a timely fashion: Some of you may have heard about the Facebook design team using Quartz Composer for prototyping their Home app.

The emphasis on animations resonates with another current debate, about the haptic experience that physical books provide. The upshot should not come as a surprise to you, my dear regular readers: There are affordances in mapping information to spatial and haptic cues that pictures under glass can’t provide.

When you read a book or magazine, you are navigating information in physical space. Your brain creates a rough map of the information you are browsing while you flip page after page. Moreover, it draws upon past experiences with book space to inform the mental image of your current read, and it bestows on you a sense of empowerment over the text and a feeling of serendipity.

Now, when you take away the physical feedback that paper provides to your senses, you are taking away functionality from the user’s interaction with the text. No feeling the weight of pages to tell how far into the text you are, no halting and reversing the flick of a page in mid-air because you glimpsed something you want to inspect more closely.

But you know what pictures under glass can do to give you design elements not available in print? That’s right. Animation. Hence the video above. The design team at Facebook realized just how much physicality matters, so they looked for a way to make their wireframes animate according to physics. Inertia, pseudo-gravity, all these sorts of things matter in animation.
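
If you want a feel for what “animating according to physics” means in practice, here is a minimal sketch in Python. It is purely illustrative, not Facebook’s actual Quartz Composer patches; the constants and names are assumptions made up for the example.

```python
# Hypothetical sketch: a 1-D spring-damper step that gives an interface value
# "inertia". Stiffness and damping constants are illustrative, not taken from
# any real prototyping tool.

def spring_step(position, velocity, target, dt, stiffness=170.0, damping=26.0):
    """Advance `position` toward `target` by one time step with spring physics."""
    force = stiffness * (target - position) - damping * velocity
    velocity += force * dt
    position += velocity * dt
    return position, velocity

# Example: ease a panel's vertical offset from 300 px off-screen toward 0.
y, v = 300.0, 0.0
for frame in range(60):              # one second at 60 fps
    y, v = spring_step(y, v, target=0.0, dt=1 / 60)
    # feed `y` into whatever draws the panel on this frame
```

The particular numbers don’t matter; the point is that position changes through velocity, so elements overshoot and settle the way physical objects do instead of snapping between states.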

I’m not saying that you can fully emulate the experience of physical objects with current digital technology. I’m saying that you need to make up for the shortcomings of our current interaction paradigm (pictures under glass) by introducing explicit feedback mechanisms. Visual feedback is the go-to choice most of the time. But sound or vibration is already available in many touch devices.

Prezi Interface Considerations

I have had the opportunity to develop high-concept Prezis with various clients in recent months. In doing so, I learned a lot about how far you can push the technology and in which scenarios the software plays to its strengths.1

Mind you that Prezi works best as an interface for content you prepare with authoring tools outside of Prezi. If you have a means to create vector graphics in a Flash file format, you can enjoy intricate layering and zooming effects to navigate your material.2 As this brief introduction may show you, there are ways to overcome Prezi’s technical limitations for both presenting and, even more relevant in my opinion, exploring information.
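
If you want to try the workflow hinted at in the footnote below, here is a minimal, hypothetical sketch: export your vector artwork to PDF from whichever authoring tool you prefer, then convert it with pdf2swf from SWFTools. The file names are made up for the example; only the basic pdf2swf invocation reflects the tool’s documented usage.

```python
# Hypothetical sketch: shell out to pdf2swf (SWFTools) to turn a PDF export
# into an SWF that can be placed on a Prezi canvas. File names are examples.
import subprocess

subprocess.run(
    ["pdf2swf", "slide-artwork.pdf", "-o", "slide-artwork.swf"],
    check=True,  # raise an error if the conversion fails
)
```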

Prezi, much like other authoring tools, still suffers from a lack of widely recognized best practices and commendable design principles. Here’s hoping I can provide a bit of useful input toward their fashioning. More information is in the Prezi itself, and some files for download can be found at my lab.


  1. This is not going to be a post about critiquing a product, even though there is ample opportunity to do so with Prezi. Some of its idiosyncrasies are baffling, to say the least, and the kind of design decisions Prezi forces upon its users are a tough pill to swallow at times. But hey, I made it work and so can you. 

  2. I fully intend to post some tutorials about how to create the right Flash files for Prezi in the future. The one tool that makes it possible, other than Flash itself, is pdf2swf. 

Some interesting words from Dr. Drang about how critical a properly implemented feedback loop is for human-computer interaction. Just days after I lauded Apple for being quite savvy about this whole human-computer interaction thing, he presents a case where they fall short. Rightly so.

I don’t use Fantastical, the tool that he discusses, but I do endorse his comments about usability through instant, incremental feedback.

[The animations are] not just eye candy. The animations are providing instant feedback on how Fantastical is parsing your words and, more important, they’re teaching you Fantastical’s syntax. This is tremendously useful because, despite the wonderful flexibility of NLP, there’s always a syntax and you need to learn it if you’re going to use the product. This lack of instant, incremental feedback is what makes Siri impenetrable to some people; you have to give Siri an entire command and wait to see how she interprets it.

Incidentally, instant incremental feedback is ever present as a repair strategy in human-human interaction. A puzzled expression on the face of your audience prompts you to rephrase what you just said, for example. These sorts of natural interactions are what artificially designed interfaces need to imitate in order to make interacting with them feel natural, too. Read the post on All This to see feedback discussed in the context of an actual product.
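
To make the idea of instant, incremental feedback a bit more tangible, here is a toy sketch in Python. It is emphatically not Fantastical’s parser; the patterns and names are invented for illustration. The point is simply that the input is re-interpreted on every keystroke and the current best guess is echoed back, which is how such an interface teaches its syntax.

```python
# Hypothetical toy example of instant, incremental feedback: re-parse the
# input after every keystroke and report what has been understood so far.
import re

def interpret(fragment: str) -> str:
    """Return a readable summary of what this toy event parser understands."""
    guess = {}
    day = re.search(r"\b(mon|tue|wed|thu|fri|sat|sun)\w*", fragment, re.I)
    if day:
        guess["day"] = day.group(0).capitalize()
    time_of_day = re.search(r"\b\d{1,2}\s?(am|pm)\b", fragment, re.I)
    if time_of_day:
        guess["time"] = time_of_day.group(0)
    return f"understood so far: {guess}" if guess else "…"

for typed in ["Lunch", "Lunch fri", "Lunch friday 1", "Lunch friday 1pm"]:
    print(typed.ljust(18), interpret(typed))
```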

what's really great about Fantastical

On Social Affordances, Google Glass & the iWatch

I think1 Google Glass makes for a wonderful case study to introduce yet another design theory buzzword to wider recognition: social affordance. The problems and pushback that Google faces with their newest toy can largely be attributed to their lack of understanding of this concept: Technology is never an end unto itself. Rather, it is the use that, well, users get out of a given technology.

tl;dr: Google does not inform its design decisions with UX insights into social affordances, which shows with Glass. Apple has a track record showing they are good at that sort of thing – and I believe some form of wrist worn device is much better suited to usher in an era of ambient computation than a set of glasses.

Technology is not the issue.

Some of the most successful products arise out of the accidental realization of potential in a technology that its creators never envisioned. One striking example is that of the SMS, or text. A friend of mine at Vodafone told me the story of one engineer who realized that the unused data stream resources in cellular networks could be turned into a feature. And lo and behold, the users gobbled it up2.

Users adopted this new feature because it empowered them to communicate through their phone without actually calling on the phone. The social affordance is that people can communicate even in situations where they can not3 speak. Be it students texting under their desks, people in crowded places, or people wishing to avoid a direct conversation, they all found a common use for it and established new social conventions around the use of mobile phones, too.

With Google Glass we have a new feature, that of ever-present ambient computing. But we, as potential users, don’t yet have a clear vision of how that feature would fit into our daily lives. More importantly, we don’t quite know how it could fit into our social lives. A permanent HUD, especially one that is coupled with a camera, is inherently disruptive. Even if Glass wearers tune out the block of interface in their vision over time,4 the ever-present camera won’t be ignored so easily. The social affordances of Google Glass are too much of a disruptive element in social interaction to be gobbled up the way texts were.

With Glass, Google ignores the cultural construct that is the private vs public distinction.

When we talk about ambient computing, we need to address two things: managing the attention drawn by distracting information streams, and the feedback mechanisms that control the computing layer. We need some sort of feedback to control our computing device. Something that tells us when a task is completed, if more input is required, and what the results are. But this feedback is competing against the sensory input of the world that we navigate. And make no mistake, the most pressing sensory input actually stems from the social layer. Few things are more distracting than the clearly stated disapproval of our peers.

If whatever we are doing with our devices becomes too much of a disruptive element to our social responsibilities, we cannot use those devices in social settings. There is a limit to the alienation that both we and the people around us can put up with before harsh penalties kick in: You face more than just a disapproving look in the cinema if you start talking on the phone during the film. In case that is not clear enough for you to conclude on your own: You will be kicked out when you fail to adhere to the social conventions of watching a movie in a public theater.

This social layer to the use of technology is where appeals to technological affordances themselves fall flat: Yes, there are other technologies that allow you to film others. But they don’t come with the same social affordances. The excuse that the device you are wearing has other purposes besides filming will not eliminate the fact that you are clearly communicating “Hey, I’m wearing this gadget that empowers me to covertly film everything and everyone, at any time.” That is the message people will see first and foremost when confronted with such a device.

Ignoring the consequences of social affordances is bad UX. “Ambient” needs mitigation to not be disruptive.

Believe you me: Outside the bubble of techno-evangelists, most people still don’t care about the implications of data mining and this whole internet thing. They do care if you point a camera at them. As soon as Google Glass is in the evening news,5 and people can tell that your funny glasses are actually a camera, that is when they will care about you constantly flouting social rules.

It is at that point of critical mass that the technology is put to the test: Is the product useful enough that its merits outweigh the strain it puts on existing social etiquette? Is there actually a benefit with mass appeal in the product? Or, as has been said before: Are you sure that the features you build are actually relevant to your audience? Relevant enough to overcome obstacles to adoption?

These questions can, in part, actually be answered ahead of time. You can at least identify the obstacles your product may face, if you do your due diligence and research the social affordances within existing cultural norms before you try to change them. Lo and behold, there is actually a return on investment in proper UX design. It pays to understand how humans interact with not just their tools, but each other.6

Ambient Computing and how to make it accessible to people is the relevant concept.

The success of a product is not defined by its features. At the end of the day users only care about how they benefit from using it.7 And we must understand that the benefit must be very obvious to them. Especially if a new technology threatens their habits. Humans are change-averse.

I find it a bit ironic. For all the data they gather about their users, Google suck at understanding human behavior. To counter another point that has been raised in the online discussion: Google being evil is not the problem average users have with entrusting their data to them. People can put up with hypocrisy and even oppressive behavior as long as their needs are being served. They won’t put up with a company that does not understand their needs and hence fails to serve them.

It is mind-boggling that a company that faced the kind of pushback it did in Germany, where its very lack of insight into the distinction between public and private came back to bite it, seems unable to learn from the experience. Mind you that Google is not alone in this. There is a tendency among US-based companies to fall prey to the hegemonic bias, a tendency even more prevalent in Silicon Valley. Tech is run by people who are so enamored with their privileged position of “knowing better” that they are completely blind to their own privileges in the first place. They are oblivious to different cultural realities.

Exacerbating the hegemonic bias is that social in general is not Google’s forte. Just look at their efforts with Wave or Google+. Understanding social affordances was never part of the design process in Google’s previous endeavors. All their tangible products, from self-driving cars to glasses, are driven by technological solutions that neglect social relevance for a blind belief in technology.

While Google Glass has lots of technological potential, potential to be a landmark that ushers in new interaction models between humans and computers, navigating the obstacles of disruptive innovation requires a perspective beyond belief in technology.

Navigating user expectations is Apple’s game.

There is a several-billion-pound gorilla in the room when we talk about user experience and market disruption. It’s highly likely that there will be some sort of wearable computing device coming out of Cupertino, too. But I’d wager that it will be quite different from Google’s attempt at it. Because Apple is actually paying attention to how technology fits into the lives of a mainstream audience. Heck, they market themselves not on the technological prowess of their inventions, but on the experience they deliver.

Apple is so focused on the user experience of non-geeks that they sometimes alienate a tech-oriented clientele.8 That approach seems to be working out rather well for them. In fact, Apple’s ability to couple engineering ingenuity with an understanding of how people interact with technology led them to completely change the mobile phone paradigm into one of mobile computing.

While the mobile phone paradigm may have settled into a new status quo, it is safe to assume that mobile computing has not.9 Gathering personalized and ambient data and repurposing it through computation is the game that everyone wants in on. Add a layer of ubiquitous connectedness and sharing, and start connecting the dots as to which existing technologies could be brought together to afford users meaningful applications. Meaningful, in this case, carries a strong connotation of catering to preconceived expectations about using technology, mind you.

An iWatch makes sense.

There are always different options available to solve similar problems. But now that I have argued why I don’t think Google has presented us with a solution that a mainstream audience would currently embrace, I think it’s only fair that I argue for one that I believe does take social affordance into consideration.

Think of texting again (SMS in Europe): In various youth cultures, definitely in Japan, but even at Google’s very own doorstep in the US, people don’t use their phones for calling. A significant demographic texts rather than speaks on the phone. Obviously, speaking through their glasses instead of a handheld device does not make speaking any less awkward for them, much less speaking to a computing device on their head in public.

So for reasons beyond the technological problems with speech recognition, there is a case to be made that talking to our gadget should not be the main interface option. Tactile or kinetic interfaces would tie in nicely with the “wearable” aspect of ambient computing. They would also be very discreet. But human cognition is too heavily skewed towards visual processing for a main gateway into mobile computing to ever forego a visual interface. Hence, using touch (and speech to a lesser extent) for input, coupled with visual feedback, is still the go-to fundamental for any breakthrough in ambient computing.

It follows that the visual interface component should allow for ever-present availability while remaining reasonably discreet. What better experience to leverage than one that is already an established mode of interaction: the watch. As an added bonus, it sits close to able-bodied people’s preferred means of manipulating their surroundings, the hand, and can track lots of data points originating in that interaction. Let me stress this fact about a wrist-worn device: It can record highly relevant data through proximity and kinetic sensors, and it does not even need a camera to do it.

It’s hard to comprehend nowadays just how much of a disruption the introduction of timepieces was back in the day. Try to imagine a world where you did not set appointments by the minute, where your day was not compartmentalized into arbitrary segments but governed by the necessity of tasks and the rhythm of nature. And yet, the mainstream application that personalized time-telling wound up in is the wrist watch.

My main argument for why I believe we will see wrist-worn ambient computing devices soon is not one of technological affordance. It is that a watch is an established mode of interacting with information, a treasure trove of mental models about fitting technology into our daily routines. The user experience must drive design decisions, especially if we are actively trying to create a disruptive technology. Leveraging existing expectations about how to interact with technology greatly enhances our chances of creating a product with mass appeal. Creating something much more discreet than a camera on your nose, something that feeds a potentially distracting information stream into our social interactions as mere background noise, seems like a sensible approach to me. Many companies seem to agree, with lots of rumors of smart watches going around.

But cracking ambient computing will require more than bolting one existing product onto another. A smartphone on your wrist does not yet bring a tangible improvement over what we currently have.10 Taking the affordances of both apart and applying those that are useful to a disruptive device will be the kicker.


  1. I’ve had an interesting exchange with @iA, @jeroenvangeel and @rafweverberg on Twitter some time ago that prompted me to write down a few thoughts of mine. This blog post elaborates on the ideas I touched upon in 140 character segments. 

  2. Some may even go so far as to say that this one realization singlehandedly changed the business model of network providers, without a bit of new code being written. 

  3. Or simply don’t want to. 

  4. Even without ever experiencing wearing one I’m pretty confident that habituation will kick in to that effect, so users won’t be distracted by the hovering icons. 

  5. Still the yardstick of mainstream in all the countries that tech people would consider “markets” for their expensive gadgets. 

  6. There surely are an abundance of points about McLuhan to be made here. Since this post is already running quite long, I ask that you make them yourselves or, if you feel so inclined, nudge me to address them later. 

  7. Incidentally: bragging rights about specs have very limited appeal outside of a zealous tech audience. 

  8. Yes, skeuomorphism is more than a fad. It can be a functionally motivated design choice, and has been just that in many of Apple’s pushes for mass appeal. 

  9. I would even go so far as to say that whatever paradigm shift happens in the mobile computing space will have profound repercussions on the mobile phone space. Will smart phones continue to drive the computational work and distribute it to satellite devices, or will their computing aspect be replaced by something new? Either way, if you want to skate to where the puck is going to be, don’t bank on smart phones serving the functions they do now in the future. 

  10. Really, it took me this long to get to a Knight Rider reference? Anyway, the Hoff is popular in Germany for a) his wicked car and the remote on his wrist (think about it: satellite computing, not a smart watch) and b) the unbelievable outfit he wore on a historic night. An ensemble that could have brought down the Berlin Wall even without him performing in it.