Having made the case that we need to look for examples of inventive information architecture on Prezi that acknowledge the functional paradigm of the medium, today I present this example by Travis Hitchcock. Prominent indicators link pieces of information (nodes, if you will) to each other, maintaining a hierarchy of information that literally and figuratively scales well.1 I find this quite a compelling case for discovering details about typography through zooming.
By the way, the embed code that Prezi churns out really does not play nice with Tumblr. I don’t know if all the divs and classes are meant to be helpful so I can apply my own styling, or if they are necessary for the Prezi to work (and I’m too lazy to check), but in any case the result is not pretty. I always have to hope that things work out on the front end once I hit the publish button. ↩︎
I pondered for a while how I should approach this Prezi presentation about information architecture by Peter Morville. For one thing, I really like the content. It gives you a nice impression of what information architects do. But then I feel that it is let down by the structure. Ironically, what we see is an introduction to information architecture that suffers from poor information architecture.
Mind you, I don’t mean to bash the author nor the tool. I just think that it’s time that for Prezi, as for other types of presentation tools and media, we start to appreciate their properties and develop information architecture that caters to their strengths and weaknesses. This particular example, unfortunately, fails to do so.
The problem with the structure of Peter’s Prezi is that zoom or size is not meaningful at all. As is the case with too many Prezi examples already, the information is accessible only through a linear, monosequential roller coaster ride that may induce motion sickness to boot. Try to navigate the Prezi without the guiding help of the path that Peter created. Once the background image loads there is a visual cue that you start from the bottom left and follow a maze, but that does not get you everywhere you need to be to fully understand the content. In effect, you are bound to linear consumption in a way that is worse than what I came to expect from print.1
A page in a book usually features linear design.2 And yet, by typographic conventions such a page already has better information architecture and usability than most Prezis. On a page I can scan the text flow for headlines or keywords and skip to the part that I want to know about. I can’t do that with a Prezi that goes out of its way to hide information away in arbitrary places.
Prezis often look like mind maps that Mr. Magoo would draw under a microscope. Unlike headers in print, the size of the typeface gives the user no clue about information hierarchy. No indents tell us where a new line of argument may begin, no grid guides our uptake. Most of the time we don’t even have a clue about how far we have progressed in absorbing the content. This isn’t just bad architecture. I’d say that in the worst examples there is no structure at all. It’s a labyrinth of PowerPoint slides.
Now, I do not claim to have the definitive solution to the puzzle that this new technology poses. I did propose one possible framework in the past for structuring information on Prezi’s infinite canvas in a meaningful way, but I can imagine others that work just as well, if not better, at making the form of the content meaningful. Making the form have function. I’d say this calls for Information Architecture with a capital I&A, does it not?
In Peter’s defense: the title itself says that this version is only alpha 0.1, though. ↩︎
Text need not be a monosequential medium. Footnotes are an obvious example. Even in print and handwriting we have highly sophisticated information architecture that defies linearity. I shall revisit the theory of hypertext some other time, but in the meantime I highly recommend you ponder in awe its precursor, the information architecture masterpiece that is the Talmud. There you find centuries of dialogic discourse among Hebraic scholars condensed into multilinear typography. Yeah. That just happened. A thousand years ago. ↩︎
Well, what can I say: there is a lot to be learned from cinema that also applies to presentation design, even when things don’t move. Personally, I propose you also invest a little time in learning about animation if you are serious about presentation design. Nothing says “change of state” like a visual representation that actually changes state. Transformation. From here to there. The works.
As soon as time allows I’ll surely revisit the matter in more detail. Until then, the link points you to an article that contains a few really helpful hints to get you started.
You may remember that we are still waiting for visual.ly to open their lab and release the kraken. Erm, I meant their creation tool. Well, here’s a working alternative for you to check out. It’s free, too.
I’ve seen some of the results embedded on German news sites and they looked decent enough. The tool creates solid charts and graphs, not so much flashy infoposters, just so we’re clear. Have fun.
The scientific literature on politeness is dominated by contributions from a rather select subset of linguists. From Lakoff’s analysis of gender-mandated speech patterns (Lakoff 1973)1 to the de facto standard of politeness research put forward by Brown and Levinson (Brown & Levinson 1987)2, it is the scientific discourse built upon their work that has, by a large margin, the greatest impact in citations. Politeness research, it seems, can still mostly be attributed to the field of pragmatics and in some respects to that of sociolinguistics.3
Such a narrow range of influence is not without problems. Lack of sociocultural perspective notwithstanding, chances are that even within the linguistic domain the notion of politeness as defined in mainstream theories fails to account for the range of uses to which speakers put honorifics and other signifiers of politeness. It should also be noted that by subsuming certain speech patterns and linguistic choices under a framework of politeness, one already discards potential alternative approaches to explaining the same phenomena.
“You can take any language. Take English for example…”—This sentence sums up so much of what is wrong in social and cognitive sciences or even life in general, it has become a standard joke among non-native English speaking linguists.
Knowledge Representation, Mental Models and Barsalou Frames
It’s a regrettable but undeniable fact of life that specialist knowledge hardly ever disseminates from one field to another. Innovation largely comes from those rare occasions when it does.
Recently I was researching some psychological concepts that are used in interaction design and I noticed how mental models are represented in flow charts1 to explain the way users understand a given interface. Some of these charts were eerily familiar, yet I could find no mention of another psychological concept that I feel would amend the theory of mental models:
Frames, in Barsalou’s sense, are recursive attribute-value structures. While frames can be used to implement individual and sortal concepts, their attributes can themselves be analysed as recursively interrelated functional concepts. Given that frames are the basic format of concept formation in cognition, attributes and frames might have neural correlates in our brain.
Frames are a natural linguistic and conceptual format for the representation of complex ontologies that embody substance-accidence and part-whole relations. Of particular interest is the relation of frames to complex representational formats such as conceptual spaces and mental models.2
There you have it. Scientists looking for the relation between their specialist knowledge of frames and complex representational formats such as mental models.
I just thought I’d put it out there. Perhaps you know an IxD person who would be interested in this kind of research.3
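Barsalou’s frames are a cognitive-science formalism, not code, but the “recursive attribute-value structure” from the quote above can be sketched with nested dictionaries. The “car” frame, its attributes, and the helper function here are all invented for illustration:

```python
# A toy frame: each attribute's value may itself be a frame (a dict),
# which is what makes the structure recursive in Barsalou's sense.
car = {
    "engine": {
        "fuel": "petrol",
        "cylinders": 4,
    },
    "wheel": {
        "material": "alloy",
        "diameter_inch": 17,
    },
}

def attribute_paths(frame, prefix=()):
    """Collect every attribute path in a frame, recursing into sub-frames."""
    paths = []
    for attr, value in frame.items():
        path = prefix + (attr,)
        paths.append(path)
        if isinstance(value, dict):  # the value is itself a frame
            paths.extend(attribute_paths(value, path))
    return paths

paths = attribute_paths(car)
print(paths)  # ('engine',), ('engine', 'fuel'), ... down to ('wheel', 'diameter_inch')
```

The point of the sketch is merely that attributes can be analysed as interrelated structures in their own right, which is the property the FFF researchers relate to mental models.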
The examples were mostly from papers and slides used at conferences, but a quick introduction to mental models for designers can be found in this article: UXmag↩︎
The quote is lifted from an invitation to a 2007 conference organized by a cognitive science research group at the Heinrich Heine University. About the FFF↩︎
If she doesn’t mind digging through scientific articles, this paper may give a good hint as to why the theory is relevant for design. ↩︎
There is a well-established psychological phenomenon called priming that has all sorts of effects on communication. It can both help and hinder your interpretation of meaning, through facilitation or interference. Now, in fairness I should tell you that we are talking about split seconds when it comes to measuring the effect. Nonetheless the phenomenon shows us how our perception of things, our understanding of their meaning or function, is influenced by their form.
The relation between form and function is something designers obsess over. Introducing an audience or users into the equation adds a whole new dimension to the way form and function interact. But it is this three-dimensional matrix in which a designer really operates. We must never forget to ask ourselves how the perception of our design impacts its meaning. That may sound trivial to interface designers or other communicators, but all too often do I see design that clearly has no appreciation of human perception.
Perception matters. Don’t take my word for it. You can experience interference for yourself in this little video:
If you want to learn a bit more about how our brains are shaping the way we see the world I recommend you take a look at the wonderful Christmas Lectures of 2011. The lectures are an annual event of the Royal Institution of Great Britain to promote science to the public. Besides providing a great introduction to the world of science they also set a standard for presentation.
“Hyperlinks direct individuals through complex webs of information and ultimately dictate the type of information an individual is able to access.”—A special issue of the Journal of Computer Mediated Communication examines the role of hyperlinks in shaping and guiding our information society. Through, you know, science.
President Bush congratulating the winner of the 2006 Japanese Richard-Gere-lookalike contest, Prime Minister Koizumi. (Photo courtesy of the US government)
Did it ever strike you as odd how appropriating clothing to certain contexts is called dress code? I don’t intend to dwell on folk etymology for too long, nonetheless the underlying metaphor may serve to remind you that culture is an artifact of human interaction that manifests through a set of symbolic entities. Symbols encode meaning.
All the time we make suppositions and guesses about other people: how they behave, how they dress, how they speak. Many of those guesses are subconscious, routed through an opaque set of filters that enable us to process millions of sensory inputs per second. Our minds feature a repository of learning experiences turned into processing algorithms. It is because of this property that the sensory inputs we face are subconsciously interpreted as signals of a code.
Sometimes being called out for our own prejudices makes the structure of these filters visible. A code works only as long as both the sender and the recipient of a message understand its symbols to mean the same thing. With shared experience comes a set of overlapping world views and thus shared code.
What if we don’t share a code?
Exposing yourself to a new environment is frightening. We can tell that our grasp of the code may be insufficient to understand what is required of us in unfamiliar settings. Taking your first job interview is one such example. How do I dress, how do I speak, how do I behave to meet the expectations of my seniors and, perhaps more importantly, how can I make sure I understand what they are signaling to me?
Then there are the cases where we should be uncertain, but fail to be. When we feel on familiar ground and rely on the codes that served us so well in the past. This is when we fail to communicate.
In intercultural communication we might be inclined to second-guess our presuppositions a little more. We know not to trust our intuition, because it is based on insufficient experience. But even if we are alerted to the fact that cultures differ and try to mind it when we communicate across cultural boundaries, it is really hard to shake the false affordance of cultural code.
Let me present the following example:
Former Japanese Prime Minister Koizumi, according to current self-help books on dress codes in the US, would have a hard time landing a job in some industries favoring a conservative appearance. His curly mane does not fit with what etiquette guides tell the aspiring male bank teller to sport for his job interview. Koizumi’s flowing locks, rather, have routinely been taken as a symbol of his somewhat hippie-esque nature—even if the attribution of long hair is changing in the US and some conservatives now dare letting their hair down.
But Koizumi is no hippie. The irony of his looks is that his haircut is indeed a symbol of his world view. Only in a Japanese context the code is reversed. After all, the samurai class wore long hair, and it was the western-oriented liberals of the Meiji Restoration who forced the men of the former ruling class to cut off their chonmage.
Thus, to Japanese conservatives of the older generation a haircut that covers the ears means something entirely different from what it means to American conservatives of the same age. It says “conservative” in a particular cultural code.
Koizumi also happens to be a proclaimed fan of The King. However, in contrast to what his appearance might have you believe, he has no affinity to hippie culture. At all. As a political statement Koizumi more than once visited a revisionist ceremony honoring Japanese WW2 soldiers, including war criminals. (Photo courtesy of the US government)
On Establishing Meaning and Learning from Other Disciplines
Serendipity is a wonderful thing. Once in a while, branching out your network to include contacts on the fringe who provide insight beyond your core expertise truly pays huge dividends. I recommend you do the same, if you are at all interested in updating or amending the concepts you are already all too familiar with.
Anyway, I just happened upon a gem of a post that illustrates why I consider game designers a great resource for other creative professions to learn from. Incidentally it featured one of the key tenets of semiotics, if only through a solid understanding of character design. This is where experience and innovation converge: taking apart what works for others to adapt it for your own means.
Whatever you design, the following principle is true for all professions:
Do yourself a favor and treat yourself to a wonderful read on how style and substance feature in character design. Best of all, J. Shea even touches on the historical examples after which many a game models its characters and gives ethnological explanations of how those real-world designs came to be. Just click on the quote. Maybe you’ll even find some insight of your own to take away from it.
Apple shuts out Snow Leopard users from its new content creation tool via a check of the system version in the properties of the app. That means getting iBooks Author to run on SL is as easy as editing the system version requirements via the property list editor (the default on my system) or a text editor. Good to know that the Leopard can indeed change its spots.
First, you will have to make your system purport to be a Lion by editing the system version moniker. That little act of camouflage will allow you to install iBooks Author from the App Store. Having restored your system version file, you then need to make the app play nice with Snow Leopard and change the system requirements in its package contents back to those of your true colors.
I had some deviation from the process that digital tweaker describes because on my system I don’t usually enable admin rights. Moreover I found out that starting the app before editing its version file results in the app refusing to start afterwards. You have blown your camouflage too early, so to speak. Not to worry, trashing the stubborn app and installing it again, this time more carefully following the process, worked out fine.
So here is how getting the app to run worked for me:
Update your system so you have iTunes 10.5.3 and Snow Leopard 10.6.8 running
Go to HD>System>Library>CoreServices in the Finder and copy the file SystemVersion.plist to your desktop
Make a duplicate of the file on your desktop
Double click the file on your desktop not named “copy” and in the property list editor replace the two instances of 10.6.8 with 10.7.2 (don’t worry, there are just a few lines to see after expanding the top entry: you’ll find your way). Save. Yes, overwrite.
Drop the thusly edited file into the CoreServices folder, authenticate and replace the original file.
Install iBooks Author from the app store.
Edit the name of the duplicate system file on your desktop to say SystemVersion.plist like the original and drop it into the CoreServices folder to restore your system settings. Authenticate.
In the Finder go to your Applications folder and right-click the newly installed iBooks Author app. Select “show package contents.”
Copy the file “Info.plist” from the Contents folder of the app to your desktop.
Double click the desktop copy of the file and replace the value of 10.7.2 in the line “LSMinimumSystemVersion” with that of 10.6.8. Save.
Drop the edited copy back into the contents folder of the app, authenticate and rejoice. Your app should now run on Snow Leopard upon a click from the dock.
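For the curious, the version swap in the steps above can be dry-run with Python’s standard plistlib module. This is a sketch on a stand-in file with made-up sample values, never the live file in CoreServices; back up first and proceed at your own risk:

```python
import os
import plistlib
import tempfile

def swap_version(path, old, new):
    """Replace every top-level string value equal to `old` with `new` in a plist."""
    with open(path, "rb") as f:
        data = plistlib.load(f)
    changed = {k: (new if v == old else v) for k, v in data.items()}
    with open(path, "wb") as f:
        plistlib.dump(changed, f)
    return changed

# A stand-in for SystemVersion.plist -- sample values for illustration only.
sample = {
    "ProductName": "Mac OS X",
    "ProductVersion": "10.6.8",
    "ProductUserVisibleVersion": "10.6.8",
}
tmp = os.path.join(tempfile.mkdtemp(), "SystemVersion.plist")
with open(tmp, "wb") as f:
    plistlib.dump(sample, f)

# Swap both 10.6.8 entries for 10.7.2, as in the camouflage step.
result = swap_version(tmp, "10.6.8", "10.7.2")
print(result["ProductVersion"])  # → 10.7.2
```

The reverse edit (10.7.2 back to 10.6.8 in the app’s Info.plist) is the same call with the arguments swapped.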
If in doubt, better ask experts! You tango with the Leopard at your own risk! The link points you to a source where more knowledgeable people than I could help:
Some time ago I pondered the way our tools shape the way we approach communication. As a case in point I compared the service that the web sensation visual.ly offers to that of PowerPoint. One serves to create infographics, the other to build presentation slides. Both put their users into a predetermined mindset of how to approach their task, through templates that set a standard. As many a bullet-point survivor may tell you, these standards are not always a good thing.
The people at visual.ly are determined to raise the standards of visualization, I am confident in saying. Having read my post, they were kind enough to invite me to participate in promoting useful visualization principles on their blog.
What I have tried to do is distill some of the recent arguments from the infoviz community and put them into a structure that facilitates discussing what constitutes a good infographic in the first place. I really hope that my contribution may inspire some of you to take another look at the tools at your disposal.
A lot of sources, arguments and studies have been cut from the article to condense it to the bare essentials. I’d be more than happy to take up any loose ends in the comments.
One thing you as a presenter are probably looking to achieve is to make a lasting impression on your audience. Give them something meaningful to take away from the occasion.
I believe I found the expression of Something They’ll Always Remember through Nancy Duarte, who proposes you incorporate a moment of memorable surprise into your speech. This striking moment may serve as a focal point for watercooler talk and become a hook by which people will always be able to refer to your presentation.
Now, at science conferences many talks are rather dull, with scientists being quite preoccupied with gathering the data, rather than honing their skills of presenting them live. Yet, the greatest impact of a S.T.A.R. moment that I know dates back to one such conference, albeit somewhat inadvertently as far as I can gather from the records. It has, in effect, entered the history books of scientific communication in general and urology in particular.
What to take away from the epochal Brindley Lecture? Breaching social norms can very much create a moment the audience will always remember. Planning for an audience reaction and carefully considering their tolerance for a contextual breach of norms, however, may help you decide where the element of surprise turns into an element of shock. There is such a thing as too much of a good thing, even with S.T.A.R.s.
The link will lead you to a memoir of the events that transpired on that fateful night in Las Vegas (no less!), published in the British Journal of Urology. Consider yourself warned…
Let the Games Begin: What I Learned About Gamification from the Holidays
The avid reader may remember that I have previously made public my disdain for the generic title of “what I learned about X through Y.” Now I shall need to make an exception proving the rule. This winter I really did realize something that inspired me to start a series about how games relate to communication.
The thing I learned first hand through painful experience when meeting with my family is just how bad of a board game Monopoly really is. Now, to put that into perspective you may need to either be somewhat of a board game geek yourself or at least have an appreciation of the extent to which Germans love their board games. They do. So much so that a whole genre that places emphasis on well balanced design is actually referred to as German Board Games. They are also called designer games for the cult following that some of their auteurs manage to inspire.
Let me explain why I suffered through many a session of Monopoly this winter. You probably should know that my brother and I are highly competitive. I love to beat him. Oh, how I love to crush him…
Not to get carried away, let’s just say my family has always enjoyed playing games, but in one of my many rare lapses of judgement I forgot to bring any to the hut in the Alps we went to this winter. The place was soon engulfed in snow. There was too much snow to ski, if you can believe it. And the only board game around was Monopoly.
Against my better judgement I asked that we play a round or two. Boy, did I learn the hard way why Monopoly is not called a designer game. Thus, in truth, “many a session” was really only a handful, until my brother and I decided that the mechanics of the game were not crafted in a way that accommodated either of us at the table, much less both of us.
To be fair to the game of Monopoly, I would like to make the case that it actually does excel in one area of game design. In the posts to come I will portray the elements of game design in more detail, but you may already grasp a whole lot about what makes or breaks a game design when you look at the successful element of Monopoly: it can teach you a life lesson. It really does capture how futile and frustrating the struggle of the disenfranchised is in the face of an oligarchy accumulating power.
As a player you are completely taken by the narrative of capitalism and soon understand how unfair life can be. Once the dice roll in someone else’s favor at the beginning of the game, all that is left for you is to watch them take what little you had to start with, until after hours of humiliation you finally succumb to inevitable bankruptcy. Not even the dumb luck my brother has with dice, nor his acumen in exploiting favorable strategies, can make up for odds stacked so ridiculously one-sided early in the game.
Hence, in terms of storytelling, Monopoly does a great job to model an unpleasant part of the world for the players to experience. It’s just that for the two or three players watching a monopoly unfold at their expense the game is no fun at all.
Now that you have endured my lack of updates over the holidays without much complaining I’ll start things off with both a new look and a quick outlook of the things to come. For one thing I have another freebie in the works which is scheduled to complete a series of posts. Another series of essays will probably take longer to complete than the first and I don’t even have a carrot to dangle in front of you. I just hope you bear with me regardless, as I incorporate these series into my posting schedule of the usual assortment of communication related musings, insights, inspirations, rants and experiments.
Incidentally the influx of text heavy posts swayed me to switch my color scheme. Now reading those verbose essays of mine in your browser should be a bit easier on the eyes. Maybe I’ll make more thorough cosmetic adjustments in the future but for now this will have to do.
Look for the tags “gamification” and “politeness” at the bottom of my posts if you wish to follow either series. The former will provide a practical perspective on how I balance design constraints and user incentives while the latter will be a theory heavy exploration of current research from anthropology and sociolinguistics. You have been warned.
When I write about “gamification” I will actually be discussing the playful interaction of human beings in, you know, games rather than marketing buzz, but I’ll be damned if I passed up the opportunity to spice up my search engine rankings for such a sexy keyword. Seriously though, games are a great window into the inner workings of communication. After all we are looking at the way humans behave in an artificial environment under a set of arbitrary rules, both of which are created solely through tools of communication. I will take a hands-on approach and show you how I go about planning and executing a design. At times managing user incentives may look like applied psychology, which should not come as a surprise because that’s what it is to a great extent. But overall this series is going to be fun.
Reversing the “all-work-and-no-play” scenario the politeness essays are going to dig deep into scientific discourse. I will argue the case that the phenomenon of politeness is a focal point for us to study the meaning of human interaction. How do norms and social rules come into practice? When and why do we follow these rules or choose not to? How do we interpret the signs that codify conformity to normative behavior? Be prepared to be baffled at times when I go into the intricacies of Japanese honorifics or when I analyze the efficacy of swearing. The least you can take away from this series is how to be scientifically certified rude.
There you have it. I’m back to work. And I wish you all a Happy New Year!
If I have not pointed you to excelcharts.com before the fault clearly lies with me. Here’s an example of what Jorge Camoes can do: put a smile on your face when you look to inform yourself about excel charts and data crunching. Magical? Maybe. Definitely witty, certainly informative and quite positively worth your time if you care about visualization at all.
Steve Silberman of the NeuroTribes blog has put together a splendid post to feast your eyes and expand your mind about how the way we interact with computers came to be. Thanks to the sketchbook of interface pioneer Susan Kare you can see for yourself how working around obstacles makes for great design. If it were not for the human touch she gave to computing who knows if visual interfaces had taken off the way they did. There’s pixel art in it as well. What are you waiting for?
What Skeuomorphism Can Tell Us About Design Thinking (and Apple)
Skeuomorphism is one of those concepts that occasionally makes for great small talk. Admittedly it only works with a rather select audience. But when it does work you not only have the pleasure of overcoming a hurdle of three consecutive vowels, you also have a topic where design minded folks can channel both their inner nerd and their disdain for the mainstream. As a conversation topic it’s pure, unadulterated hipster gold.
You can witness this property of the word in recent online debates, where skeuomorphism gets flung around like an insult sometimes. In posts and comments that criticize the interface choices Apple has been pushing to its customers the word pops up a lot. Notably James Higgs and Oliver Reichenstein (of iA fame) have criticized Apple for their sentimental ornamentation. The iCal app has gained notoriety to the point of being featured as an example in the Wikipedia entry.
The whole debate over Apple choosing a design that is so controversial among designers (see also Kaishin Lemeden) is why I take skeuomorphism to be an excellent example of how informed choice is the basis of any design process. The better informed, the better the choices, the more likely a design is to succeed.
Skeuomorphism in its essence is a design cue or pattern that is nonessential to the functionality of a design but reminiscent of a former design, where it originally was essential. Carrying over patterns from earlier designs for ornamental purposes has been around at least since the ancient Greeks and their contemporaries repurposed patterns from silverware onto their pottery.
Today you can find an everyday example of fully nonfunctional ornamentation residue in the abundance of fake comb joints you see at a popular Swedish furniture chain. What used to be a method to interlock wooden boards remains as a faux impression of sturdiness in shelves that are cheaply glued together.
It is precisely because of its association with sentimental ornamentation that skeuomorphism can be a well thought out choice. But it can also be detrimental if you don’t apply the concept with purpose.
What is good design anyway?
Let’s be frank about this: Design is a trade where concepts and practices from a plethora of disciplines converge. No wonder that even accomplished designers sometimes fail to understand the perspective of a colleague. With cross-disciplinary inputs being as diverse as they are, no one can be an expert of everything in the field.
You may apply concepts from rhetoric and applied chemistry alike, balance task demand, categorical perception and gestalt theory with focus group studies or a corporate design beyond your control, and more often than not you do it all under the vengeful eye of a controller constraining your time and budget. If in the face of all this your starting point for a design critique is its aesthetic value… well, you’re probably doing it wrong.
I don’t mean to belittle the importance of beauty. Aesthetic value is an important part of design, it can even be functionally motivated to a certain degree. Beautiful things are actually easier to use, so if you can make positive affect (check the science yourself) work in your favor, by all means, make it pretty! And if you want to communicate cheap in an ad catalogue get rid of all that fancy white space, use lots of callouts in a hideous red and yellow color scheme and make that logo bigger!
But aesthetic appeal is only one element with the efficacy to help you create successful designs. The key takeaway from a design is: it should work. That’s what differentiates design from the idea of l’art pour l’art. There is an objective at the start of your creative process, and you solve problems and manage constraints along the way to create something that meets that objective.
Defining the creative process in such a way entails that there is no perfect design. Once you are managing constraints that are in conflict with each other logic dictates that no solution can fully satisfy all constraints. You are forced to make tradeoffs. Looking at it this way, good design stems from making sound choices and finding a solution that is optimal rather than perfect. It meets your objective as much as possible, given the circumstances.
Does Apple design (just) work?
I claim no insight to the design process that takes place behind closed doors in Cupertino. Yet I find it reasonable to assume that the most important objective for a company is: their product sells. To that end Apple designs hardware and software in a way that bundled together is so appealing as to warrant the price tag.
Part of their design process is resolving the problem of turning a canvas on which pixels can take any shape or form into a facilitator of an immersive experience for the user. What they are selling, in effect, is not so much the hardware and software, but an experience of being empowered to complete tasks that the user would have found much harder to accomplish if not for the help of the Apple bundle. Even their marketing claim alludes to the experience.
Creating an immersive experience requires eliminating the salience (or prominence) of the fact that the user is interacting with a complex machine in the first place. Hence the minimalist aesthetic of the hardware: its understatement is quite functional, enhancing the contrast of anything the canvas displays. Thus we can fully motivate that design choice functionally.
What contrasts with the understated appeal of the hardware are the pixels on the screen. The apps themselves, especially on the touch devices, are quite exuberant in their aesthetic. Some even call it kitsch.
I think it is in the development of iOS that the immersive experience was fully developed into the driving concept of Apple software design. Tactile interaction took away another layer of abstraction in the mental model users employ to navigate the virtual realm of their software. Instead of using physical interfaces like the mouse or keyboard to send orders into the machine, the screen now all but fades away when users manipulate the pixels directly by touch. Now the software must uphold the illusion of direct interaction so as not to upset that newly formed mental model. This is precisely what Apple aims to do, as stated in their iOS Human Interface Guidelines:
When appropriate, add a realistic, physical dimension to your application. Often, the more true to life your application looks and behaves, the easier it is for people to understand how it works and the more they enjoy using it.
Here the aesthetic appeal of skeuomorphism translates into a functional category of app design. The illusion of familiarity makes a pixelated rendition of a well-known object less intimidating, and thus more appealing to interact with than an abstract user interface, to a demographic that has lots of spending power but little experience with virtual environments (remember, Apple wants to sell products). Beyond that, alluding to familiar objects can be part of a usability concept when it employs visual metaphors that draw on the shared cultural memory of users.
Making skeuomorphism work
Bear in mind: there is no default aesthetic for virtual environments. Moreover, the physical world is still where we have the most experience. There we acquire concepts of cause and effect that we apply anew in each unfamiliar setting.
This is as much a blessing as a curse for skeuomorphic interface designs. If you leverage the preexisting expectations of cause and effect artfully, users will enjoy the affordance of the interface. However, when the skeuomorphic aesthetic writes cheques that your interface can’t cash, the resulting false affordance is a major disappointment.
Much like visual metaphors, skeuomorphic design is contingent on a shared cultural archive of forms and associations. What is considered luxurious in one culture may be perceived as tacky and cheap in another. Associations also change over time; some ornaments may even fade into complete obscurity. Do you think that a manual typewriter reference, the carriage return, will be understood to do what the return key on a keyboard does by people who grew up without typewriters? What appeal would a typewriter even hold for them? These are properties you need to account for when choosing a real-world reference.
Distinct visual features help distinguish apps from each other. In multitasking environments or swipe-based app switching, prominent visual cues to the identity of an app are a welcome source of orientation. Contrast is a key tenet of design: having salient features communicate swift feedback about a particular app state is a boon.
Thus, skeuomorphism done right can translate aesthetic and nostalgic appeal into a functional category. It can help differentiate interface environments at a quick glance. And it can leverage mental models to make an interface more intuitively accessible. You just have to weigh these benefits against the potential sacrifices you will have to make in terms of universal appeal, functionality or screen real estate.
The problem with iCal
I put it to you that Apple has received the most flak not for creating suboptimal design (as I defined it above) but rather for choosing an aesthetic that did not live up to the expectations of a small but vocal user base. Yet in light of how the recent OSX iteration has incorporated the iOS-esque faux leather iCal look, I think it is reasonable to assume that the design process behind the desktop OS has shifted in a way that does indeed produce less than optimal design.
There are some glaring usability issues on iOS alone, where two skeuomorphic apps employ different interface designs but use the same leather-bound paper metaphor. Why can’t I flip pages in iCal like I can in iBooks? Such an omission is more than false affordance; it is a serious consistency problem within the iOS experience alone.
Tradeoffs will have to be made when creating a unified look and feel for both the touch and the desktop interface is prioritized. However, if streamlining the operating systems pressures OSX software to adopt models that were tailor-made for the touch system, some of those models are bound to fall short of what desktop users have come to expect, both in functionality and aesthetics.
The question whether it is even possible to force two different models of interaction into one design framework notwithstanding: alienating part of your user base through aesthetic and usability issues without making up for it in some other way does not seem a smart choice to me. Then again I don’t know who Apple expects to buy their experience packages in the future, nor what their ulterior objective in the ongoing transition is. But I too find the iCal app quite ugly.
There’s a peculiar thing about attention: it is a limited resource. So many places are vying for that attention of ours, so much information and so little time. And yet the sense of being overwhelmed by distraction is not a new phenomenon. Our brains have evolved to focus on relevant information through millennia of natural selection. Even if the result is far from perfect, cognitively we cope today as we always have.
The limits of our attention, however, are not only located in the realm of cognition. I would like for you to give a moment’s thought to the intricate complexities of communication that form us as a social species. And I’ll start things off by telling you the name of a man who died twice.
Carlo Giuliani. 10 years and a few months ago Carlo died while he spoke out for something he believed in. He was shot in Italy and by the nature of his tragic death his name became the rallying cry for a movement in its infancy.
Youthful revolt questioning the way the world was governed garnered public attention. Anti-globalisation was a serious contender to become the next big thing; political parties in Europe and the Americas started to form around the idea of reining in the rising power of the international finance sector. And then, just a few months after Carlo was shot, two airplanes crashed into the very symbol of capitalism. With the lives of the people in New York the terrorists took away the voice of a generation. Carlo’s name could no longer be heard.
The whole world was watching the US and the anguish, the shock and frustration of her people. Fixated on the aftermath we had no ears for anything else that might be going on. Not collectively at least. Communication has a lot to do with power. And the most powerful military force and cultural hegemon of the world commanded attention. We could not look away. The war on terror framed our world as the cold war framed the world before.
But for all those watching from afar it was only the perception of the world that had changed. The problems remained the same. Only this time the voices of those who wanted to call attention to these problems no longer got through. In an attention economy there are only so many items that will be elevated into a status of public debate. Society, much like the brain of the individual, has filters. These filters are the media.
Only what passes the threshold of being pushed through those gates, media outlets that are being monitored by enough people to create the context of a collective conversation, becomes an issue that is on the agenda. Having outsourced our policy making to professionals it is that agenda that is being worked on by politicians.
Imagine for a moment that history turned out differently. Imagine, not that the planes did not crash, nor how those in power reacted, but rather that the filters had allowed other topics to rise to the level of public discourse. Imagine that the headlines had been different. That people had turned to other sources to find a common narrative of the world. That the agenda had continued to be filled with the items we were about to put on it a decade ago.
Today many who used to get the better end of the globalization bargain are for the first time experiencing the sting of systemic inequality that engulfs the world. For the first time they feel utterly powerless and robbed of their agency. This is not a new feeling for others. A new name was heard in their rallying cry. Mohammed Bouazizi set himself alight and the voices that demanded change could no longer be silenced. In the European Union the demands of a young generation are strikingly similar albeit facing much less dangerous opposition. There is an underlying sense of insurrection against a system that denies agency to its subjects connecting these incidents. And with new filters, new media outlets, there are now more places where this sense may be turned into a shared identity.
Today we find ourselves asking the same questions that were being asked roughly ten years ago. Not just the 99%. The youth of the world is demanding an answer. Always has, always will. It’s just that we did not pay attention.
While I’ll be taking lots of work with me, the next two weeks will at least see me getting a bit of a tan on that pasty complexion of mine. The following illustration should give you a rough estimate. Incidentally, I’m not only looking to increase the relative size of my head but will need to do some heavy embed lifting with tumblr in upcoming projects. So I thought I’d kill several birds with one giant rock and give away a free embedded do-it-yourself custom made interactive waterski postcard to test the waters.
Now, I guess you know how to use a pair of scissors and your fingers to cut out what needs cutting, fold what needs folding and give my gorgeous avatar a befitting pair of water skis. This being an SVG means that you should be able to download the source file and add an even better looking head to my body. I’m sorry if I’m looking slightly out of shape, but I trust you can make amends. Just edit what you feel like editing and upon printing (thick glossy photo paper for best results) you can show off the leg-like qualities of those fingers of yours and jump away.
I’m still looking for elegant solutions to reliably embed SVG. Here I used the <embed> tag with a reference to the Adobe SVG viewer. If you can think of more robust solutions please let me know. The file is hosted on Amazon S3 because tumblr does not allow me to host vector graphics. Bad tumblr!
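For what it’s worth, the most robust plugin-free approach I know of is the <object> element with a raster fallback, which degrades gracefully in browsers that cannot render SVG. A minimal sketch; the URLs and dimensions are placeholders, not the actual file:

```html
<!-- Sketch of a plugin-free SVG embed with a raster fallback.
     Both URLs and the dimensions are placeholders. -->
<object data="https://s3.amazonaws.com/example-bucket/postcard.svg"
        type="image/svg+xml" width="600" height="420">
  <!-- Shown only by browsers that cannot render the SVG above -->
  <img src="https://s3.amazonaws.com/example-bucket/postcard.png"
       alt="Do-it-yourself waterski postcard" width="600" height="420">
</object>
```

If that still misbehaves inside tumblr’s template, a plain <img> pointing straight at the .svg file is an even simpler option in newer browsers, though it sacrifices any interactive bits.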
If you don’t want to edit SVG files but care for a custom-made postcard nonetheless, I took the liberty of uploading an image where you’ll have to add a head the old-fashioned way. And should you be totally crazy about the idea of having an interactive waterski experience but want one with a more feminine touch, drop me a line and I’ll see what I can do.
Neurobabble is the marketeers’ skewed misappropriation of cognitive science and a pain in my neck. Some less busy day I intend to give you my take on this pest in thorough detail. In the meantime I suggest you read the scienceblog article on how the brain does not work. It popped up in my twitter timeline and I felt I had to give it some more exposure.
The computer metaphor for the brain seems just too appealing for the Silicon Valley technovangelists to pass up, which is why you should be very, very sceptical about claims of artificial intelligence rooted in their culture. At least now you may know why.
Much more than just a rant against Google’s name policy: when Charlie counts the ways in which assumptions fail, you can learn a lot about communication. Always question your ethnocentricity, as you might not even be aware of your own biases. Try to find out how your audience perceives what you are showing them. Neglecting to account for their needs and wants, and for their history and culture, will come back to bite you.
As an aside to the privacy debate regarding “real names”: what I don’t understand in all this Google hubbub is the amount of people who fear for their Gmail accounts. Who in their right mind would give away access to the most robust of all internet protocols (yes, when zombies eventually arrive the last Internet thingy that will probably still work is your mail) to a shady third party that is accountable for nothing while freely admitting to scouring mails for data? Don’t be evil my zombie outrunning glutes!
Oh, I’m on G+ alright, if only to see what it’s all about. They may take my online life, but they will never take my freedoooo… Well, not my mail at least.
I understand the sentiment of tech people who see old ideas revisited and touted as new - but there is one thing you should never forget: There is merit in revisiting old ideas from time to time.
The concept of the hyperlink, for example, is much older than the invention of the world wide web. Without adapting indexing techniques of millennia gone by to new environments and technologies, however, there would not be a world wide web. And yet even today, when we apply links to generate a ubiquitous infrastructure for accessing content on the internet, we don’t utilize their full potential. Bidirectional links, anyone?
So I suggest you approach these new takes on interactive movies and nonlinear storytelling with an open mind. After all, technology has changed and recent innovation might allow these old ideas to thrive where they failed before.
5 Things Binge Drinking With a One Legged Stripper Taught Me About Marketing
Nothing. Absolutely nothing.
He was a great sport, though. He taught me a lot about partying beyond the point of passing out. Perhaps I even learned something about not relying on first impressions. Alas, I remember nothing about that night. Binge drinking does that to you, I hear. Not that I ever drink. Was there even a one legged stripper?
You could already have known that there was nothing to be learned from the experience just by reading the title. It is a variation of one of those self-help internet tropes that piss me off. Everyone is a marketer these days. People are “optimizing” their titles to entice readers to click on them. But blindly following advice that is littered across get-readers-fast-blogs is bound to hasten the rise of the army of mindless thought leaders.
Taking a page from the book of life and adding a human touch to your teachings is probably a good idea to connect to your audience. True epiphany is greater still, yet hard to come by. Reversing the process and putting a generic backstory to your preconceived ideas just to follow a common it-works-to-attract-readers marketing scheme will eventually turn life lessons into a meme that ridicules genuine revelation.
Seriously, stop it. Already I am starting to avoid titles like that. All too often they feel like a scam. I don’t know how many of my readers have been disappointed by similar unsubstantiated claims of what-life-taught-me-about-living to the point of shying away from them, but I do know enough about communication to tell you that feeding your readers generic material is not a sustainable strategy. While we’re at it, unless you want to become a final boss in the army of brain dead thought leaders don’t create lists just for the sake of it either. Once a scheme gets too common it no longer works. Even if I follow one of these “optimized” titular links, just to satisfy my curiosity, I am already eager to dismiss your ideas. Are these really the clicks you are looking for?
Who would have thought a one legged stripper would be a marketing sage. If only I could remember…
Porting Content Across Distribution Channels: Multiscreen Strategies
Here’s why transmedia design is one of my favorite challenges in communication: I like it because it makes the choices you face in communication tangible. Things you usually don’t waste a thought on when you compose your message all of a sudden start to matter if you create content that needs to work in various media. Not only is this a great exercise to learn from. Judging from the way media are evolving into an interconnected ubiquitous ecosystem this is going to be more than an exciting media experiment. It will be something we need to work out to cope with the new media reality.
So when I came across this gem of a slide show on multiscreen strategies that doubles as a poster child for great slide design on slideshare I just had to share it. (via lab - a blog resource for innovative broadcasting applications in German)
“Ideas cause ideas and help evolve new ideas. They interact with each other and with other mental forces in the same brain, in neighboring brains, and thanks to global communication, in far distant, foreign brains.”—Neuropsychologist Roger Sperry, 1964 – one of history’s many cases for networked knowledge and combinatorial creativity (via curiositycounts)
“You’re reaching through a window, then putting your hands into a box, to perform your task.”—Matt Gemmell describes cognitive load and the advantages and disadvantages of native applications in Apps vs the Web. (via 9-bits)
Andy Rutledge almost nailed it. When he posted his unsolicited redesign of the New York Times he introduced his idea with the preface
Digital News is broken.
Then he tried to repair it with some alternative web design practices. And although he came close to success a couple of times, he failed. He was bound to. Digital News cannot be repaired by web design in its current state, because web design, in a sense, is part of the problem. Without saying as much, Khoi Vinh stated that
the argument that the redesign’s author makes is not quite so persuasive, mostly because it makes some rash assumptions, misses some critical realities and, perhaps worst of all, takes a somewhat inflammatory approach in criticizing the many people who work on the original site.
Actually, the rash assumptions are where I feel Andy hit closest to home. There is a deeper problem with the way information is relayed to customers. And there is a deeper problem with the role designers play in helping organizations communicate. Hence Khoi is right that there are some realities a designer will have to face. With them she usually faces the need to treat those realities as constraints bearing on the design process.1
Back to the specific problem of web design. What does a designer do? She manages design constraints. One of the more recent problems she has to deal with is serving the same content not just to various browsers, but devices with vastly different screen sizes. There are tablets now and smart phones and different resolutions and what have you.
All sorts of media queries and CSS wizardry can be used to create device-agnostic web design through content choreography. Trent Walton creates beautiful solutions when he orchestrates
the best experience possible at any screen size or resolution.
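For readers who haven’t seen content choreography in action, the core mechanism is a CSS media query that rearranges the same markup per viewport instead of serving different content. A minimal sketch; the class names and the 600px breakpoint are invented for illustration:

```html
<style>
  /* Content choreography sketch: one markup source, reordered per
     viewport. Class names and the breakpoint are illustrative. */
  .story { display: flex; flex-direction: column; }
  .story .artwork { order: 1; }
  .story .summary { order: 2; }
  .story .body    { order: 3; }

  @media (min-width: 600px) {
    /* On wider screens, promote the summary above the artwork
       without touching the HTML at all. */
    .story .summary { order: 1; }
    .story .artwork { order: 2; }
  }
</style>
```

The markup stays untouched; only the presentation layer decides what comes first on which screen.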
However, even these beautiful solutions don’t make up for yet another sad reality:
Content is a black box in web design.
More often than not, especially in Digital News, web designers literally create boxes into which content is poured. The content itself is almost completely opaque in the design process. Then they push those boxes around and perhaps add some embellishments to try and make the content more appealing, easier to digest, more accessible. And yet they do all this without ever touching the information hierarchy and flow of the content they are supposed to serve. Thus there is a severe limit to the effectiveness of their designs that is beyond their reach. Relying on web design principles alone omits the larger scope of transmedia communication.
This is where my perspective differs from that of designers and I freely admit to rash assumptions and the appeal to a fairy tale land of customers willing to reshape their organizational structures. In my view the content should be part of the design process. I don’t mean that every news item should get its own bespoke design, there is a limit to my naïveté, you know. Instead I propose that Digital News can and should create hooks in the content.
Content must be systematically modularized respective to its meaning. Only then can it be ported across various media.
To a certain extent web designers understand this when they ask for semantic code. Some hooks already exist to work with an information hierarchy in your design. A headline is marked up differently than body text (or had better be, lest you invoke the wrath of the hypertext markup demons). Accordingly, part of Andy’s proposal was to only serve headlines on mobile devices. But current HTML alone cannot solve the problem. The content itself must be structured in such a way that it can be served according to each medium’s specific constraints.
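To make the notion of hooks concrete, here is roughly what semantic markup already gives a designer to grab onto; only the elements are standard, the class names are invented for illustration:

```html
<!-- Existing hooks: a stylesheet or script can single out the
     headline and deck, e.g. to serve only them on small screens.
     Class names are invented for illustration. -->
<article>
  <header>
    <h1>Headline of the news item</h1>
    <p class="deck">Deck: a one-sentence teaser below the headline.</p>
  </header>
  <p class="lede">Lede: the who, what, when and where.</p>
  <p class="body-text">Remaining detail, in descending order of importance.</p>
</article>
```

This is as far as generic markup takes you; anything richer, like marking a module as background versus analysis, is up to each organization’s own conventions.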
Incidentally, structuring their content semantically is something that news outlets have naturally evolved to do. There are established news design conventions: apart from headlines, journalists routinely craft the lede and the deck, and they use frameworks like the inverted pyramid, among other things. Unfortunately these practices have never been systematically applied or augmented outside of the media they evolved in. This is part of the reason why traditional news reporting struggles in new media.
Here’s why web design alone is bound to fail: the web is only one such new medium, albeit a precarious one, because it provides an infrastructure for other media as well. Nonetheless it is paramount to understand that better code is not the answer. HTML really does not help you structure your information flow in a video format, for example.
To illustrate how you need to structure your content semantically to prepare it for various media I’ll need to explain the two dimensions in which media operate. The thing about media is that they not only transport, but also shape your message. But at least there is a method to the way they do it, so we can use this to our advantage.
The first dimension is that of functional constraints. This seems quite obvious: visual information is useless in a medium like radio. But it entails a great deal of subsequent decisions you will have to make about what kind of information you wish to convey, as much of that information you will have to encode separately. Take the tone of voice, from which you may deduce whether someone is being serious when he makes a dismissive comment. Unless you tell your readers about his facial expression and his markedly pitched voice, crucial information needed to correctly decode the meaning of what was said is missing in writing.
The second dimension is that of social conventions and recipient expectations that frame content in a medium. You perceive the same information differently if it is presented to you by different sources. Picture Glenn Beck on Fox News calling for the extermination of what he claims to be a cancer of society. Picture the millions of viewers believing his every word, buying into the hatred and fear. Now imagine that the very same words about a worldwide conspiracy and sinister forces working to rob you of your livelihood were broadcast on The Onion. Let’s face it: were it not for the framing property of their platform, the message of many pundits would long have crossed the border into satire.
We will need to address both dimensions of media because they both shape the message we send through them. News organizations are but one example of where serving the same content across different media does not equal serving the same message. And yet they are in a position where an innovative content architecture would be the most visible and hence have the greatest impact on user experience.
We could, for example, repurpose distinct modules for various adaptations of the same story or news item through semantic structuring of content. The creation process for an item would then become a collaborative storyboarding process. This storyboard would be shared by different editors, videographers, designers and illustrators, who could pick the elements they need for their own creations.
More importantly for our purposes of solving the black box problem: out of the storyboard structure we could automatically serve our content to different distribution channels. The functional and social dimensions of each medium would dictate the way the modules are arranged across devices and platforms. Popularity, for example, contrary to what Andy suggested, is actually a highly relevant element of a news item. All of a sudden we would have a semantic hook into the boxes.
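What such hooks could look like in practice, sketched as hypothetical data- attributes. Nothing here is an established standard; every attribute name and value is invented for illustration:

```html
<!-- Hypothetical sketch: each module declares its role and priority
     so a channel (mobile, feed, video storyboard) can pick what it
     needs. All data- attribute names are invented. -->
<article data-story="example-item">
  <section data-module="headline"   data-priority="1">The headline</section>
  <section data-module="lede"       data-priority="2">The lede paragraph.</section>
  <section data-module="popularity" data-priority="3">A share count or ranking.</section>
  <section data-module="background" data-priority="4">Long-form background.</section>
</article>
```

A mobile channel might serve only priorities 1 and 2, while a video storyboard pulls the background module for narration. The point is that the selection is driven by meaning, not by pushing opaque boxes around.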
Sure, a custom made content architecture is a big investment. Most likely it amounts to a proprietary solution that only works for the specific conditions within the content creation structure of one particular organization. But content architecture is only meant to address the specific constraints of content creation for that organization in the first place. For them it may solve several problems that go well beyond a nice looking page on every device. It could streamline internal information flow as well as the output of semantic information. No more stupid stock photo fails.
Outside of news organizations this is not utopia, mind you. Law enforcement has long been organizing heaps of information into meaningful structures for an ongoing case. The medical field has tools of its own to structure and manage information. Sometimes the professionals in the creative fields have yet to catch up in working out how to use new technologies to their advantage.
Why go through all the trouble of trying new applications? Because new media are still in the middle of a major change. Some of that change is due to new technologies augmenting and changing the functional dimensions of these new media. But the more substantial shift is that of public adoption of thusly augmented information channels.
The social conventions that evolve around media oftentimes are quite different from what early adopters or creators had in mind. Still, one change I feel comfortable predicting is that of the growing importance of transmedia communication. When communication technology comes in several forms and sizes, yet becomes more and more connected and ubiquitous, the content will have to follow suit.
There’s an interesting side story hidden in the depressing fact that as a designer you will have to adhere to constraints that are, in fact, political. Like you cannot remove links from the main page because some internal subgroup of the organization you are dealing with will raise hell for not getting the exposure of that link, however useless it is semantically. Internal power struggles trump user orientation. Now, just briefly imagine it were not a designer but rather a high profile consultancy doing the redesign. Stepping on people’s toes without consideration for operational constraints ironically is what business consultants are notorious for… ↩︎
There is a new promising service looking to alter the media landscape. Visual.ly tries to bring visualization to the masses. Content creation that was once the realm of specialists is now accessible to everyone, or so the service claims.
This prosumer trend of empowering consumers to become creators themselves is older than hardware stores and not restricted to design circles, but certain professions seemed to hold much less public appeal for DIY than others, crunching data being one of them. And yet time and time again new tools have changed the way we interact with information. Looking at visual.ly I was immediately reminded of one particular tool that forever changed the lives of office workers across all professions.
That tool is Powerpoint.
A comparison with the infamous software is not necessarily a coveted one, because Powerpoint has a bad reputation for obscuring rather than helping communication in many cases. For all the good things Powerpoint has done (more on that later), it did open a Pandora’s box of bullet point hell and was just too much for most users to handle responsibly.
The problem with Powerpoint is not that it is a bad tool. The problem is that, albeit created for the purpose of empowering users to create visually enriched presentations, the tool did nothing to promote thusly augmented communication. Quite the contrary.
Chances are you have suffered through more than one bullet-point-riddled snooze fest. Be it in college or the workplace, slides are everywhere and they are still proliferating at an accelerating pace. In a sense the tool has become the product. Send me the powerpoint by tomorrow, will you?
Today everyone can build slides. And everyone uses bullet points. The templates that were built into Powerpoint set the example after which the unsuspecting office worker builds her slides. These unfortunate examples of slide design have in turn entered the mental model of presentations for the average customer. Thus the popular practice of presentation design today is riddled with unhelpful and even detrimental standards.
Before Powerpoint and its templates there was no popular presentation design standard. There were people doodling away on chalk boards, overhead projectors and flip charts. And then there were a few specialist designers crafting slides by hand for expensive presentation equipment.
The world of information design and visualization currently resembles this divide between experts and consumers before Powerpoint. Granted, visualization is more salient as a popular concept than slide design was, but prosumers are still few and far between. And lest you wonder what all this entails for visual.ly, bear in mind the picture I painted of how Powerpoint proliferated its standards among newly empowered users.
Now, please look at the promotional infographic that visual.ly lets you create with just a handful of mouse clicks and keyboard entries. How much information is encoded in these 500x2103 pixels?
The assertion that my tweets bear zero interestingness stings, but I suspect that there was a technical problem with the twitter API to resolve my data. I do use twitter several times a day and tweet about communication resources, cognitive science and the occasional pop culture reference. However, visual.ly’s failure to recognize me as the bestest twitterer of them all is not my concern.
I do take issue with the fact that this infographic is the standard they are setting. With all respect to the talented designers at visual.ly, the terrible, terrible signal-to-noise ratio of the piece goes against everything I am trying to accomplish as a communication architect. How much information about me did you gather from this infographic? Did any of the embellishments and design choices help you grasp the information, or at least make it easier to digest?
This is a far cry from a helpful visualization. Heck, it’s almost a poster child for everything that is wrong with lazy, uninformed information design and it is promoted as the example for do-it-yourself visualization? Are you kidding me?
Contrary to what my vitriol might have you think, I don’t actually mean to bash the folks at visual.ly but rather hope to take part in a debate about how to improve communication standards. The toxic influence of bad standards released into the public, as demonstrated by the Powerpoint example, is something I would like to help prevent from happening again. And I do think that visual.ly can play a decisive role in this.
We can see how public perception of visualizations is shifting now that web poster art has turned into the latest hype to generate clicks. While users get to see much more visual display of information (much of it of a fast food standard, mind you) they don’t get to learn what constitutes good practice or what tools are needed to create good infographics.
The outline of a harmful mental model is already showing. Venture Beat quotes the co-founder of visual.ly:
Please take a gander at this wonderful series on essential visualization resources and try to find Photoshop. Infographics are not about pixel manipulation, and Photoshop is not even a great tool for creating artwork from scratch. And yet, in the public eye infographics are something a designer creates with Photoshop. Data is conspicuously absent.
Visual.ly could change all that. There are some great things about visual.ly that are likely to benefit the world of infographic design. One is that it transforms the medium of infographics by giving it a platform where peers, amateurs and professionals alike, share their work and thus their ideas. Think of what slideshare did to the public perception of presentations.
This is perhaps the best thing Powerpoint has done: it has propelled the medium of presentations into popularity and paved the way for other services to build upon it. Powerpoint has become a great tool for the skilled user to create amazing content, both to accompany speeches and to be used in exaptations of Powerpoint, like poster design in academia or the spin-off medium of online slide shows.
The one thing we must make sure of is that users are empowered to create meaningful content. To do so we must protect them from mental models that would limit their design choices to bad eye candy. If the folks at visual.ly are with me on this, their platform could be a boon to educate and inspire, while providing a unique tool that helps users create their own visualizations.