Impressions from UPA International 2010

Tobias Komischke / Wednesday, June 2, 2010

The Usability Professionals’ Association held its annual conference in Munich, Germany, last week. 750 people from 45 countries participated.

Here’s a little write-up on some of the sessions I attended.

Guided Selling: Helping Customers Make Sense of Your Offering

Presenter: Michael Hawley (mad*pow)

Guided Selling is a UI paradigm used on e-commerce sites. It tackles the issue of how to zero in on presenting the right product (or service) to a potential buyer. Instead of providing means to search and filter the product space, the site asks a series of questions, just like a salesperson in a physical store would. These questions are used to assess the buyer’s values, intended usage, and knowledge of a particular product category. That information is then used to direct her to product selections that meet her needs.
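
To make the flow concrete, here’s a minimal sketch (my own illustration, not from the talk) of how such a question funnel can narrow down a toy product catalog. The questions, attributes, and products are all hypothetical:

    # Minimal sketch of a guided-selling funnel (hypothetical questions and
    # toy catalog). Each answer narrows the product space, much like a
    # salesperson's questions would.

    PRODUCTS = [
        {"name": "Compact X100", "use": "travel", "level": "beginner", "price": 299},
        {"name": "Pro Z9",       "use": "studio", "level": "expert",   "price": 1899},
        {"name": "AllRound M5",  "use": "travel", "level": "expert",   "price": 799},
    ]

    QUESTIONS = [
        ("What will you mainly use it for?", "use",   ["travel", "studio"]),
        ("How experienced are you?",         "level", ["beginner", "expert"]),
    ]

    def guided_selling(answers):
        """Filter the catalog step by step, based on the buyer's answers."""
        candidates = PRODUCTS
        for (_question, attribute, _options), answer in zip(QUESTIONS, answers):
            candidates = [p for p in candidates if p[attribute] == answer]
        return candidates

    # A buyer who travels and is an expert is steered to the AllRound M5.
    print(guided_selling(["travel", "expert"]))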

Michael reported on a project in which test customers compared websites that use Guided Selling. Here is what these test customers found important:

  • In general, test customers enjoyed the guided selling approach.
  • It’s key to ask the right kind of questions and offer the right kind of answer options. Otherwise you end up with an experience like that of a customer who just wants to browse in a physical store while the salesperson keeps asking what exactly she’s looking for.
  • The speed with which you can advance through the guided selling funnel is important. People don’t want to waste time, but they also want to control the pace.
  • The use of video to show a person asking the questions was polarizing. Some test customers loved the videos, others hated them.
  • Audio should be used to enhance, not to replace text that is shown on the website.
  • There’s some skepticism as to whether the final product selection is really the best for the customer, as opposed to the best for the company that wants to sell the product.

Natural User Interfaces: Humans Like It Round

Presenter: Claude Toussaint (Designaffairs)

The talk presented numerous examples showing that there is a trend back to analog interface metaphors, for example an iPhone app that displays a rotary dial for the phone. Interestingly, a lot of the old analog user interfaces were round. Reintroducing them today is not only a fashion statement; you can also leverage people’s knowledge of how to use them. I’m not sure, though, whether teens understand rotary dials, since they have never interacted with an old-style telephone.

Claude also said that, in his opinion, the front-end design for natural interfaces like the Microsoft Surface takes more effort than the functional development, because there are so many different ways to design a feature, and subtle variations have a large impact on the user experience.

According to him, it’s very hard to document or specify rich interactions. Therefore, design and implementation steps blend together in the development process. Tools like WPF accommodate that kind of process.


Design for Happiness

Presenter: Pieter Desmet (TU Delft)

I loved this one, although it was kind of theoretical. His talk focused not on how to reduce frustration with products, but on how to create great user experiences. For him, it comes down to designing for happiness. He reported that 60% of products returned for a refund are perfectly functional; the people who bought them just don’t like them. The products don’t make them happy.

He presented several theories that try to explain what happiness is composed of. His research is still too young to yield results that could be applied 1:1 in UX design, but I think what he does is important, and great findings will come out of it.


Comparative Usability Measurement

Presenters: Rolf Molich (DialogDesign), Jurek Kirakowski (University College Cork), Tomer Sharon (Google)

In this very interactive session, the presenters discussed findings from a study in which 15 teams of usability experts evaluated the same e-commerce website. Rolf then compared the teams’ approaches to assessing the usability of that site. Here are some of the takeaways:

  • In the majority of teams, a single person conducted the study.
  • In terms of person-hours spent to carry out the study, the minimum was 21 hours (the highest number was well beyond 100 hours, if I remember correctly).
  • Most teams carried out moderated usability tests. They used a median of 20 test participants.
  • Some teams used unmoderated test sessions. Those yielded some weird data (outliers), e.g. a minimum completion time of 0 seconds. Since the sessions were unmoderated, it’s hard to tell where these outliers came from. It’s also not easy to decide whether or not to include them in the data analysis.
  • The measures most frequently used across the teams were: time on task, success or failure rate, satisfaction.
  • When they calculated the overall time on task, the presenters excluded the tasks that test participants did not finish successfully.
  • Time on task on its own is not a good measure; it does not represent the user data properly (see the sketch after this list).
  • Some teams used self-made questionnaires to assess the usability of the website. According to Jurek (a trained statistician), the quality of these questionnaires was beyond voodoo.
  • Only one team out of 15 carried out a context-of-use analysis.
  • Rolf said that he could not find any research studies indicating that “thinking aloud” influences time on task. That’s interesting to me, because I had assumed that verbalizing interactions and thoughts would add time. Yet Rolf pointed to a counterbalance: by verbalizing, you may actually think through the task faster, which lets you finish it more quickly.
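
To illustrate the points about outliers, failed tasks, and time on task, here’s a toy calculation with made-up numbers (my own sketch, not from the session). A single outlier pulls the mean far away from what typical users did, while the median stays put:

    # Toy illustration of why raw time on task can mislead: unmoderated
    # sessions produce outliers such as 0-second completions, and a mean
    # over skewed data hides the typical user.
    from statistics import mean, median

    # (seconds, task_succeeded) per session; 0 s is an unexplained outlier.
    sessions = [(0, True), (42, True), (55, True), (61, True),
                (70, True), (95, False), (480, True)]

    # Following the presenters' approach, drop failed tasks; here we also
    # drop the implausible 0-second record before aggregating.
    times = [t for t, ok in sessions if ok and t > 0]

    print(f"mean:   {mean(times):.0f} s")    # 142 s, pulled up by the 480 s session
    print(f"median: {median(times):.0f} s")  # 61 s, closer to the typical user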


Web Analytics and User Testing

Presenters: Marijn Klompenhouwer, Adam Cox (both: User Intelligence)

Their talk made the case for combining web analytics and classical usability testing. They shared experiences where web analytics told UX researchers where exactly users dropped out of a sales funnel on an e-commerce site. With that pointer, a usability test could then focus specifically on those details.

In another example, a usability test found that 2 out of 10 test users ran into a usability problem. Two people out of ten may or may not indicate a serious problem. Through web analytics, which considers a much higher number of users, it could be validated that this was indeed a real problem and not just an isolated episode.
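
As a back-of-the-envelope illustration (my own sketch, not from the talk) of why 2 out of 10 is inconclusive on its own, a Wilson score confidence interval shows how wide the plausible range for the true problem rate is at n = 10, and how much it tightens at the sample sizes web analytics can deliver (the 200/1000 figures are made up):

    # 95% Wilson score interval for an observed proportion of users
    # who hit a problem; small samples give very wide intervals.
    from math import sqrt

    def wilson_interval(hits, n, z=1.96):
        """Return the 95% Wilson score confidence interval for hits/n."""
        p = hits / n
        denom = 1 + z**2 / n
        center = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return center - half, center + half

    lo, hi = wilson_interval(2, 10)
    print(f"2/10:     {lo:.0%} .. {hi:.0%}")   # about 6% .. 51%: inconclusive
    lo, hi = wilson_interval(200, 1000)
    print(f"200/1000: {lo:.0%} .. {hi:.0%}")   # about 18% .. 23%: a real pattern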

So, in summary, it’s not about deciding for one method against the other; the strength lies in the combination of both.


Application of UI Patterns for the Development of Business Applications

Presenters: Ulf Schubert, Martin Groß, Wolfgang Bonhag (all: DATEV)

In order to standardize UI elements with a high degree of built-in usability, DATEV develops its user interfaces from UI patterns that are implemented as software building blocks. In their experience, though, having UI patterns does not eliminate the need for style guides.

They consider it important to consistently separate business logic, interaction design, and visual design. To that end, they make use of XAML and deploy Microsoft Expression Blend.
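
As a rough illustration of that separation (my own sketch, in Python rather than DATEV’s XAML/.NET stack), business logic knows nothing about presentation, a view-model mediates, and the rendering layer only displays what it is handed:

    # Minimal separation of concerns, loosely in the MVVM spirit that
    # XAML data binding enables. All names are hypothetical.

    class InvoiceCalculator:            # business logic
        def total(self, net, vat_rate):
            return net * (1 + vat_rate)

    class InvoiceViewModel:             # interaction design layer
        def __init__(self, calculator):
            self.calculator = calculator

        def display_total(self, net, vat_rate):
            return f"{self.calculator.total(net, vat_rate):.2f} EUR"

    def render(text):                   # visual design layer (stand-in for XAML)
        print(f"[ {text} ]")

    render(InvoiceViewModel(InvoiceCalculator()).display_total(100.0, 0.19))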


Myths about Usability Testing

Panelists: Rolf Molich (DialogDesign), Jakob Biesterfeld (UID), Karen Bachmann (Perficient), Whitney Quesenbery (WQUsability)

Interacting with the audience, the panel discussed the evidence behind a couple of usability myths:

  • “5 test users will find 85% of [a product’s] usability problems.”
    Based on his own research, Rolf disagreed with this statement, which Jakob Nielsen reported in the early 1990s. In Rolf’s comparative usability assessment studies, the participating teams found only about 60% of the usability problems, and that across widely varying numbers of test users. The panel agreed that “5 users are enough to drive a good iterative design cycle” is a better statement. My own opinion is that it’s not about the absolute numbers. I don’t care if it’s 5 users finding 85% of problems or maybe 6 users finding 80%. I think Nielsen’s log curve, which is the basis for his “5 users – 85% problems” statement, is correct in that it points to the fact that the cost-benefit ratio goes south as the number of test participants increases (I sketch that curve in code after this list).
  • “Expert reviews provide results as reliable as usability tests.”
    Here I think they confused reliability with validity; in any case, they only talked about the extent to which these two methods can actually yield meaningful results. But then again, these scientific criteria don’t really apply here anyway. Rolf had data from his studies indicating that there is no difference in the meaningfulness of results coming out of expert reviews and usability tests. Usability tests do have more credibility with stakeholders, since they involve real users. Also, the two methods can be nicely combined, so it should not be an either-or decision.
  • “Eye tracking shows what users see, so it reveals important issues that usability testing alone cannot.”
    All panelists agreed that this statement is not true. Yet eye tracking is very convincing for stakeholders (“so sexy!”).
  • “The main goal of usability testing is to find usability problems.”
    All panelists agreed that this statement is not true. It is important to report on positive findings as well, e.g. in order to validate design assumptions.
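
Regarding the first myth: the log curve behind Nielsen’s claim is easy to reproduce. Assuming each test user independently uncovers any given problem with probability L (Nielsen used L = 0.31), the share of problems found by n users is 1 - (1 - L)^n. A small sketch:

    # Nielsen's curve behind the "5 users find 85%" claim.
    L = 0.31  # probability that one user uncovers a given problem

    for n in range(1, 11):
        found = 1 - (1 - L) ** n
        print(f"{n:2d} users: {found:.0%} of problems found")

    # The output climbs steeply at first (5 users: ~84%) and then flattens,
    # which is exactly the worsening cost-benefit ratio mentioned above.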


And here’s a shaky shot from the dinner reception. Nice venue + nice food & drinks + nice people = nice evening. 

Next year’s UPA conference will be in Atlanta, GA. See you there!