Why Santa uses Webnographer

December 17th, 2013 by James

Santa using Webnographer

Santa’s challenge is that he wants to deliver the right present to the right person, but he is constrained by being far, far away from his customers. What the children in Lapland want is very different from what the children in Brazil, or Ireland, want. Only by using Remote Usability Testing can he reach the far corners of the world. Santa’s challenges are similar to those of most website Product Managers.

His customers have a broad variety of needs. Some children want the latest video game, others the latest Lego kit, and some want that patchwork doll. So Santa needs to sample thousands of children to be able to find the requirements of millions of children. Only by using a service like Webnographer can you easily research hundreds of users instead of 10.

Up to the last moment, new toys are being released and launch dates are changing. Only by using a service like Webnographer can the testing be moved forward or back easily. There is no participant recruitment cost to reschedule.

Santa tried A/B testing last year, but then 50% of the children got the wrong present. He has realised that analytics is like accounting: it only tells you after the fact. With Webnographer you can test beforehand.

To add Webnographer to your New Year’s wish list contact Sabrina or James at Webnographer.

Illustration by Bruno Miguel Fernandes Maltez

Discovering WHY from numbers

November 7th, 2013 by Sabrina

A couple of weeks ago James and I gave a talk at the UXPA in London. The theme of the October event was “The power of quantitative data”; the title of our talk was “Discovering WHY from numbers”.

Conventional UX thinking holds that, in a UX test, numbers can only tell you what happened and cannot explain the ‘why’ of a UX issue. James and I showed in our talk how one can discover ‘why’ an issue is happening, and how one can go beyond just knowing simple completion rates.
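One simple way to go beyond a raw completion rate is to report it with a confidence interval, so you know how much to trust the number at a given sample size. This sketch is not from the talk; it is a generic illustration using the Wilson score interval, with hypothetical numbers.

```python
import math

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a task completion rate."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# Hypothetical study: 70 of 100 participants completed the task.
low, high = wilson_interval(70, 100)
print(f"completion rate 70%, 95% CI: {low:.1%} to {high:.1%}")
```

With 100 participants the interval is reasonably tight; with only 10 participants the same 70% rate would come with a very wide interval, which is one reason large remote samples matter.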


@ElviaVasc created a great sketchnote of our talk, which you can see below. And there are many great pictures of the event on the UXPA Pinterest board.

23 Remote Usability Methods

October 31st, 2013 by James

It is easy to think that there are only one or two remote usability methods out there: moderated and unmoderated. But there are many more than just a couple.

Last year before the Denver Lean UX conference, Sabrina and I came up with a list of the different Remote Research Methods. In total we came up with 23 different methods. Some of them we use at Webnographer, and other methods we know that other people use. I am sure we are missing one or two. Please post any we are missing in the comments.

This post is the first one of a series, giving a top-level overview of all the online user research methods. Over the next few weeks, we will be posting detailed articles on some of the more popular methods, explaining in detail how and when to use each.

The 23 methods fall into 2 main categories: synchronous, where the moderator or researcher interacts with the respondent; and asynchronous, where the researcher does not interact with the respondent directly.

The methods listed with a (w) are ones that we either use or have used at Webnographer.


Synchronous remote methods


Moderated Test


The participant and the researcher meet via web/video conferencing. The participant is asked to think aloud while the researcher notes his/her observations.


Provides a remote version of a lab test and enables a researcher to test users anywhere in the world.


Online Ethnography


This method researches communities facilitated through online spaces such as YouTube, Twitter, or Pinterest. The research is carried out by participating in those online spaces.


Online Ethnography helps increase empathy and reduce context collapse. By partaking in the experience the designer understands the participant’s perspective.


Telephone Survey


Participants are called and asked questions on the phone.


A quick way of gathering responses from users. It can be useful for getting feedback from users who will not take part in other forms of survey.


Remote Eye Tracking


This method provides information about where the participant is looking on the page. It identifies what attracts their attention and what passes unseen.


It shows which areas of the webpage users look at first and which areas they ignore.


Asynchronous remote methods


True Intent


Users are intercepted when they arrive on the website. They are asked the reason for their visit before being redirected back to the site to complete the task using a remote usability testing tool. While they complete the task, their interaction with the site is recorded.


Records real user behaviour and identifies their real goals. Identifies why people are visiting and whether they succeed or not. Helps to prioritise and quantify issues.


Set Task


Participants are set a task on a website to find a piece of information. While they navigate through the site their interactions are recorded with a remote usability testing tool.


Evaluates if users can complete a task successfully, how they navigate the site, and the amount of time spent. It provides insights into where and why customers are having problems.


Race Test


Participants are asked to perform a task on the website within a given time.


Tests how people will perform under stress. Often behavior is different in stressful situations.


Click to Finish Test


Users are asked to complete a task on a single page. Once participants click on the page the task is finished. The test can be done with static mock-ups or real sites.


Provides a fast way of gathering early feedback. This method evaluates whether users can find a piece of information within a page. Metrics include success ratings, time on task, and satisfaction ratings.


5 Second Test


Participants are shown a webpage for five seconds. They are then asked to write down everything they remember about the page.


Shows what users remember about the webpage, and evaluates what items are the most prominent. This method uses recall memory.


Recognition Test


Participants are shown a webpage. The page then “disappears” and participants are shown a list of items. The participants must identify which items were on the webpage and which items were not.


Shows what users remember about the webpage, and evaluates what items are the most prominent. This method uses recognition memory.


Critical Incident Report


Participants are asked to give feedback via email, postal mail, or a web form about their experience after visiting a site.


A fast and relatively inexpensive method of collecting data about users’ experiences.


A vs B


This method is used to test alternative designs or versions for the same webpage. It can use a web stats package or a remote usability testing tool like Webnographer.


With a large enough sample size, this method identifies which design route works best.
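"Large enough" can be checked formally with a two-proportion z-test on the success rates of the two designs. The sketch below is a generic illustration (not a Webnographer feature) with hypothetical numbers, using only the Python standard library.

```python
import math

def two_proportion_z_test(succ_a, n_a, succ_b, n_b):
    """Two-sided z-test: do variants A and B have different success rates?"""
    p_a, p_b = succ_a / n_a, succ_b / n_b
    pooled = (succ_a + succ_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Hypothetical test: variant A converts 120/400 visitors, variant B 150/400.
z, p = two_proportion_z_test(120, 400, 150, 400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

If the p-value is below your chosen threshold (commonly 0.05), the difference between the designs is unlikely to be sampling noise; otherwise, keep collecting data before declaring a winner.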


Web Analytics


Web analytics collects your website’s visitor log data, and allows the analysis and reporting of Internet data for purposes of understanding and optimizing your website.


Discover where your visitors are going, what pages they are visiting, and how long they are staying. This method does not identify why users are visiting.


Open Card Sort


Participants are given a set of cards. They need to come up with the appropriate categories and group similar cards.


A card sort can help navigation design and information architecture. This can also be an effective way of determining which labels and wording works best.


Closed Card Sort


Participants are given a set of cards and categories. They need to group similar cards under each appropriate category.


A card sort can help navigation design and information architecture. This can also be an effective way of determining which labels and wording works best.


Click Sort


Participants are given a set of cards and categories. They need to click on the appropriate category for each card.


This is similar to a card sort but, due to its faster interaction, users can complete far more items than in a card sort.


Tree test


Participants are asked to perform a task. They are shown a list of items, such as links, and they need to select the one item they think is the most effective to complete the task.


A tree test is useful for testing menu structures.


Diary Study


Participants are asked to keep a log/diary of their activities over a period of time. The study can have a focus/theme. In this case, participants need to track everything related to a given topic.


Helps to understand how users’ experience changes over time. It can help with getting a broader experience, especially with a service that has multiple touch points.


Participant Review


Participants are sent a link to a website and give their design suggestions via instant messaging.


Co-design goes remote. This allows people with diverse backgrounds to contribute to the design.


Video and Send


Users film their experience and then email the video back to the researchers.


Video reduces context collapse. It makes the designer feel closer to the research participant.


Web Survey


Participants respond to a survey online. They can either be intercepted on a website or emailed the invite to take part in the research.


It is one of the most cost-effective ways of gathering a large selection of user feedback.


Longitudinal Survey


The study is conducted over a period of time with the same participants. The aim is to observe the changes that occur between the different sessions.


It is useful for seeing how users’ experience changes over time.


Delayed Survey


Users are intercepted by a pop-up window when they arrive on the website. They are asked for their email address and about their goal.
The following day, the user is emailed a follow-up survey questioning the success of their visit.


A delayed survey captures the whole customer experience.


Exit Survey


Real users are intercepted by a pop-up window. They are asked if they want to take part in the research, and a new window opens in a pop-under if yes. When they end the session and close the browser, the survey window reappears, and they are asked a couple of questions.


Captures users’ experience after their interaction with the site.


This post is the first one of a series, giving a top-level overview of all the online user research methods. Over the next few weeks, we will be posting a detailed article for each method, explaining in detail how and when to use it.


Feel free to post in the comments if you feel that we are missing a remote research method.


Return on Investment for User Experience made simple – A Workshop

June 20th, 2013 by Sabrina

Photo by Miguel Vieira (http://www.flickr.com/photos/miguelvieira/)

Many UX and usability issues go unfixed. One of the biggest causes is that there is often no financial argument or Return on Investment (ROI) case being made by the UX team. ROI enables teams to overcome arguments about resource limitations, and the opposing opinions of other stakeholders, by putting hard financial numbers behind the argument and showing the benefit that would be achieved if issues were fixed.

This workshop will explain how to create a Return on Investment argument for User Experience, quickly and easily. We will share our methods for calculating a Return on Investment simply, with examples both for web applications in competitive markets and for in-house applications behind firewalls.

The workshop will cover both the methods and how to collect the data that acts as the inputs to the calculation. We will go through practical examples of both calculating the ROI and defending the calculation. Being able to make an ROI argument makes your UX and usability findings more likely to be implemented, and increases the audience for your work.
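The workshop materials are not reproduced here, but as a rough illustration of the kind of arithmetic involved, a back-of-the-envelope ROI model might look like the sketch below. All numbers are hypothetical, and a real case would need defensible inputs, which is exactly what the data-collection part of the workshop covers.

```python
# Hypothetical inputs for illustration only.
monthly_visitors = 50_000
conversion_before = 0.020      # 2.0% of visitors convert today
conversion_after = 0.024       # estimated 2.4% after fixing the UX issue
revenue_per_conversion = 40.0  # average order value, in euros
fix_cost = 15_000.0            # design + development cost of the fix

# Extra conversions the fix would generate each month.
extra_conversions = monthly_visitors * (conversion_after - conversion_before)

# Benefit over a year, and the resulting return on investment.
annual_benefit = extra_conversions * revenue_per_conversion * 12
roi = (annual_benefit - fix_cost) / fix_cost

print(f"Extra conversions/month: {extra_conversions:.0f}")
print(f"Annual benefit: {annual_benefit:,.0f} EUR")
print(f"ROI: {roi:.0%}")
```

The power of the argument comes less from the arithmetic than from being able to defend each input, for example backing the estimated conversion uplift with remote test data rather than opinion.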

At the end of this one-day workshop you will walk away with the knowledge of:

  • Benefits of a Return on Investment argument.
  • How to make a Return on Investment Case for making UX and Usability improvements.
  • Which methods to use under which circumstance to calculate the ROI.
  • How to build a simple model to illustrate your case.
  • What data you need to collect, and how to collect it.

The workshop has been designed for the UX practitioner with little finance or maths knowledge. You will need some User Experience background and the ability to do basic arithmetic in Excel.


The workshop will be given by James Page and Sabrina Mach. Both have many years of experience quantitatively analyzing websites from a User Experience point of view. They have helped clients ranging from British Telecom to Vodafone, Millennium Bank, and Sky, and have worked in conjunction with agencies including Ogilvy, SapientNitro, LBI, Flow, and Head London. They are the founders of Webnographer, the world’s leading Remote Usability specialists.

When: 26th July.
Where: At Webnographer’s Lisbon office, Portugal.
Cost: 150€ + (€34.50 IVA/VAT) until 8th July and then 250€ + (€57.50 IVA/VAT) after. (Includes lunches and coffee). There is a student and start-up discount. Contact us to find out more.
Getting there: We have deals with hotels. Contact us with the sort of hotel you are looking for and we can put you in touch with the right place.

We are looking forward to seeing you in Lisbon!

Buy your ticket here - http://roilisbon.eventbrite.com

Protest against Lab Testing at UXLX in Lisbon

May 17th, 2013 by Sabrina

Protest Signs

Today we protested outside the UXLX conference to Stop Lab Testing (Usability Lab Testing).

Why protest?
I co-founded Webnographer with James, because I believed that we needed new techniques and methods to carry out user research.

The lab testing method is over 20 years old now. Since then there has been a large shift in the technology we use, and in our behaviour. We have moved on from using a desktop computer in one fixed environment, to portable devices (smart phones, tablet PCs, and laptops) in a multitude of contexts today. This has changed how and where people access the internet, and how much distraction or attention is given to an interaction in a given environment.

With this huge shift in tools and behaviour, the methods we use to understand individual behaviours need to change too.

We need methods that help evaluate behaviour in its multitude of contexts, environments, languages, and countries. We need to test products with a multitude of customers, not just 10 people in London, or Lisbon, or Berlin. We need to get feedback independent of where people are located. We need to test with large numbers of users in diverse locations to be able to quantify the impact of design changes.

As Marshall McLuhan said: “We shape our tools and thereafter our tools shape us.”

This means that User Experience Research must be reshaped too. At Webnographer we are building those tools to help you understand people, so that you can make better products too.

To find out more, follow @webnographer on Twitter.