Analysis & Evaluation

From MPDLMediaWiki
Revision as of 15:07, 19 June 2009 by Rkiefl (talk | contribs)

Evaluation

Interfaces to which a usability life cycle is applied are subject to evaluation. Evaluation, carried out by the User Interface Engineering (UIE) team, can use different methods:

  1. Usability interviews and tests with 'Thinking Aloud'
  2. GUI workshops (e.g. card sorting)
  3. Software-based evaluation

Usability Interviews (Thinking Aloud/Think Aloud Protocol[1])

From our point of view this is the method of choice, because it balances results against effort.

Participants

1

Supervisor

1, plus 1 visitor (usually a developer)

Equipment
  1. PC
  2. Internet connection
  3. Task list (provided by UIE)
  4. Test form (provided by UIE)
  5. Recording device
Duration

1h

Results

A document with notes on user actions for each task/step.

After 6-8 interviews the results are analysed statistically, revealing GUI flaws within the use cases.

Steps
  1. Short Introduction
  2. Participant solves task independently
  3. Open questions/discussion


Usability interviews are conducted to discover, document and classify usability issues for

  • a specific release (e.g. PubMan 2.0.0.1)
  • a functional prototype
  • a prototype draft

Up to 8 interviews are usually sufficient to discover the main flaws of a GUI. The interviewer only provides the tasks. The participant completes all tasks independently and should be encouraged to comment on his or her actions. The interviewer should not interfere in any form; only if the participant gets stuck while important steps remain does he or she receive a hint on how to continue.

Interviews can optionally be recorded if the participant agrees. If a participant is not able to solve a task or step, this is noted as a fatal usability issue. For later interviews with a more standardized set of tasks, the issues are classified as follows:

  1. Fatal (the task could not be finished successfully or in a proper way)
  2. Serious (the task could not be finished on the first attempt, or user performance is poor)
  3. Minor (the user hesitates, is unsure or does not feel comfortable with the flow)

For each interview the UIE team provides an interview outcome document, containing all tasks, observation notes and participant comments. The interviews also collect demographic characteristics to prepare the data needed for personas. Once a sufficient number of interviews has been conducted, they are analysed and summarized.
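The aggregation step above can be sketched in a few lines: each interview yields a set of classified issues, and counting them across interviews surfaces the recurring GUI flaws. The interview records and task names below are made up for illustration; the real outcome documents are free-form notes.

```python
from collections import Counter

# Severity levels as used in the classification above.
SEVERITIES = ("fatal", "serious", "minor")

# Hypothetical interview records: per interview, the observed issues
# as (task, severity) pairs. Names and structure are illustrative only.
interviews = [
    [("submit item", "fatal"), ("edit metadata", "minor")],
    [("submit item", "serious"), ("search", "minor")],
    [("submit item", "fatal")],
]

def summarize(interviews):
    """Count how often each (task, severity) pair occurs across interviews."""
    counts = Counter()
    for issues in interviews:
        for task, severity in issues:
            counts[(task, severity)] += 1
    return counts

summary = summarize(interviews)

# Tasks with fatal issues in more than one interview are the top GUI flaws.
top_flaws = [task for (task, sev), n in summary.items()
             if sev == "fatal" and n > 1]
```

With 6-8 interviews, even such a simple tally makes it obvious which use cases fail repeatedly rather than once by accident.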

Interviews for PubMan Release 2

Interview series I

  • Munich: 2 Interviews

Interview series II

  • Nijmegen: 9 Interviews

Interview Analysis

Interviews for PubMan Release 3

Interview series I

Place
  • Max Planck Institute of Molecular Plant Physiology, Berlin: 2 Interviews
  • Max Planck Institute for Gravitational Physics (Albert Einstein Institute), Golm: 2 Interviews
  • Max Planck Institute for Human Development, Berlin: 2 Interviews
Date/Time August 6/7
Status Closed

Interview Series II

Place
  • Harnack-Haus, Berlin: 3 Interviews
Date/Time Wednesday, 5 November 2008, 09:00-12:00
Status Closed

R3 Interview Analysis

Interviews for PubMan Release 4

Interview series I

Place
  • Max Planck Digital Library, Munich: 4 Interviews
Date/Time April - June
Status Open

Interview Series II

Place
  • Max Planck Institute for Foreign and International Social Law, Munich: 2 Interviews
Date/Time April - June
Status Open

R4 Interview Analysis

Interviews for FACES Release 3 (Prototype test)

Place
  • Max Planck Institute for Human Development
Date/Time To be scheduled
Please note Please indicate your status (expert/non-expert) behind your name.
Status Planned

Expert Interviews

Workshops

Participants

5 - 8

Supervisor

1 - 2

Equipment

Equipment provided by UIE Team

Duration

4h - 6h

Result

A rough draft of a graphical user interface part.

Steps
  1. Introduction to a GUI task
  2. Participants work on paper prototypes
  3. Question and Answers
  4. Comparison of Results
  5. Clustering
  6. Rework if necessary
  7. Preparation and dissemination of results


For each workshop the UIE team prepares a topic. The following topics are already covered:

Workshops 2008

  • Scheduled: 09.12.2008 - 10.12.2008, Bibliotheca Hertziana - Max Planck Institute for Art History

Topics: A brainstorming can be found on Faces GUI.

Workshops 2007

Software-based Evaluation

Currently the PubMan prototype is monitored by an open source application called ClickHeat. This tool generates images called heat maps, which show where users click (dots) and how often (colour).

Example:

Heat Map of Depositor Workspace

Software-based evaluation is not conducted for R3, as currently no open source application is available for analysing dynamic pages.
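The heat-map idea behind tools like ClickHeat can be sketched in a few lines: logged click coordinates are binned into grid cells, and each cell's click count is later rendered as a colour. This is an illustrative approximation only, not ClickHeat's actual implementation; the coordinates and cell size are made up.

```python
# Logged (x, y) click positions in pixels -- illustrative sample data.
clicks = [(12, 7), (14, 9), (200, 150), (13, 8)]
CELL = 10  # bin size in pixels

def heat_grid(clicks, cell=CELL):
    """Map each click to a grid cell and count clicks per cell."""
    grid = {}
    for x, y in clicks:
        key = (x // cell, y // cell)
        grid[key] = grid.get(key, 0) + 1
    return grid

grid = heat_grid(clicks)

# Cells with the highest counts would be drawn in the "hottest" colours.
hottest = max(grid, key=grid.get)
```

Overlaying such a grid on a page screenshot, with counts mapped to a colour scale, yields exactly the dots-and-colour picture described above.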

  1. Wikipedia, 'Think Aloud Protocol': http://en.wikipedia.org/wiki/Think_aloud_protocol