User Interface Evaluation

Revision as of 08:50, 6 November 2008
Interfaces built at the MPDL are subject to evaluation. The main focus is on all interfaces of the current PubMan solution. The User Interface Engineering (UIE) team follows several evaluation approaches: usability interviews and tests, GUI workshops, and software-based evaluation.
Usability Interviews (Thinking Aloud)
| Participants | 1 |
|---|---|
| Supervisor | 1, plus 1 visitor (usually a developer) |
| Equipment | |
| Duration | 1h |
| Results | Document with notes on user actions for each action/step. After 6-8 interviews the results are analysed into a statistic revealing GUI flaws within use cases. |
| Steps | |
Usability interviews are conducted to discover, document and classify usability issues for:

- a specific release (e.g. PubMan 2.0.0.1)
- a functional prototype
- a prototype draft

Up to 8 interviews are usually sufficient to discover the main flaws of a GUI. The interviewer only provides tasks for the participant. The participant performs all tasks on their own and should be encouraged to comment on their actions. It is not recommended for the interviewer to interfere in any form; only if the participant gets stuck and there are important steps to follow do they get a hint on how to continue.
Interviews can optionally be recorded if the participants agree. If a participant is not able to solve a task or step, it is noted as a fatal usability issue. For later interviews with a more standardized set of tasks, the issues are classified as follows:

- Fatal (task could not be finished successfully or in a proper way)
- Serious (task could not be finished on the first attempt, or user performance is poor)
- Minor (the user hesitates, is unsure or does not feel comfortable with the flow)
- Cosmetic (no influence on user performance)
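As an illustration, tallying the classified issues across interviews into a per-task statistic could be sketched as follows. This is a minimal sketch under stated assumptions: the severity weights, task names and the `summarize_interviews` helper are hypothetical and not part of the UIE tooling.

```python
from collections import Counter

# Hypothetical numeric weights for the severity classes defined above.
SEVERITY_WEIGHT = {"fatal": 4, "serious": 3, "minor": 2, "cosmetic": 1}

def summarize_interviews(observations):
    """Aggregate (task, severity) observations from several interviews
    into a per-task score, so the most problematic GUI areas surface first."""
    scores = Counter()
    counts = Counter()
    for task, severity in observations:
        scores[task] += SEVERITY_WEIGHT[severity]
        counts[task] += 1
    # Sort tasks by accumulated severity score, worst first.
    return sorted(((task, scores[task], counts[task]) for task in scores),
                  key=lambda t: -t[1])

# Example: observations collected across several interviews (task names invented).
obs = [
    ("submit item", "fatal"),
    ("submit item", "serious"),
    ("edit metadata", "minor"),
    ("submit item", "minor"),
    ("edit metadata", "cosmetic"),
]
print(summarize_interviews(obs))
# → [('submit item', 9, 3), ('edit metadata', 3, 2)]
```

Sorting by a weighted score rather than a raw count keeps a single fatal issue ranked above several cosmetic ones.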
For each interview the UIE team provides an interview outcome document containing all tasks, observation notes and comments of the participant. Additionally, demographic characteristics are collected to prepare the data needed for personas. Once a sufficient number of interviews exists, they are analysed and summarized. See all documents and the summary here:
Interviews for PubMan Release 2
Interview series I
- Munich: 2 Interviews
Interview series II
- Nijmegen: 9 Interviews
Interviews for PubMan Release 3
Interview series I
| Place | |
|---|---|
| Date/Time | August 6/7 |
| Status | Closed |
Interview series II

| Place | |
|---|---|
| Date/Time | Wednesday, 5 November 2008, 09:00-12:00 |
| Status | Closed |
Expert Interviews
| Participants | 1 |
|---|---|
| Supervisor | 1-2 |
| Equipment | |
| Duration | 2h-4h |
| Results | Document with the interview protocol; a prototype draft. |
| Steps | |
Expert interviews are conducted to gain a better understanding of one specific working process and how it can be organized or reorganized within the user interface. Very domain-specific knowledge is needed to better understand interface requirements. Functional aspects arise in addition and cannot be separated from interface needs.
Expert Interviews 2007
- User Interface Evaluation/Expert Interviews/2007 Garching
- User Interface Evaluation/Expert Interviews/2007 Munich
- User Interface Evaluation/Expert Interviews/Easy Submission Summary
Workshops
| Participants | 5-8 |
|---|---|
| Supervisor | 1-2 |
| Equipment | Provided by the UIE team |
| Duration | 4h-6h |
| Result | A rough draft of a graphical user interface part. |
| Steps | |
For each workshop the UIE team prepares a topic. The following topics have already been covered:
Workshops 2008
- User Interface Evaluation/Adding Lists of Authors, MPDL Berlin
- User Interface Evaluation/Easy Submission, Nijmegen
Workshops 2007
- User Interface Evaluation/Short Item View, MPDL Munich
Software-based Evaluation
Currently the PubMan prototype is monitored by an open source application called ClickHeat. This solution generates images called heat maps, which show where users click (dots) and how often (colour).
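The underlying idea can be sketched as follows. This is a minimal sketch, not ClickHeat's actual implementation; the function name, grid size and cell size are illustrative assumptions. Click coordinates are binned into grid cells, and each cell's count would later be mapped to a colour intensity.

```python
# Hypothetical sketch of a heat-map computation: bin click coordinates
# into a coarse grid of counts (not ClickHeat's actual code).

def click_heat_grid(clicks, width, height, cell=10):
    """Bin (x, y) click coordinates into a grid of per-cell counts."""
    cols, rows = width // cell, height // cell
    grid = [[0] * cols for _ in range(rows)]
    for x, y in clicks:
        # Ignore clicks outside the tracked page area.
        if 0 <= x < width and 0 <= y < height:
            grid[y // cell][x // cell] += 1
    return grid

# Example: three clicks, two of which land in the same top-left cell.
grid = click_heat_grid([(5, 5), (7, 3), (95, 95)], width=100, height=100)
print(grid[0][0], grid[9][9])
# → 2 1
```

A rendering step would then translate each cell count into a colour, e.g. from cold (few clicks) to hot (many clicks), and overlay the result on a page screenshot.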
Example:

[[Image:clickheat_depositor.png|framed|center|Heat Map of Depositor Workspace]]

Software-based evaluation is not conducted for R3; currently no open source application is available for analysing dynamic pages.

[[Category:Analysis & Evaluation]]
[[Category:User Interface Engineering]]