Thinking Aloud for Digitized Books Interview Analysis

From MPDLMediaWiki
Revision as of 07:33, 4 October 2010 by Rkiefl


Results from 'Thinking Aloud' interviews for ViRR Release 3

The usability evaluation method 'thinking aloud' is one of the most affordable and efficient of the numerous test methods. For projects that follow the human-centred design principles of ISO 13407[1], it is a good way to identify relevant issues quickly.

Participants

According to the main groups defined in Personas, 13 persons were interviewed: 2 at the MPDL in Munich and 11 at the MPIER in Frankfurt.

Group          Count
Administrator  0
Librarian      7
Scientist      2
Secretary      4
Guest          0
All            13

Working conditions

Working conditions are quite good for all 13 participants, although there is some distraction, and more urgent tasks occasionally need to be tackled during screen handling. This is a good rate compared to the tests from 2008 and 2009, even when we partial out the two MPDL staff members.

Desktop space    Rating
Lot of space     13
Good             0
Crowded          0

External help    Rating
Available        13
IT services      0
Poor             0

Noise and distraction  Rating
Distraction            0
Little                 4
None                   7

Urgency of task  Rating
Lower            5
Average          6
Higher           2

Experience with Screen Interfaces

Participants already have a lot of experience with screen interfaces and are well versed in web applications. The rate is quite high compared to other institutes.

Percentage of work spent with applications: 88%
Percentage of web applications involved:    57%

Terms and definitions

Issues

Issues indicate where users hesitated, asked questions, left a comment, or missed the right action on the first attempt (a hint for user interface engineering about where problems exist and whether they relate to wording, process, or position). Issues are rated as Minor, Serious, or Failed, but do not count as 'Failed' or 'OK'; they only give the interface engineering team hints for judging the severity of problems.

Failed

The task could not be completed with the interface. This is the case when users get lost in the interface, do something wrong, or cannot continue because something is missing.

OK

The task leads to a valid result. This is the case when no step shows a single 'Failed'. Single steps may, however, still have issues of type Serious or Minor.

Analysis

Non Functional Tasks

These tasks are performed as a warm-up only and are usually carried out fine.

Browse & Display

These are less important tasks. They are performed as a warm-up for participants and are usually carried out flawlessly.

View Item/Details

These are less important tasks. They are performed as a warm-up for participants and are usually carried out flawlessly.

Export and Miscellaneous

These are less important tasks. They are performed as a warm-up for participants and are usually carried out flawlessly.

Edit Book Meta Data

These are less important tasks. They are performed as a warm-up for participants and are usually carried out flawlessly.

Edit Table of Content

These are less important tasks. They are performed as a warm-up for participants and are usually carried out flawlessly.

Summary

Measures taken from the interviews do not necessarily solve a problem in one go, so old issues need to be tested again. As R4 comes with a new interface, it needs to be tested completely.

References