Thinking Aloud for Digitized Books Interview Analysis
Revision as of 13:30, 1 October 2010
Results from 'Thinking Aloud' interviews for ViRR Release 3
The usability evaluation method 'think aloud' is among the most affordable and efficient of the numerous test methods. For anyone following the human-centred design principles of ISO 13407[1], it is a good way to uncover relevant issues quickly.
Participants
According to the main groups defined in the Personas, 2 + 11 persons were interviewed: 2 at the MPDL in Munich and 11 at the MPIER in Frankfurt.
Group         | Amount
Administrator | 0
Librarian     | 7
Scientist     | 2
Secretary     | 4
Guest         | 0
All           | 13
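The per-group counts above can be cross-checked against the 'All' row with a trivial tally; this is a minimal sketch, and the dictionary name is illustrative, not part of the original report.

```python
# Participant counts per persona group, copied from the table above.
group_counts = {
    "Administrator": 0,
    "Librarian": 7,
    "Scientist": 2,
    "Secretary": 4,
    "Guest": 0,
}

# The sum of all groups should match the 'All' row (13 participants).
total = sum(group_counts.values())
print(total)  # 13
```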
Working conditions
Working conditions are quite good across all groups, although there is some distraction, and more urgent tasks sometimes need to be tackled while performing screen work. This is a good rate compared to other institutes.
Desktop space | Count
Lot of space  | 13
Good          | 0
Crowded       | 0

External help | Count
Available     | 13
IT services   | 0
Poor          | 0

Noise and distraction | Count
Distraction           | 0
Little                | 4
None                  | 7

Urgency of task | Count
Lower           | 5
Average         | 6
Higher          | 2
Experience with Screen Interfaces
Participants already have a lot of experience with screen interfaces, and their experience with web applications is quite high compared to other institutes.
Percentage of work spent with applications | 88%
Percentage of web applications involved    | 57%
Terms and definitions
Issues
Indicate where users hesitated, asked questions, made a comment, or missed the right action on the first attempt (a hint for user interface engineering where problems exist and whether they can be related to wording, process, or position). Issues are rated as Minor, Serious, or Failed, but they do not count towards the task result as such; they only give the interface engineering team hints on how to judge severity.
Failed
The task couldn't be completed with the interface. This is the case when users get lost in the interface, do something wrong, or cannot continue because something is missing.
OK
The task leads to a valid result. This is the case when no step shows a single 'Failed'. It does not mean that single steps are free of issues of type Serious or Minor.
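The rating scheme above can be sketched as a small classifier: a task is 'Failed' as soon as any step carries a 'Failed' issue, and 'OK' otherwise, regardless of Minor or Serious issues. This is a minimal illustration; the type and function names are assumptions, not part of the original evaluation tooling.

```python
from enum import Enum


class Severity(Enum):
    """Issue severities used in the interview protocol."""
    MINOR = "Minor"
    SERIOUS = "Serious"
    FAILED = "Failed"


def task_outcome(step_issues):
    """Classify a task as 'OK' or 'Failed'.

    step_issues: one list of Severity values per task step.
    A single 'Failed' issue in any step fails the whole task;
    Minor and Serious issues alone still yield 'OK'.
    """
    for issues in step_issues:
        if Severity.FAILED in issues:
            return "Failed"
    return "OK"
```

For example, a task whose steps show only Minor and Serious issues is still rated 'OK': `task_outcome([[Severity.MINOR], [Severity.SERIOUS]])` returns `"OK"`.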
Charts
[Image: non_functional.gif — Non Functional]
These are less important tasks. They are performed as a warm-up for participants and are usually carried out flawlessly.
Summary
Measures taken from the interviews do not necessarily solve a problem in one go, so old issues need to be tested again. As R4 comes with a new interface, it needs to be tested completely.