Gareth White
Professional video game developer, Ph.D. researcher (usability and user experience for games) and director of games research at http://www.VerticalSlice.co.uk/
Supervisors: Graham McAllister and Geraldine Fitzpatrick
Address: Shanghai, China
Papers by Gareth White
Usability evaluation can help guide developers by pointing out design issues that cause users problems. However, usability evaluation methods suffer from the evaluator effect, in which separate evaluations of the same data do not produce reliably consistent results. This can lead to a number of undesirable consequences, including:
• Unreliable evaluation: Without reliable results, evaluation reports risk giving incorrect or misleading advice.
• Weak methodological validation: Typically new methods (e.g., new heuristics) are validated against user tests. However, without a reliable means to describe observations, attempts to validate novel methods against user test data will also be affected by weak reliability.
The playthrough evaluation framework addresses these points through a series of studies that establish the need for the framework and show its development, in the following stages:
1. Explication of poor reliability in heuristic evaluation.
2. Development and validation of a reliable user test coding scheme.
3. Derivation of a novel usability evaluation method, playthrough evaluation.
4. Testing the method, quantifying results.
Evaluations were conducted with 22 participants, on 3 first-person shooter action console video games, using two methods: heuristic evaluation and the novel playthrough evaluation developed in this thesis. Both methods proved effective, with playthrough evaluation providing more detailed analysis but requiring more time to conduct.