
Malaysian Journal of Computer Science, Vol. 16 No. 1, June 2003, pp. 47-57

WEBUSE: WEBSITE USABILITY EVALUATION TOOL

Thiam Kian Chiew and Siti Salwa Salim
Department of Software Engineering
Faculty of Computer Science and Information Technology
University of Malaya
50603 Kuala Lumpur, Malaysia
Tel: 603-79676376/6347 Fax: 603-79579249
email: [email protected] [email protected]

ABSTRACT

Usability is one of the major factors that determine the success of a website. It is therefore important to have measurement methods for assessing the usability of websites. Such methods can help website designers make their websites more usable. This research focuses on website usability issues and implements a tool for evaluating the usability of websites, called WEBUSE (WEBsite USability Evaluation Tool). Based on a literature study, a 24-question evaluation questionnaire was formulated. The questionnaire is implemented as a Web-based tool that visitors of a website can use to evaluate its usability. The visitors' responses to the questionnaire are analysed, and the results of the analysis show the good and bad usability aspects of the website. Website designers and developers can improve their websites based on these results. WEBUSE is suitable for the evaluation of all types of websites. The evaluation provided by WEBUSE is reliable and has received favourable user satisfaction and acceptance.

Keywords: WEBUSE, Website Usability Evaluation Tool, User Interface, User Satisfaction, Human Computer Interaction

1.0 INTRODUCTION

Usability is defined in ISO 9241-11 as the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use [1]. Two questions about usability should be asked when designing a system, especially an interactive system [2]:

1. How can a system be developed to ensure its usability?
2. How can the usability of a system be demonstrated or measured?

Many Web-based interactive systems have been developed over the last decade. According to IDC Asia (July 1998), the number of websites in the Asia-Pacific region increased by 75% between September 1997 and May 1998. With Web authoring tools, producing websites has become easy, and even inexperienced information providers can create their own websites. However, the authors of these websites usually create their content and structure from their own perspective rather than the users' perspective. Others simply transfer information from printed form to web pages without adapting it for presentation on the Web. Evaluating the usability of a website is therefore important. However, problems in getting usability results used more widely in development are basically due to the lack of usability of the usability evaluation methods and results themselves [3]. The precision of a usability evaluation method determines the accuracy of the evaluation; different evaluation methods may yield different results for the usability of the same system. Website usability can be studied from different perspectives [4], and different website usability evaluation tools can be designed depending on the perspectives emphasised.

2.0 A REVIEW OF EXISTING EVALUATION METHODS AND TOOLS

There are different types of evaluation methods for examining the usability-related aspects of a system. According to Mack and Nielsen [5], evaluation methods can be classified into four categories:

§ Automated – usability measures are computed by running a user interface specification through evaluation software.
§ Empirical – usability is assessed by testing the interface with real users.
§ Formal – exact models and formulas are used to calculate usability measures.
§ Informal – based on rules of thumb and the general skill, knowledge and experience of the evaluators.
Benbunan-Fich, on the other hand, categorised usability evaluation methods into four categories [6]:

§ Objective performance – measures visitors' capability in using the website in terms of the time taken to complete specific tasks through the system.
§ Subjective user preferences – measures users' preferences for the system by eliciting their opinions or having them rate the system with a questionnaire.
§ Experimental – based on controlled experiments to test hypotheses about design and their impact on user performance and preferences.
§ Direct observation – inspects and monitors users' behaviour while they interact with the system to detect usability problems.

Each method has its strengths and weaknesses. Website designers or developers need to select suitable evaluation methods based on factors such as the stage of design, novelty of the project, number of expected users, criticality of the interface, cost of the product and finances allocated for testing, time available, and experience of the design and evaluation team [7, 8]. Several website usability evaluation tools and methods have been developed based on the above categories, such as:

§ WAMMI
WAMMI was developed by the Human Factors Research Group (HFRG) in 1999. It is an evaluation tool for websites based on a questionnaire filled in by visitors of a website, and gives a measure of how useful and easy to use visitors found the site [9]. The WAMMI report provides the following information:
• Overall usability score and the general rating of a website.
• Detailed usability profile in terms of five usability scales: attractiveness, control, efficiency, helpfulness, and learnability.
• Detailed listings of those aspects of the website that visitors found especially good or especially problematic.
§ NIST Web Metrics
The objective of the National Institute of Standards and Technology (NIST) Web Metrics is to explore the feasibility of a range of tools and techniques that support rapid, remote, and automated testing and evaluation of website usability [10]. Web Metrics consists of, among others, the following prototypes:
• Web Static Analyser Tool (WebSAT), which checks the HTML code of web pages against usability guidelines, either its own or a set of IEEE Standard 2001-1999 guidelines. It can check individual pages or an entire website.
• Web Category Analysis Tool (WebCAT), which lets the usability engineer quickly construct and conduct a simple category analysis across the Web.
• Web Variable Instrumenter Program (WebVIP), which lets the usability engineer rapidly instrument a website so as to capture a log of user behaviour on the site.
• Framework for Logging Usability Data (FLUD), which checks the behaviour of website users by capturing user interaction logs.
• FLUD Viz, a tool that lets the usability engineer visualise and analyse a single usability session.

§ Bobby
Bobby is a Web accessibility software tool designed to help expose and repair barriers to accessibility and encourage compliance with existing accessibility guidelines. Bobby tests for compliance with accessibility standards such as the U.S. Government's Section 508 and the Web Content Accessibility Guidelines provided by the W3C's Web Accessibility Initiative. Bobby allows developers to test web pages and generate summary reports highlighting critical accessibility issues [11].

§ Protocol Analysis
The protocol analysis or "think aloud" method was used by Benbunan-Fich [6] of Seton Hall University to evaluate a commercial website. It is based on direct observation of a real interaction between the user and the system. During the evaluation session, the user is asked to carry out a pre-defined task using the system (website).
At the same time, the user is asked to verbalise his or her thoughts by "thinking aloud", explaining the thinking process and the reasons behind each action. The way the user approaches a task, and the reasons why problems occur while the user interacts with the system, are captured using a concurrent protocol. Video or audio recordings are required for this evaluation process.

These methods measure either the objective usability of a website or users' subjective perception of the website. Objective measures, such as evaluating a website based on its HTML code (WebSAT and Bobby) or measuring users' performance in carrying out certain tasks (WebCAT and FLUD), tend to assess the technical correctness of the website rather than its overall impact on users. The advantage is that such measures can be easily quantified. However, external factors such as connection speed, cultural issues and other human factors are not considered. Subjective measures, on the other hand, assess users' impression of the design of the website as well as the effect of the design on user interaction. They place users at the centre of usability evaluation and are suitable for evaluating websites, since a website is normally visited by many users from different backgrounds and places. Jakob Nielsen claimed that usability is about basic human capabilities and users' needs, which do not change nearly as rapidly as technology [12]. He also claimed that human factors remain the same decade after decade [13]. This raises an important point: website usability is a human factors issue which should be examined based on a set of well-defined guidelines. In fact, most of the available usability guidelines focus on how human beings feel about, look at, and use websites. User satisfaction and convenience are the main considerations when discussing website usability. This research therefore takes the approach of subjective measures.
It aims at developing a tool that asks users to evaluate websites. It uses a Web-based questionnaire, an effective way to collect large amounts of data from people all over the world, and measures users' subjective satisfaction with, and impression of, the websites.

3.0 METHODOLOGY

The methodology adopted by this research is shown in Fig. 1. The research first studies the issues related to website usability, including the concept of usability and usability evaluation methods and tools. Based on this study, the evaluation method was determined: a Web-based usability evaluation questionnaire that allows users to rate the usability of evaluated websites. This method was chosen because studies have found that questionnaire data can be both reliable and valid for the assessment of user satisfaction with websites or computer-based applications [14]. Major usability evaluation criteria are then identified in order to formulate the evaluation questionnaire. A structured approach is used to analyse and design the evaluation tool, which is developed with Active Server Pages (ASP). The tool is tested by a group of 40 randomly selected users.

4.0 ANALYSIS OF EXISTING WEBSITE USABILITY EVALUATION TOOLS AND EVALUATION CRITERIA

Table 1 shows the usability aspects covered by the four evaluation tools studied. It must be emphasised that the usability aspects are interdependent and interrelated. For example, user satisfaction is related to other factors such as user interface attractiveness, performance, and navigational aids. Similarly, efficiency affects system performance. It is therefore important to identify major usability categories from which evaluation criteria can be formulated. The categorisation can be done by first examining possible usability guidelines, then classifying them into categories based on the main usability aspects they measure. The classification helps to simplify the evaluation process.
Fig. 1: Research Methodology (Identify Problem Domain – covering the concept of usability, usability evaluation methods, and usability evaluation tools – followed by Determine Evaluation Method, Formulate Evaluation Questionnaire, Analyse and Design the Evaluation Tool, Implement the Evaluation Tool, and System Testing and Evaluation)

Table 1: Usability Aspects Covered by the Four Usability Evaluation Tools (WAMMI, WebSAT, Bobby, and Protocol Analysis; the aspects compared are user satisfaction, emotional effect, learnability/ease of use, efficiency, user control, accessibility, navigational aids, content and organisation, user interface attractiveness, performance, and readability)

After an extensive study of related resources [15, 16, 17, 18, 19, 20], the following website usability evaluation criteria were identified:

§ The display space of the website should not be divided into many small sections, in order to give users a comfortable reading experience. This implies that the number of frames used should be limited.
§ Users should not need to scroll left and right to read the content of the website, because that causes reading difficulty.
§ The website should be accessible to users with different browser capabilities. Avoid using technologies that might cause users' systems to crash when visiting the website. Thorough system testing is required before the website is launched to the public.
§ The website should not contain elements that are distracting or irritating to users, such as scrolling text, marquees, and constantly running animations.
§ The website should contain no orphan pages. Every page should contain at least a link up to the home page and some indication of the current page location, such as a site map or menu.
§ The placement and content of the site map or menu should be consistent so that users can easily recognise them and identify the targeted link.
§ Information should be easy to search. For a large website, search features should be provided.
§ Users should be able to easily differentiate links that have been visited from those that have not. Standard link colours (red for visited links and blue for unvisited links) should be used.
§ Information should be up-to-date. Outdated pages should be replaced.
§ Download time should not exceed 15 seconds, as users do not want to wait long to download a file or access a page.
§ Users should be allowed to use the back button to return to the previous page. Pressing the back button accounts for 30-37% of all navigational acts.
§ Do not open too many new browser windows, as that makes it difficult for users to trace their current location or status in the website.
§ The website should respond according to users' expectations. This includes the standard use of GUI widgets, such as using radio buttons for selecting one among many options.
§ Reduce elements that look like Web advertising, as too many advertisements irritate users.
§ Information should be presented in a natural and logical order which follows standard conventions.
§ Use meaningful words to describe the destination page of a hyperlink. This saves users time by not taking them to unnecessary pages.
§ The website design, including page layout, use of colours, and placement of page elements, should be consistent to give users a standard look and feel of the website.
§ Use colours with good contrast and page elements that attract users' attention to the main information of the page rather than distracting them from it.
§ Enhance the readability of a page by avoiding blocks of text. Instead, organise the text using headlines, sub-headlines, bulleted lists, highlighted keywords, short paragraphs, and so on. Headlines can be used to summarise the content of a section or a page, helping users get a brief idea of it.
§ Provide sufficient navigational aids to help users move around in the website.
This includes providing links at the bottom of a long page to allow users to return to the top of the page.

The 20 usability criteria identified show important aspects of website usability. They can be classified into four categories:

§ Content, organisation, and readability,
§ Navigation and links,
§ User interface design, and
§ Performance and effectiveness.

5.0 DESIGN OF USABILITY EVALUATION QUESTIONNAIRE

The classification of the criteria into categories is shown in Table 2. From the table, it is clear that a criterion may fall into more than one category, which indicates that the categories are related to each other. To design the usability evaluation questionnaire, six questions were formulated for each category based on the evaluation criteria. The following guidelines were used when designing and developing the evaluation questionnaire:

§ Evaluate aspects that are closely related to human factors, i.e. issues that are user-centred.
§ Evaluate subjective user satisfaction based on objective and clearly defined usability evaluation criteria.
§ Be easy to use and present clear and comprehensive reports to the users.
§ Provide feedback to users where possible.

Questions for evaluating content, organisation and readability are:

§ This website contains most of the material and topics that interest me, and they are up-to-date.
§ I can easily find what I want at this website.
§ The content of this website is well organised.
§ Reading content at this website is easy.
§ I am comfortable and familiar with the language used.
§ I need not scroll left and right when reading at this website.
Table 2: Classification of Usability Evaluation Criteria into Usability Categories (each criterion is assigned to one or more of: Content, Organisation & Readability; Navigation & Links; User Interface Design; Performance & Effectiveness)

No  Usability Criteria
1   Display space
2   Scroll left and right
3   Accessible
4   Distracting or irritating elements
5   Orphan page
6   Placement and content of site map or menu
7   Information search
8   Link colours
9   Up-to-date information
10  Download time
11  Back button
12  Open new browser windows
13  Respond according to users' expectations
14  Web advertising
15  Follow real world conventions
16  Hyperlink description
17  Consistent design
18  Use of colour
19  Organisation of information
20  Navigational aids

Questions for evaluating navigation and links are:

§ I can easily know where I am at this website.
§ This website provides useful cues and links for me to get the desired information.
§ It is easy to move around at this website using the links or the back button of the browser.
§ The links at this website are well maintained and updated.
§ The website does not open too many new browser windows when I am moving around.
§ Placement of links or the menu is standard throughout the website and I can easily recognise them.

Questions for evaluating user interface design are:

§ This website's interface design is attractive.
§ I am comfortable with the colours used at this website.
§ This website contains no features that irritate me, such as scrolling or blinking text and looping animations.
§ This website has a consistent look and feel.
§ This website does not contain too many Web advertisements.
§ The design of the website makes sense and it is easy to learn how to use it.

Questions for evaluating performance and effectiveness are:

§ I need not wait too long to download a file or open a page.
§ I can easily distinguish between visited and unvisited links.
§ I can access this website most of the time.
§ This website responds to my actions as expected.
§ It is efficient to use this website.
§ This website always provides clear and useful messages when I don't know how to proceed.

6.0 WEBUSE DEVELOPMENT AND TESTING

The evaluation tool developed is called WEBUSE (WEBsite USability Evaluation Tool). It was developed based on the model shown in Fig. 2.

Fig. 2: WEBUSE Development Model (the four usability categories – Content, Organisation & Readability; Navigation & Links; User Interface Design; Performance & Effectiveness – feed the Web-based usability evaluation questionnaire, which produces the usability evaluation result and suggestions for improvement)

The steps for evaluation are as follows:

§ The user selects the website to be evaluated.
§ The user answers the usability evaluation questionnaire.
§ The user's response is sent to the WEBUSE server for processing.
§ Merits are assigned according to the response (answer) for each question. The merits are then accumulated for each of the four usability categories.
§ The mean merit for each category is taken as the usability point for that category. The overall website usability point is the mean of the usability points of the four categories.
§ The usability level is determined by the usability points.

Five options are available for each question. The options and corresponding merits are shown in Table 3.

Table 3: Options for WEBUSE Questionnaire and Corresponding Merits

Option:  Strongly Agree   Agree   Fair   Disagree   Strongly Disagree
Merit:   1.00             0.75    0.50   0.25       0.00

The usability point for a category, x, is defined as:

x = [ Σ(merit for each question of the category) ] / [ number of questions ]

Table 4 shows the usability levels and the corresponding usability points.

Table 4: Usability Points and Corresponding Usability Levels

Points, x:        0 <= x <= 0.2   0.2 < x <= 0.4   0.4 < x <= 0.6   0.6 < x <= 0.8   0.8 < x <= 1.0
Usability Level:  Bad             Poor             Moderate         Good             Excellent

Table 5 shows the general report of the evaluation.
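The merit scheme of Table 3 and the level thresholds of Table 4 can be sketched as a short program. This is a minimal illustration only: the function and variable names are ours, not part of WEBUSE, which was implemented in ASP.

```python
# Minimal sketch of the WEBUSE scoring scheme (Tables 3 and 4).
# Names are illustrative; the original tool was written in ASP.

MERITS = {
    "Strongly Agree": 1.00,
    "Agree": 0.75,
    "Fair": 0.50,
    "Disagree": 0.25,
    "Strongly Disagree": 0.00,
}

def usability_point(responses):
    """Mean merit over the answered questions of one category."""
    merits = [MERITS[r] for r in responses]
    return sum(merits) / len(merits)

def usability_level(x):
    """Map a usability point x to its usability level (Table 4)."""
    if x <= 0.2:
        return "Bad"
    if x <= 0.4:
        return "Poor"
    if x <= 0.6:
        return "Moderate"
    if x <= 0.8:
        return "Good"
    return "Excellent"

# One participant's answers to the six questions of one category:
answers = ["Agree", "Strongly Agree", "Fair", "Agree", "Agree", "Disagree"]
point = usability_point(answers)   # (0.75 + 1.00 + 0.50 + 0.75 + 0.75 + 0.25) / 6
print(round(point, 2), usability_level(point))   # prints: 0.67 Good
```

The overall usability point would then be the mean of the four category points, mapped to a level in the same way.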
Table 5: WEBUSE General Usability Report

Category                                 Point   Usability Level   Detailed Report
Content, Organisation and Readability                              View
Navigation and Links                                               View
User Interface Design                                              View
Performance and Effectiveness                                      View
Overall

The last column provides links to a detailed report for each usability category. By clicking the link, the user can read a more detailed report for that category, showing the evaluation result for each question (usability criterion). Table 6 shows the design of the detailed report for content, organisation and readability.

Table 6: Detailed Report for Content, Organisation and Readability

Usability Criteria                                                                 Rating
This website contains most of the material and topics that interest me, and they are up-to-date.
I can easily find what I want at this website.
The content of this website is well organised.
Reading content at this website is easy.
I am comfortable and familiar with the language used.
I need not scroll left and right when reading at this website.
Category Point / Usability Level

A prototype of WEBUSE was developed, and 40 participants were chosen randomly. They were students of the Faculty of Computer Science and Information Technology, University of Malaya: 9 pursuing Master of Computer Science degrees, 22 doing Bachelor degrees (Computer Science or Information Technology), 4 doing the Diploma in Information Technology, and the remaining 5 doing the Advanced Diploma in Computer Science. Table 7 shows the participants' computer and Internet literacy.

Table 7: Participants' Experience in Using Computers and the Internet

                    Never   < 1 Year   1-3 Years   > 3 Years   Total
Using Computer      0       1          17          22          40
Using Internet      0       1          23          16          40

The participants were asked to use WEBUSE to evaluate four websites, with 10 participants for each website. They were asked to surf the websites for about 20 minutes to answer six questions related to the websites. The answers to the questions were available at the websites.
The purpose of asking the participants to answer the six questions was not to measure their ability at surfing and searching the websites. Rather, it was to give them the opportunity to move around the websites so that they could gain a better understanding of the websites from the perspectives of content, organisation and readability; navigation and links; user interface design; and performance and effectiveness. With this surfing experience, the participants could evaluate the websites better. The evaluation can be divided into two parts:

§ Qualitative evaluation: evaluates the participants' acceptance of, and satisfaction with, WEBUSE.
§ Quantitative evaluation: evaluates the reliability of WEBUSE using Cronbach's alpha coefficient, α, defined as:

α = [k / (k − 1)] [1 − (Σ s_i² / s²)]

where:
k = number of test/evaluation items
s_i² = variance of a single test/evaluation item
s² = variance of the total scores

In general, reliability coefficients take values from 0 to 1.00, inclusive; the greater the coefficient, the more reliable the measurement [21]. The 40-participant experiment produced coefficients greater than 0.8 (0.832, 0.814, 0.834, and 0.865 for the four websites respectively), which implies that the evaluations are reliable. In terms of user acceptance and satisfaction, Table 8 shows the questions asked and the participants' responses. Numbers 1, 2, 3, and 4 represent the four evaluation groups of 10 participants each.
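The coefficient defined above can be computed directly from the merit scores. The following sketch uses population variances; the score matrix is made-up example data for illustration, not the experiment's actual responses.

```python
# Sketch of the Cronbach's alpha computation defined above.
# `scores` holds one row per participant and one column per questionnaire item;
# the example data below is invented for illustration only.

def cronbach_alpha(scores):
    k = len(scores[0])                       # k: number of evaluation items
    def var(xs):                             # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var([row[i] for row in scores]) for i in range(k))
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - item_vars / total_var)

scores = [
    [1.00, 0.75, 0.75],
    [0.75, 0.75, 0.50],
    [0.50, 0.25, 0.50],
    [0.75, 0.50, 0.75],
]
print(round(cronbach_alpha(scores), 3))      # prints: 0.824
```

A value above 0.8, as obtained for all four websites in the experiment, is conventionally taken to indicate good internal consistency.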
Table 8: Ratings on WEBUSE by the 40 Participants Who Took Part in the Experiment

                                                               Yes                        No
Question                                                       1   2   3   4   Σ    %    1   2   3   4   Σ    %
Easy to use                                                    10  10  10  10  40  100.0  0   0   0   0   0    0.0
Efficient in performing evaluation                             9   9   9   10  37   92.5  1   1   1   0   3    7.5
Interactive                                                    5   7   7   10  29   72.5  5   3   3   0   11  27.5
Well organised content                                         9   9   9   10  37   92.5  1   1   1   0   3    7.5
Easy to move around the website                                10  8   9   8   35   87.5  0   2   1   2   5   12.5
Attractive user interface design                               3   7   7   8   25   62.5  7   3   3   2   15  37.5
Easy to read and comprehensible content                        7   8   9   8   32   80.0  3   2   1   2   8   20.0
Clear and easy to understand report                            10  10  9   8   38   95.0  0   0   1   2   2    5.0
Feel in control when using WEBUSE                              10  8   9   10  37   92.5  0   2   1   0   3    7.5
Gain better understanding about usability after using WEBUSE   8   9   8   8   33   82.5  2   1   2   2   7   17.5

Overall, participants were satisfied with WEBUSE. The mean value of the positive feedback given to the 10 questions is 85.75%. This is an acceptable result, but improvements could be made to the user interface design and interactive features, as more than 25% of the participants gave negative feedback on these two aspects.

Fig. 3: Sample Screenshot of WEBUSE

7.0 CONCLUSION

This project has contributed to research in website usability in two aspects:

§ It summarises many website usability issues and groups them into a set of usability guidelines, from which a 24-question evaluation questionnaire was formulated. The guidelines can be used to evaluate the usability of websites as well as help Web designers and developers build more usable websites.
§ It uses the usability guidelines to build an evaluation tool, WEBUSE, which can assist webmasters in improving their websites. The tool allows visitors to a website to evaluate that website; based on the responses provided by those visitors, webmasters learn the good and bad usability aspects of their websites from the visitors' perspective.
REFERENCES

[1] ISO 9241-11, Ergonomic Requirements for Office Work with Visual Display Terminals (VDTs) – Part 11: Guidance on Usability, 1998.
[2] A. Dix et al., Human-Computer Interaction, 2nd Ed., London, Prentice Hall Europe, 1998.
[3] J. Nielsen, "Getting Usability Used", in Proceedings of Human-Computer Interaction: Interact '95, London, Chapman & Hall, 1995.
[4] S. B. Shum and C. McKnight, "World Wide Web Usability: Introduction to This Special Issue", International Journal of Human-Computer Studies, Vol. 47, No. 1, 1997, pp. 1-4.
[5] R. L. Mack and J. Nielsen, "Executive Summary", in Usability Inspection Methods, New York, John Wiley & Sons, 1994.
[6] R. Benbunan-Fich, "Using Protocol Analysis to Evaluate the Usability of a Commercial Website", Information & Management, Vol. 39, No. 2, 2001, pp. 151-163.
[7] J. R. Preece et al., Human-Computer Interaction, Wokingham, Addison-Wesley, 1994.
[8] W. M. Newman and M. G. Lamming, Interactive System Design, Wokingham, Addison-Wesley, 1994.
[9] WAMMI – Website Analysis and Measurement Inventory (Web Usability Questionnaire), http://www.ucc.ie/hfrg/questionnaires/wammi/index.html (visited 18 March 2003).
[10] Web Metrics Testbed, http://zing.ncsl.nist.gov/WebTools/tech.html (visited 19 March 2003).
[11] Bobby – About Bobby, http://bobby.watchfire.com/bobby/html/en/about.jsp (visited 20 March 2003).
[12] J. Nielsen, Changes in Web Usability Since 1994, Jakob Nielsen's Alertbox, 1 December 1997, http://www.useit.com/alertbox/9712a.html (visited 14 March 2003).
[13] J. Nielsen, Persuasive Design: New Captology Book, Jakob Nielsen's Alertbox, 3 March 2003, http://www.useit.com/alertbox/20030303.html (visited 14 March 2003).
[14] J. Kirakowski and B. Cierlik, "Measuring the Usability of Web Sites", in Human Factors and Ergonomics Society Annual Conference, Chicago, 1998.
[15] J. Nielsen, The Alertbox: Current Issues in Web Usability, http://www.useit.com/alertbox/ (visited 13 March 2003).
[16] S. Walther, Active Server Pages Unleashed, Indianapolis, Sams Net, 1998.
[17] K. Instone, Usability Heuristics for the Web, Web Review, http://webreview.com/pub/97/10/10/usability/sidebar.html (visited 13 April 2000).
[18] J. Makulowich, User Friendly Web Needs Rigorous Study, Washington Technology, 5 July 1999, http://www.wtonline.com/vol14_no7/cover632-1.html (visited 13 April 2000).
[19] K. Kotwica, Survey: Web Site Navigation, CIO Magazine, July 1999, http://www.cio.com/forums/behavior/survey6.html (visited 13 April 2000).
[20] S. Weinschenk et al., GUI Design Essentials, John Wiley & Sons, 1997.
[21] W. Wiersma and S. G. Jurs, Educational Measurement and Testing, 2nd Ed., Boston, Allyn and Bacon, 1990.

BIOGRAPHY

Thiam Kian Chiew is currently a lecturer at the Faculty of Computer Science and Information Technology, University of Malaya. He obtained his Master of Computer Science from University of Malaya in 2000. His current research interests include Web development and evaluation, human computer interaction, and information architecture.

Siti Salwa Salim is currently an Associate Professor at the Faculty of Computer Science and Information Technology, University of Malaya. She obtained her PhD in Computer Science from the University of Manchester in 1998. Her current research interests include computer supported collaborative learning, human computer interaction, Web agents, software requirements engineering and usability engineering.