






Table of Contents

[insert Table of Contents]
The test identified only a few minor problems, including:

- The lack of categorization of topics on the funding pages
- Confusion over apparently duplicative treatment and care information
- The lack of a fact sheet/brochure category section
- The lack of a HIPAA category section
- The lack of a Mental Health category section
- The lack of a site index
- The lack of any categorization of news items on the news page
- The lack of a section for HIV+ data (e.g., the number of individuals infected)

This document contains the participant feedback, satisfaction ratings, task completion rates, ease or difficulty of completion ratings, time on task, errors, and recommendations for improvements. A copy of the scenarios and questionnaires is included in the Attachments section.

Methodology

Sessions

[Describe how the participants were recruited. Describe the individual sessions – their length and what happened during them. Explain what the participant was asked to do and what happened after the test session. Describe any pre- or post-test questionnaires. Include the subjective and overall questionnaires in the Attachments section.]

For example: The test administrator contacted and recruited participants via AIDS.gov from the HPLA conference attendee list. The test administrator sent e-mails to attendees informing them of the test logistics and requesting their availability and participation. Participants responded with an appropriate date and time.

Each individual session lasted approximately one hour. During the session, the test administrator explained the test session and asked the participant to fill out a brief background questionnaire (see Attachment A). Participants read the task scenarios and tried to find the information on the website. After each task, the administrator asked the participant to rate the interface on a 5-point Likert scale ranging from Strongly Disagree to Strongly Agree. Post-task scenario subjective measures included (see Attachment B):

- How easy it was to find the information from the home page
- Ability to keep track of their location in the website
- Accuracy in predicting which section of the website contained the information

After the last task was completed, the test administrator asked the participant to rate the website overall, using a 5-point Likert scale (Strongly Disagree to Strongly Agree), on eight subjective measures:

- Ease of use
- Frequency of use
- Difficulty in keeping track of location in the website
- Learnability – how easy it would be for most users to learn to use the website
- Information facilitation – how quickly the participant could find information
- Look & feel appeal – the homepage's content makes me want to explore the site further
- Site content – the site's content would keep me coming back
- Site organization

In addition, the test administrator asked the participants the following overall website questions:

- What the participant liked most
- What the participant liked least
- Recommendations for improvement

See Attachment C for the subjective and overall questionnaires.

Participants

[Provide a description of the participants. Include the number of participants, the testing dates, and the number of participants on each testing day. Provide a summary of the results from the demographic/background questionnaire and display this information in a table.]

For example: All participants were attendees at the HPLA Conference and HIV/AIDS community professionals. Sixteen participants were scheduled over the two testing dates; thirteen of the sixteen completed the test. Seven participants were involved in testing on May 21st and six on May 22nd. Of the thirteen participants, six were male and seven were female.

Role in HIV/AIDS Community

Participants selected their role in the HIV/AIDS community from a general list. Roles included federal agencies, state and public health departments, grantees, and research institutions. Some participants were involved in multiple roles.

Example of table:

Role | Federal Staff/Agency | State/Public Health Department | Federal Grantee | Medical Institution | Research Institution
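Because participants could select more than one role, the counts in the role table are tallies of selections rather than of participants, so the row total can exceed thirteen. A minimal Python sketch of that tally, using invented example responses (the real data come from the background questionnaire in Attachment A):

    from collections import Counter

    # Each participant may select one or more roles from the general list,
    # so these hypothetical responses are lists rather than single values.
    responses = [
        ["Federal Staff/Agency"],
        ["State/Public Health Department", "Federal Grantee"],
        ["Federal Grantee", "Research Institution"],
        ["Medical Institution"],
    ]

    # Flatten the selections and count how often each role was chosen.
    role_counts = Counter(role for selected in responses for role in selected)
    for role, count in role_counts.most_common():
        print(f"{role}: {count}")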
- I was able to accurately predict which section of the website contained this information.

The 5-point rating scale ranged from 1 (Strongly Disagree) to 5 (Strongly Agree). Agree ratings are the Agree and Strongly Agree responses combined; a mean agreement rating above 4.0 is taken to mean that users agreed the information was easy to find, that they could keep track of their location, and that they could predict which section contained the information.

Ease in Finding Information

[Describe the results for this rating variable. Begin with the tasks with the highest mean ratings, followed by those with the lowest.]

For example: All participants agreed it was easy to find treatment information (mean agreement rating = 4.7), and 86% found it easy to find HIV Testing Day information (mean agreement rating = 4.3). Only 29% of participants found it easy to find brochures (mean agreement rating = 2.4), and only 43% found it easy to find funding information (mean agreement rating = 2.9).

Keeping Track of Location in Site

[Describe the results for this rating variable. Begin with the tasks with the highest mean ratings, followed by those with the lowest.]

For example: All the participants found it easy to keep track of their location in the site while finding treatment information (mean agreement rating = 4.7) and finding HIV Testing Day information (mean agreement rating = 4.7). In addition, 86% found it easy to keep track of their location while finding a news item (mean agreement rating = 4.0). However, only 67% of participants found it easy to keep track of their location while finding brochures (mean agreement rating = 2.9).

Predicting Information Section

[Describe the results for this rating variable. Begin with the tasks with the highest mean ratings, followed by those with the lowest.]

For example: All the participants agreed it was easy to predict where to find treatment information (mean agreement rating = 4.7), and 85% agreed it was easy to predict where to find HIV Testing Day information (mean agreement rating = 4.6). However, only 29% agreed that it was easy to predict where to find brochures (mean agreement rating = 2.3), and only 44% agreed they could predict where to find funding information (mean agreement rating = 2.6).

[Display the results in a table (see example tabular display).]

Test 1 – Mean Task Ratings & Percent Agree*

Task | Ease – Finding Info | Location in Site | Predict Section | Overall
1 – Find News Item | 3.6 (57%) | 4.0 (86%) | 3.0 (29%) | 3.
2 – Obtain Funding | 2.9 (43%) | 3.9 (72%) | 2.6 (44%) | 2.
3 – Find Treatment Info | 4.7 (100%) | 4.7 (100%) | 4.7 (100%) | 4.
4 – Find FAQ (HIPAA) | 3.6 (57%) | 3.3 (83%) | 3.3 (57%) |
5 – Find Testing Day | | | |
6 – Find Brochures | 2.4 (29%) | 2.9 (67%) | 2.3 (29%) | 2.

*Percent Agree (%) = Agree & Strongly Agree responses combined

Time on Task

The testing software recorded the time on task for each participant. Some tasks were inherently more difficult to complete than others, which is reflected in the average time on task.

[Provide a task-by-task description – include the task title or goal and the mean time to complete. Provide the range of completion times.]

For example: Task 6 required participants to find brochures and took the longest to complete (mean = 210 seconds). Completion times ranged from 110 seconds (approximately 2 minutes) to 465 seconds (more than 7 minutes), with most times under 200 seconds (less than 4 minutes).

[Display the time data in a participant-by-task table and include the mean total time by task.]

For example:

Time on Task (seconds)

Task | P1 | P2 | P3 | P4 | P5 | P6 | P7 | Avg. TOT
Task 1* | 65 | 95 | 61 | 310 | 210 | 71 | 50 | 123.1
Task 2 | 130 | 370 | 50 | 200 | 110 | 55 | 390 | 186.4
Task 3 | 20 | 215 | 15 | 80 | 120 | 30 | 35 | 73.6
Task 4 | 150 | 65 | 55 | 150 | 180 | 67 | 240 | 129.6
Task 5 | 43 | 127 | 29 | 60 | 79 | 30 | 115 | 69.0
Task 6 | 146 | 110 | 120 | 465 | 130 | 175 | 325 | 210.1

Errors

[Insert who captured the errors here] captured the number of errors participants made while trying to complete the task scenarios. A non-critical error is an error that does not prevent successful completion of the scenario.

[Describe the task in which participants made the most errors. Describe any tasks that were completed without any non-critical errors. Provide the results in a table showing the number of errors by participant and task.]
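The mean ratings, percent-agree figures, and average times on task reported in this section reduce to two small computations: an arithmetic mean over the values, and the share of responses coded 4 (Agree) or 5 (Strongly Agree). A minimal Python sketch, with helper names invented for illustration, using the Task 1 times from the table above and the "Website is well organized" distribution from the overall ratings table below:

    def mean(values):
        """Arithmetic mean; works for Likert codes and for times on task."""
        return sum(values) / len(values)

    def percent_agree(responses):
        """Percent of responses that are Agree (4) or Strongly Agree (5)."""
        return 100 * sum(1 for r in responses if r >= 4) / len(responses)

    # "Website is well organized": 5 Neutral, 6 Agree, 2 Strongly Agree (n = 13)
    organized = [3] * 5 + [4] * 6 + [5] * 2
    print(f"{mean(organized):.1f} ({percent_agree(organized):.0f}%)")  # 3.8 (62%)

    # Task 1 times on task in seconds, one value per participant (P1-P7)
    task1_times = [65, 95, 61, 310, 210, 71, 50]
    print(f"{mean(task1_times):.1f}")  # 123.1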
Measure | Strongly Disagree | Disagree | Neutral | Agree | Strongly Agree | Mean Rating | Percent Agree*
Keep track of where they were in website | | | | | | |
Thought most people would learn to use website quickly | | | | | | |
Can get information quickly | | 1 | 2 | 8 | 2 | 3.9 | 77%
Homepage's content makes me want to explore site | | | | | | |
Site's content would keep me coming back | | | | | | |
Website is well organized | | | 5 | 6 | 2 | 3.8 | 62%

*Percent Agree (%) = Agree & Strongly Agree responses combined

4.6.2 Likes, Dislikes, Participant Recommendations

Upon completion of the tasks, participants provided feedback on what they liked most and least about the website, along with recommendations for improving it.

Liked Most

The following comments capture what the participants liked most:

[insert liked most comments here]

Liked Least

The following comments capture what the participants liked least:

[insert liked least comments here]

Recommendations for Improvement

[insert recommendations here]

Recommendations

This section provides recommended changes and justifications driven by participant success rates, behaviors, and comments. Each recommendation includes a severity rating. The following recommendations will improve the overall ease of use and address the areas where participants experienced problems or found the interface or information architecture unclear.

[Provide the task title and an overview of the task. In a table, present the change, the justification for the change, and the severity rating for the change. Do this for each recommendation.]

For example:

Find Organizational or Individual Funding Information (Task 2)

Task 2 required participants to find organizational funding (Test 1) or individual funding (Test 2).

Change | Justification | Severity
Add categories to the funding pages, and add additional descriptive text on the Funding Opportunities home page. | Participants across both tests rated the ease of finding funding information 2.9 (out of 5), and only 38% agreed that it was easy to find. Funding information is not categorized, so users must read through all the funding opportunities to find one of interest. Participant comments also asked for funding to be categorized in a more concise manner so it is easier to find. | High

Conclusion

[Provide a short conclusion paragraph. Begin with an overall statement of what the participants found and what is key about the website/application.]

For example: Most of the participants found AIDS.gov to be well organized, comprehensive, clean and uncluttered, very useful, and easy to use. Having a centralized site to find information is key to many, if not all, of the participants. Implementing the recommendations and continuing to work with users (i.e., real lay persons) will ensure a continued user-centered website.

[Add Attachments. Attachments may include: Attachment A – Background Questionnaire; Attachment B – Post-Task Questionnaire; Attachment C – Post-session Overall Subjective Questionnaire; Attachment D – Task Scenarios.]