Archivist Round Table Website Moderated Usability Testing
Overview
This was a team project that conducted moderated remote usability tests to evaluate the website of The Archivists Round Table of Metropolitan New York, identifying users’ search and navigation behaviors, potential issues, and design opportunities. The usability experts collaboratively created user profiles, a screener, a recruitment strategy, tasks, a script, pre/post-task questionnaires, and a consent form; conducted the usability tests on UserZoom Go; and offered recommendations to the client to improve the accessibility, discoverability, and findability of content on the website.
Purpose
Moderated Remote Usability Testing of a website
My Role
Research
Scripting
Storytelling
Mockup building
Tools
UserZoom Go
Validately App
Figma
Client
The Archivists Round Table of Metropolitan New York, Inc.
Project Type
Graduate Usability Theory Course Project
Team
Lu Pei-Yun Chung, Danielle Kingsberg, Yuki Shimano (Me), Yi Zhong
Period
Oct 2020 - Dec 2020
The Client
The Archivists Round Table of Metropolitan New York, Inc. is a non-profit organization founded in 1979 that represents a diverse group of more than 400 archivists, librarians and records managers in the New York metropolitan area. The A.R.T. website serves as an advocacy platform of collections, events, forums, archival journals, and mentorship programs for the archivist community to raise awareness of the cultural preservation of materials.
The Challenge
The main problems the A.R.T. website had been facing were its difficulty of use for first-time or non-expert (in archives) users and poor content discoverability. The A.R.T. wanted to make its website easy to navigate, informative about the skills of the organization, and organized in its content structure.
Goals
The study was designed relying on both qualitative data (test observations, notes, and video recordings) and quantitative data (metrics of task difficulty, findings severity, questionnaire responses, etc.) in order to:
Gather visitors’ first impressions of the website
Make sure the website communicates to visitors who A.R.T. is and what archivists do
Make sure opportunities to get involved and network are discoverable
Find redundant information
Find out what expectations people have of the website
Make sure visitors can easily navigate the website on both desktop and mobile.
My Role
Three other usability experts and I designed the moderated remote usability tests, recruited test participants, conducted the tests, analyzed the collected data, and identified problems and recommendations for the A.R.T. In creating the final report and presentation, I was in charge of describing the findings and suggestions related to information on the Events, Members, and Mentorship pages, and minor content fixes.
Overall Process
The team decided to follow the process of meeting with clients, test preparation, data collection, data analysis, and delivering the report. The team made sure that suggestions to findings in the final deliverables were supported by user research and usability test results.
Client Meeting
The team met with the client in the first week of the project to ask about the website’s existing challenges and goals for the project (see The Challenge and Goals sections).
The client also wished to find out what expectations visitors had of the website and whether information on ways to get involved was discoverable, in order to change the layout and content of some of the pages, especially the homepage and About page.
The team also noted the key tasks and features that the client wanted to test.
Participant Recruitment
The client meeting revealed the main users of the A.R.T. website to be archivists, A.R.T. members, employees of organizations with archives who are looking for a community or for advice, non-experts who want to connect with archive experts, and educators. Based on this information, a target/primary user persona was designed.
Screening
As per the user profile, participants for this study were expected to be interested in archiving and archiving events. The team recruited a total of 9 people (8 were students from the School of Information at Pratt Institute, and 1 was an alum) who expressed interest in archiving content and going to museums. A recruitment email with a screener was sent to the Pratt Institute School of Information Listserv. The screening questions listed below allowed the research team to qualify or disqualify relevant participants:
What degree are you pursuing at Pratt Institute?
How often do you visit museums or libraries? (Including Pre-COVID)
The first question was used to identify participants enrolled in the Library and Information Science (MSLIS), Museums and Digital Culture (MS), or History of Art and Design (MSLIS/MA) programs, based on the assumption that students pursuing those degrees most likely wish to pursue a career in archiving.
The second question was asked based on the hypothesis that if a student frequented museums or libraries, they were more likely to pursue career opportunities in archiving or attend events hosted by the A.R.T.
Questionnaires
Pre-Test
Before proceeding to the tasks portion of the session, the moderator asked the following questions, similar to those on the screener:
[If participant did not sign the consent form before the test] Can you review, sign, and date the consent form?
What degree are you currently pursuing?
Which websites do you commonly use for networking?
How frequently do you visit libraries? [can be pre-covid]
Post-Task
The following questions were asked after most of the tasks (tasks #2, 3, 4) in order to learn about the difficulty of each task felt by the user:
Is this experience what you expected?
Do you feel you have completed this task successfully?
On a scale of 1-5, with 5 being very easy, how would you rate this task?
Post-Test
The following questions were asked after the test to summarize the users’ behaviors, expectations, and experiences:
How does this experience compare to other archive websites that you’ve used? (1- worse 5 - better)
Which of the following did you encounter as you were using the site? (Choose all that apply)
Hard to navigate; Too much scrolling; Not well organized; Too much reading; Text too small; Too much clicking; None of the above
What frustrated you most about this site?
What did you like most about this experience?
Did this activity improve your understanding of what archivists do? If not, what would help you to better understand their role?
Is there anything else that you would like to share or discuss?
Usability Test Scenarios and Tasks
Participants were asked to complete the following tasks, designed to cover the goals and key tasks the client wished to have addressed:
Scroll through the homepage without clicking on anything. What are your first impressions?
Use the website to find an A.R.T. event that you would be interested in attending.
You met an A.R.T. member from Princeton University, but forgot to get their contact information. Use the website to find and contact the member.
You’re interested in progressing your career. Use the website to learn about the mentorship programs.
Usability Testing/Data Collection
The 9 participants were given a choice to take the moderated usability test via UserZoom Go on a Chrome browser from either their laptop or mobile device; ultimately 7 tested on laptop and 2 on mobile. The research team conducted a pilot test before starting the series of usability tests to get used to the UserZoom Go tool and resolve any technological issues. Each moderated usability test was conducted by two team members: one acting as the moderator, who asked the participant the questions on the script, and one acting as the note-taker, recording the participant’s behavior and comments. Each team member moderated at least two tests. The moderated format made it possible to directly probe participants’ in-depth impressions of the A.R.T. website. Participants were asked to think aloud throughout the test.
The notes were turned into cards compiled into a single chart in Miro, sorted by participant and task. Observations about participant behavior and quotes from the tasks and post-test questionnaire were written on the cards.
Data Analysis
The quotes and observations from the usability tests written on cards on the Miro board were put into a spreadsheet, and each was assigned a severity level (0-4, with 4 being most severe) and a frequency (how many participants encountered the problem) to determine which issues required priority. A total of 183 findings were aggregated from the sessions.
34% of these findings contained positive or neutral feedback (severity level 0 or 1), and 35% were identified as issues that should be prioritized (severity level 3 or 4). The team decided to focus on providing recommendations for the severity level 3 and 4 issues.
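The prioritization step described above can be sketched in a few lines. The findings below are hypothetical examples, and the field names (`note`, `severity`, `frequency`) are assumptions for illustration only, not the team's actual spreadsheet columns.

```python
# Hypothetical findings, each with a severity level (0-4) and a
# frequency (number of participants who encountered the problem).
findings = [
    {"note": "Font too small",             "severity": 3, "frequency": 5},
    {"note": "Liked the photos",           "severity": 0, "frequency": 4},
    {"note": "Members directory missing",  "severity": 4, "frequency": 6},
]

# Keep only high-severity issues (levels 3-4), then rank them by
# severity and by how many participants hit each one, so the most
# severe and widespread problems come first.
priority = sorted(
    (f for f in findings if f["severity"] >= 3),
    key=lambda f: (f["severity"], f["frequency"]),
    reverse=True,
)

for f in priority:
    print(f"{f['note']} (severity {f['severity']}, {f['frequency']} participants)")
```

Sorting on the (severity, frequency) pair means frequency only breaks ties between issues of equal severity, which matches treating severity as the primary triage signal.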
From the collected answers to post-task question #3, participants had the most difficulty completing Task 3, finding an A.R.T. member, with an average score of 2.2 out of 5. Four of the nine participants gave it a 1 on the rating scale, one stating:
“I don't even know how I can find the directory. It's not what I expected when I clicked ‘Members’.”
Task 2, finding an event, was the next most difficult, with an average score of 3.7 out of 5. Participants’ feedback surfaced concerns about missing information such as “links to certain websites” or a “price for the session,” particularly in the case of non-member vs. member tickets.
Task 4, finding mentorship programs, was the least difficult, averaging a score of 3.8 out of 5. Some participants pointed out that they
“Couldn't see who the mentors or mentees are or detailed info”
However, most participants successfully navigated to the right information, and became interested in the program.
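The per-task averages above can be reproduced with a short script. The individual ratings below are hypothetical values, chosen only to be consistent with the reported averages (2.2, 3.7, and 3.8 on the 1-5 scale, where 5 is very easy).

```python
from statistics import mean

# Hypothetical post-task ratings (1 = very hard, 5 = very easy),
# one list of nine participant responses per rated task.
ratings = {
    "Task 2: find an event":        [4, 3, 4, 4, 3, 4, 4, 4, 3],
    "Task 3: find a member":        [1, 1, 2, 1, 3, 4, 3, 1, 4],
    "Task 4: find mentorship info": [4, 4, 4, 3, 4, 4, 4, 4, 3],
}

# Rank tasks from hardest to easiest by their mean rating.
for task, scores in sorted(ratings.items(), key=lambda kv: mean(kv[1])):
    print(f"{task}: {mean(scores):.1f} / 5")
```

Ranking by the mean surfaces the hardest task first; pairing it with the count of lowest-possible ratings (four 1s for Task 3) helps distinguish a uniformly mediocre task from one that failed badly for a subset of participants.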
Findings and Recommendations
The participants provided positive feedback about the website’s posts of engagement opportunities, use of photographs, and not looking too busy as compared to other library websites. For areas to improve, the team came across the following key findings based on the usability test results:
Finding 1: Content appears too long and is hard to navigate on the homepage.
Participants found an overload of unordered content on the homepage, information redundant between the homepage and the linked ‘Open call to [activity]‘ pages, and difficulty navigating the homepage and header on mobile devices.
Finding 2: Unclear search bar purpose and inconsistent results.
The search bar was used by 4 of the 9 participants, and 3 of them gave feedback that the search capability could benefit from adding some standard features such as filters to make it clear what they can search for.
Finding 3: Missing information on ways to get involved.
On the ‘Events’ page, most participants preferred the calendar view over the list view for finding upcoming events, because future events did not show up in the list view and the long list of events, set in a uniform font, made it “hard for [their] eyes [and] to know where to go.”
On specific event pages, participants expected to see additional information such as other events happening in the same week, resources, discussions, and event facilitator contact.
Participants also expected to find a list of members for networking purposes when they clicked on the Members tab, but there was no members directory. The page title was also different: it was called ‘Membership’ instead of ‘Members’, and gave information on how to apply for membership, instead of contacts of individual members.
When searching for mentorship programs, at least five participants noticed that all of the links on the ‘Open Call for A.R.T. Mentorship Program’ page were broken because the application deadline had passed. Participants wished the broken links to mentor and mentee information would be updated, or that the error page would point to where else to look, because the information would still be a good resource.
Finding 4: Minor content fixes
Minor issues that should be very easy to fix were found related to the About page and the font used overall. For the About page, participants expected to find a clear description of what the A.R.T. does. Two participants expected to find information about Mentors and Members in the page as well.
Secondly, while most participants praised the font colors of the website, the font size was “too small” and difficult for some participants to read.
Presentation
The deliverables to The A.R.T. included:
A written report of the moderated remote usability tests, their findings, and the team’s recommendations for improvement.
A slides presentation of the final findings and recommendations.
Client Response
“It was great to learn that the language was hard for non-native English speakers. I even think the font is small and don’t know if links are clickable.“
“I liked the highlighted side bar you guys made to scroll the homepage. It allows people to skim.“
“Hopefully we can work with the current Wild Apricot platform to make these changes.“
Conclusion
The moderated remote usability tests accomplished the goals and answered the questions the client wanted addressed.
The team was able to uncover the visitors’ first impressions of the website, which was that it made good use of images and looked informative about engagement opportunities, but had long content on the homepage.
It was found that participants were able to learn what archivists do from reading the engagement activities mentioned throughout the website, but had difficulty figuring out what exactly the A.R.T. organization does.
Some opportunities to get involved and network were hard to discover due to missing links or information.
Redundant information was found on the Open Call pages, which repeated the exact same description provided on the homepage.
The team found out that visitors expected to see more details about events, a members directory on the Members page, lists of mentors and mentees, and a clear description of the A.R.T.’s services.
It was found that navigation was extremely hard for the mobile version of the website due to a hidden navigation menu.
Challenges
The project faced the limitations of:
Less rich qualitative data, because only two participants tested the mobile version, limiting the number of findings for the mobile version of the website.
Reduced engagement and adherence from participants, because some were too focused on the task and forgot to think aloud.
Difficulty controlling the process due to technological issues: two participants who signed up for mobile testing encountered issues downloading the testing app on their device and had to switch to testing on their laptop instead.
Next Steps
Richer qualitative data could be gathered if more usability tests were conducted on the mobile version of the website. More participants who are not archive experts could also be recruited to collect feedback on whether the A.R.T. website is easy to understand and navigate.
Suggestions could also be presented to the client more effectively in the form of a prototype. A mockup prototype of the entire website was not made due to time constraints, but a high-fidelity wireframe and prototype of the homepage with simple clickable features was made by team member Lu Pei-Yun Chung.