Open Context

Client Problem Statement

Open Context is an open digital repository of cultural heritage and archaeology data run by the Alexandria Archive Institute. To date, the platform has published 150 projects, representing 2 million items contributed by more than 1,000 data authors.

Historically, the Open Context team has focused their product development on archiving technology. For privacy purposes, the organization does not track users; thus, little is known about how current users interact with the platform.

As the team embarks on a 2.0 build of the system, they are seeking recommendations on how to make the system more user-friendly, thereby improving data understanding and reuse.

This engagement was part of the curriculum of SI 622: Needs and Usability Testing, a graduate course at the University of Michigan - School of Information.

Research Goals

  • Understand the motivations of Open Context’s primary user groups in accessing archaeological data and how they apply the data.

  • Determine if Open Context users are able to find the information they seek in a format valuable to their needs.

  • Identify areas in which the user experience may be improved and provide concrete recommendations for future development.

Methodology

Phase 1: Understanding the system and core audience

  • Interaction map

  • User interviews

Phase 2: Understanding the competitive landscape and addressable market

  • Comparative evaluation

  • Survey

Phase 3: Measuring usability

  • Heuristic evaluation

  • Usability testing

My Role

As part of a 4-person team called The Emphasis Lab, my teammates and I rotated project management responsibilities for individual studies while maintaining specialized roles throughout the semester (such as client communications and graphic design).

I served as project manager on two studies (Interaction map, usability testing). I also served as overall program manager, overseeing client communication, status updates, and understanding dependencies between studies.

Phase 1: Understanding the system and core audience

Interaction Map

Study Research Objectives

  • Complete a present state site audit of the Open Context system

  • Understand system information hierarchy and how pages interlink to internal and external pages

The Method

Team members were assigned site sections and mapped pages and elements to their corresponding endpoints. The initial audit was then reviewed for accuracy by randomly selected team members.

The interaction map was created in Miro, with various site mechanisms represented using legend icons and color coding.

An image of the Open Context site interaction map

The complete interaction map.

Click on the image to the right to access a high-res PDF.

 

Key Findings + Recommendations

Screenshot of the Open Context home page

A screenshot of the Open Context home page displaying the dropdown menu in which Open Context’s different search functions are accessed.

  • Although Open Context’s core product is Database Archive and Search, the search function is surprisingly difficult to find: users must navigate to a specific dropdown menu in the primary navigation.

    Recommendation: Represent the Search bar on the Home Page, similar to the Google home page and other database competitors.

  • To new users, the differences between Search types (Projects, Media, Data Records, Everything) are not immediately clear, since each Search type includes similar functions and advanced filters.

    Recommendation: In future studies, seek to understand user needs in order to inform which types of search to highlight in navigation. For example, are users looking to search by region? What does “data record” mean to them?

 


User Interviews

Study Research Objectives

Deepen understanding of the needs of Open Context’s assumed core user group:

  • Job responsibilities

  • Familiarity with archaeological databases

  • How they currently search for data, and how data is applied to their work

The Method

x6 60-minute interviews were completed with the following user profiles:

  • Experienced Open Context users

  • Users with no past experience with Open Context

  • Open Context team members

All interview participants were sourced by the Open Context team and comprised members of their current audience: archaeological researchers, educators, and/or enthusiasts. All interviews were conducted virtually over Zoom.

Interview transcripts were analyzed via thematic analysis in Miro. Notes with similar themes were clustered and formatted into an affinity diagram.

Image of the final affinity map from user interview interpretations

An image of the final affinity diagram. Each participant is color coded and categorized by labels.

Key Findings + Recommendations

  • Most participants believe database search is inherently imprecise and ambiguous. Due to “noisy” results, most users tend to begin searches with a specific need and purpose in mind and use filtering tools and keywords to narrow results. Broad searches are rarely attempted.

    Recommendation: Implement a user guide (in system or accessed through FAQs) that conveys best practices for calculated search and the system’s search logic.

  • Users rarely work with data directly in the system, instead extracting data to perform offline analysis. Typically, users combine data from various sources in self-maintained spreadsheets. Although most archaeological databases have an export function, users find them unsatisfactory, as they result in tedious data cleanup to match their needs.

    Recommendation: We suggest refining the Open Context export function to lessen the user burden of data manipulation. Methods for users to customize data output/input, such as system-generated templates, could also help.

  • Open Context is an open data repository, meaning all the information in the database is accessible by anyone on the internet. While the idea of open data is appealing to most users, two themes emerged justifying the need for data security:

    1. “Sensitive data” such as material from a disadvantaged community or restricted information held through governmental bodies

    2. Data produced in early-stage projects where the material is not yet published in a formal publication.

    Recommendation: Consider “sensitive data” tagging where data authors must approve access, or implement user account creation to scale security infrastructure.

  • “The university library website is difficult in that it's hard to distinguish different types of results. If I'm looking for a book, they may bring up a book review or other mediums. It's an extra step to filter out what I'm looking for.” - Participant 3

    “I always start with a keyword search to see what’s in there … specific dates, a person, a location, that kind of thing … the stuff I research is so niche and specific…” -Participant 4

    “We tried to export it but it wasn’t a convenient format to pull the entries in a way we wanted. So we ended up exporting things manually into the Excel spreadsheets.” -Participant 3

    “I work with a lot of (sensitive content) shared by communities. Say you put up enough information so someone knows the info exists, but I have to get permission to access it. Being able to see it's there but there's a specific valid reason why it's not there.” -Participant 4

    “I think open data is something that a project can and should prepare for ... but I don't think it's something that people are very well positioned to execute ... part of that boils down to a lot of our reservations about sharing data, and it just getting scooped ... There's still this attitude that until I formally publish this as an article, I'm not going to put the data out there ... if it's not accompanied by the article, how are people going to use that?” -Participant 5

 

Phase 2: Understanding the competitive landscape and addressable market

Comparative Analysis

Study Research Objectives

  • Understand the strengths and weaknesses of Open Context’s product within the competitive landscape

  • Provide guidance on options to improve user experience based on industry standards

The Method

The Open Context system was assessed against x7 comparable systems (direct, partial, parallel, indirect, analogous).

Based on user needs identified in earlier studies, the Emphasis Lab created a matrix of 20+ features and tools spanning:

  • Marketing positioning

  • Partnerships

  • System accessibility

  • Other common features

Key Findings + Recommendations

Final comparative matrix for Open Context against competitors.

Click on the image to access a high resolution version.

 
  • Among Open Context’s direct and partial competitors, preservation and collaboration are the dominant value propositions, with data as a secondary goal.

    TDAR: “long-term preservation of irreplaceable archaeological data and to broadening the access to these data.”

    DAACS: “as a model for the use of the Web to foster new kinds of scholarly collaboration and data sharing among archaeologists working in a single region.”

    In comparison, data is at the center of Open Context’s story, seeking to serve as the platform where raw data is transformed with annotation and archive integration. This messaging is more in line with indirect competitors such as Filemaker and Arches, who position themselves more as productivity tools.

    Recommendation: Consider merging Open Context’s narrative of data transformation and storage with themes of preservation, collaboration, and productivity so that users can imagine how the product applies to their work.

  • All of Open Context’s competitors apart from DAACS and Google Maps allow users (logged into their accounts) to save searches and view past searches and results from their browsing history. Currently, Open Context users may only revisit past searches by saving the unique URL generated; there is no interface functionality for Past Searches.

    Recommendation: Based on industry standards, offering the option to create a user profile could be beneficial to recurring Open Context users.

  • Open Context has an advanced function that allows users to customize their exported data by selecting and removing additional attributes. Of the industry competitors we researched, only FileMaker and Tableau support data report customization, which suggests that improving Data Export Customization could become a competitive advantage for Open Context.

    Recommendation: To lower the barrier for non-technical users, user-friendly elements could be added to the data export window. For example, “drag and drop” may be more intuitive for a user to manipulate attributes in a data report.

 

Survey

The intro to a Qualtrics survey developed by Emphasis for Open Context

Study Research Objectives

  • Strengthen assumptions from previous analyses across a broader audience

  • Assess characteristics, behaviors and sentiments behind using open data

  • Understand the target audience's desired functionality in data repositories

The Method

Audience

A 16-question survey (qualitative and quantitative questions) was distributed among current and potential users:

  • All Open Context users

  • Archaeological researchers, educators, and enthusiasts who may be potential users

Based on the total addressable market, the ideal sample size was 381 responses with a 50/50 distribution between current and potential users. Given project constraints, the team set a minimum target of 50 responses.
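A 381-response target is consistent with a standard sample-size calculation. As a minimal sketch (using Slovin's formula, an assumed 5% margin of error, and an assumed total addressable market of roughly 8,000 people; the report itself does not state these inputs):

```python
import math

def slovin(population: int, margin_of_error: float) -> int:
    """Slovin's formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Assumed inputs: TAM of ~8,000 people, 5% margin of error
print(slovin(8_000, 0.05))  # -> 381
```

With these assumed inputs the formula reproduces the 381-response target; a larger assumed market would push the ideal sample only slightly higher, since the required n plateaus near 385 as the population grows.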

Distribution and results

The client owned survey distribution, sourcing respondents through their official Twitter account and various industry email distribution lists (~3,000 viewers). 108 people opened the survey, 76 answered at least one question, and 56 completed it, for a completion rate of 74%.

Respondent demographics

  • 54% current users

  • 46% non-users

    • 92% of non-users were self-described archaeology practitioners.

Key Findings + Recommendations

Bar chart displaying frequency of database access by Open Context user and non user

Open Context vs non-Open Context users against frequency of data repository access. 

Note that Open Context users tend to access databases more frequently than non-users.


  • Open Context users are comfortable working with open data and report higher frequencies of archival database use than non-users. They were also more experienced with open data: twice as many Open Context users as non-users had stored or shared data through an online database repository.

    Coupled with the overall positive correlation between frequency of database use and self-rated expertise, we conclude that Open Context users can be considered “expert” users.

    Recommendation:

    As Open Context’s goal is to expand their platform’s reach, the system must cater to both new and returning users, many of whom may be less experienced with data than current users. We recommend establishing a guided onboarding experience within Open Context that walks first-time users through key features and explains how open data repositories work (with the ability to opt out).

  • Although database comfort differed between Open Context users and non-users, both groups display similar functionality needs.

    80% of respondents stated that Access to Open Data is the most important function of archaeological databases, and Data Export continues to have high placement in frequency and importance (ranked #2 and #3 respectively). Results Filtering, although placing in the lower half in terms of importance, nonetheless came in #2 in frequency of use.

    Recommendation:
    Continue prioritizing work on areas that support the Search Filtering and Data Export experience. Areas of opportunity include the usability of advanced search filters, keyword search, and ease of data export in the desired file format.

  • Confirming findings from earlier analyses, tolerance for open data varies with the context of use. About twice as many Open Context users as non-users reported being “Extremely Comfortable” with open data.

    For users reporting “Extremely Uncomfortable” or “Somewhat Uncomfortable,” concerns were for the following reasons:

    Data Privacy: Data protection and security concerns, especially if the data had to be pulled from international government sources

    Cultural Sensitivity: Research surrounding particular topics, such as ethnic group or tribal data that may be sensitive and personal to the groups themselves

    Ongoing Research: Until the research is formally published, the data should not be publicly accessible

    Recommendation:
    Invest in education to alleviate concerns about Open Context’s handling of open data. Earlier interviews expressed a desire to know exactly what Open Context is and how it works; this could be addressed by explicitly stating how the data repository operates and how data privacy concerns are handled.

Tables displaying rankings of archaeological database functions by frequency and importance, separated by Open Context users vs non-users

Rankings of archaeological database function frequency of use and importance (1 being highest ranked, 6 lowest).

The rankings are separated by Open Context users vs non-users.

I work with material in a different country (not the US) and there are permits and permissions involved — the government branch that is in charge of cultural heritage would have to approve the open sharing of data and results.
— Survey respondent
 

Phase 3: Measuring Usability

Heuristic Evaluation

Study Research Objectives

Evaluate features against industry-standard heuristics, and determine which aspects of usability are fulfilled or need improvement 

Research Questions

  • Which areas of the site are the most difficult to navigate and why? Are these issues cosmetic or rooted in the site content/logic?

  • What are the accessibility and inclusion implications of the Open Context platform?

  • How do users’ needs from the surveys and interviews relate to the usability issues that need to be addressed?

The Method

x4 evaluators performed x2 independent assessments across the live site and x2 staging environments to gather general interface feedback and assess consistency across system elements. This was done using a heuristic checklist based on Nielsen’s 10 Usability Heuristics for User Interface Design.

Independent assessment was followed by a team calibration to review identified problems by frequency and severity, and to decide upon a universal rating for each (4-point scale).
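The calibration step can be sketched in code. The issue labels, the 0-3 numeric scale, and the frequency-then-median ordering below are illustrative assumptions, not the team's actual rubric:

```python
from statistics import median

# Hypothetical independent ratings per issue on a 4-point severity scale (0-3);
# one entry per evaluator who observed the issue.
ratings = {
    "missing page headers": [3, 2, 3, 3],
    "inconsistent nav labels": [2, 2, 1, 2],
    "chart filter dead end": [3, 3, 2],
}

# Calibration: rank issues by how many evaluators flagged them (frequency),
# breaking ties with the group's median severity.
calibrated = sorted(
    ratings.items(),
    key=lambda kv: (len(kv[1]), median(kv[1])),
    reverse=True,
)
for issue, scores in calibrated:
    print(f"{issue}: seen by {len(scores)} evaluators, severity {median(scores)}")
```

The frequency-first sort mirrors the team's review of problems "by frequency and severity"; any agreed-upon aggregation (median, mode, discussion-based consensus) could stand in for the median here.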

Key Findings + Recommendations

  • A great user experience requires system status visibility and feedback to give users awareness of where they are in the system, and whether their interactions with the system were successful.

    At times, Open Context does not provide this feedback. For example, some pages, such as Data Record entries, lack H1 / H2 headers, which makes it unclear where users are in the site and what type of record they’re reviewing.

    Recommendation:

    Include headings that reflect the organization of the page. We recommend that all pages begin with a header referencing the object / record / media being reviewed: a main heading plus a sub-header describing the entry type.

  • Navigation plays an integral role in how users interact with a product; specifically, system navigation consistency supports users’ ability to undo / redo actions and to perform tasks in the order of personal preference. From a heuristic standpoint, the Open Context site has room for improvement in validity, consistency, and user experience.

    First, navigational aids between screens, such as context labels, menu maps, and place markers, are inconsistent.

    Second, certain surfaces (such as Chart Filters) lack a clear user path for navigation back to previous screens.

    Recommendation:

    Prioritize consistency. Having navigation bars differ page by page could frustrate users and impose a heavy burden on them. According to Djonov (2007), a navigation bar should be consistent in two ways:

    • Consistent with the site’s design

    • Consistent on every page

    We recommend using the same UX navigation on every page and standardizing navigational aids across different search types.

    Additionally, navigation should be clearly labeled and signposted so users understand where they are and where they can go.

  • A common framework for web accessibility is the POUR framework (Perceivable, Operable, Understandable, Robust). With this in mind, the team audited the site using accessibility evaluation tools and keyboard navigation.

    First, certain system elements, such as the chronological distribution chart, are inaccessible to users with lower hand dexterity due to the shallow vertical height of the sliding scale and the width of the timeline.

    Second, certain elements of the site are inaccessible through keyboard-based navigation, such as tabs under Mapped Results.

    Lastly, stylistic choices throughout the system are disadvantageous to users with color blindness, such as the small blue text on a gray background in data / media records.

    Recommendation:

    We recommend considering methods to accompany mouse-based navigation with text entry or other non-mouse-based navigation. Additionally, we recommend that future development be accompanied by periodic assessments using established accessibility evaluation tools.
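One such automated check is the WCAG 2.x contrast-ratio formula, which can be computed directly. The hex colors below are hypothetical stand-ins for the blue-on-gray styling noted above, not values sampled from the live site:

```python
def _channel(c8: int) -> float:
    # sRGB channel to linear, per the WCAG 2.x relative luminance definition
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Hypothetical colors standing in for "small blue text on a gray background";
# WCAG AA requires at least 4.5:1 for normal-size text.
ratio = contrast_ratio("#5588cc", "#bbbbbb")
print(f"{ratio:.2f}:1 -> {'pass' if ratio >= 4.5 else 'fail'} AA")
```

Tools such as the WAVE browser extension run this same check automatically across a whole page, alongside keyboard-focus and heading-structure audits.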

the context labels in the navigation bar (Person entry vs Homepage)

Examples of context labels in the navigation bar. Note the differences between a Person entry (top) and the Open Context Home Page (bottom).

An example of the structural hierarchy on a Data Records page in Open Context. Note the lack of clear visual header on the page.


Example of the chronological distribution chart with a time range of 6000 BCE-1800 BCE. This range of ~5,000 years has a display width of 6.5 cm, approximately 0.013 mm per year.




Usability Testing

Study Research Objectives

  • Understand how new users to the Open Context system interact with it

  • Learn most common and uncommon user pathways and identify areas of user frustration

  • Provide user experience guidance

Research Questions

  • Which areas of the site are the most difficult to navigate and why?

  • What functions do not perform as users expect?

  • How do users interact with the Search and Filter tools?

  • What do users think of the Open Context live site and the staging site?

The Method

Study Design

x5 participants were asked to complete a series of 5 system tasks over moderated virtual sessions. These task observations were followed by post-task and post-test questionnaires.

Task Goals

  • Project Search and locating data records

  • Advanced filtering process

  • Narrowing results using the chronological distribution slider

  • Data export and report customization

  • Map-based search in both live and staging environments

Synthesis

Individual test data was translated into an Overall Observations view; findings observed across 3 or more participants were given special weight and are highlighted further in the client recommendations.
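The 3-or-more threshold amounts to a simple tally across participants. A minimal sketch (participant IDs and observation labels here are illustrative, not the study's actual data):

```python
from collections import Counter

# Hypothetical per-participant observation logs from moderated sessions
observations = {
    "P1": {"missed nested filters", "confused by filter icon"},
    "P2": {"missed nested filters", "scrolled past results map"},
    "P3": {"confused by filter icon", "missed nested filters"},
    "P4": {"confused by filter icon"},
    "P5": {"missed nested filters"},
}

# Tally each finding across participants; surface those seen by 3 or more
tally = Counter(f for findings in observations.values() for f in findings)
key_findings = [f for f, n in tally.most_common() if n >= 3]
print(key_findings)  # -> ['missed nested filters', 'confused by filter icon']
```

With x5 participants, a threshold of 3 means a finding must appear for a majority of testers before it is escalated into a client recommendation.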

Participant recruitment

Participants were recruited from earlier survey respondents who opted into future studies, and directly sourced by the Open Context team. A screening questionnaire was distributed to ascertain fit against established criteria.

An image of a spreadsheet detailing key participant details such as gender, ethnicity, age, and database experience

Participant demographics

Key Findings + Recommendations

  • Our usability tests revealed confusion with the filtering mechanism. Two participants focused on keyword search, ignoring the Filtering Options or misunderstanding how keyword search translated to menu filters. 60% of participants ignored the Filtering Options menu bar and were unaware that top-level filters held nested filters that could be expanded.

    60% of participants also commented on difficulty of navigating between the Filtering Options menu and Search results, as both cannot be viewed on the same screen simultaneously (see Figure 3).

    Recommendations:

    • Transition to a more traditional left-sided navigation bar (which outperforms horizontal filter bars in usability and accessibility)

    • Style filter headers that contain sub-groups as dropdown menus to signal to users how these objects should be interacted with

    • Place Filtering Options in a sticky side navigation bar that moves with user scrolling, keeping filters visible through all interactions

    • Implement a Clear All function to allow users to retrace steps and easily correct errors

  • Data visualization is a key tool in Open Context, emphasized by the placement of the results heat map at the top of every results page. Most participants used the map to assist search and understand data distribution.

    However, accompanying icons and prompts are not always clear. For example, the filter icon on the side of the chronological distribution timeline is inconspicuous and unfamiliar, and most participants were not able to intuitively understand that the Filter Button refreshes their data set. Many relied on the helper language displayed when hovering over the filter button to understand its function.

    80% of participants expressed confusion regarding how the heat map’s squares translate to item density and distance. Although they could assume the darkest area may represent the highest density, they could not be fully sure without comparing it to a legend.

    Recommendations:

    • Use recognized iconography throughout the system, or replace icons with simple text where appropriate (e.g., “Apply Filters”). Increase the size of clickable icons.

    • Embed a heatmap legend to indicate data concentration on map visualization.

A screenshot of an initiated search, where users need to scroll down to see the returned data record results.

A screenshot of the chronological slider with the filter button highlighted. Many users were unsure how to confirm a new search for the selected time range. We hypothesize this may be due to learned behavior from the Filtering Options menu, where search results update automatically and do not require confirmation.

 

Next steps

How might Open Context continue validating the findings proposed during this program?

To close out the program, the research team provided the client with a video summary emphasizing key findings, recommendations, and next steps.

What would I do differently next time?

  • Methods used: As this client collaboration was arranged through a course at the University of Michigan - School of Information, the methods used were predetermined by the course syllabus and run against an aggressive timeline (2-3 weeks per study from kickoff -> synthesis -> shareout). At times, this resulted in redundancy across studies and premature closure of potential lines of inquiry.

    If this study were to take place outside of an academic setting, I would recommend focusing on the following methods based on the client problem statement.

    • Comparative evaluation

    • User interviews + usability testing

    • Survey

  • Participant recruitment: At times, the team relied on personal networks for study participation, which may have introduced familiarity bias in tests and interviews.