What happens when we don’t get to know our users?


One of the challenges I’ve experienced in my career as an information architect is that clients often lack confidence that IT can create technical solutions to their business pain. When solutions are developed without regular, frequent user input, we risk creating tools and systems that don’t meet users’ needs. Only when we understand how users conduct their work can we create valuable, usable solutions that address their pain. We may even be able to create more innovative solutions once we become more familiar with their day-to-day processes and workflows.

When users are left out of the solution design and development process, three things may happen:

We risk wasting time and money designing, building, and testing tools or services that will never be used. I have collaborated with clients to create online communities of practice with the goal of expanding information sharing across the institution. The key to these sites is that they are accessible to all. Yet, in many cases, they are rarely accessed. There could be many reasons users don’t visit the sites, but until we understand those reasons, we won’t be able to create a solution that becomes embedded in users’ daily practices.

This is where good data analysis comes in. With data that gives us a comprehensive picture of how our sites are being used, we can narrow the potential causes of disuse. Is the content updated regularly so users find new information each time they visit? Is it authoritative, correct, and of use to our audience? What is different between the sites that are heavily used and those that aren’t, in terms of content, maintenance, and outreach/promotions? Each question’s answer may suggest a different solution.

When we analyzed the usage data for these communities of practice, we noticed a trend: sites would have high visit numbers after their launch, but that interest was not sustained. Over time, we discovered that sites with frequently updated content sustained more interest than those with static content. Further, the sites with more authoritative content were visited more often than those whose content was less authoritative. This suggested specific improvements that, when implemented, increased site usage again.

We risk frustrating users when we create systems or tools that do not address their business pain. From the outside, it may look as though we don’t care about the actual problems our users are having, or about how we can support the ways they conduct their work. They may conclude that we are creating tools arbitrarily, based on a best practice or a perceived need that does not match their actual needs.

Here, user feedback techniques can help. By interviewing users about their work, we can discover what tasks they are trying to achieve, and how they achieve them. What information do they need access to? How do they find it? Can they retrieve the information they need? When they get stalled in their work, why is that? What do they do to get around it? How do they use the tools they already have, and do they kludge anything to get the results they need? Each question here gives us an opportunity to develop more needs-based solutions.

By talking directly with users about their experiences with a search tool that aggregates documents, we learned that many power users had developed their search habits around an older system with archaic rules for formulating searches. When we shared our simpler version, they had trouble finding what they needed. We’re now in the process of mapping the old rules to the new system. By finding out how our users worked, we’re able to deliver a more satisfactory service.

We risk losing credibility and building a reputation for being out of touch with our users. Sometimes, this can go as far as creating an environment where multiple teams work toward the same goal from different starting points. We can create multiple products to solve the same problem, which adds to users’ confusion and frustration.

This is why we need to embed clients and users in the design process. Involving users throughout design and development is crucial if we want our solutions to become embedded in their daily work. When combined, data analysis, user feedback, and strong relationships with clients and users make it much easier to develop solutions that users find valuable.

This may mean it takes more time to discover and understand changing requirements, but in the end the solutions will be much more valuable, because users have been able to share what works for them and what doesn’t. At the end of the day, we are trying to make their work easier. They will be the best judges of whether we have succeeded, or not. Over time, successful collaborations will build clients’ confidence that we will develop the right solutions when (or even before) users need them.

Image from Illustrated London News, 1870, by way of Wikimedia Commons

How Science Fiction Can Help Design Websites

Originally published on LinkedIn on January 26, 2016.

Information architecture may seem at first to have little in common with science fiction. After all, what does the organization, design, and structuring of information have to do with the imagination and exploration of what could have been and what might be? It turns out they have one thing in common: the same thing that explains why I’m so interested in both information architecture and science fiction.

The Terrans, by Jean Johnson, is an exciting novel about the first encounters between inhabitants of Earth and our solar system, and the denizens of an alien human civilization that emigrated from Earth millennia ago. This “first contact” trope, commonly used in science fiction, focuses on the interactions and tensions between the two civilizations. Different ceremonial protocols, communication styles, technologies, allergies, and ways of understanding the universe create misunderstandings and conflicts that need to be overcome and resolved before the two civilizations can work together to, in this case, defeat a common enemy.

This is all very well, you might be thinking, but what does it have to do with information architecture and web design?

In this conflict and tension described in a science fiction novel, I see the conflicts and tensions inherent in the connections between information and people. Content creators and content consumers engage, knowingly or unknowingly, in this “first contact.” Content creators have their own backgrounds, cultural norms, and perceptions of the world. They organize information using their worldviews as models. Content consumers, in exactly the same way, arrive at the information from their own perspective, informed by their own worldviews. If both individuals are from the same culture, the differences in the way they perceive, organize, and share information may be small. If, however, they have different backgrounds – let’s say one is a very technical or mathematical person, and the other is an intuitive, literary person – they may have very different ideas about what “makes sense.”

In creating an information resource such as a website, the best policy for the content creator or information architect is to understand who the other partner is in the conversation, and how that partner perceives and organizes information. In The Terrans, the main characters of one civilization make every effort to understand how the other civilization operates and to accommodate and be patient with any differences. When individuals from the other civilization do not make the same effort, diplomatic relations break down. When content creators do not understand their users, the connection between information and audience withers.

So, whether you’re a science fiction fan or not, it may help you to think of creating and organizing content on a website in terms of a “first contact” novel, with your users playing the role of the foreign civilization. Ask yourself, or better yet, ask them: What experiences and perceptions do they bring to the website? Where and how do they expect to find information? As soon as you can understand where the consumer is coming from, you will find it much easier to help them understand the information you share with them, which will encourage them to visit again.

Build the bridge between you and your user, and you will have established the foundations for a strong connection between your audience and your information. I can say from experience that creating this connection is one of the most satisfying things about designing information resources.

By the way, if you’re looking for a fantastic and complex science fiction novel about the interactions between two very different species, I highly recommend Foreigner by C.J. Cherryh.

 

Image: “Die Begegnung” – “The Meeting” by Michael Bohme, Wikimedia Commons

Digital Humanities and Digital Preservation: a new series

Last weekend, I attended the educational, inspiring, and thoroughly fun Data Driven: Digital Humanities in the Library conference at the College of Charleston. I have a lot of information to digest, and in the next few posts I will share a series of my notes, along with some implications for my projects at the DC/SLA.

In this post, I begin with my notes from the pre-workshop readings for the “From Theory to Action: A Pragmatic Approach to Digital Preservation Strategies and Tools” workshop at the conference in Charleston, SC, June 20-22, 2014.

 

Pre-workshop readings:

NDSA Levels of Preservation (an assessment tool for institutions and organizations)

You’ve Got To Walk Before You Can Run (high-level view of the basic requirements to make digital preservation operational)

Walk This Way (detailed steps for implementing DP – introductions to each section were recommended reading)

Library of Congress DPOE (optional)

POWRR website (optional – POWRR is the group that taught the workshop; lots of good material here)

 

NDSA Levels of Preservation – Where I see the DC/SLA Archives Committee:

  1. Storage and Geographic location
    1. Level 0 – still determining where things are, how they have been stored
  2. File fixity and Data integrity
    1. What is fixity? (I learned that fixity checking means, for example, running checksums to determine whether materials/digital objects have changed or been corrupted over time. Checksums are algorithm-produced unique identifiers that correspond to the contents of a file and are assigned to a specific version of a file or item; see the sketch after this list.)
  3. Information Security
    1. Level 0-1 – We have determined in policy documents who *should* have read authorization (the general public, in most cases, with some redactions/delays in dissemination for PII and financials)
    2. The Archives Committee will be the only ones, aside from possibly a Board liaison, to have other authorizations (edit, delete, etc.)
  4. Metadata
    1. Level 0 – We will soon be conducting an inventory of content, which will include an investigation into what metadata has been included
  5. File formats
    1. Level 0 – We will soon determine what formats have been and should be used
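
Fixity checking like this is straightforward to prototype. Below is a minimal sketch in Python, assuming SHA-256 as the checksum algorithm; the file name "scan0001.tif" is hypothetical and used only for illustration.

```python
import hashlib
from pathlib import Path

def sha256_checksum(path: Path, chunk_size: int = 65536) -> str:
    """Compute a SHA-256 checksum, reading the file in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def fixity_ok(path: Path, recorded_checksum: str) -> bool:
    """Return True if the file still matches the checksum recorded earlier."""
    return sha256_checksum(path) == recorded_checksum

# Record a checksum at ingest, then verify the file later.
# "scan0001.tif" is a hypothetical file name.
if __name__ == "__main__":
    recorded = sha256_checksum(Path("scan0001.tif"))
    print("Recorded checksum:", recorded)
    print("Unchanged?", fixity_ok(Path("scan0001.tif"), recorded))
```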

So, clearly, we still have a lot of work to do.

“You’ve got to walk before you can run: first steps for managing born-digital content received on physical media” (OCLC/Ricky Erway, 2012)

  • Audience: those who have or are currently acquiring such born-digital materials, but have not yet begun to manage them
  • identifying and stabilizing holdings
  • Four Essential Principles
    • Do no harm (to physical media or content)
    • Don’t do anything that unnecessarily precludes future action and use
    • Don’t let the first two principles be obstacles to action
    • Document what you do!!
  • Survey and Inventory Materials in your Current Holdings
    1. Locate existing holdings
      1. Gather info about digital media already in collections
      2. Do collections inventory to locate computer media in any physical form
    2. Count and describe all identified media (NOT mounting or viewing content on media)
      1. Gather info from donor files, acquisition records, collections, etc.
      2. Remove media but retain order by photographing digital media and storing printouts in physical collection
        1. Alternative: place separator sheets in physical collection
      3. Assign appropriate inventory # / barcode to each physical piece
      4. Record location, inventory #, type of physical medium, and any identifying info found on labels/media, e.g., Creator, Title, etc. (see the inventory sketch after this list)
      5. Record anything known about hardware, operating system, software; use consistent terms
      6. Count # of each media type, and indicate max capacity of each media type, max amount of data stored, then calculate overall total for the collection
      7. Return physical media to suitable storage
      8. Add summary description of digital media to any existing accession record, collection-level record, or finding aid
    3. Prioritize collections for further treatment, based on:
      1. value, importance, needs of collection as a whole and level of use (anticipated use) of collection
      2. whether there is danger of loss of content
      3. whether there appears to be significant digital content not replicated among analog materials
      4. whether use of digital content that is replicated in analog form would add measurably to users’ ability to analyze or study content
      5. when just a few files can be represented on a page, whether printouts might suffice
    4. Repeat these steps every time you receive new media.
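
A lightweight way to capture these inventory fields is a simple spreadsheet that grows one row per piece of media. The sketch below is a minimal, hypothetical Python example; the column names and the file name "media_inventory.csv" are my own assumptions based on the fields listed above, not part of the OCLC report.

```python
import csv
import os

# Column names follow the fields in the notes above; the exact names and the
# file name "media_inventory.csv" are illustrative assumptions.
FIELDS = [
    "location", "inventory_number", "media_type", "label_info",
    "hardware", "operating_system", "software",
    "max_capacity_mb", "estimated_data_mb",
]

def append_inventory_record(record: dict, path: str = "media_inventory.csv") -> None:
    """Append one physical-media record, writing the header row on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(record)

# Example record for a single (invented) floppy disk found in a collection.
append_inventory_record({
    "location": "Box 12, Folder 3",
    "inventory_number": "DCSLA-0042",
    "media_type": "3.5-inch floppy disk",
    "label_info": "Board minutes, 1998",
    "hardware": "unknown",
    "operating_system": "unknown",
    "software": "unknown",
    "max_capacity_mb": 1.44,
    "estimated_data_mb": "",
})
```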

 

Walk This Way (OCLC/Julianna Barrera-Gomez and Ricky Erway, 2013)

  • Draft a workflow before beginning? Revise during execution?
  • Existing digital preservation policies may include donor agreements (which can explain what info may be transferred from digital media) and policies on accessioning or  de-accessioning records or physical media
  • Consult policies (IT?) on software use or server backups
  • AIMS project 2012 report about digital collection stewardship provides objectives for informing policy and glossary for non-archivists
  • Documenting the project
    • What info about the process will be needed in future to understand scope, steps taken, and why?
    • provides context to ensure process; forms key part of evidence for provenance; indicates authenticity of material
    • manage associated metadata (auto-generated or manually created)
    • content management systems (Archon, Archivist’s Toolkit): use these to create accession records linking the project’s documentation to other holdings
    • Create a physical project directory with folders
      • subfolders:
        • Master Folder (Preservation Copy, Archival Copy Folder)  – holds master copies of files
        • Working Folder – holds working copies of master files
        • Documentation Folder – to hold metadata and other information associated with the project
  • Preparing the Workstation (Mandatory) – this may be a problem, unless we find a way around having a physical workstation for preservation work.
    • dedicated workstation to connect to source media
    • start with a single type of media from a collection to aid efficiency and to keep track of materials and metadata
    • What alternatives are there to this? Physical space and finances are obstacles for DC/SLA
    • Use a computer that is regularly scanned for viruses
    • consider keeping it non-networked until a connection is needed (e.g., for file transfers, software/virus definition updates)
    • DO NOT open files on source media!
  • Connect the source media
    • Examine media for cracks/breaks/defects
    • Consider removing sticky notes or other ephemera (take digital photo first)
    • DO NOT attempt to open files yet!
  • Transfer Data
    • Copy files or create a disk image
      • Copy files individually or in groups – practical way for new archivists to get started
      • Disk image – more info is captured and it is easier to ensure authenticity. A disk image is an exact, sector-by-sector bit-stream copy of a disk’s contents, retaining original metadata: a single file containing an authentic copy of the files and file system structure on a disk.
        • Forensic images capture everything, including deleted files and unallocated space. Logical copies omit deleted files and unallocated space.
  • Check for viruses
  • Record the file directory
    • Make a copy of the directory tree
  • Run Checksums or Hashes
    • a unique value based on the contents of a file, generated by a specific algorithm (different algorithms exist – consistency is important)
    • identifies whether/when a file has changed
    • regularly hashing a file or image you have copied, and checking those new hashes against the hashes made at the time of transfer, should be part of your digital curation workflow (see the manifest sketch after this list)
  • Securing project files
    • consolidate documentation
  • Prepare for Storage
    • arrange for space on a backed-up network server that is secure
  • Transfer to a secure location
    • additional copies – preservation master copies that must be kept safe from unintentional alteration
  • Store or de-accession source media
    • if destroying media, use a secure method in accordance with the donor agreement and policies
  • Validate file types
    • determine whether you can open and read the contents of digital files (from the working copies!)
    • use working copies
    • hex editors – show file properties (byte representation)
  • Assess Content (optional)
    • use working copies
  • Reviewing files 
    • only working copies
  • Finding duplicate files
    • if you delete duplicates, you will also need to delete them from the Master Folder already moved to secure storage
  • Dealing with Personally Identifying or Sensitive information
    • sensitive information must be kept restricted and secure on workstations, file servers, backup or transfer copies
    • Redact or anonymize before making available to users 
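
Two of the steps above – recording the file directory and running checksums – can be combined into a single hash manifest. The following is a minimal Python sketch under my own assumptions (SHA-256, a folder named "Master_Folder", and a manifest stored in "Documentation_Folder"); it is an illustration, not a procedure from the OCLC report itself.

```python
import hashlib
import json
from pathlib import Path

def hash_file(path: Path) -> str:
    """SHA-256 of one file, read in chunks so large files are handled safely."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(master: Path) -> dict:
    """Walk the master folder and record every file's relative path and checksum."""
    return {
        str(p.relative_to(master)): hash_file(p)
        for p in sorted(master.rglob("*"))
        if p.is_file()
    }

def changed_files(master: Path, manifest: dict) -> list:
    """Return the files whose current checksum no longer matches the manifest."""
    return [rel for rel, recorded in manifest.items()
            if hash_file(master / rel) != recorded]

# Folder and file names here ("Master_Folder", "Documentation_Folder",
# "manifest.json") are illustrative, following the project-directory layout above.
if __name__ == "__main__":
    master = Path("Master_Folder")
    docs = Path("Documentation_Folder")
    docs.mkdir(exist_ok=True)

    manifest = build_manifest(master)
    (docs / "manifest.json").write_text(json.dumps(manifest, indent=2))
    print("Files that have changed since transfer:", changed_files(master, manifest))
```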

In the News

Life happened the past few weeks, but I’m looking forward to getting back into the swing of things here on Cultural Heritage and Information. Today I thought I’d post some interesting stories from around the web that relate to topics here on the blog.

HathiTrust Digital Library Wins Latest Round in Battle With Authors

With new publishing technologies and research practices, the copyright debates will continue to evolve in legal and other settings. The Chronicle of Higher Education posted a summary (by Jennifer Howard) on June 10 about the latest developments in the HathiTrust Digital Library vs. Authors Guild case. Ultimately, the U.S. Court of Appeals for the Second Circuit (in New York) ruled in favor of the library. Its decision allows a searchable, full-text database of the library’s works under the “fair use” doctrine, and also allows dissemination of works in different formats for vision-impaired users.

What Happens When Preservation and Innovation Collide?

The National Trust for Historic Preservation reflects on two years of innovation strategy development with EmcArts’ Innovation Lab for Museums. In the post (by Estevan Rael-Galvez), they share their ideas, challenges, and successes. Most interesting, in my opinion, is their idea to transition traditional historic house museums (which I adore) from static, contrived experiences to more integrated, immersive experiences that stimulate all of visitors’ senses.

Bit Rot: The Limits of Conservation

Hyperallergic.com discussed (in a post by Martha Buskirk on June 9) how time affects access and preservation of electronic media. The article supports “lots of copies keep stuff safe” as a general strategy to work toward in the preservation and conservation of cultural and artistic artifacts. It also describes common obstacles, such as getting artists’ input on migration to new technologies, obsolescence of older technologies, copyright issues, determining which aspects of a work hold its value, and the consequences of benign neglect. Best practice? Awareness and vigilance about what we want to save, and what has value to us.

How difficult can your manuscripts be?

The National Conservation Service in the UK blogged about some challenges that crop up when digitizing manuscripts. Some issues they faced during the digitization process for Khojki manuscripts from the Institute of Ismaili Studies include illegible text located in awkward places (e.g., the gutters), curved and warped pages, and ink degradation.

World Cups

Just for fun, I’m sharing the Horniman Museum and Gardens’ World Cup tie-in, about a digital exhibit they created on cups from around the world (“world cups”… get it?). Cups from locations such as Burma, China, Japan, Indonesia, and Colombia feature in the exhibit.

Deepening my knowledge of Dublin Core, part 2

One of the things that strikes me about Dublin Core is its flexibility. There are formatting rules and guidelines, but there are often multiple ways to enter the same data. For this session, I’ve been using the Creating Metadata User Guide from the Dublin Core wiki (henceforth: “user guide”).

Firstly, each field is divided into a property and a value, with the value describing the property of the resource (e.g., the value “A Christmas Carol” corresponds to the property “title”). Sometimes properties have what look like sub-properties to me, but these are described as “creating a relationship between the described resource and a more detailed title description”. In effect (using the example given in the user guide), “title” is the main property, and the sub-properties (more detailed descriptions) are “in greek” and “in latin” for transliterations of a title. The values then correspond to the transliterated titles, so the value of “in greek” is the title written out in Greek characters, and the value of “in latin” is the title written out in Latin characters, as you see below.

Property   Detailed Property   Value
title
           in greek            (title in Greek characters)
           in latin            Oidipous Tyrannous

If I understand the table above correctly, it formulates a relationship (of parent to child, perhaps) between the general “title” property and the more detailed properties that describe the transliterations.
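
To make that parent/child reading concrete, here is a small conceptual sketch in Python. The nested dictionary is my own illustration of the relationship, not an official Dublin Core serialization (which would normally be expressed in XML or RDF).

```python
# Conceptual sketch only: a nested mapping that mirrors the table above,
# with "title" as the parent property and the transliterations as the
# more detailed (child) descriptions. Not an official DC serialization.
record = {
    "title": {
        "in greek": "(title in Greek characters)",  # placeholder, as in the user guide example
        "in latin": "Oidipous Tyrannous",
    }
}

# Reading it back: each detailed property qualifies the general "title" property.
for detailed_property, value in record["title"].items():
    print(f"title ({detailed_property}): {value}")
```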
