Emerging technologies, the top trend in academic libraries – surprised no one, ever

It will come as no surprise that many of the top trends in academic libraries relate to digital technologies.

According to the ACRL Research Planning and Review Committee’s “2012 Top Ten Trends in Academic Libraries,” data curation and digital preservation were two of the top ten trends in academic libraries in 2012. Among the predictions: cloud-based repositories will become more popular, and librarians and information professionals will play a critical role in designing and implementing strategies for data description, storage, management, and reuse. Digital preservation is becoming more important as research increasingly depends on digital technologies, but we still lack standardized policies.

Fighting, err… working with SharePoint

For the past several months I have been creating, adding, and quality controlling metadata in a SharePoint 2010 document library.* I thought those months had given me some crucial SharePoint skills, and that SharePoint had become relatively intuitive.

According to an Arab proverb (as given by Bartleby), there are four kinds of people in the world:

… those who don’t know that they don’t know; those who know that they don’t know; those who don’t know that they know; and those who know that they know.**

Reflecting on my experiences this past week (read on, I get to that shortly), I used to be the first kind of person, and now I feel pretty confident that I’m in the second category.

Book blogging and HTML: Linking Images

I’ve recently started a book blog (check it out). One of its features is a set of “read-alikes” placed at the end of each post, in the form of cover images. I really like the idea of linking those images to lead readers to more information about the books (from Goodreads, at the moment). With a link to a description and social media/community reviews, they “don’t have to take my word for it!”

Although I haven’t memorized the HTML code yet, I have re-figured out how to do this a few times (precisely because I haven’t been able to memorize it). So in this post, I’m going to go through the steps I use to link a cover image to a Goodreads book description page.
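
The finished markup is just an image tag nested inside an anchor tag. Since I generate several of these per post, here is a minimal Python sketch that builds the snippet for each read-alike; the titles, cover URLs, and Goodreads URLs below are made-up placeholders, not real links.

```python
# Build anchor-wrapped cover images for a "read-alikes" list.
# All titles and URLs here are hypothetical placeholders.
read_alikes = [
    {"title": "Example Book One",
     "cover": "https://example.com/covers/book-one.jpg",
     "goodreads": "https://www.goodreads.com/book/show/0000001"},
    {"title": "Example Book Two",
     "cover": "https://example.com/covers/book-two.jpg",
     "goodreads": "https://www.goodreads.com/book/show/0000002"},
]

def cover_link(book):
    """Wrap a cover <img> in an <a> pointing at the book's Goodreads page."""
    return ('<a href="{goodreads}">'
            '<img src="{cover}" alt="Cover of {title}" width="100">'
            '</a>').format(**book)

# Paste the printed markup into the blog post's HTML editor.
print("\n".join(cover_link(b) for b in read_alikes))
```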

Digital preservation in a webinar, part four

Finally! The recap of the fourth Introduction to Digital Preservation webinar, hosted by ASERL.

[To listen to the recordings and view the PowerPoint presentations, see ASERL’s archive]

This webinar, “Using FITS to Identify File Formats and Extract Metadata,” was presented by Andrea Goethals of Harvard University.

The highlights:

What is FITS?

  • “File Information Tool Set”

Some complications

  • Format specifications often have multiple versions.
  • A specification is the definition that a file format conforms to.
  • Authoritative specification information does not always exist for a format. When it does, it can be unclear, complex, or long; it may reference other file formats and depend on other specifications.

Further complications for tool builders and users

  • OpenDocument formats are packaged as ZIP files, and identifying them merely as ZIP is not sufficient for preservation.
  • Many formats (e.g., XML) are text formats.
  • Some formats lack obvious identifying features.

Implications

  • File formats can be difficult to accurately identify.
  • Some identifications are more specific than others, so results are inconsistent across tools.

How does FITS help?

  • Combines the functionality of different file format identification tools.

Why build FITS?

  • The motivation at Harvard was to offset the risk of accepting any format (including web archives, email attachments, donated external hard drives).
  • Additionally, to integrate into existing preservation workflows.
  • Strategy: develop a tool manager rather than a single tool, and account for tool inaccuracy by checking tools against each other and verifying results.

What is required?

  • Comfort with XML, since FITS runs from the command line (no graphical interface) and produces XML output

What does FITS do?

  • Identifies many file formats
  • Validates a few file formats
  • Extracts metadata
  • Calculates basic file information
  • Outputs technical metadata
  • Identifies problem files (e.g., conflicting opinions on format, metadata values; unidentifiable formats)

The Process

  • FITS translates each tool’s output into a common XML form, consolidates those outputs into a single FITS XML document, and can then convert the FITS XML into standard metadata XML (a sketch of running FITS and reading its output follows this list).
  • You can store the FITS XML files wherever you store metadata in the repository.
  • The file is not modified during the process.
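
As a rough illustration of that pipeline, here is a minimal Python sketch that runs the FITS command-line script on a file and pulls the identified format out of the resulting XML. The script name, flags, element names, and namespace reflect my reading of the FITS distribution and may differ in your version; treat them as assumptions to verify.

```python
import subprocess
import xml.etree.ElementTree as ET

# Namespace used by FITS output XML (verify against your FITS version).
FITS_NS = {"fits": "http://hul.harvard.edu/ois/xml/ns/fits/fits_output"}

def identify(path, fits_script="./fits.sh"):
    """Run FITS on one file; return (format, MIME type) pairs.

    Assumes the distribution's fits.sh (fits.bat on Windows), which
    prints the consolidated FITS XML to stdout when run with -i.
    Multiple pairs come back when the wrapped tools disagree.
    """
    xml_output = subprocess.run(
        [fits_script, "-i", path],
        capture_output=True, text=True, check=True,
    ).stdout
    root = ET.fromstring(xml_output)
    return [
        (identity.get("format"), identity.get("mimetype"))
        for identity in root.findall("fits:identification/fits:identity", FITS_NS)
    ]

# Hypothetical file; expect something like
# [('Portable Document Format', 'application/pdf')]
print(identify("example.pdf"))
```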

Normalization (translation)

  • The key to using multiple tools
  • Reconciles tools that give different names to the same format
  • Reconciles tools that report different values for the same metadata field
  • Reconciles the different ways tools say they can’t identify a file’s format (a toy example follows this list)
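
To make normalization concrete, here is a toy Python sketch of the idea: a lookup table maps each tool’s vocabulary onto one canonical value. The sample tool outputs are invented for illustration; FITS itself does this mapping inside its tool wrappers (via XSLT, as I understand it).

```python
# Different tools report the same format under different names, and
# report "can't identify" in different ways; normalization maps them
# all onto one canonical vocabulary. Sample values are invented.
CANONICAL = {
    "JPEG File Interchange Format": "JPEG",
    "jpg": "JPEG",
    "image/jpeg": "JPEG",
    "unknown": None,  # one tool's way of saying it has no idea
    "": None,         # another tool's way
}

tool_reports = {
    "tool A": "JPEG File Interchange Format",
    "tool B": "jpg",
    "tool C": "image/jpeg",
}

normalized = {tool: CANONICAL.get(fmt, fmt) for tool, fmt in tool_reports.items()}
print(normalized)  # every tool now agrees: JPEG
```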

[Then we watched nifty demonstrations in Windows as Andrea Goethals took us through what FITS does and how it does it. I discovered I can read basic XML.]

At Harvard University Libraries

  • They store metadata in XML form, in a metaschema
  • Output is parsed and packaged
  • Some FITS data fits well into PREMIS
  • A standard metadata block is added
  • Other information is included with the administrative metadata

Questions & Answers

Q: Are there plans to integrate FITS into large systems/repositories?

A: Archivematica uses it. DuraCloud looked into it, but it is mostly used in individual repositories.

Q: Do you need to have the individual File Format Identification tools loaded locally?

A: No – all necessary tools are downloaded with FITS.

Q: When FITS notes conflicts between tools’ results, how do you know which one is right?

A: Conflicts often occur in rarely used formats. An XML file included with FITS provides a format tree, which you can use to educate yourself and determine whether a reported format is really a more specific version of a broader one.

Where to find FITS

fits.googlecode.com

  • Download it from that site
  • Open-source software (OSS)
  • The mailing list is good for announcements of new versions and other news.

My Take-aways

This whole series has been incredibly informative. Having listened to these experts talk about common/important tools that they use for digital preservation, I now have a better idea of not only the processes involved in digital preservation, but also how the different pieces fit together. The project planning information was pretty straightforward, and generally, not very different from many projects I’ve worked on in the past or learned about in library school.

Now that I know that some FITS information fits well into PREMIS, and that other information from FITS fits into administrative metadata sections, and that XML can carry them all, I have a better idea of how to use the metadata categories described in the second webinar.

I know which kinds of tools are meant to be used for various tasks in digital preservation projects, and I know what I need to learn (and what I don’t) in order to use them. I can point to FITS and PREMIS and say that they may be used in the implementation stage.

Lastly, I know so much more about where to go to find out more about the tools, processes, best practices, and current projects.

Digital preservation in a webinar, part three

I not-so-recently “went to” the third Introduction to Digital Preservation webinar hosted by ASERL (Association of Southeastern Research Libraries).

[To listen to the recordings and view the PowerPoint presentations, see ASERL’s archive]

This webinar, titled “Management of Incoming Born-Digital Special Collections,” was presented by Gretchen Gueguen of the University of Virginia.

Without further ado, my notes:

What is born-digital?

There are two layers:

  1. Content
  2. Supporting software and operating system(s) (OS)

The same software/OS can be used for multiple files.

The Crucial Dependency

  • Hardware. Including (but not limited to) ports, wires, ribbons, drives, connectors
  • Translation between older and newer hardware can be achieved by write-blockers

The process

Imagine a doughnut.

“Preserve” is positioned in the doughnut hole, smack dab in the middle. Around the edges of the doughnut (starting on the left and moving clockwise, if you’re curious) reside:

  • “Provide Access”
  • “Appraise”
  • “Accession”
  • “Arrange/Describe”

Appraisal

Appraisal covers both old and new collections, including legacy material (content that has already been collected).

Do you further process these legacy collections, or deaccession them?

Appraisal Phase 1: Inventory

(the following list contains information/data you may want to collect in the inventory phase)

  • Disk #
  • ID #
  • Collection name/title
  • Record # (MARC or EAD)
  • Media type
  • Manufacturer
  • Capacity
  • Date (from label info)
  • Color
  • Damage
  • Label info

The above information can be used in cataloging, and can help future identification/location of items in the collection.

It may be necessary to:

  • Research accession records
  • Search the stacks
  • Conduct a physical survey of a statistically significant sample of disks

Appraisal Phase 2: Evaluate

Legacy collections

  • Available resources (work, costs)
  • File types and formats present in materials
  • Volume of data vs. capacity to take it
  • Condition of content: changed? corrupted?
  • Dependencies on software, hardware
  • Institution’s commitment to the content
  • Migration or transformation required?
  • Can you appraise/view the intellectual content?

New acquisitions

  • Policy framework (update it frequently and proactively)
  • What capacity do you have for acquiring new born-digital collections?
  • How will you deal with certain scenarios?
  • Do you need special hardware to read the content?
  • Do you have that hardware?
  • Does someone else have it (e.g., found via eBay)? Given the scarcity of obsolete hardware, there is growing interest in sharing equipment
  • Is the disk/drive natively Read Only?

Accessioning

Hardware types

  • Zip
  • DVD/Blu-ray
  • JAZ
  • Others (e.g., floppies) are difficult
  • Write blockers/forensic bridges: hardware devices (e.g., Tableau, WiebeTech) or software (e.g., SAFE Block XP, MacForensicsLab) that block any writing onto disks

Software barriers

How to transfer the data to a new medium?

1. Disk imaging – one file, bit-level copy (conceptual sketch after this list)

  • Captures unused space (sometimes called “file slack”), much of it binary zeros, which can take up a lot of space
  • Benefits: compact, single file, intact, complete
  • Drawbacks: can capture unwanted data; requires specialized tech; transfer works across a write-blocker only if the disk is still readable

2. Logical imaging – select what you want and create an image
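
Conceptually, the bit-level copy in option 1 is just a byte-for-byte read of the whole device into one file, with a checksum taken as you go. Here is a minimal Python sketch of the idea; the device path is hypothetical, and a real acquisition would use dedicated imaging tools behind a hardware write-blocker.

```python
import hashlib

def image_disk(device_path, image_path, chunk_size=1024 * 1024):
    """Bit-level copy of a device (or file) into a single image file.

    Illustration only: real acquisitions use dedicated imaging tools
    behind a write-blocker. Returns the SHA-256 of the copied bytes
    so the image can be verified later.
    """
    sha256 = hashlib.sha256()
    with open(device_path, "rb") as src, open(image_path, "wb") as dst:
        while chunk := src.read(chunk_size):
            dst.write(chunk)
            sha256.update(chunk)
    return sha256.hexdigest()

# Hypothetical paths; on Linux a raw device might appear as /dev/sdb.
print("sha256 =", image_disk("/dev/sdb", "accession-0001.img"))
```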

Transfer using (examples)

  • NTFS (New Technology File System)
  • Mac: HFS (Hierarchical File System)

Rendering files

  • Transfer methods: over a network using Duke Data Accessioner or BagIt, or an FTP tool such as FileZilla or Cyberduck (how about these names?)
  • Web harvesting (e.g., Internet Archive)
  • Save to modern media (CD, external hard drive)
  • Image the hard drive in person

Management

Is the file corrupted, lost, or changed?

  • Checksums: if these haven’t changed, the file hasn’t changed (hashing sketch after this list)
  • Check for viruses (stabilizing material): Do this in an un-networked space BEFORE uploading the files to a network!
  • Search for Personally Identifiable Information (PII)
  • Search for duplicate files using checksums.
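
Both the fixity check and the duplicate hunt come down to the same operation: hash every file and compare. A minimal sketch, with a hypothetical accession directory:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def checksum(path, chunk_size=65536):
    """SHA-256 of a file, read in chunks so large files fit in memory."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            sha256.update(chunk)
    return sha256.hexdigest()

# Group files by checksum; any group with more than one member is a
# set of exact duplicates. "accession" is a hypothetical directory.
by_hash = defaultdict(list)
for path in Path("accession").rglob("*"):
    if path.is_file():
        by_hash[checksum(path)].append(path)

print({h: p for h, p in by_hash.items() if len(p) > 1})

# Fixity over time: record each file's checksum at ingest, re-run
# checksum() later, and compare. A mismatch means the file changed.
```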

Arrange/Describe

Metadata

  • Use media inventory
  • File inventory of contents (e.g., date, size, file name, type)
  • Extract technical, forensic, and preservation metadata (using PREMIS or PBCore, for example)
  • Use a spreadsheet if you don’t have fancy infrastructure to record this information (a generated CSV works; see the sketch after this list)
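
That no-fancy-infrastructure spreadsheet can be as simple as a generated CSV. A minimal sketch, walking a hypothetical collection directory and recording name, type, size, and date:

```python
import csv
import datetime
from pathlib import Path

# Walk a (hypothetical) collection directory and write a file-level
# inventory: name, extension, size, and last-modified date.
with open("file_inventory.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["file name", "type", "size (bytes)", "modified"])
    for path in Path("collection").rglob("*"):
        if path.is_file():
            stat = path.stat()
            writer.writerow([
                path.name,
                path.suffix.lstrip(".") or "none",
                stat.st_size,
                datetime.date.fromtimestamp(stat.st_mtime).isoformat(),
            ])
```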

Storage

  • Make multiple copies! (Lots of Copies Keep Stuff Safe, heh heh)
  • Use repositories or a managed service system for metadata and storage
  • If you don’t have one, how will you store and track content? (Spreadsheet and storage database)

Questions (a selection)

Q: Do you have any rules of thumb for materials NOT to accession?

A: The folks at the University of Virginia have not seen anything that they have decided not to take – nothing too unusual. Make sure you have access to the hardware to read the data/content. For some formats whose software UVA doesn’t actually have, they obtained copies from the donor.

Q: For commercially produced materials related to other materials (such as DVDs), do you manage the bit-stream or the physical object?

A: Only physical management at the moment.

Q: Does UVA’s gift agreement contain language for digital preservation?

A: The agreement does state that the donor agrees not to offer the same content to other sources or institutions, and it provides information about intellectual property rights. UVA reserves the right to do whatever is needed to preserve the content, and it allows donors to ask for access restrictions. It does not contain any statement to the effect that UVA agrees to preserve content in a particular form or for a specific length of time.

Final words

Appraisal and accession are CRUCIAL.

Metadata is important – use checksums, spreadsheets.

Consider consortia – have someone else read the disks you can’t, and vice versa.