Updated Harvesting Process from the Internet Archive
Note: This is a revision of our previous blog post that described our process for harvesting digitized books from the Internet Archive. Their query interface changed, and we’ve updated our process & documentation accordingly.
Disclaimer: BHL is not directly or indirectly involved with the development of this query interface. We scan books through Internet Archive and are consumers of their services & interfaces. We have provided this documentation to help inform others of our process. Questions or comments concerning the query interface, results returned, etc., should be directed to the Internet Archive.
Overview
The following steps are taken to download data from Internet Archive and host it on the Biodiversity Heritage Library. Diagrams of the process are available in PDF.
- Get item identifiers from Internet Archive for items in the “biodiversity” collection that have been recently added/updated.
- For each item identifier:
- Get the list of files (XML and images) that are available for download.
- Download the XML and image files.
- Download the scan data if it is not included with the other downloaded files.
- Extract the item metadata from the XML files and store it in the import database.
- Extract the OCR text from the XML files and store it on the file system (one file per page).
- For each “approved” item, clean up and transform the metadata into an “importable” format and store the results in the import database.
- Read all data that is ready for import and insert/update the appropriate data in the production database.
Internet Archive Metadata Files
The following table lists the key XML files containing metadata for items hosted by Internet Archive. It is possible that one or more of these files may not exist for an item. However, most items that have been “approved” (i.e. marked as “complete” by Internet Archive) do include each of these files.
| Filename | Description |
| --- | --- |
| *iaidentifier*_files.xml | List of files that exist for the given identifier |
| *iaidentifier*_dc.xml | Dublin Core metadata. In many cases the data included here overlaps with the data in the _meta.xml file. |
| *iaidentifier*_meta.xml | Dublin Core metadata, as well as metadata specific to the item on IA (scan date, scanning equipment, creation date, update date, status of the item, etc.) |
| *iaidentifier*_metasource.xml | Identifies the source of the item; not much meaningful data here |
| *iaidentifier*_marc.xml | MARC data for the item. |
| *iaidentifier*_djvu.xml | The OCR for the item, formatted as XML. |
| *iaidentifier*_scandata.xml | Raw data about the scanned pages. In combination with the OCR text (_djvu.xml), the page numbers and page types can be inferred from this data. This file may not exist, though in most cases it does. For the most part, only materials added to IA prior to late summer 2007 are likely to be missing this file. |
| scandata.xml | Raw data about the scanned pages. If there is no *iaidentifier*_scandata.xml file for an item, we look in scandata.zip (via an IA API) for this file, which contains the same information. |
Internet Archive Services
Search for Items
Internet Archive items belong to one or more collections. To search a particular Internet Archive collection for items that have been updated between two dates, use the following query:
http://www.archive.org/advancedsearch.php
?q={0}+AND+oai_updatedate:[{1}+TO+{2}]
&fl[]=identifier&fl[]=oai_updatedate&fmt=xml&xmlsearch=Search
where
{0} = name of the Internet Archive collection; in our case, “collection:biodiversity”
{1} = start date of range of items to retrieve (YYYY-MM-DD)
{2} = end date of range of items to retrieve (YYYY-MM-DD)
To limit the item search to a particular contributing institution, modify the query as follows:
http://www.archive.org/advancedsearch.php
?q={0}+AND+oai_updatedate:[{1}+TO+{2}]+AND+contributor:(MBLWHOI Library)
&fl[]=identifier&fl[]=oai_updatedate
&rows=100000&fmt=xml&xmlsearch=Search
To limit the results of the query to a particular number of items, modify the query as follows:
http://www.archive.org/advancedsearch.php
?q={0}+AND+oai_updatedate:[{1}+TO+{2}]
&fl[]=identifier&fl[]=oai_updatedate
&rows=100000&fmt=xml&xmlsearch=Search
To search for one particular item, use:
http://www.archive.org/advancedsearch.php
?q={0}&fl[]=identifier&fl[]=oai_updatedate
&fmt=xml&xmlsearch=Search
where
{0} = an Internet Archive item identifier
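As a rough illustration, the date-range search above might be issued from a script like the one below. It requests fmt=json (which the endpoint also accepts, as the query in the comments at the end of this post shows) because the Solr-style JSON response with a response.docs array is straightforward to parse; that response shape, and the function name, are assumptions on our part rather than documented behavior.

```python
import json
import urllib.parse
import urllib.request

def search_collection(collection, start_date, end_date, rows=100000):
    """Return identifiers in an IA collection updated between two dates (YYYY-MM-DD).

    Assumes a Solr-style JSON response with a response.docs array; adjust the
    parsing if fmt=xml is requested instead.
    """
    query = "collection:{0} AND oai_updatedate:[{1} TO {2}]".format(
        collection, start_date, end_date)
    params = urllib.parse.urlencode({
        "q": query,
        "fl[]": "identifier",
        "rows": rows,
        "fmt": "json",
        "xmlsearch": "Search",
    })
    url = "http://www.archive.org/advancedsearch.php?" + params
    with urllib.request.urlopen(url) as response:
        data = json.load(response)
    return [doc["identifier"] for doc in data["response"]["docs"]]

# Example: items added/updated in the biodiversity collection during one week.
# ids = search_collection("biodiversity", "2009-01-01", "2009-01-08")
```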
Download Files
To download a particular file for an Internet Archive item, use the following query:
http://www.archive.org/download/{0}/{1}
where
{0} = an Internet Archive item identifier
{1} = the name of the file to be downloaded
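A download of a single file can then be as simple as the following sketch; the helper name and destination directory are illustrative.

```python
import os
import urllib.request

def download_file(ia_identifier, filename, dest_dir="."):
    """Download one file belonging to an IA item via the /download/ URL pattern."""
    url = "http://www.archive.org/download/{0}/{1}".format(ia_identifier, filename)
    dest_path = os.path.join(dest_dir, filename)
    urllib.request.urlretrieve(url, dest_path)
    return dest_path

# Example: fetch the MARC record for an item (identifier is illustrative).
# download_file("someidentifier", "someidentifier_marc.xml")
```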
Downloading Files Contained In ZIP Archives
In some cases, a file cannot be downloaded directly, and may instead need to be extracted from a ZIP archive located at Internet Archive. One example of this is the scandata.xml file, which in some cases must be extracted from the scandata.zip file. To do this, two queries must be made. First invoke this query to get the physical file locations (on IA servers) for the given item:
http://www.archive.org/services/find_file.php
?file={0}
&loconly=1
where
{0} = an Internet Archive item identifier
Then, invoke the second query to extract the scandata.xml file from the scandata.zip file (using the physical file locations returned by the previous query):
http://{0}/zipview.php
?zip={1}/scandata.zip
&file=scandata.xml
where
{0} = host address for the file
{1} = directory location for the file
Note that the second query can be generalized to extract the contents of other zip files hosted at Internet Archive. The format for the query is:
http://{0}/zipview.php
?zip={1}/{2}
&file={3}
where
{0} = host address for the file
{1} = directory location for the file
{2} = name of the zip archive from which to extract a file
{3} = the name of the file to extract from the zip archive
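Putting the two queries together, a sketch of the ZIP extraction might look like this. The format of the find_file.php response is not described above, so the sketch returns it raw for the caller to parse into a host and directory; the function names and the example host/directory values in the comment are illustrative, not real IA values.

```python
import urllib.parse
import urllib.request

def find_file_locations(ia_identifier):
    """Ask IA where an item's files physically live.

    The exact response format of find_file.php is not documented here, so the
    raw response is returned for the caller to parse into (host, directory).
    """
    url = ("http://www.archive.org/services/find_file.php?file={0}&loconly=1"
           .format(urllib.parse.quote(ia_identifier)))
    with urllib.request.urlopen(url) as response:
        return response.read().decode("utf-8", errors="replace")

def extract_from_zip(host, directory, zip_name, file_name):
    """Pull a single file out of a ZIP hosted on an IA server via zipview.php."""
    url = "http://{0}/zipview.php?zip={1}/{2}&file={3}".format(
        host, directory, zip_name, urllib.parse.quote(file_name))
    with urllib.request.urlopen(url) as response:
        return response.read()

# Example: once find_file_locations() has been parsed into a host and directory,
# the item's scandata.xml can be pulled out of scandata.zip (values illustrative).
# xml_bytes = extract_from_zip("ia301234.us.archive.org", "/1/items/someidentifier",
#                              "scandata.zip", "scandata.xml")
```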
Documentation written by Mike Lichtenberg.
You can use the bookmark link. Just change the xmlsearch param to Search.
http://www.archive.org/advancedsearch.php?q=Frankenstein&fl%5B%5D=identifier&sort%5B%5D=&sort%5B%5D=&sort%5B%5D=&rows=50&indent=yes&fmt=json&xmlsearch=Search
This is what we’re doing in the Umlaut:
http://umlaut.rubyforge.org/svn/U2/lib/service_adaptors/internet_archive.rb