Author Archives: jefferson

“Community Webs” Receives Additional Funding to Further Public Library Local History Web Collecting

In 2017, our Archive-It service was awarded funding from the Institute of Museum and Library Services (IMLS) for the 2-year project “Community Webs: Empowering Public Librarians to Create Community History Web Archives.” The program has been providing training and technical infrastructure for a diverse group of librarians nationwide to develop expertise in creating collections of historically valuable web-published materials documenting their local communities and under-represented communities. In response to an unexpectedly large group of applicants, and with additional internal funding, we were able to expand the cohort to a total of 28 libraries from 16 states. The launch announcement and the dedicated website have further information about the program and its progress.

We are excited to announce that IMLS has recently provided additional supplementary funding to Community Webs! The additional funding will allow us to focus on program evaluation, expansion, and strategic planning. We are very pleased to be working with the Educopia Institute in support of this work and will benefit from their vast expertise in community cultivation and program facilitation.

Over the course of the original 2-year Community Webs program, the 28 participating libraries created hundreds of archived collections totaling more than 40 terabytes of data, gave dozens of professional presentations at local and national conferences, held many public programs and patron-facing events, and attended numerous meet-ups and cohort events. As well, the program created a suite of open educational resources, online courses, and other training materials supporting digital curation skills development, local history web collecting, and community formation. Some sample collections created as part of the program include:

#HashtagSyllabusMovement by Schomburg Center for Research in Black Culture
LGBTQ in Alabama by Birmingham Public Library
D.C. Punk (Web) Archive by DC Public Library, Special Collections
North Bay Fires, 2017 by Sonoma County Public Library
Food Culture by Athens (GA) Regional Library System
Movimiento Cosecha Grand Rapids by Grand Rapids Public Library

The program’s website has links to each participating institution’s collections page.

We are grateful to IMLS for the additional funding to continue this popular program, excited to work with Educopia on further community development, and encourage any public libraries interested in participating to contact us.

Archiving Information on the Novel Coronavirus (Covid-19)

The Internet Archive’s Archive-It service is collaborating with the International Internet Preservation Consortium’s (IIPC) Content Development Group (CDG) to archive web-published resources related to the ongoing Novel Coronavirus (Covid-19) outbreak. The IIPC Content Development Group consists of curators and professionals from dozens of libraries and archives around the world that are preserving and providing access to the archived web. The Internet Archive is a co-founder and longtime member of the IIPC. The project will include both subject-expert curation by IIPC members and the inclusion of websites nominated by the public (see the nomination form link below).

Due to the urgency of the outbreak, archiving of nominated web content will commence immediately and continue as needed depending on the course of the outbreak and its containment. Web content from all countries and in any language is in scope. Possible topics to guide nominations and collections: 

  • Coronavirus origins 
  • Information about the spread of infection 
  • Regional or local containment efforts
  • Medical/Scientific aspects
  • Social aspects
  • Economic aspects
  • Political aspects

Members of the general public are welcome to nominate websites and web-published materials using the following web form: https://forms.gle/iAdvSyh6hyvv1wvx9. Archived information will also be available soon via the IIPC’s public collections in Archive-It. [March 23, 2020 edit: the public collection can now be found here, https://archive-it.org/collections/13529.]

Members of the general public can also take advantage of the ability to upload non-web digital resources directly to specific Internet Archive collections such as Community Video or Community Texts. For instance, see this collection of “Files pertaining to the 2019–20 Wuhan, China Coronavirus outbreak.” We recommend using a common subject tag, like “coronavirus,” to facilitate search and discovery. For more information on uploading materials to archive.org, see the Internet Archive Help Center.

A special thanks to Alex Thurman of Columbia University and Nicola Bingham of the British Library, the co-chairs of the IIPC CDG, and to other IIPC members participating in the project. Thanks as well to any and all public nominators assisting with identifying and archiving records about this significant global event.

Archiving Online Local News with the News Measures Research Project

Over the past two years, Archive-It, the Internet Archive’s web archiving service, has partnered with researchers at the Hubbard School of Journalism and Mass Communication at the University of Minnesota and the DeWitt Wallace Center for Media and Democracy at Duke University on a project designed to evaluate the health of local media ecosystems as part of the News Measures Research Project, funded by the Democracy Fund. The project is led by Phil Napoli at Duke University and Matthew Weber at the University of Minnesota. Project staff worked with Archive-It to crawl and archive the homepages of 663 local news websites representing 100 communities across the United States. Seven crawls were run on single days from July through September, capturing over 2.2 TB of unique data and 16 million URLs. Initial findings from the research detail how local communities cover core topics such as emergencies, politics, and transportation. Additional findings look at the volume of local news produced by different media outlets and show the importance of local newspapers in providing communities with relevant content.

The goal of the News Measures Research Project is to examine the health of local community news by analyzing the amount and type of local news coverage in a sample of communities. In order to generate a random and unbiased sample of communities, the team used US Census data. Prior research suggested that average income in a community is correlated with the amount of local news coverage; thus the team decided to focus on three income brackets (high, medium, and low), using the Census data to break the communities into categories. Rural areas and major cities were eliminated from the sample in order to reduce the number of outliers; this left a list of 1,559 communities ranging in population from 20,000 to 300,000 and in average household income from $21,000 to $215,000. Next, a random sample of 100 communities was selected, and a rigorous search process was applied to build a list of 663 news outlets that cover local news in those communities (based on web searches and established directories such as Cision).
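The stratified sampling approach described above can be sketched in a few lines of Python. This is an illustrative reconstruction only: the bracket cutoffs, field names, and sample sizes are invented for the example and are not the project’s actual values.

```python
import random

def income_bracket(avg_income, low_cutoff=60_000, high_cutoff=120_000):
    """Assign a community to a low/medium/high income bracket.

    Cutoffs are hypothetical, chosen only to illustrate the bucketing.
    """
    if avg_income < low_cutoff:
        return "low"
    if avg_income < high_cutoff:
        return "medium"
    return "high"

def stratified_sample(communities, per_bracket, seed=0):
    """Draw an equal-size random sample from each income bracket.

    `communities` is a list of dicts with 'name' and 'avg_income' keys.
    """
    rng = random.Random(seed)
    brackets = {"low": [], "medium": [], "high": []}
    for community in communities:
        brackets[income_bracket(community["avg_income"])].append(community)
    sample = []
    for bucket in brackets.values():
        sample.extend(rng.sample(bucket, min(per_bracket, len(bucket))))
    return sample
```

Stratifying before sampling ensures each income bracket is represented, rather than letting a simple random draw over-sample whichever bracket happens to contain the most communities.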

The News Measures Research Project web captures provide a unique snapshot of local news in the United States. The work is focused on analyzing the nature of local news coverage at a local level, while also examining the broader nature of local community news. At the local level, the 100 community sample provides a way to look at the nature of local news coverage. Next, a team of coders analyzed content on the archived web pages to assess what is being covered by a given news outlet. Often, the websites that serve a local community are simply aggregating content from other outlets, rather than providing unique content. The research team was most interested in understanding the degree to which local news outlets are actually reporting on topics that are pertinent to a given community (e.g. local politics). At the global level, the team looked at interaction between community news websites (e.g. sharing of content) as well as automated measures of the amount of coverage.

The primary data for the researchers was the archived local community news data, but the team also worked with census data to aggregate other measures, such as newspaper circulation figures. These data allowed the team to examine how the amount and type of local news coverage change depending on the characteristics of a community. Because the team was using multiple datasets, the web data is just one part of the puzzle. The WAT data format proved particularly useful in this regard: the lightweight WAT metadata let the team examine high-level structure without needing to examine the content of each and every WARC record. Down the road, the WARC data allows for a deeper dive, but the lighter metadata format of the WAT files has enabled early analysis.
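To illustrate why WAT files enable this kind of lightweight analysis: a WAT record is a small JSON “envelope” of metadata about one archived page, so page URLs and link structure can be read without parsing full page content out of WARC records. A minimal sketch, assuming the common WAT envelope layout (the exact field paths shown are illustrative):

```python
import json

def parse_wat_record(wat_json):
    """Return (page_url, outlinks) from one WAT record's JSON payload.

    Follows the typical WAT envelope layout; treat the paths as a sketch.
    """
    envelope = json.loads(wat_json)["Envelope"]
    page_url = envelope["WARC-Header-Metadata"].get("WARC-Target-URI")
    links = (
        envelope.get("Payload-Metadata", {})
        .get("HTTP-Response-Metadata", {})
        .get("HTML-Metadata", {})
        .get("Links", [])
    )
    # Keep only entries that actually carry a target URL.
    outlinks = [link["url"] for link in links if "url" in link]
    return page_url, outlinks
```

Because each record is just metadata like this, link graphs across hundreds of news sites can be assembled from WAT files at a fraction of the cost of reading the underlying WARC page captures.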

Stay tuned for more updates as research utilizing this data continues! The websites selected will continue to be archived and much of the data are publicly available.

The Whole Earth Web Archive

As part of the many releases and announcements for our October Annual Event, we created The Whole Earth Web Archive. The Whole Earth Web Archive (WEWA) is a proof-of-concept to explore ways to improve access to the archived websites of underrepresented nations around the world. Starting with a sample set of 50 small nations we extracted their archived web content from the Internet Archive’s web archive, built special search and access features on top of this subcollection, and created a dedicated discovery portal for searching and browsing. Further work will focus on improving IA’s harvesting of the national webs of these and other underrepresented countries as well as expanding collaborations with libraries and heritage organizations within these countries, and via international organizations, to contribute technical capacity to local experts who can identify websites of value that document the lives and activities of their citizens.

whole earth web archive screenshot

Archived materials from the web play an increasingly necessary role in representation, evidence, historical documentation, and accountability. However, the web’s scale is vast, it changes and disappears quickly, and it requires significant infrastructure and expertise to collect and make permanently accessible. Thus, the community of National Libraries and Governments preserving the web remains overwhelmingly represented by well-resourced institutions from Europe and North America. We hope the WEWA project helps provide enhanced access to archived material otherwise hard to find and browse in the massive 20+ petabytes of the Wayback Machine. More importantly, we hope the project provokes a broader reflection upon the lack of national diversity in institutions collecting the web and also spurs collective action towards diminishing the overrepresentation of “first world” nations and peoples in the overall global web archive.

As with prior special projects by the Web Archiving & Data Services team, such as GifCities (search engine for animated GIFs from the Geocities web collection) or Military Industrial Powerpoint Complex (ebooks of PowerPoints from the archive of the .mil (military) web domain), the project builds on our exploratory work to provide improved access to valuable subsets of the web archive. While our Archive-It service gives curators the tools to build special collections of the web, we also work to build unique collections from the pre-existing global web archive.

The preliminary set of countries in WEWA were determined by selecting the 50 “smallest” countries as measured by number of websites registered on their national web domain (aka ccTLD) — a somewhat arbitrary measurement, we acknowledge. The underlying search index is based on internally-developed tools for search of both text and media. Indices are built from features like page titles or descriptive hyperlinks from other pages, with relevance ranking boosted by criteria such as number of inbound links and popularity and include a temporal dimension to account for the historicity of web archives. Additional technical information on search engineering can be found in “Exploring Web Archives Through Temporal Anchor Texts.”
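As a toy illustration of how ranking signals like those described above might combine, the sketch below scores a page from a text-relevance value, an inbound-link boost, and a temporal term favoring captures near the query’s time of interest. The formula and weights are invented for illustration and are not WEWA’s actual ranking function:

```python
import math

def score(text_relevance, inbound_links, capture_year, query_year,
          link_weight=0.3, time_decay=0.5):
    """Hypothetical relevance score for an archived page.

    text_relevance: base match score from titles/anchor text (>= 0).
    inbound_links: count of pages linking in (popularity proxy).
    capture_year / query_year: used for the temporal dimension.
    """
    # Logarithmic boost so heavily linked pages don't dominate outright.
    link_boost = link_weight * math.log1p(inbound_links)
    # Exponential decay: captures far from the query year rank lower.
    temporal = math.exp(-time_decay * abs(capture_year - query_year))
    return text_relevance * (1 + link_boost) * temporal
```

The temporal term is what distinguishes web-archive search from live-web search: the same page may have many captures over the years, and ranking must account for *when* a capture existed, not just how relevant its text is.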

We intend to do more targeted, high-quality archiving of these and other smaller national webs, and we have also undertaken active outreach to national and heritage institutions in these nations, and to related international organizations, to ensure this work is guided by broader community input. If you are interested in contributing to this effort or have any questions, feel free to email us at webservices [at] archive [dot] org. Thanks for browsing the WEWA!

Internet Archive and Center for Open Science Collaborate to Preserve Open Science Data

Open Science and research reproducibility rely on ongoing access to research data. With funding from the Institute of Museum and Library Services’ National Leadership Grants for Libraries program, the Internet Archive (IA) and Center for Open Science (COS) will work together to ensure that open data related to the scientific research process is archived for perpetual access, redistribution, and reuse. The project aims to leverage the intersection between open research data, the long-term stewardship activities of libraries, and distributed data sharing and preservation networks. By focusing on these three areas of work, the project will test and implement infrastructure for improved data sharing in further support of open science and data curation. Building out interoperability between open data platforms like the Open Science Framework (OSF) of COS, large scale digital archives like IA, and collaborative preservation networks has the potential to enable more seamless distribution of open research data and enable new forms of custody and use. See also the press release from COS announcing this project.

OSF supports the research lifecycle by enabling researchers to produce and manage registrations and data artifacts for further curation to foster adoption and discovery. The Internet Archive works with 700+ institutions to collect, archive, and provide access to born-digital and web-published resources and data. Preservation at IA of open data on OSF will enable further availability of this data to other preservation networks and curatorial partners for distributed long term stewardship and local custody by research institutions using both COS and IA services. The project will also partner with a number of preservation networks and repositories to mirror portions of this data and test additional interoperability among additional stewardship organizations and digital preservation systems.

Beyond the prototyping and technical work of data archiving, the teams will also be conducting training, including the development of open education resources, webinars, and similar materials to ensure data librarians can incorporate the project deliverables into their local research data management workflows. The two-year project will first focus on OSF Registrations data and expand to include other open access materials hosted on OSF. Later stage work will test interoperable approaches to sharing subsets of this data with other preservation networks such as LOCKSS, AP Trust, and individual university libraries. Together, IA and COS aim to lay the groundwork for seamless technical integration supporting the full lifecycle of data publishing, distribution, preservation, and perpetual access.

Project contacts:
IA – Jefferson Bailey, Director of Web Archiving & Data Services, jefferson [at] archive.org
COS – Nici Pfeiffer, Director of Product, nici [at] cos.io

Internet Archive Partners with University of Edinburgh to Provide Historical Web Data Supporting Machine Translation

The Internet Archive will provide portions of its web archive to the University of Edinburgh to support the School of Informatics’ work building open data and tools for advancing machine translation, especially for low-resource languages. Machine translation is the process of automatically converting text in one language to another.

The ParaCrawl project is mining translated text from the web in 29 languages. With over 1 million translated sentences available for several languages, ParaCrawl is often the largest open collection of translations for each language. The project is a collaboration between the University of Edinburgh, University of Alicante, Prompsit, TAUS, and Omniscien, with funding from the EU’s Connecting Europe Facility. Internet Archive data is vastly expanding the data mined by ParaCrawl and therefore the number of translated sentences collected. Led by Kenneth Heafield of the University of Edinburgh, the overall project will yield open corpora and open-source tools for machine translation as well as the processing pipeline.

Archived web data from IA’s general web collections will be used in the project.  Because translations are particularly scarce for Icelandic, Croatian, Norwegian, and Irish, the IA will also use customized internal language classification tools to prioritize and extract data in these languages from archived websites in its collections.
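A hypothetical sketch of this kind of language-based prioritization is below. The actual internal tools are trained classification models; the stand-in detector here flags Icelandic-leaning text by characteristic letters such as “ð” and “þ”, purely to illustrate the filtering step:

```python
# Stand-in language filter for illustration only; real pipelines use
# trained classifiers, not single-character heuristics.

TARGETS = {"is", "hr", "no", "ga"}  # Icelandic, Croatian, Norwegian, Irish

def detect_language(text):
    """Hypothetical detector: flags Icelandic via 'ð'/'þ' characters."""
    if any(ch in text for ch in "ðþÐÞ"):
        return "is"
    return "und"  # undetermined / not a target language

def select_for_extraction(docs):
    """Keep archived documents whose detected language is a target.

    `docs` is a list of dicts with a 'text' key holding extracted text.
    """
    return [doc for doc in docs if detect_language(doc["text"]) in TARGETS]
```

Running a cheap classifier over extracted text first means only the small fraction of captures likely to be in a low-resource language needs full processing, which matters at the scale of a general web archive.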

The partnership expands on IA’s ongoing effort to provide computational research services to large-scale data mining projects focusing on open-source technical developments for furthering the public good and open access to information and data. Other recent collaborations include providing web data for assessing the state of local online news nationwide, analyzing historical corporate industry classifications, and mapping online social communities. As well, IA is expanding its work in making available custom extractions and datasets from its 20+ years of historical web data. For further information on IA’s web and data services, contact webservices at archive dot org.

“Make It Weird”: Building a collaborative public library web archive in an arts and counterculture community

This post is reposted from the Archive-It blog and written by guest author Dylan Gaffney of the Forbes Library, one of the public libraries participating in the Community Webs program.

Whether documenting the indie music scene of the 1990s, researching the history of local abolitionists and formerly enslaved peoples in the 1840s, or helping patrons research the early LGBT movement in the area, I am frequently reminded of what was not saved or is not physically present in our collections. These gaps or silences often reflect subcultures in our community, stories that were not told on the pages of the local newspaper, or which might not be reflected in the websites of city government or local institutions. In my first sit down with a fellow staff member to talk about the prospects for a web archive, we brainstormed how we could more completely capture the digital record of today’s community. We discussed including lesser known elements like video of music shows in house basements, the blog of a small queer farm commune in the hills, the Instagram account of the kid who photographs local graffiti, etc. My colleague Heather whispered to me excitedly: “We could make it weird!” I knew immediately I had found my biggest ally in building our collections.

The Forbes Library was one of a few public libraries chosen nationwide for the Community Webs cohort, a group of public libraries organized by the Internet Archive and funded by the Institute of Museum and Library Services to expand web archiving in local history collections. For a librarian in a small city of 28,000 people, working in a public library with no full-time archivists, the challenge of building from scratch a web archive that truly reflected our rich, varied, and “weird” cultural community, the arts and music scenes, and the rich tradition of activism in Western Massachusetts was a daunting but exciting project to embark on.

We knew we would have to leverage our working relationships with media organizations, nonprofits, city departments, the arts and music community, and our staff if we truly hoped to build something which reflected our community as it is. Our advantage was that we had such relationships, and could pitch the idea not only through traditional means like press releases and social media, but by chatting after meetings typically spent coordinating film screenings, gallery walks, and lawn concerts. We knew if we became comfortable enough with the basic concepts of archiving the web, that we could pick the brains of activists planning events in our meeting rooms, friends at shows, the staff of our local media company who lend equipment to aspiring filmmakers, and the folks who sell crops from small family farms in the community at the Farmer’s Markets.

We started by training just a few Information Services staff in one-on-one sessions and shared Archive-It training videos. This helped to broaden the number of librarians familiar with the Archive-It software in general, but also got the wheels turning amongst our reference and circulation staffs–our front lines of communication with the public–in particular. We talked a great deal about what we wish we had in our current archive, about filling in gaps and having the archive more accurately reflect and represent our community.

In order to solicit ideas from the community for preservation, we put together a Google form to be posted online, which was almost entirely cribbed from my Community Webs cohort colleagues at East Baton Rouge Parish Library, Queens Public Library and others. We also set up in-person, one-on-one meetings with community partners and academic institutions that were already engaged in web archiving. We put out press releases and generally just talked to and at anyone who would listen. As a result, nearly all of our first web archival acquisitions come directly from recommendations by the public and our community partners.

For instance, one of the first websites that I knew I wanted to preserve was From Wicked to Wedded, a great site which preserves the history of the LGBTQ community in our area. It was gratifying when two of the first responses to our online outreach also mentioned the site and we had a great conversation with its creator, who researches at the library, and who, like all the content creators we’ve approached thus far, was excited to be included.

Creating an accurate and exciting overview of the lively arts scene in Northampton and the surrounding area seemed like a daunting task at first, but by crawling the websites of notable galleries, arts organizations, and Northampton’s monthly gallery walk, we found that we were quickly able to capture a really interesting cross-section of local artists’ work. We have subsequently begun working with the local arts organizations directly to identify artists who may have their own websites worthy of inclusion.

Similarly, Northampton has a rich music scene for a city of its small size. With the number of people already documenting live music these days, we weren’t sure how to contribute with our own selection and curation, and so asked several folks embedded in the scene to curate some of their own favorite content, then reached out to the bands themselves to get their thoughts. We are still early in this process, but the response has been encouraging and the benefits to the library in building relationships with folks who are documenting the music scene have already led to physical donations to the archive as well.

It was important to us from the beginning to also consult with Northampton Community Television (NCTV), which partners with the library on film programming, and to preserve a record of all they do for the community: teaching filmmaking, lending equipment, and training and empowering citizen journalists. They, in turn, have pointed us to local filmmakers, and through our ongoing collaborations around film programming and the Northampton film festival, we have a platform for outreach in that community as well.

Staff members and local activists pointed us in the direction of other new local radio shows and citizen journalism websites, both of which give personal takes on local politics. One was a wonderful radio show called Out There, hosted by Ruthie, one of our bicycle trash pickup workers. In a single episode, Ruthie will talk to everybody from the mayor, environmental activists, and farmers to the random junior high kids she runs into hanging out on the bike path under a bridge. The other recommendation was for a new citizen journalism site called Shoestring, which asks common-sense questions of people in power in local government and places them in a national context. The folks from Shoestring stopped by the library’s Arts and Music desk to ask about our bi-weekly Zine Club meeting, which gave us an opportunity to talk about including their site in our web archive and led to a physical donation to the archive as well!

At numerous people’s suggestion, we are preserving the Instagram account of our gruff-looking former video store clerk turned City Council president, Bill Dwight. Bill has a great camera, a great eye, and the ability to capture a wonderful cross-section of the community in his feed. Dann Vazquez has an Instagram feed dedicated to capturing oddball moments, new building developments, and local graffiti (one of the more ephemeral of our community’s arts), which gives a unique day-to-day perspective of change on the streets of our city.

We are a community rich in activism, with a long tradition that, like our LGBTQ history, has not been properly reflected in our archives. For years, the personal and organizational archives of local activists have found homes at the larger colleges and universities in the Five College Area. Now, by including the websites of long-running and new nonprofits and activist organizations, we are able to create a richer archive for future generations to learn from their pioneering work.

We have tried to remain conscious of which communities are being left out of the collections we are developing, such as the non-English-speaking communities with whom we need to improve our outreach, and individuals and organizations that might not currently have a digital presence. As we have the ability to offer basic training at the library and through our community partners, we have recently been exploring the idea of creating a website or Instagram account designed to give individuals and organizations the opportunity to try out these technologies without the weight of a long-term commitment, but with the assurance that their content would be preserved among our web archives.

It still feels that we are in the earliest phases of this endeavour, but we have tried to build a collaborative system of curation which could be sustained going forward. By spreading the role of curation across the community, we can prevent staff burnout on the project and ensure that the perspectives represented in the archive are broader, more varied, and thus more reflective of our small city as it is.

Additional credits: IA staff Karl-Rainer Blumenthal who edits the Archive-It blog and Maria Praetzellis, who manages the Community Webs program.

Internet Archive, Code for Science and Society, and California Digital Library to Partner on a Data Sharing and Preservation Pilot Project

Research and cultural heritage institutions are facing increasing costs to provide long-term public access to historically valuable collections of scientific data, born-digital records, and other digital artifacts. With many institutions moving data to cloud services, data sharing and access costs have become more complex. As leading institutions in decentralization and data preservation, the Internet Archive (IA), Code for Science & Society (CSS) and California Digital Library (CDL) will work together on a proof-of-concept pilot project to demonstrate how decentralized technology could bolster existing institutional infrastructure and provide new tools for efficient data management and preservation. Using the Dat Protocol (developed by CSS), this project aims to test the feasibility of a decentralized network as a new option for organizations to archive and monitor their digital assets.

Dat is already being used by diverse communities, including researchers, developers, and data managers. California Digital Library is building innovative tools for data publication and digital preservation. The Internet Archive is leading efforts to advance the decentralized web community. This joint project will explore the issues that emerge from collecting institutions adopting decentralized technology for storage and preservation activities. The pilot will feature a defined corpus of open data from CDL’s data sharing service. The project aims to demonstrate how members of a cooperative, decentralized network can leverage shared services to ensure data preservation while reducing storage costs and increasing replication counts. By working with the Dat Protocol, the pilot will maximize openness, interoperability, and community input. Linking institutions via cooperative, distributed data sharing networks has the potential to achieve efficiencies of scale not possible through centralized or commercial services. The partners intend to openly share the outcomes of this proof-of-concept work to inform further community efforts to build on this potential.

Want to learn more? Representatives of this project will be at FORCE 2018, Joint Conference on Digital Libraries, Open Repositories, DLF Forum, and the Decentralized Web Summit.

More about CSS: Code for Science & Society is a nonprofit organization committed to building public interest technology and low-cost decentralized tools with the Dat Project to help people share and preserve versioned digital information. Read more about CSS’ Dat in the Lab project, our recent Community Call, and other activities. (Contact: Danielle Robinson)

More about CDL UC3: The University of California Curation Center (UC3) at the California Digital Library (CDL) provides innovative data curation and digital preservation services to the 10-campus University of California system and the wider scholarly and cultural heritage communities. https://uc3.cdlib.org/. (Contact: John Chodacki)

More about IA: The Internet Archive is a non-profit digital library with the mission to provide “universal access to all knowledge.” It works with hundreds of national and international partners providing web, data, and preservation services and maintains an online library comprising millions of freely-accessible books, films, audio, television broadcasts, software, and hundreds of billions of archived websites. https://archive.org/. (Contact: Jefferson Bailey)

Internet Archive and New York Art Resources Consortium Receive Grant for a National Forum to Advance Web Archiving in Art and Museum Libraries

We are pleased to announce that the Institute of Museum and Library Services (IMLS) has recently awarded a collaborative grant to the New York Art Resources Consortium and our Archive-It group to host a national forum event, along with associated workshops and stakeholder meetings, to catalyze collaboration among art libraries in the stewardship of historically valuable art-related materials published on the web. The New York Art Resources Consortium (NYARC) consists of the research libraries and archives of three leading art museums in New York City: The Brooklyn Museum, The Frick Collection, and The Museum of Modern Art. Archive-It is the web archiving service of the Internet Archive that works with hundreds of heritage organizations, including an international set of museums and art libraries, to preserve and provide access to web-published resources. Archive-It and NYARC will jointly run the project, Advancing Art Libraries and Curated Web Archives: A National Forum.

This National Leadership Grant, awarded in the Curating Collections program category to conduct a National Forum and affiliated meetings, builds on NYARC's and Archive-It's joint work expanding web archiving among art and museum libraries and archives, including through the ARLIS/NA Web Archiving Special Interest Group, as well as on their individual efforts to advance born-digital collection building. In Reframing Collections for the Digital Age, NYARC focused on web archiving program development, including technical work integrating Archive-It with its discovery services, work that can inform similar institutions. Through its Community Webs program, Archive-It is working with dozens of public libraries on cohort building, educational resources, and network development supporting community history web archiving, a model the national art library community can adopt to scale up its coordinated efforts. In addition, Archive-It has led, and NYARC has operationalized, collaborative research and development on API-based systems integration to further joint services and interoperability.

By mobilizing a broad effort through an invitational forum, the project aims to achieve national scale through network building and shared infrastructure planning, fostered through a program of discussion, training, and strategic roadmapping. The project will draw on contributions from a diverse group of art library community members, produce published outputs on strategic directions along with community-specific training materials, and launch a multi-institutional effort to expand the body of web-published, born-digital materials preserved and made accessible for art scholarship and research. Thank you to IMLS for their continued support of work advancing web archiving and the overall national digital platform initiative.

Andrew W. Mellon Foundation Awards Grant to the Internet Archive for Long Tail Journal Preservation

The Andrew W. Mellon Foundation has awarded a research and development grant to the Internet Archive to address the critical need to preserve the “long tail” of open access scholarly communications. The project, Ensuring the Persistent Access of Long Tail Open Access Journal Literature, builds on prototype work identifying at-risk content held in web archives using data provided by identifier services and registries. The project also expands on work acquiring missing open access articles via customized web harvesting, improving discovery and access to these materials from within extant web archives, and developing machine learning approaches, training sets, and cost models for advancing and scaling this work.

The project will explore how additional automation, layered onto the already highly automated systems for archiving the web at scale, can help preserve at-risk open access scholarly outputs. Rather than building specialized curation and ingest systems, the project will identify the scholarly content already collected in general web collections, both the Internet Archive's and those of collaborating partners, and implement automated systems to ensure that at-risk scholarly outputs on the web are well collected and associated with appropriate metadata. The proposal envisages two opposite but complementary approaches:

  • A top-down approach involves taking journal metadata and open data sets from identifier and registry sources such as ISSN, DOAJ, Unpaywall, CrossRef, and others and examining the content of large-scale web archives to ask “is this journal being collected and preserved and, if not, how can collection be improved?”
  • A bottom-up approach involves examining the content of general domain-scale and global-scale web archives to ask “is this content a journal and, if so, can it be associated with external identifier and metadata sources for enhanced discovery and access?”
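The top-down direction can be illustrated with a minimal sketch. Assume journal metadata (ISSN mapped to a homepage URL) drawn from a registry such as DOAJ or the ISSN portal, and a web archive's capture index simulated here as an in-memory set of URLs; all names and data below are hypothetical, not the project's actual tooling.

```python
# Hedged sketch of the "top-down" coverage check: given registry metadata
# (ISSN -> journal homepage URL), ask whether each journal appears in a
# web archive's capture index. Real systems would query a CDX-style index;
# here the index is simulated as an in-memory set of captured URLs.
from urllib.parse import urlparse

def normalize(url: str) -> str:
    """Reduce a URL to a host+path key, ignoring scheme, 'www.', and trailing slash."""
    parts = urlparse(url.lower())
    host = parts.netloc.removeprefix("www.")
    return host + parts.path.rstrip("/")

def coverage_report(journals: dict[str, str], captured_urls: set[str]) -> dict[str, bool]:
    """Map each ISSN to True if its homepage was found in the archive index."""
    index = {normalize(u) for u in captured_urls}
    return {issn: normalize(url) in index for issn, url in journals.items()}

# Hypothetical registry slice and capture index
journals = {
    "1234-5678": "https://example-journal.org/",
    "8765-4321": "http://www.tiny-oa-review.net/home",
}
captures = {"https://example-journal.org/"}

report = coverage_report(journals, captures)
# Journals whose homepage is absent from the archive become crawl candidates,
# answering "is this journal being collected and, if not, how can collection improve?"
to_crawl = [issn for issn, found in report.items() if not found]
```

The bottom-up direction would invert this lookup, starting from captured content and attempting to match it back to identifier and metadata sources.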

The grant will fund work to use the output of these approaches to generate training sets and test them against smaller web collections in order to estimate how effective this approach would be at identifying long-tail content, how expensive a full-scale effort would be, and what level of computing infrastructure would be needed to perform such work. The project will also build a model for better understanding the costs for other web archiving institutions to do similar analysis on their own collections using the project's algorithms and tools. Lastly, the project team, in the Web Archiving and Data Services group with Director Jefferson Bailey as Principal Investigator, will undertake a planning process to determine the resource requirements and work necessary to build a sustainable workflow that keeps the results up to date incrementally as publication continues.

In combination, these approaches will both improve the current state of preservation for long-tail journal materials and develop models for automating this work and applying it to existing corpora at scale. Thanks to the Mellon Foundation for supporting this work; we look forward to sharing the project's open-source tools and outcomes with a broad community of partners.