Author Archives: Brewster Kahle

Discogs Thank You! A commercial community site with bulk data access

https://thequietus.com/articles/24529-discogs-more-than-200-million-dollars

Discogs has cracked the nut, struck the right balance, and is therefore an absolute Internet treasure– Thank you.

If you don’t know them, Discogs is a central resource for the LP/78/CD music communities, and as Wikipedia said “As of 28 August 2019 Discogs contained over 11.6 million releases, by over 6 million artists, across over 1.3 million labels, contributed from over 456,000 contributor user accounts—with these figures constantly growing…”

When I met the founder, Kevin Lewandowski, a year ago, he said the Portland-based company supports 80 employees and is growing. They make money by being a marketplace for buyers and sellers of discs. An LP dealer I met in Oklahoma sells most of his discs through Discogs as well as at record fairs.

The data about records is spectacularly clean. Compare it to eBay, where the data is scattershot, and you have something quite different and reusable. It combines the best parts of MusicBrainz, CDDB, and eBay: users can catalog their collections and buy and sell records. By starting with the community function, Kevin said, the quality started out really good, and adding the marketplace later led to its success.

But there is something else Discogs does that sets it apart from many other commercial websites, and this makes All The Difference:

Discogs also makes their data available, in bulk, and with a free-to-use API.

The Great 78 Project has leveraged this bulk database to help find the date of release for 78s. Just yesterday, I downloaded the new dataset and added it to our 78rpm date database; in the last year tens of thousands more 78s were added to Discogs, and we found 1,500 more dates for our existing 78s. Thank you!
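
For anyone who wants to try the same thing, here is a rough sketch in Python of pulling release years for 78s out of a monthly Discogs releases dump. The file name and the XML element names below (release, released, the "78 RPM" format description) are assumptions based on the public dump format at data.discogs.com, not a description of our exact pipeline.

# Sketch: stream a Discogs releases dump and pull out years for 78rpm records.
# Element and attribute names are assumptions from the public dump schema;
# check the file you download before relying on them.
import gzip
import xml.etree.ElementTree as ET

def release_years_for_78s(dump_path):
    """Yield (release_id, title, year) for releases tagged as 78 RPM."""
    with gzip.open(dump_path, "rb") as fh:
        for _, elem in ET.iterparse(fh):
            if elem.tag != "release":
                continue
            descriptions = [d.text or "" for d in elem.iter("description")]
            released = elem.findtext("released") or ""
            if any("78 RPM" in d for d in descriptions) and released[:4].isdigit():
                yield elem.get("id"), elem.findtext("title"), int(released[:4])
            elem.clear()  # keep memory flat while streaming a multi-gigabyte file

if __name__ == "__main__":
    for rid, title, year in release_years_for_78s("discogs_20200901_releases.xml.gz"):
        print(rid, year, title)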

The Internet Archive Lost Vinyl Project leverages the API by looking up records we will be digitizing to find track listings.
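
Here is a sketch of that kind of lookup against the public Discogs API, using the database search and release endpoints with a personal access token; the parameter and field names should be checked against https://www.discogs.com/developers before relying on them.

# Sketch: find the track listing for a record we are about to digitize.
# Endpoints and field names reflect the public Discogs API docs as I
# remember them; verify against the developer documentation.
import requests

API = "https://api.discogs.com"
HEADERS = {"User-Agent": "LostVinylLookup/0.1"}

def find_tracklist(artist, title, token):
    params = {"artist": artist, "release_title": title,
              "type": "release", "token": token}
    hits = requests.get(f"{API}/database/search", params=params,
                        headers=HEADERS, timeout=30).json().get("results", [])
    if not hits:
        return []
    release = requests.get(f"{API}/releases/{hits[0]['id']}",
                           headers=HEADERS, timeout=30).json()
    return [(t.get("position"), t.get("title")) for t in release.get("tracklist", [])]

if __name__ == "__main__":
    for position, track in find_tracklist("Duke Ellington", "Ellington At Newport", "YOUR_TOKEN"):
        print(position, track)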

A donor to our CD project used the public price information to appraise the CDs he donated for a tax write-off.

We want to add links back from Discogs to the Internet Archive and they have not allowed that yet (please please), but there is always something more to do.

I hope other sites, even commercial ones, will allow bulk access to their data (an API is not enough).

Thank you, Discogs.

FOSS wins again: Free and Open Source Communities come through on 19th Century Newspapers (and Books and Periodicals…)

I have never been more encouraged by, and thankful to, the Free and Open Source communities. Three months ago I posted a request for help with OCR’ing and processing 19th century newspapers, and we got soooo many offers to help. Thank you, that was heartwarming and concretely helpful: based on these suggestions we are already changing our OCR and PDF software over completely to FOSS, making big improvements, and building partnerships with FOSS developers in companies, universities, and as individuals that will propel the Internet Archive to much better digitized texts. I am so grateful, thank you. So encouraging.

I posted a plea for help on the Internet Archive blog: Can You Help us Make the 19th Century Searchable? and we got many social media offers and over 50 comments on the post – maybe a record response rate.

We are already changing over our OCR to Tesseract/OCRopus and leveraging many PDF libraries to create compressed, accessible, and archival PDFs.
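
As an illustration of the per-page step, here is a minimal sketch that asks Tesseract for hOCR (the recognized text plus the word bounding boxes we later need for the PDF text layer), using the pytesseract wrapper; our production pipeline drives Tesseract differently, so treat this only as a sketch.

# Sketch: OCR one scanned page with Tesseract and keep the hOCR output,
# which carries word bounding boxes as well as the recognized text.
from PIL import Image
import pytesseract

def page_to_hocr(image_path, lang="eng"):
    """Return hOCR bytes (XHTML with word boxes) for a single page image."""
    return pytesseract.image_to_pdf_or_hocr(Image.open(image_path),
                                            lang=lang, extension="hocr")

if __name__ == "__main__":
    with open("page_0001.hocr", "wb") as out:
        out.write(page_to_hocr("page_0001.jpg"))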

Several people suggested the German government-led initiative called OCR-D, which has made production-level tools to help OCR and segment complex and old materials such as newspapers in the old German script Fraktur, or black letter. (The Internet Archive had never been able to process these, and now we are doing it at scale.) We are also able to OCR more Indian languages, which is fantastic. This government project is FOSS and has money for outreach to make sure others use the tools – a step beyond most research grants.

Tesseract has made a major step forward in the last few years. When we last evaluated it, its accuracy was not as good as the proprietary OCR, but that has changed: we have done evaluations and it is now just as good, and it can get better for our application because of its new architecture.

Underlying the new Tesseract is an LSTM engine similar to the one developed for Ocropus2/ocropy, a project led by Tom Breuel (funded by Google, his former German university, and probably others – thank you!). He has continued working on this project even though he left academia. A machine-learning-based engine also introduces us to GPU-based processing, which is an extra win. And it can be trained on corrected texts, so it can keep getting better.

Proprietary example from an Anti-Slavery newspaper from my blog post:

New one, based on free and open source software that is still faulty but better:

The time it takes to compute on our cluster is approximately the same, but if we add GPUs we should be able to speed up OCR and PDF creation, maybe 10 times, which would help a great deal since we are processing millions of pages a day.

The PDF generation is a balancing act: achieve small file size, render quickly in browser implementations, provide useful functionality (text search, page numbers, cut-and-paste of text), and comply with archival (PDF/A) and accessibility (PDF/UA) standards. At the heart of the new PDF generation is the “archive-pdf-tools” Python library, which performs Mixed Raster Content (MRC) compression, creates a hidden text layer using a modified Tesseract PDF renderer that can read hOCR files as input, and ensures the PDFs are compatible with archival standards (VeraPDF is used to verify every PDF we generate against the archival PDF standards). The MRC compression decomposes each image into a background, a foreground, and a foreground mask, heavily compressing (and sometimes downscaling) each layer separately. The mask is compressed losslessly, ensuring that the text and lines in an image do not suffer from compression artifacts and stay clear. Using this method, we observe a 10x compression factor for most of our books.
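
To make the MRC idea concrete, here is a toy decomposition of a page into a mask, a foreground, and a background layer; archive-pdf-tools does far more (proper binarization, denoising, JBIG2 and JPEG2000 encoding), so this only illustrates the principle.

# Toy Mixed Raster Content (MRC) split: a lossless text/line mask at full
# resolution, plus foreground and background layers that can be downscaled
# and compressed hard without hurting legibility. Illustration only.
import numpy as np
from PIL import Image

def mrc_decompose(path):
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint8)
    gray = rgb.mean(axis=2)
    mask = gray < gray.mean() * 0.6              # crude binarization: dark pixels = text

    background = rgb.copy()
    background[mask] = 255                       # paint the text out of the background
    foreground = rgb.copy()
    foreground[~mask] = 0                        # keep only the text colors

    small = (rgb.shape[1] // 3, rgb.shape[0] // 3)
    Image.fromarray((mask * 255).astype(np.uint8)).convert("1").save("mask.png")
    Image.fromarray(background).resize(small).save("background.jpg", quality=30)
    Image.fromarray(foreground).resize(small).save("foreground.jpg", quality=30)

if __name__ == "__main__":
    mrc_decompose("page_0001.jpg")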

The PDFs themselves are created using the high-performance mupdf library and its pymupdf Python bindings: both projects were supportive and promptly fixed various bugs, which propelled our efforts forward.
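
Here is the gist of how a page image and its OCR text end up as one searchable page, as a small sketch using plain pymupdf calls; it is not the archive-pdf-tools code, and the word positions below are hard-coded stand-ins for what normally comes out of the hOCR file.

# Sketch: one searchable PDF page with pymupdf (fitz): the page image on top,
# the OCR words drawn invisibly (render_mode=3) so search and cut-and-paste
# still work. In production the words and their positions come from hOCR.
import fitz  # PyMuPDF

doc = fitz.open()
page = doc.new_page(width=612, height=792)                    # US Letter, in points
page.insert_image(fitz.Rect(0, 0, 612, 792), filename="page_0001.jpg")

words = [((72, 100), "FREDERICK"), ((160, 100), "DOUGLASS")]  # stand-in hOCR words
for (x, y), text in words:
    page.insert_text(fitz.Point(x, y), text, fontsize=10, render_mode=3)

doc.save("page_0001.pdf", deflate=True, garbage=4)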

And best of all, we have expanded our community to include people all over the world who are working together to make cultural materials more available. We now have a Slack channel for OCR researchers and implementers, which you can join if you would like (to join, drop an email to merlijn@archive.org). We look to contribute software and datasets to these projects to help them improve (led by Merlijn Wajer and Derek Fukumori).

Next steps to fulfill the dream of Vannevar Bush’s Memex, Ted Nelson’s Xanadu, Michael Hart’s Project Gutenberg, Tim Berners-Lee’s World Wide Web, and Raj Reddy’s call for Universal Access to All Knowledge (now the Internet Archive’s mission statement):

  • Find articles in periodicals, and get their titles/authors/footnotes
  • Link footnote citations to other documents
  • OCR Balinese palm leaf manuscripts based on 17,000 hand-entered pages
  • Improve Tesseract page handling to improve OCR and segmentation
  • Improve epub creation, including images from pages
  • Improve OCRopus by creating training datasets

Any help here would be most appreciated.

Thank you, Free and Open Source Communities!  We are glad to be part of such a sharing and open world.

Want Some Terabytes from the Internet Archive to Play With?

There are many computer science, decentralized storage, and digital humanities projects looking for data to play with. You came to the right place – the Internet Archive makes cultural information available to web users and dataminers alike.

While many of our collections have rights issues and so require agreements and conversation, many others are openly available for public, bulk downloading.

Here are three collections: one of movies, another of audiobooks, and a third of scanned public domain books from the Library of Congress. If you have a Macintosh or Linux machine, you can use the command lines below. If you run each for a little while, you can get just a few of the items (so you do not need to download terabytes).

These items are also available via bittorrent, but we find the Internet Archive command line tool is really helpful for this kind of thing:

$ curl -LOs https://archive.org/download/ia-pex/ia
$ chmod +x ia
$ ./ia download --search="collection:prelinger" #17TB of public domain movies
$ ./ia download --search="collection:librivoxaudio" #20TB of public domain audiobooks
$ ./ia download --search="collection:library_of_congress" #166,000 public domain books from the Library of Congress (60TB)

Here is a way to figure out how much data is in each:

apt-get install jq > /dev/null
./ia search "collection:library_of_congress" -f item_size | jq -r .item_size | paste -sd+ - | bc | numfmt --grouping
./ia search "collection:librivoxaudio" -f item_size | jq -r .item_size | paste -sd+ - | bc | numfmt --grouping
./ia search "collection:prelinger" -f item_size | jq -r .item_size | paste -sd+ - | bc | numfmt --grouping
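
If you prefer Python to shell pipes, the same totals can be pulled with the internetarchive library (the package behind the ia tool); a small sketch, assuming the item_size field behaves as in the commands above.

# Sketch: total up collection sizes with the internetarchive Python library.
from internetarchive import search_items

def collection_size_bytes(collection):
    """Sum item_size over every item in a collection."""
    results = search_items(f"collection:{collection}", fields=["item_size"])
    return sum(int(r.get("item_size", 0)) for r in results)

if __name__ == "__main__":
    for c in ("prelinger", "librivoxaudio", "library_of_congress"):
        print(c, f"{collection_size_bytes(c):,}", "bytes")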

Sorry to say we do not yet have a support group for people using these tools or finding out what data is available, so for the time being you are pretty much on your own.

Can You Help us Make the 19th Century Searchable?

In 1847, Frederick Douglass started a newspaper advocating the abolition of slavery that ran until 1851.  After the Civil War, there was a newspaper for freed slaves, the Freedmen’s Record.  The Internet Archive is bringing these and many more works online for free public access. But there’s a problem: 

Our Optical Character Recognition (OCR), while the best commercially available OCR technology, is not very good at identifying text from older documents.  

Take, for example, this newspaper from 1847. The images are not that great, but a person can read them:

The problem is  our computers’ optical character recognition tech gets it wrong, and the columns get confused.

What we need is “Culture Tech” (a riff on fintech, or biotech) and Culture Techies to work on important and useful projects–the things we need, but are probably not going to get gushers of private equity interest to fund. There are thousands of professionals taking on similar challenges in the field of digital humanities and we want to complement their work with industrial-scale tech that we can apply to cultural heritage materials.

One such project would be to work on technologies to bring 19th-century documents fully digital. We need to improve OCR to enable full text search, but we also need help segmenting documents into columns and articles. The Internet Archive has lots of test materials, and thousands of people are uploading more documents all the time.
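
To make the segmentation problem concrete, here is a crude sketch of one classic trick, a vertical projection profile that looks for the white gutters between columns; real newspaper layout analysis (what OCR-D and Tesseract's own segmenter do) is far more sophisticated, and this is only meant to show the shape of the problem.

# Crude illustration of column segmentation: find likely column gutters in a
# newspaper page by looking for wide vertical bands with almost no ink.
import numpy as np
from PIL import Image

def column_gutters(path, ink_threshold=200, min_gap=15):
    gray = np.asarray(Image.open(path).convert("L"))
    ink_per_column = (gray < ink_threshold).sum(axis=0)   # dark pixels per column
    blank = ink_per_column < 0.002 * gray.shape[0]        # nearly ink-free columns

    gutters, start = [], None
    for x, is_blank in enumerate(blank):
        if is_blank and start is None:
            start = x
        elif not is_blank and start is not None:
            if x - start >= min_gap:
                gutters.append((start, x))                # a wide white band = likely gutter
            start = None
    return gutters

if __name__ == "__main__":
    print(column_gutters("newspaper_page_1847.jpg"))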

What we do not have is a good way to integrate work on these projects with the Internet Archive’s processing flow.  So we need help and ideas there as well.

Maybe we can host an “Archive Summer of CultureTech” or something…Just ideas.   Maybe working with a university department that would want to build programs and classes around Culture Tech… If you have ideas or skills to contribute, please post a comment here or send an email to info@archive.org with some of this information.

Libraries lend books, and must continue to lend books: Internet Archive responds to publishers’ lawsuit

Yesterday, the Internet Archive filed our response to the lawsuit brought by four commercial publishers to end the practice of Controlled Digital Lending (CDL), the digital equivalent of traditional library lending. CDL is a respectful and secure way to bring the breadth of our library collections to digital learners. Commercial ebooks, while useful, only cover a small fraction of the books in our libraries. As we launch into a fall semester that is largely remote, we must offer our students the best information to learn from—collections that were purchased over centuries and are now being digitized. What is at stake with this lawsuit? Every digital learner’s access to library books. That is why the Internet Archive is standing up to defend the rights of  hundreds of libraries that are using Controlled Digital Lending.

The publishers’ lawsuit aims to stop the longstanding and widespread library practice of Controlled Digital Lending, and stop the hundreds of libraries using this system from providing their patrons with digital books. Through CDL, libraries lend a digitized version of the physical books they have acquired as long as the physical copy doesn’t circulate and the digital files are protected from redistribution. This is how Internet Archive’s lending library works, and has for more than nine years. Publishers are seeking to shut this library down, claiming copyright law does not allow it. Our response is simple: Copyright law does not stand in the way of libraries’ rights to own books, to digitize their books, and to lend those books to patrons in a controlled way.  

“The Authors Alliance has several thousand members around the world and we have endorsed the Controlled Digital Lending as a fair use,” stated Pamela Samuelson, Authors Alliance founder and Richard M. Sherman Distinguished Professor of Law at Berkeley Law. “It’s really tragic that at this time of pandemic that the publishers would try to basically cut off even access to a digital public library like the Internet Archive…I think that the idea that lending a book is illegal is just wrong.”

These publishers clearly intend this lawsuit to have a chilling effect on Controlled Digital Lending at a moment in time when it can benefit digital learners the most. For students and educators, the 2020 fall semester will be unlike any other in recent history. From K-12 schools to universities, many institutions have already announced they will keep campuses closed or severely limit access to communal spaces and materials such as books because of public health concerns. The conversation we must be having is: how will those students, instructors and researchers access information — from textbooks to primary sources? Unfortunately, four of the world’s largest book publishers seem intent on undermining both libraries’ missions and our attempts to keep educational systems operational during a global health crisis.

Ten percent of the world’s population experience disabilities that impact their ability to read. For these learners, digital books are a lifeline. The publishers’ lawsuit against the Internet Archive calls for the destruction of more than a million digitized books.

The publishers’ lawsuit does not stop at seeking to end the practice of Controlled Digital Lending. These publishers call for the destruction of the 1.5 million digital books that Internet Archive makes available to our patrons. This form of digital book burning is unprecedented and unfairly disadvantages people with print disabilities. For the blind, ebooks are a lifeline, yet less than one in ten exists in accessible formats. Since 2010, Internet Archive has made our lending library available to the blind and print disabled community, in addition to sighted users. If the publishers are successful with their lawsuit, more than a million of those books would be deleted from the Internet’s digital shelves forever.

I call on the executives at Hachette, HarperCollins, Wiley, and Penguin Random House to come together with us to help solve the pressing challenges to access to knowledge during this pandemic. Please drop this needless lawsuit.

Libraries have been bringing older books to digital learners: Four publishers sue to stop it

I wanted to share my thoughts in response to the lawsuit against the Internet Archive filed on June 1 by the publishers Hachette, HarperCollins, Wiley, and Penguin Random House.

I founded the Internet Archive, a non-profit library, 24 years ago, as we were bringing the world digital. As a library we collect and preserve books, music, video and webpages to make a great Internet library.

We have had the honor to partner with over 1,000 different libraries, such as the Library of Congress and the Boston Public Library, to accomplish this by scanning books and collecting webpages and more. In short, the Internet Archive does what libraries have always done: we buy, collect, preserve, and share our common culture.

But remember March of this year—we went home on a Friday and were told our schools were not reopening on Monday. We got cries for help from teachers and librarians who needed to teach without physical access to the books they had purchased.

Over 130 libraries endorsed lending books from our collections, and we used Controlled Digital Lending technology to do it in a controlled, respectful way.  We lent books that we own—at the Internet Archive and also the other endorsing libraries. These books were purchased and we knew they were not circulating physically. They were all locked up. In total, 650 million books were locked up just in public libraries alone.  Because of that, we felt we could, and should, and needed to make the digitized versions of those books available to students in a controlled way to help during a global emergency. As the emergency receded, we knew libraries could return to loaning physical books and the books would be withdrawn from digital circulation. It was a lending system that we could scale up immediately and then shut back down again by June 30th.

And then, on June 1st, we were sued by four publishers. They demanded we stop lending digitized books in general, and they demanded we permanently destroy millions of digital books. Even though the temporary National Emergency Library was closed before June 30th, the planned end date, and we are back to traditional controlled digital lending, the publishers have not backed down.

Schools and libraries are now preparing for a “Digital Fall Semester” for students all over the world, and the publishers are still suing.

Please remember that what libraries do is Buy, Preserve, and Lend books.

Controlled Digital Lending is a respectful and balanced way to bring our print collections to digital learners. A physical book, once digital, is available to only one reader at a time. Going on for nine years and now practiced by hundreds of libraries, Controlled Digital Lending is a longstanding, widespread library practice.

What is at stake with this suit may sound insignificant—that it is just Controlled Digital Lending—but please remember– this is fundamental to what libraries do: buy, preserve, and lend.   

With this suit, the publishers are saying that in the digital world we cannot buy books anymore, we can only license them on their terms; we can only preserve in ways for which they have granted explicit permission, and for only as long as they grant permission; and we cannot lend what we have paid for because we do not own it. This is not the rule of law; this is rule by license. This does not make sense.

We say that libraries have the right to buy books, preserve them, and lend them even in the digital world. This is particularly important with the books that we own physically, because learners now need them digitally.

This lawsuit is already having a chilling impact on the Digital Fall Semester we’re about to embark on. The stakes are high for so many students who will be forced to learn at home via the Internet or not learn at all.  

Librarians, publishers, authors—all of us—should be working together during this pandemic to help teachers, parents and especially the students.

I call on the executives at Hachette, HarperCollins, Wiley, and Penguin Random House to come together with us to help solve the pressing challenges to access to knowledge during this pandemic. 


Please drop this needless lawsuit.  

–Brewster Kahle, July 22, 2020

Temporary National Emergency Library to close 2 weeks early, returning to traditional controlled digital lending

Within a few days of the announcement that libraries, schools and colleges across the nation would be closing due to the COVID-19 global pandemic, we launched the temporary National Emergency Library to provide books to support emergency remote teaching, research activities, independent scholarship, and intellectual stimulation during the closures. 

We have heard hundreds of stories from librarians, authors, parents, teachers, and students about how the NEL has filled an important gap during this crisis. 

Ben S., a librarian from New Jersey, for example, told us that he used the NEL “to find basic life support manuals needed by frontline medical workers in the academic medical center I work at. Our physical collection was closed due to COVID-19 and the NEL allowed me to still make available needed health informational materials to our hospital patrons.” We are proud to aid frontline workers.

Today we are announcing the National Emergency Library will close on June 16th, rather than June 30th, returning to traditional controlled digital lending. We have learned that the vast majority of people use digitized books on the Internet Archive for a very short time. Even with the closure of the NEL, we will be able to serve most patrons through controlled digital lending, in part because of the good work of the non-profit HathiTrust Digital Library. HathiTrust’s new Emergency Temporary Access Service features a short-term access model that we plan to follow. 

We moved up our schedule because, last Monday, four commercial publishers chose to sue Internet Archive during a global pandemic.  However, this lawsuit is not just about the temporary National Emergency Library. The complaint attacks the concept of any library owning and lending digital books, challenging the very idea of what a library is in the digital world. This lawsuit stands in contrast to some academic publishers who initially expressed concerns about the NEL, but ultimately decided to work with us to provide access to people cut off from their physical schools and libraries. We hope that similar cooperation is possible here, and the publishers call off their costly assault.

Controlled digital lending is how many libraries have been providing access to digitized books for nine years.  Controlled digital lending is a legal framework, developed by copyright experts, where one reader at a time can read a digitized copy of a legally owned library book. The digitized book is protected by the same digital protections that publishers use for the digital offerings on their own sites. Many libraries, including the Internet Archive, have adopted this system since 2011 to leverage their investments in older print books in an increasingly digital world.
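
In software terms, the "controlled" part comes down to one invariant: concurrent digital checkouts of a title never exceed the owned, non-circulating physical copies. Here is a toy sketch of that rule, purely illustrative and not how any actual lending system is implemented:

# Toy illustration of the Controlled Digital Lending invariant: digital
# checkouts of a title never exceed the owned, non-circulating physical copies.
class CdlTitle:
    def __init__(self, owned_copies):
        self.owned_copies = owned_copies
        self.checked_out = 0

    def borrow(self):
        if self.checked_out >= self.owned_copies:
            return False                  # all owned copies are lent out: waitlist
        self.checked_out += 1
        return True

    def give_back(self):
        self.checked_out = max(0, self.checked_out - 1)

book = CdlTitle(owned_copies=1)
assert book.borrow() is True              # one reader at a time
assert book.borrow() is False             # a second reader must wait
book.give_back()
assert book.borrow() is True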

We are now all Internet-bound and flooded with misinformation and disinformation—to fight these we all need access to books more than ever. To get there we need collaboration between libraries, authors, booksellers, and publishers.  

Let’s build a digital system that works.

Four commercial publishers filed a complaint about the Internet Archive’s lending of digitized books

This morning, we were disappointed to read that four commercial publishers are suing the Internet Archive.

As a library, the Internet Archive acquires books and lends them, as libraries have always done. This supports publishing, authors and readers. Publishers suing libraries for lending books, in this case protected digitized versions, and while schools and libraries are closed, is not in anyone’s interest. 

We hope this can be resolved quickly.

Thank you for helping us increase our bandwidth

Last week the Internet Archive upped our bandwidth capacity 30%, based on increased usage and increased financial support.  Thank you.

This is our outbound bandwidth graph that has several stories to tell…

A year ago, usage was 30Gbits/sec. At the beginning of this year, we were at 40Gbits/sec, and we were handling it. That is 13 petabytes of downloads per month. This has served millions of users of materials in the Wayback Machine, those listening to 78rpm records, those browsing digitized books, streaming from the TV archive, etc. We were about the 250th most popular website according to Alexa Internet.
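
(For anyone checking the arithmetic, 40Gbits/sec sustained works out to roughly 13 petabytes a month, assuming a 30-day month:)

# Back-of-the-envelope: 40 Gbit/s sustained, expressed in petabytes per month.
gbits_per_sec = 40
bytes_per_month = gbits_per_sec / 8 * 1e9 * 60 * 60 * 24 * 30
print(round(bytes_per_month / 1e15, 1))   # -> 13.0 petabytes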

Then Covid-19 hit and demand rocketed to 50Gbits/sec and overran our network infrastructure’s ability to handle it.  So much so, our network statistics probes had difficulty collecting data (hence the white spots in the graphs).   

We bought a second router with new line cards, got it installed and running (none of this is easy during a pandemic), and increased our peak capacity from 47Gbits/sec to 62Gbits/sec. We are handling it better, but the new capacity is still being fully consumed.

Alexa Internet now says we are about the 160th most popular website.

So now we are looking at the next steps up, which will take more equipment and more wizardry, but we are working on it.

Thank you again for the support, and if you would like to donate more, please know it is going to build collections to serve millions.  https://archive.org/donate

The National Emergency Library – Who Needs It? Who Reads It? Lessons from the First Two Weeks

At a time when every day can feel like a month, it’s hard to believe that the National Emergency Library has only existed for two weeks. Recognizing the unique challenges of connecting students and readers with books now on shelves they cannot reach, the Internet Archive loosened the restrictions on our controlled digital lending library to allow increased lending of materials. Reactions have been passionate, to say the least—elation by teachers able to  access our virtual stacks, concern by authors about the program’s impact, and fundamental questions about our role as a library in these dire times when one billion students worldwide are cut off from their classrooms and libraries.

For those of you who are being introduced to us for the first time due to the National Emergency Library: Welcome! The doors of the Internet Archive have been open for nearly 25 years and we’ve served hundreds of millions of visitors—we’ve always got room to welcome one more. And for those of you who have tracked our evolution through the years, we know you have questions.

When we turned off waitlists for our lending library on March 24th, it was in response to messages and requests we’d been getting from many sources—librarians who were closing their doors in response to lockdowns, school teachers who were concerned their students could no longer do research and discovery through the primary sources they had on campus, and organizations we respected who knew we had the capability to fill an unexpected gap, a need we knew we could meet quickly.

We moved in “Internet Time” and the speed and swiftness of our solution surprised some and caught others off guard. In our rush to help we didn’t engage with the creator community and the ecosystem in which their works are made and published. We hear your concerns and we’ve taken action: the Internet Archive has added staff to our Patron Services team and we are responding quickly to the incoming requests to take books out of the National Emergency Library. While we can’t go back in time, we can move forward with more information and insight based on data the National Emergency Library has generated thus far.

The Internet Archive takes reader privacy seriously, so we don’t have specific analytics or logs to share (we took the government to court to ensure we didn’t have to keep them), but we do have some general information that may be of use to authors, publishers and readers about the ways patrons are using the National Emergency Library. We will be sharing more in the coming weeks of this crisis.

Majority of books are borrowed for less than 30 minutes

Even with a preview function where readers can see the first few pages of a book, most people who go through the checkout process look at the book for less than 30 minutes, with no more interactions until it is automatically returned two weeks later. We suspect that fewer than 10% of books borrowed are actually opened again after the first day (but we have more work to do to confirm this). Patrons may be using the checked-out book for fact checking or research, but we suspect a large number of people are browsing the book in a way similar to browsing library shelves.

The total number of books that are checked out and read is about the number of books borrowed from a town library

Trying to compare a physical check-out of a book with a digital check-out is difficult. Assuming that the number of physical books borrowed from a library corresponds to digitally borrowed books that are read after the first day, then the Internet Archive currently lends about as many as a US library that serves a population of about 30,000.

Our usage pattern may be more like a serendipitous walk through a bookstore or the library stacks. In the real world, a patron takes a book off the shelf, flips through to see if it’s of interest, and then either selects the book or puts it back on the shelf. However, in our virtual library, to flip fully through the book you have to borrow it. The large number of books that have no activity beyond the first few minutes of interaction suggest patrons are using our service to browse books.

90% of the books borrowed were published more than 10 years ago, two-thirds were published during the 20th century

The books in the National Emergency Library were published between 1925 and 5 years ago, because books older than that are in the public domain—out of copyright and fully downloadable. Books newer than 5 years are not in the National Emergency Library. Unlike most books in bookstores, the books readers are borrowing skew older, with only 10% from the last 10 years. Two-thirds of these books were published during the 20th century.

And when people find what they need, it solves a problem, such as this subject librarian who found a book published in 1975:

A bit of Fun: Some of the least common subject categories of borrowed books

These subject tags come from library catalog records and other annotations by organizations (such as ISKME’s work on the Universal School Library collection), assigned to aid search and discovery of resources for educators.

We’ll continue to glean and share what we can as this project continues and we hope that the needs that gave rise to the National Emergency Library come to an end soon.