What just happened on archive.org today, as best we know:
Tens of thousands of requests per second for our public domain OCR files were launched from 64 virtual hosts on Amazon’s AWS services. (Even by web standards, tens of thousands of requests per second is a lot.)
This activity brought archive.org down for all users for about an hour.
We are thankful to our engineers who could scramble on a Sunday afternoon on a holiday weekend to work on this.
We got the service back up by blocking those IP addresses.
But, another 64 addresses started the same type of activity a couple of hours later.
We figured out how to block this new set, but again only after about an hour of outage.
—-
How this could have gone better for us:
Those wanting to use our materials in bulk should start slowly, and ramp up.
Also, if you are starting a large project please contact us at info@archive.org, we are here to help.
If you find yourself blocked, please don’t just start again, reach out.
Again, please use the Internet Archive, but don’t bring us down in the process.
While there are plenty of items at the Internet Archive that have no obvious home elsewhere online, there are also cases where we hold a copy of frequently available material but can provide it with much easier distribution and preview, including the ability to download the entire original set of files in one fell swoop.
Such it is with the USC SOUND EFFECTS LIBRARY, a collection of .WAV files taken from rapidly crumbling magnetic tape and presented for reference, enjoyment and even projects.
The world of sound effects is two-fold interesting:
There’s the interesting way we use recorded sound, cut together from various sources and even spliced from organic and generated sources, to provide the audio soundtrack for visual experiences in a way the audience thinks sounds “natural”.
And there’s the actual process of sound effects, of engineers going into the field or into a studio and generating sound after speculative sound, trying to find just the right combination of noise and speech to create just what they might need in the future.
For as long as there has been performance on the radio and in the mediums beyond it, the generation of sound effects, live and recorded, has been a fascinating skill, shared among many different people, and is rightly considered an awards-worthy occupation. While not everyone is fascinated by this sort of work, many people are, and there’s a childlike delight in going through a “sound library” of effects and noises, getting ideas of how they might be used later.
As explained in a blog entry written by Craig Smith, a variety of tapes called the “Red” and “Gold” libraries of recorded sound effects were joined by a third set from a sound company called Sunset Editorial, who worked on hundreds of films over the years.
This collection has now been mirrored at the Internet Archive.
In the USC Sound Effects Library are over 1,000 digitized tapes of sound effects, including not just the sounds themselves but the voices of many different engineers bracketing them with explanations, cajoling and call-outs while they’re being made. We hear not just a dog panting, but an engineer telling the dog it’s doing a good job. Some recordings clearly have a crew sitting around while they’re being made, and they hush with the discipline of professionals who know they can’t simply edit out the noise if they talk over it.
There are machines: Planes, Cars and Weapons. There are explosions, fire and footsteps. There are effects just called SCIFI or MAGIC, where the shared culture of Hollywood’s take on what things “sounded like” makes itself known.
The pleasant stroll of “just playing” the effects in our browser-based player belies the fact that at one time, these were magnetic reels, sliced with razors and joined with tape, used to remix and reconstitute environments of sound for entertainment. The push to digital allows for much more experimentation and mixing without generational loss or huge expenditures of precious time, but in these versions we can hear how much work went into the foundational soundscape of entertainment in the 20th century.
Craig Smith, who made this collection available, goes into great detail in his blog entry about how fragile these tapes had become before being transferred, and how some were lost along the way. Folks unfamiliar with “Sticky Shed Syndrome” and the process of “baking tapes” will be surprised to know how quickly and dramatically tapes can fall apart after a passage of time. With large efforts by a number of people, the amount that was saved is now available at the Archive.
Each item includes extensive metadata, captured as spreadsheets and documents about the assumed sources or credits of the sounds. This metadata is important to bring along with these noises if a patron wants to maintain a local copy.
Speaking of which.
In this collection is a massive compilation of all the data related to the project. It’s located in an item called “Sound Effect Libraries (Red, Gold, Sunset Editorial)”. Patrons whose immediate urge is to grab their own private set of the data to keep “safe” will want to go to this item, either by directly downloading the three .ZIP files inside or by clicking the TORRENT link to download the 20+ gigabytes of files. Depending on your bandwidth, it will take some time to download, but you can be assured that you got “all” the data from this amazing collection. This, in some ways, is the Internet Archive’s greatest strength – direct access to the original files for others to have, instead of adding a layer of processing and change as the presentation mediums of the day require modification for “ease”.
Art historians, critics, curators, humanities scholars and many others rely on the records of artists, galleries, museums, and arts organizations to conduct historical research and to understand and contextualize contemporary artistic practice. Yet, much of the art-related materials that were once published in print form are now available primarily or solely on the web and are thus ephemeral by nature. In response to this challenge, more than 40 art libraries spent the last 3 years developing a collective approach to preservation of web-based art materials at scale.
Supported by the Institute of Museum and Library Services and the National Endowment for the Humanities, the Collaborative ART Archive (CARTA) community has successfully aligned effort across libraries large and small, from Manoa, Hawaii to Toronto, Ontario and back, resulting in the preservation of and access to 800 web-based art resources, organized into 8 collections (art criticism, art fairs and events, art galleries, art history and scholarship, artists websites, arts education, arts organizations, auction houses), totaling nearly 9 TB of data with continued growth. All collections are preserved in perpetuity by the Internet Archive.
Today, CARTA is excited to launch the CARTA portal – providing unified access to CARTA collections.
🎨 CARTA portal 🎨
The CARTA portal includes web archive collections developed jointly by CARTA members, as well as preexisting art-related collections from CARTA institutions, and non-CARTA member collections. CARTA portal development builds on the Internet Archive’s experience creating the COVID-19 Web Archive and Community Webs portal.
CARTA collections are searchable by contributing organization, collection, site, and page text. Advanced search supports more granular exploration by host, results per host, file types, and beginning and end dates.
🔭 CARTA search 🔭
In addition to the CARTA portal, CARTA has worked to promote research use of collections through a series of day-long computational research workshops – Working to Advance Library Support for Web Archive Research – backed by ARCH (Archives Research Compute Hub). A call for applications for the next workshop, held concurrently with the annual Society of American Archivists meeting, is now open.
Moving forward, CARTA aims to grow and diversify its membership in order to increase its collective ability to preserve web-based art materials. If your art library would like to join CARTA, please express interest here.
Welcome to the No Ethics in Big Tech NSA 10th Annual Comedy Night produced by friend of the Internet Archive, Vahid Rezavi.
The evening will feature the comedy of Will Durst, Mean Dave, Chloe McGovern, and Alicia Dattner, accompanied by talented musician Mike Rufo.
But the real stars of the evening are the speakers from No Ethics in Big Tech, including the Electronic Frontier Foundation, the Media Alliance, Veterans for Peace and Common Dreams. These experts will discuss the ethical implications of technology, the latest developments in the tech world, and the importance of a free and independent press in the age of algorithmic news feeds.
Get Tickets to the Virtual Event Here: Saturday, May 20, 2023, 6:00-8:00 pm Pacific
After the declaration of Democracy’s Library at the 2022 Internet Archive Annual Event [video], the U.S. team conducted a four-month landscape analysis to assess the state of the United States’ collective knowledge management.
Over the course of this blog series we’ll discuss our findings, including the various ways in which our federated national infrastructure contributes to the immense complexity which inhibits easy and meaningful access to the public’s information.
But for now, we would like to share our executive summary. This piece is informed by interviews with librarians, archivists, and information professionals; review of various pieces of legislation and government agency reports; and consultation with government representatives at various departments, technologists working on civic-tech and gov-tech applications, and users of government information.
A huge thanks again to all who were interviewed, involved, and are excited about this program.
EXECUTIVE SUMMARY OF THE DEMOCRACY LIBRARY (U.S.) REPORT
Every year, the United States government spends billions of dollars generating data, including reports, research, records, and statistics. Both governments and corporations know that this data is a highly valuable strategic asset. Yet meaningful access to this critical data is effectively kept out of the public’s hands. Though much of it is intended to be publicly accessible, we do not have a publicly accessible central repository where we can search for all government artifacts. We do not have a public library of all government data, documents, research, records, and publications. These artifacts are not easy for everyone to get hold of.
Instead, this data is kept behind paywalls, vended to multinational corporations, guarded by “data cartels,” or scattered inaccessibly among thousands of disjointed agency websites with non-standardized archival systems stewarded by under-resourced librarians and archivists. This data is siloed within agencies, never before linked together. Although we are entitled to this data by law, journalists, activists, democracy technologists, academics, and the public are, by default, deprived of meaningful access. Instead, it’s a pay-to-play system in which many are priced out.
However, if we could reduce the public burden of accessing this knowledge – as the federal government has stated is a priority – then it might be the linchpin to transforming democratic systems and making them more efficient, actionable, and auditable in the future. This work could potentiate a big data renaissance in political science and public administration. It could equip every local journalist with comprehensive, ‘investigative access’ to policy-making across the country. It could even provide key insights that ensure democracy survives, thrives, adapts, and evolves in the 21st century, as so many desperately want it to and yet fear that it may never. To make our democracy more resilient and prepared for the digital age, we need Democracy’s Library.
Democracy’s Library is a 10-year, multi-pronged partnership effort to collect, preserve, and link our democracy’s data in a centralized, queryable repository. This repository of data will be sourced from all levels of the U.S. government, for the purpose of informing innovation, enabling transparency, advancing new fields like mass political informatics, and, overall, digitizing our democracy. Access to this data is a necessary substrate for that innovation, and to propel our antiquated system into a lightning-fast future, we need to overcome challenges from the artifact level to the systems level.
Fortunately, the Internet Archive is perfectly primed to take on these challenges comprehensively alongside our partners (like the Filecoin Foundation) through this new initiative, backed by a groundswell of legislative and political support. The time is right, the network is primed, and most of the tools are already built and being deployed. So, the only thing that remains is for funding partners to step up to scale the effort to revolutionize the U.S. government once again.
To librarians and archivists: please know we are still collecting feedback from government information professionals. So if you are a librarian or archivist, we would love to hear about your experience. If you’re interested in sharing, please fill out this survey.
Guest Post by Daniel Van Strien, Machine Learning Librarian, Hugging Face
Machine learning has many potential applications for working with GLAM (galleries, libraries, archives, museums) collections, though it is not always clear how to get started. This post outlines some of the possible ways in which open source machine learning tools from the Hugging Face ecosystem can be used to explore web archive collections made available via the Internet Archive’s ARCH (Archives Research Compute Hub). ARCH aims to make computational work with web archives more accessible by streamlining web archive data access, visualization, analysis, and sharing. Hugging Face is focused on the democratization of good machine learning. A key component of this is not only making models available but also doing extensive work around the ethical use of machine learning.
Below, I work with the Collaborative Art Archive (CARTA) collection focused on artist websites. This post is accompanied by an ARCH Image Dataset Explorer Demo. The goal of this post is to show how using a specific set of open source machine learning models can help you explore a large dataset through image search, image classification, and model training.
Later this year, Internet Archive and Hugging Face will organize a hands-on hackathon focused on using open source machine learning tools with web archives. Please let us know if you are interested in participating by filling out this form.
Choosing machine learning models
The Hugging Face Hub is a central repository which provides access to open source machine learning models, datasets and demos. Currently, the Hugging Face Hub has over 150,000 openly available machine learning models covering a broad range of machine learning tasks.
Rather than relying on a single model that may not be comprehensive enough, we’ll select a series of models that suit our particular needs.
A screenshot of the Hugging Face Hub task navigator presenting a way of filtering machine learning models hosted on the hub by the tasks they intend to solve. Example tasks are Image Classification, Token Classification and Image-to-Text.
Working with image data
ARCH currently provides access to 16 different “research ready” datasets generated from web archive collections. These include but are not limited to datasets containing all extracted text from the web pages in a collection, link graphs (showing how websites link to other websites), and named entities (for example, mentions of people and places). One of the datasets is made available as a CSV file, containing information about the images from webpages in the collection, including when the image was collected, when the live image was last modified, a URL for the image, and a filename.
Screenshot of the ARCH interface showing a preview for a dataset. This preview includes a download link and an “Open in Colab” button.
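If you want to poke at this CSV locally before building anything, a minimal sketch with pandas might look like the following. The filename and column names here are illustrative assumptions, not the exact ARCH schema, so check the header of the file you actually download.

```python
# A rough sketch of loading the ARCH image-information CSV with pandas.
# "arch_image_dataset.csv" and the "url" column are assumed names for
# illustration only; inspect the real file from ARCH before relying on them.
import pandas as pd

df = pd.read_csv("arch_image_dataset.csv")

print(df.shape)             # how many rows (images) the dataset describes
print(df.columns.tolist())  # check the actual column names first

# Example: count images per file extension, assuming a "url" column exists.
extensions = df["url"].str.rsplit(".", n=1).str[-1].str.lower()
print(extensions.value_counts().head(10))
```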
One of the challenges we face with a collection like this is being able to work at a larger scale to understand what is contained within it – looking through thousands of images manually is going to be challenging. We address that challenge by making use of tools that help us better understand a collection at scale.
Building a user interface
Gradio is an open source library supported by Hugging Face that helps create user interfaces that allow other people to interact with various aspects of a machine learning system, including the datasets and models. I used Gradio in combination with Spaces to make an application publicly available within minutes, without having to set up and manage a server or hosting. See the docs for more information on using Spaces. Below, I show examples of using Gradio as an interface for applying machine learning tools to ARCH generated data.
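To give a concrete sense of what such an interface involves, here is a minimal, hypothetical Gradio sketch (not the actual demo code) that displays a random sample of images from a folder of already-downloaded files. The folder name and glob pattern are assumptions for illustration.

```python
# Minimal Gradio sketch: a button that shows a random grid of local images.
# This is an illustrative toy, not the ARCH Image Dataset Explorer itself.
import random
from pathlib import Path

import gradio as gr

IMAGE_DIR = Path("images")  # assumed folder of images downloaded beforehand

def random_gallery(n=9):
    """Return up to n randomly chosen image paths for display in a gallery."""
    paths = list(IMAGE_DIR.glob("*.jpg"))
    return [str(p) for p in random.sample(paths, min(n, len(paths)))]

with gr.Blocks() as demo:
    gr.Markdown("## Random sample of images from the collection")
    button = gr.Button("Show another random sample")
    gallery = gr.Gallery()
    button.click(fn=random_gallery, outputs=gallery)

demo.launch()  # on Hugging Face Spaces this would typically live in app.py
```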
Exploring images
I use the Gradio tab for random images to begin assessing images in the dataset. Looking at a randomized grid of images gives a better idea of what kind of images are in the dataset. This begins to give us a sense of what is represented in the collection (e.g., art, objects, people, etc.).
Screenshot of the random image gallery showing a grid of images from the dataset.
Introducing image search models
Looking at snapshots of the collection gives us a starting point for exploring what kinds of images are included in the collection. We can augment our approach by implementing image search.
There are various approaches we could take which would allow us to search our images. If we have the text surrounding an image, we could use this as a proxy for what the image might contain. For example, we might assume that if the text next to an image contains the words “a picture of my dog snowy”, then the image contains a picture of a dog. This approach has limitations – text might be missing, unrelated or only capture a small part of what is in an image. The text “a picture of my dog snowy” doesn’t tell us what kind of dog the image contains or if other things are included in that photo.
Making use of an embedding model offers another path forward. Embeddings essentially take an input, i.e. text or an image, and return a bunch of numbers. For example, the text prompt ‘an image of a dog’ would be passed through an embedding model, which ‘translates’ the text into a vector of numbers. What is special about these numbers is that they should capture some semantic information about the input; the embedding for a picture of a dog should somehow capture the fact that there is a dog in the image. Since these embeddings consist of numbers, we can also compare one embedding to another to see how close they are to each other. We expect the embeddings for similar images to be closer to each other and the embeddings for images which are less similar to be farther apart. Without getting too much into the weeds of how this works, it’s worth mentioning that these embeddings don’t just represent one aspect of an image, i.e. the main object it contains, but also other components, such as its aesthetic style. You can find a longer explanation of how this works in this post.
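As a small illustrative sketch of the idea, here is how a text prompt and an image can each be turned into an embedding and compared with cosine similarity. It assumes the sentence-transformers library and a locally saved image file, and uses the CLIP variant mentioned later in this post.

```python
# Sketch: embedding a text prompt and an image, then comparing them.
# Assumes sentence-transformers is installed and "dog.jpg" exists locally.
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-16")

text_embedding = model.encode("an image of a dog")      # an array of numbers
image_embedding = model.encode(Image.open("dog.jpg"))   # an array of numbers

# Cosine similarity: values closer to 1.0 mean the embeddings are more alike.
print(util.cos_sim(text_embedding, image_embedding))
```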
Finding a suitable image search model on the Hugging Face Hub
To create an image search system for the dataset, we need a model to create embeddings. Fortunately, the Hugging Face Hub makes it easy to find models for this.
The Hub has various models that support building an image search system.
Hugging Face Hub showing a list of hosted models.
All models will have various benefits and tradeoffs. For example, some models will be much larger. This can make a model more accurate but also make it harder to run on standard computer hardware.
Hugging Face Hub provides an ‘inference widget’, which allows interactive exploration of a model to see what sort of output it provides. This can be very useful for quickly understanding whether a model will be helpful or not.
A screenshot of a model widget showing a picture of a dog and a cat playing the guitar. The widget assigns the label “playing music” the highest confidence.
For our use case, we need a model which allows us to embed both our input text, for example, “an image of a dog,” and compare that to embeddings for all the images in our dataset to see which are the closest matches. We use a variant of the CLIP model hosted on Hugging Face Hub: clip-ViT-B-16. This allows us to turn both our text and images into embeddings and return the images which most closely match our text prompt.
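A rough sketch of how such a search could be wired up, assuming the images have already been downloaded to a local folder (the real demo handles more detail, such as caching the embeddings), might look like this:

```python
# Sketch: text-to-image search over a local folder using clip-ViT-B-16.
# The "images" folder and *.jpg pattern are assumptions for illustration.
from pathlib import Path

from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-16")

image_paths = sorted(Path("images").glob("*.jpg"))
image_embeddings = model.encode(
    [Image.open(p) for p in image_paths], convert_to_tensor=True
)

def search(query, k=5):
    """Return the k image paths whose embeddings best match the text query."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, image_embeddings, top_k=k)[0]
    return [str(image_paths[hit["corpus_id"]]) for hit in hits]

print(search("landscape photograph"))
```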
A screenshot of the search tab showing a search for “landscape photograph” in a text box and a grid of images resulting from the search. This includes two images containing trees and images containing the sky and clouds.
While the search implementation isn’t perfect, it does give us an additional entry point into an extensive collection of data which is difficult to explore manually. It is possible to extend this interface to accommodate an image similarity feature. This could be useful for identifying a particular artist’s work in a broader collection.
Image classification
While image search helps us find images, it doesn’t help us as much if we want to describe all the images in our collection. For this, we’ll need a slightly different type of machine learning task – image classification. An image classification model will put our images into categories drawn from a list of possible labels.
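As a hedged sketch of what this looks like in code, the transformers pipeline API can run an off-the-shelf classifier on a single image. The model named below is just one example of the many image classification models hosted on the Hub, not a recommendation specific to this collection.

```python
# Sketch: running a pre-trained image classification model from the Hub.
# "google/vit-base-patch16-224" is an example choice; any Hub model tagged
# "image-classification" could be swapped in. "some_image.jpg" is assumed.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

predictions = classifier("some_image.jpg")  # a local path or URL to an image
for prediction in predictions:
    print(f'{prediction["label"]}: {prediction["score"]:.3f}')
```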
We can find image classification models on the Hugging Face Hub. The “Image Classification Model Tester” tab in the demo Gradio application allows us to test most of the 3,000+ image classification models hosted on the Hub against our dataset.
This can give us a sense of a few different things:
How well do the labels for a model match our data? A model for classifying dog breeds probably won’t help us much!
It gives us a quick way of inspecting possible errors a model might make with our data.
It prompts us to think about what categories might make sense for our images.
A screenshot of the image classification tab in the Gradio app which shows a bar chart with the most frequently predicted labels for images assigned by a computer vision model.
We may find a model that already does a good job working with our dataset – if we don’t, we may have to look at training a model.
Training your own computer vision model
The final tab of our Gradio demo allows you to export the image dataset in a format that can be loaded by Label Studio, an open-source tool for annotating data in preparation for machine learning tasks. In Label Studio, we can define the labels we would like to apply to our dataset. For example, we might decide we’re interested in pulling out particular types of images from this collection. We can then use Label Studio to create an annotated version of our dataset, going through the images and assigning the correct label to each. Although this process can take some time, it can be a useful way of further exploring a dataset and making sure your labels make sense.
With a labeled dataset, we need some way of training a model. For this, we can use AutoTrain. This tool allows you to train machine learning models without writing any code, creating a model trained on our dataset that uses the labels we are interested in. It’s beyond the scope of this post to cover all of AutoTrain’s features, but this post provides a useful overview of how it works.
Next Steps
As mentioned in the introduction, you can explore the ARCH Image Dataset Explorer Demo yourself. If you know a bit of Python, you could also duplicate the Space and adapt or change the current functionality it supports for exploring the dataset.
Internet Archive and Hugging Face plan to organize a hands-on hackathon later this year focused on using open source machine learning tools from the Hugging Face ecosystem to work with web archives. The event will include building interfaces for web archive datasets, collaborative annotation, and training machine learning models. Please let us know if you are interested in participating by filling out this form.
This Spring, the Internet Archive hosted two in-person workshops aimed at helping to advance library support for web archive research: Digital Scholarship & the Web and Art Resources on the Web. These one-day events were held at the Association of College & Research Libraries (ACRL) conference in Pittsburgh and the Art Libraries Society of North America (ARLIS) conference in Mexico City. The workshops brought together librarians, archivists, program officers, graduate students, and disciplinary researchers for full days of learning, discussion, and hands-on experience with web archive creation and computational analysis. The workshops were developed in collaboration with the New York Art Resources Consortium (NYARC) and are part of an ongoing series of workshops hosted by the Internet Archive through Summer 2023.
Internet Archive Deputy Director of Archiving & Data Services Thomas Padilla discussing the potential of web archives as primary sources for computational research at Art Resources on the Web in Mexico City.
Designed in direct response to library community interest in supporting additional uses of web archive collections, the workshops had the following objectives: introduce participants to web archives as primary sources in the context of computational research questions; develop familiarity with research use cases that make use of web archives; and provide an opportunity to acquire hands-on experience creating web archive collections and computationally analyzing them using ARCH (Archives Research Compute Hub) – a new service set to publicly launch in June 2023.
Internet Archive Community Programs Manager Lori Donovan walking workshop participants through a demonstration of Palladio using a dataset generated with ARCH at Digital Scholarship & the Web in Pittsburgh, PA.
In support of those objectives, Internet Archive staff walked participants through web archiving workflows, introduced a diverse set of web archiving tools and technologies, and offered hands-on experience building web archives. Participants were then introduced to Archives Research Compute Hub (ARCH). ARCH supports computational research with web archive collections at scale – e.g., text and data mining, data science, digital scholarship, machine learning, and more. ARCH does this by streamlining generation and access to more than a dozen research ready web archive datasets, in-browser visualization, dataset analysis, and open dataset publication. Participants further explored data generated with ARCH in Palladio, Voyant, and RAWGraphs.
Network visualization of the Occupy Web Archive collection, created using Palladio based on a Domain Graph Dataset generated by ARCH.
Gallery visualization of the CARTA Art Galleries collection, created using Palladio based on an Image Graph Dataset generated by ARCH.
At the close of the workshops, participants were eager to discuss web archive research ethics, research use cases, and a diverse set of approaches to scaling library support for researchers interested in working with web archive collections – truly vibrant discussions – and perhaps the beginnings of a community of interest! We plan to host future workshops focused on computational research with web archives – please keep an eye on our Event Calendar.
If you’ve ever been to a typical tech event, full of neon-lit booths and cavernous main stage talks, you know it’s just about the last place you’d want to bring your kids.
So why would we make children such an integral part of DWeb Camp? It harks back to some of our core principles:
At DWeb Camp we don’t think you should have to choose between spending time with your family and growing professionally. We realize that building a better online world takes many kinds of people, imaginations, and skills.
So at DWeb Camp, it makes sense to us to have children front and center. They are the reason we are building a better web in the first place.
So please bring your entire family to DWeb Camp 2023. We’ve created some great family packages to make that more affordable. At the center of our Family program is its curator, Andi Wong. She’s an arts educator, an ocean advocate, a storyteller, and a historian. Andi weaves all these elements together to create a magical experience for the children and families at DWeb Camp.
“I like to get to know the kids and know what THEY are interested in,” explains Andi. “The goal is to get to know each other well enough so they can form a community. We try to introduce them to things in nature so they’ll understand there are all these invisible forces and flows they may not have thought about.”
What can your family expect at DWeb Camp?
Educator-led program just for kids
Indigenous storytellers exploring creation myths
Juggling lessons with flow artists
Daily lessons in animal kung fu from a wonderful Sifu
Exploring soundscapes – rain, ocean, river, forest – with the Del Sol Quartet
Open Play with clay, cardboard, string, paint — materials you can recreate at home
Archery, rock climbing, hiking, and a swimming hole
Scavenger hunts to help understand decentralized technologies
Evening talent show, game night, campfires with s’mores
Stargazing with an astronomer and a concert under the night sky
Sunset movies where you can drop off the kids
Giant puppet-making & a puppet parade at the end of Camp
Andi says there’s room for everyone from “babes in arms” to tweens and teens, plus their parents, who also form a close cohort. She weaves a rich curriculum drawn from the skills of the campers themselves. So artists, dancers, storytellers, coders: what do you have to share with our youngest campers?
At Camp Navarro, accommodations range from private cabins to glamping tents with comfy mattresses and linens for 3-4 people. Or you can bring your own tent or RV. The showers are hot and the flush toilets are clean. So make DWeb Camp a family affair this year, and discover the flows of nature, technology, community, and your own perfect family flow.
Chatbots, like OpenAI’s ChatGPT, Google’s Bard and others, have a hallucination problem (their term, not ours). They can make something up and state it authoritatively. It is a real problem. But there can be an old-fashioned answer, as a parent might say: “Look it up!”
Imagine for a moment the Internet Archive, working with responsible AI companies and research projects, could automate “Looking it Up” in a vast library to make those services more dependable, reliable, and trustworthy. How?
The Internet Archive and AI companies could offer an anti-hallucination service ‘add-on’ to the chatbots that could cite supporting evidence and counter claims to chatbot assertions by leveraging the library collections at the Internet Archive (most of which were published before generative AI).
By citing evidence for and against assertions based on papers, books, newspapers, magazines, TV, radio, and government documents, we can build a stronger, more reliable knowledge infrastructure for a generation that turns to their screens for answers. Although many of these generative AI companies are already linking, or intend to link, their models to the internet, what the Internet Archive can uniquely offer is our vast collection of “historical internet” content. We have been archiving the web for 27 years, which means we have decades of human-generated knowledge. This might become invaluable in an age when we might see a drastic increase in AI-generated content. So an Internet Archive add-on is not just a matter of leveraging knowledge available on the internet, but also knowledge available on the history of the internet.
Is this possible? We think yes, because we are already doing something like this for Wikipedia, by hand and with special-purpose robots like Internet Archive Bot. Wikipedia communities, and these bots, have fixed over 17 million broken links and have linked one million assertions to specific pages in over 250,000 books. With the help of the AI companies, we believe we can make this an automated process that could respond to the customized essays their services produce. Much of the same technology used for the chatbots can be used to mine assertions in the literature and find when, and in what context, those assertions were made.
The result would be a more dependable World Wide Web, one where disinformation and propaganda are easier to challenge, and therefore weaken.
Yes, four major publishers are suing to destroy a significant part of the Internet Archive’s book corpus, but we are appealing this ruling. We believe that one role of a research library like the Internet Archive is to own collections that can be used in new ways by researchers and the general public to understand their world.
What is required? Common purpose, partners, and money. We see a role for a Public AI Research laboratory that can mine vast collections without rights issues arising. While the collections are already significant, we see collecting, digitizing, and making available the publications of democracies around the world as a way to expand the corpus greatly.
We see roles for scientists, researchers, humanists, ethicists, engineers, governments, and philanthropists, working together to build a better Internet.
If you would like to be involved, please contact Mark Graham at mark@archive.org.
Internet Archive’s Digital Library of Amateur Radio & Communications continues to expand its collection of online resources about ham radio, shortwave, amateur television, and related communications. The library has grown to more than 75,000 items, with new resources including newsletters, podcasts, and conference presentations.
DLARC has recently added hundreds of presentations recorded by RATPAC, the Radio Amateur Training Planning and Activities Committee, and dozens of talks given at the MicroHams Digital Conference.
Internationally known radio host Glenn Hauser has allowed decades of his radio content to be archived in the DLARC library, including 1,200 episodes of World of Radio, which explores communications from around the world, especially shortwave radio; Informe DX and Mundo Radial, Spanish language translations of World of Radio; Continent of Media, a program about media around the American continent; and Hauserlogs, shortwave listening diaries.
International Radio Report, a program about radio in Montreal, Canada and around the world, has also been archived in the library, with episodes going back to 2000. Many of these episodes, spanning May 2000 through March 2005, had not been available online for more than a decade; archiving them restores access to important contemporary reporting.
DLARC continues to expand its collection of ham radio e-mail and Usenet conversations from the early days of the Internet, with the addition of nearly 3,500 QRP-L Digest mailings spanning 1993 through 2004. QRP-L was an early Internet e-mail list for discussion of the design, construction, and use of low-power radio equipment.
The Digital Library of Amateur Radio & Communications is funded by a grant from Amateur Radio Digital Communications (ARDC) to create a free digital library for the radio community, researchers, educators, and students. DLARC invites radio clubs and individuals to submit material in any format. To contribute or ask questions about the project, contact: