Digital opportunity for the academic scholarly record

[MIT Libraries is holding a workshop on Grand Challenges for the scholarly record.  They asked participants for a problem/solution statement.  This is mine. -brewster kahle]

The problem with the academic scholarly record now:

University library budgets are spent on closed rather than open: we invest dollars in closed, subscription services (Elsevier, JSTOR, Hathi) rather than ones open to all users (PLOS, Arxiv, Internet Archive, eBird), and for a reason. There is only so much money, our community demands access to the closed services, and the open ones are there whether we pay for them or not.

We want open access AND digital curation and preservation, but we have no means to spend cooperatively.

University libraries funded the building of Elsevier / JSTOR / HathiTrust: closed, subscription services.

We need to invest most University Library acquisition dollars in open: PLOS, Arxiv, Wikipedia, Internet Archive, eBird.

We have solved it when:

Anyone anywhere can get ALL information available to an MIT student, for free.

Everyone everywhere has the opportunity to contribute to the scholarly record as if they were MIT faculty, for free.

What should we do now?

Analog -> Digital conversion of all published scholarly work must be completed soon, and made completely open and available in bulk.

Curation and Digital Preservation of born-digital research products: papers/websites/research data.

“Multi-homing” digital research products (papers, websites, research data) via peer-to-peer backends.
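
One way to read “multi-homing” concretely: each paper, website, or dataset is identified by a content hash and kept at several independent homes, so a reader can fetch and verify a copy from whichever home is still reachable. Below is a minimal sketch of that idea; the mirror URLs, hash value, and function names are hypothetical placeholders, not any existing service's API.

```python
# Minimal sketch of multi-homing: one artifact, named by its content hash,
# stored at several independent hosts. Any surviving copy can be verified
# against the published hash. All URLs and names below are hypothetical.

import hashlib
import urllib.request


def sha256_of(data: bytes) -> str:
    """Name the artifact by its content hash rather than by its location."""
    return hashlib.sha256(data).hexdigest()


def fetch_verified(expected_sha256: str, mirrors: list[str]) -> bytes:
    """Try each home in turn; accept only a copy whose hash matches."""
    for url in mirrors:
        try:
            data = urllib.request.urlopen(url, timeout=30).read()
        except OSError:
            continue  # this mirror is unreachable; another copy may still exist
        if sha256_of(data) == expected_sha256:
            return data
    raise LookupError("no mirror returned an intact copy")


# Hypothetical example: one paper with three independent homes.
paper_sha256 = "0123abcd..."  # would be published alongside the paper's citation
mirrors = [
    "https://library.example.edu/papers/1234.pdf",   # university repository
    "https://mirror.example.org/papers/1234.pdf",    # second institution
    "https://peer.example.net/papers/1234.pdf",      # peer-to-peer gateway
]
# data = fetch_verified(paper_sha256, mirrors)
```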

Who can best implement?

Vision and tech ability: Internet Archive, PLOS, Wikipedia, arxiv.

Funding now is coming from researchers, individuals, rich people.

Funding should come from University Library acquisition budgets.

Why might MIT lead?

OpenCourseWare was bold.  MIT might invest in opening the scholarly record.

How might MIT do this?

Be bold.

Spend differently.

Lead.
