Over the weekend, my wife’s mom, Emily, set aside this awesome article from the New York Review of Books. Go Emily!
Written by Anthony Grafton and Jeffrey Hamburger, it is titled “Save the Warburg Library!”. In it, they explain the financial crisis the Warburg is in (hence the title). But more importantly, they describe what makes this art history library so unique: the cataloging scheme. The stacks are open, and the juxtaposition of books “will bring the reader not only to the books he or she is looking for, but also to their unexpected ‘good neighbours’.”
From the Warburg Institute Library’s web site:
The 350,000 or so volumes are classified in four sections: social and political history (fourth floor); religion, history of science and philosophy (third and fourth floors); literature, books, libraries and education (second floor and basement); history of art (first floor, with classical art and archaeology in the basement). There are c. 2,500 runs of periodicals, about half of them current (mobile stacks in the basement). Readers have free access to the Library Holdings.
Needless to say, this is very different from how the Library of Congress organizes things. But who’s to say that the LC system is better? Different researchers have different needs, which raises the question: Can we make Harvard’s collection “look” like the Warburg’s?
I’ve been looking into visualizations of book collections, so what if we rendered a shelf of virtual books based on the Warburg logic? Or on Princeton University’s Richardson system, which shelves all the books by a given author together?
What are the benefits of our library singing karaoke?
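As a minimal sketch of the idea: if each record carried call numbers under more than one scheme, the same virtual shelf could be re-sorted on demand. The books below are real titles from the Warburg orbit, but the call numbers and the Richardson-style author keys are invented for illustration.

```python
# "Karaoke" shelving: one set of books, re-ordered under different
# classification schemes. Call numbers and author keys are hypothetical.
books = [
    {"title": "Saturn and Melancholy", "lc": "BF175", "richardson": "Klibansky"},
    {"title": "The Art of Memory",     "lc": "BF371", "richardson": "Yates"},
    {"title": "Pagan Mysteries",       "lc": "N5970", "richardson": "Wind"},
]

def shelf(scheme: str) -> list:
    """Return titles in shelf order under the chosen scheme."""
    return [b["title"] for b in sorted(books, key=lambda b: b[scheme])]

print(shelf("lc"))          # shelved by (invented) LC call number
print(shelf("richardson"))  # shelved by author, Richardson-style
```

The point is that "Warburg-ness" would live in the sort key, not in the data: adding a fourth scheme is just another field per record.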
The BBC has a brilliant new site, Dimensions, another wonder of design by BERG. It is an effort to communicate the scale of real-life events and disasters in personalized and meaningful ways. From my POV, it’s the best Google Maps mashup out there. Again, visit:
http://howbigreally.com/ (and a great name, too)
It reminds me of Sherwin-Williams’ logo, which tells a truth:
I think communicating the profound scale of collections, and the human hours that go into creating them, could be useful. Perhaps it would help communicate the respect libraries are due.
How many linear miles is Harvard’s Collection?
How many human-years have gone into cataloging all of its records?
I’d like to wrangle these.
Sorry, these staff meeting notes are a week late. (I’m still not sure there’s value in posting them, but that’s not why they were delayed.)
We’re going to look at the National Library of Australia’s Trove system for metadata standards.
The new Web server is up.
We are working on getting data from Cognos more directly and usefully.
We are continuing to look for a contract developer to help with some legacy projects that are important but are distracting us from tasks more in line with LIL’s mission.
We are working with the Harvard Coop to get a list of books ordered for courses.
We are pretty consistently working with Harvard information systems that were not designed for sharing data across departments. People are being very helpful. But we should be documenting this process because other schools are facing the same issues. We’re going to set up a meeting with one of the main info shops in the school to see how we can collaborate on this.
We are beginning to investigate doing a book locator that shows books in physical maps of a library. This requires numbering the stacks physically. We talked about which library to start with.
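Once the stacks are physically numbered, the core of a book locator is a lookup from call number to stack. Here is a minimal sketch under invented data; note that real LC call numbers do not sort purely lexicographically (e.g. “B9” vs. “B10”), so a production version would need a proper call-number parser.

```python
import bisect

# Hypothetical mapping: the first call number shelved on each stack.
# Lists must stay sorted and parallel. Simplification: we compare call
# numbers as plain strings, which real LC sorting does not quite allow.
STACK_STARTS = ["A1", "DS100", "PS3500", "QA76"]
STACK_IDS = [1, 2, 3, 4]

def locate(call_number: str) -> int:
    """Return the stack number whose range contains this call number."""
    i = bisect.bisect_right(STACK_STARTS, call_number) - 1
    return STACK_IDS[max(i, 0)]

print(locate("QA76.73"))  # stack 4
```

With stack numbers in hand, drawing the result on a floor map is just a second lookup from stack number to (x, y) coordinates.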
StackView is looking at how to visualize multiple subject neighborhoods.
We are continuing to investigate real time notification systems.
We reported on our meeting with Jim Neal, University Librarian of Columbia University, in which we talked about ShelfLife, LibraryCloud, and the possibilities of collaboration.
We’ve spoken a lot about books friending books, people friending books, books updating their status, etc. We’ve even had library circulation events fire a tweet.
Here’s an interesting version of that idea, but for trees:
A good thought experiment: if we swapped in “book” for “tree,” what would all these fields look like?
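The circulation-events-fire-a-tweet idea above reduces to a small event handler. This is a sketch, not our actual implementation; `post_status` is a stand-in for whatever notification channel (Twitter, RSS, etc.) a library wires in.

```python
# Minimal sketch of "books updating their status": a circulation event
# produces a status message and hands it to a notification channel.

def post_status(message: str) -> None:
    # Stand-in: a real system would call a Twitter/RSS/notification API here.
    print(message)

def on_checkout(title: str, branch: str) -> str:
    """Handle a checkout event by announcing it; returns the message."""
    message = f"'{title}' just checked out from {branch}."
    post_status(message)
    return message

on_checkout("Moby-Dick", "Widener")
```

The tree-fields thought experiment is then a question of what else belongs in the event payload: condition, age, last-seen location, and so on.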
XPERT aggregates e-learning materials and makes them available publicly:
The XPERT (Xerte Public E-learning ReposiTory) project is a JISC-funded rapid innovation project (summer 2009) to explore the potential of delivering and supporting a distributed repository of e-learning resources created and seamlessly published through the open source e-learning development tool called Xerte Online Toolkits. The aim of XPERT is to progress the vision of a distributed architecture of e-learning resources for sharing and re-use.
Learners and educators can use XPERT to search a growing database of open learning resources suitable for students at all levels of study in a wide range of different subjects.
We spent almost the entire status meeting going through the list of projects for which we are planning on applying for Harvard Library Lab grants. This is the first time the Library Lab (note: The larger Library Lab, not our group; our group is changing its name) has awarded grants, so we are all feeling our way.
The Oxford English Dictionary has announced that it will not print new editions on paper. Instead, there will be Web access and mobile apps.
According to the article in the Telegraph, “A team of 80 lexicographers has been working on the third edition of the OED – known as OED3 – for the past 21 years.”
The trajectory toward digitization has been long for the OED. In the 1990s, the OED’s desire to produce a digital version (remember books on CD?) stimulated search engine innovation. To search the OED intelligently, the search engine would have to understand the structure of entries, so that it could distinguish the use of a word as that which is being defined, the use of it within a definition, the use of it within an illustrative quote, etc. SGML was perfect for this type of structure, and the Open Text SGML search engine came out of that research. On the other hand, initially, the OED didn’t want to attribute the origins of the word “blog” to Peter Merholz because he coined it in his own blog, and the OED would only accept print attributions. (See here, too.) It got over this preference for printed sources, however, and gave Peter proper credit.
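The structure-aware search described above is easy to illustrate. The markup below is a hypothetical simplification, not the OED’s actual SGML DTD, but it shows the principle: once an entry’s parts are tagged, a search can tell a headword apart from the same word inside a quotation.

```python
# Why structured markup mattered for dictionary search: with entry parts
# tagged, we can report *where* a word appears, not just *whether* it does.
# The entry markup here is invented for illustration.
import xml.etree.ElementTree as ET

entry = """<entry>
  <headword>blog</headword>
  <definition>A frequently updated personal web site.</definition>
  <quotation>Peter Merholz coined the word blog in 1999.</quotation>
</entry>"""

root = ET.fromstring(entry)

def roles_of(word: str) -> list:
    """Return the structural elements in which the word appears."""
    return [el.tag for el in root if word in (el.text or "")]

print(roles_of("blog"))  # ['headword', 'quotation']
```

A plain full-text search would only say the word occurs; the structured search says whether it occurs as the thing being defined, inside a definition, or inside a quote.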
This morning we had a very productive conference call (yes, there are such things, you cynics!) with Steve Midgley about the federal Learning Registry.
The Learning Registry is a new project coming out of the Dept. of Education and the Defense Department, intended to provide easier, smarter access to federal content and beyond. The LR will list sources and provide ways to subscribe to metadata about the content at those sources. (There’s more in this blog post.)
We’d like to be involved in some way because (i) the LR might provide a transport/notification/subscription mechanism for those who want to use the metadata that Library Lab apps will be making available (even though the LR is apparently designed only to give access to metadata about federal content); (ii) the LR may enable our apps to subscribe to metadata from many other sources; (iii) we’d like to help the LR accommodate the needs and gifts of research libraries.
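The subscribe-to-metadata mechanism described above is essentially publish/subscribe. Here is a minimal sketch of the pattern under invented names; it is not the Learning Registry’s actual protocol, which was still being defined at the time.

```python
# Minimal publish/subscribe sketch of the subscribe-to-metadata pattern:
# sources publish metadata records; apps register callbacks per source.
# All names and records here are illustrative.
from collections import defaultdict

class MetadataRegistry:
    def __init__(self):
        self.subscribers = defaultdict(list)  # source name -> callbacks

    def subscribe(self, source, callback):
        """Register a callback to receive records published by a source."""
        self.subscribers[source].append(callback)

    def publish(self, source, record):
        """Deliver a metadata record to every subscriber of the source."""
        for callback in self.subscribers[source]:
            callback(record)

received = []
registry = MetadataRegistry()
registry.subscribe("ed.gov", received.append)
registry.publish("ed.gov", {"title": "Algebra I", "format": "video"})
print(received)  # [{'title': 'Algebra I', 'format': 'video'}]
```

In this framing, point (i) above is the `publish` side for Library Lab apps, and point (ii) is the `subscribe` side.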
So, we’ll be talking more with Steve and the Learning Registry.
A study by Gunther Eysenbach in PLoS Biology suggests that open access articles “are more immediately recognized and cited by peers than non-OA articles published in the same journal.” Therefore, he concludes, “OA is likely to benefit science by accelerating dissemination and uptake of research findings.”
The study consisted of comparing citations among OA and non-OA articles published June 8, 2004 – December 20, 2004, in PNAS: Proceedings of the National Academy of Sciences. (Thanks to Don Marti for the link.)