OpenURL resolver available for testing
BHL has released a beta version of its OpenURL Resolver API for testing. A full description of the service is available at http://www.biodiversitylibrary.org/openurlhelp.aspx.
Any repository containing citations to biodiversity literature can use this API to determine whether a given book, volume, article, and/or page is available online through BHL. The service supports both OpenURL 0.1 and OpenURL 1.0 query formats, and can return its response in JSON, XML, or HTML format, providing flexibility for data exchange.
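As a rough illustration, a lookup might look like the sketch below. Note that the exact endpoint path and the `format` parameter here are assumptions for the sake of the example; the documentation page linked above is the authoritative reference.

```typescript
// Minimal sketch of an OpenURL 0.1 lookup against the BHL resolver.
// The endpoint path and the `format` parameter are assumptions; see
// http://www.biodiversitylibrary.org/openurlhelp.aspx for the real details.
const endpoint = "http://www.biodiversitylibrary.org/openurl";

const params = new URLSearchParams({
  genre: "book",                 // OpenURL 0.1 genre
  title: "The genera of fishes", // illustrative title, not a tested example
  date: "1923",
  format: "json",                // assumed: request a JSON response
});

// Node 18+ or browser fetch; the response shape is described in the docs above.
fetch(`${endpoint}?${params}`)
  .then((res) => res.json())
  .then((data) => console.log(data));
```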
One issue still under consideration is whether to assign an API key to each user of the service, as Google Maps and many other data providers do. Advantages of assigning a key include having a way to contact users about updates and service availability and, yes, to track usage. Disadvantages include a perceived restriction on access to the service and concerns about privacy and data tracking, among others. In truth, we are less concerned with restricting users and more interested in finding a way to monitor use of the service and to communicate with its users.
We started a discussion of the pros and cons of the API key approach on Twitter, but the character limit made responses laughable. We're interested in your views, whether as potential consumers of the BHL OpenURL API or from your experience managing similar services for your own projects. We're also especially interested in viewpoints from those with experience deploying OpenURL, which we've come to learn is a fairly niche group. Please leave your comments below so that we can continue the dialogue using more than 140 characters!
David – You can now wrap the JSON response in a callback function. Updated documentation at http://www.biodiversitylibrary.org/openurlhelp.aspx
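For example, a front-end consumer could then do something like the untested sketch below; the `callback` parameter name and the query string are assumptions on my part, so check the documentation above for the actual details.

```typescript
// Minimal JSONP sketch for consuming the resolver from a browser with no
// server-side code. The `callback` parameter name and the query string are
// assumptions; check the documentation linked above.
function bhlJsonp(query: string, cb: (data: unknown) => void): void {
  const name = `bhl_cb_${Date.now()}`;  // unique global callback name
  (window as any)[name] = (data: unknown) => {
    cb(data);
    delete (window as any)[name];       // clean up the global afterwards
  };
  const script = document.createElement("script");
  script.src =
    `http://www.biodiversitylibrary.org/openurl?${query}` +
    `&format=json&callback=${name}`;
  document.head.appendChild(script);
}

// Usage: log whatever the resolver returns for an illustrative book query.
bhlJsonp("genre=book&title=The+genera+of+fishes&date=1923", console.log);
```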
David – Never too late in custom software development! Good suggestion – I'll let you know when it's available.
Little late to the party, but could you also wrap a callback function around the JSON format? That way, a consumer need not use any server-side programming to make use of the service in a front-end application (i.e. could use jQuery or any js library to immediately consume the js object)
Ok.
In FishBase (FB), we would like to link to BHL pages from our BIBLIO table, where we record which names have been used in which references. The same could apply to the Catalog of Fishes (CofF), by the way, and even more so, since it covers more of the older literature than FB.
The resolver works nicely, but only on textual parameters, with all the attendant problems of misspellings, differing publication dates, etc.
I suggest that the document resolver also deliver the ID of the document, so that we could store the ID in our REFERENCES table and have the page resolver accept the ID as well. Then, once we are sure that the documents found are the right ones, we can tag them as checked.
What do you think? Or will ambiguous cases be less frequent than I think?
Thanks.
Nicolas.
Off topic, but still…
1. Can you make the masthead clickable through to the homepage, please? (I can only see the little 'home' link down the bottom when I arrive on an individual post page.) No biggy of course; just a thought.
2. Are you going to write about (what I presume is) the new loading architecture of the digital books? It is new, innit? (Yeah, I confess to getting MBG & BHL idiosyncrasies, I mean UI individuality, confuzzled at times; of which now may be a prime example.)
One thing I was wondering about: how much continues to load in the background? If I go to a particular page in a book, is it just the pages ahead/behind preloading, or everything? It's really a personal bandwidth question that may kind of dictate the rapidity/m.o. of my visits.
Taa
Nick, please post comments here
Is this the place to post technical comments or is there another blog?
Nicolas Bailly, FishBase
The resolver works fairly well, at least for monographs. It was easy to construct URLs that successfully lead to titles in BHL. I think it would be nice if the description of the service included examples for addressing journal articles via OpenURL (not just books). Links from article metadata in bibliographic databases, like the Zoological Record archives, to article full texts in BHL could be a great thing.
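For what it's worth, an article-level query might look something like the sketch below. The field names follow standard OpenURL 0.1 conventions, but the endpoint, the `format` parameter, and the citation itself are guesses on my part, so the documentation remains authoritative.

```typescript
// Sketch of an article-level OpenURL 0.1 query: journal title, volume,
// start page, and date. Field names follow OpenURL 0.1 conventions; the
// endpoint and `format` parameter are assumptions, and the citation is
// invented for illustration.
const articleQuery = new URLSearchParams({
  genre: "article",
  title: "Proceedings of the Zoological Society of London", // journal title
  volume: "1870",
  spage: "32",   // start page of the article
  date: "1870",
  format: "json",
});
console.log(`http://www.biodiversitylibrary.org/openurl?${articleQuery}`);
```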
As I've stated via Twitter, I think the key should be opt-in – here's my reasoning:
1. Users of this service will not necessarily be the same as consumers of the service – sometimes someone will be using this service as part of a service they themselves provide. I would probably fall into that category, given time, and would not be interested in being part of a mailing list. I wouldn't mind registering for a key so I could be notified of any significant changes to the service (long downtime, dropping support for OpenURL 0.1, etc.). To me, key registration should be a mutually beneficial arrangement, used for service-critical notification.
2. This also allays any privacy concerns – though you can still monitor usage via IP address, as Anthony has suggested.
3. This also allows a tiered approach to communication: critical issues to all tiers, including key-registered users; development discussion; and user-level discussion.
Thanks for moving the discussion beyond the 140 char. limit!
Aside from the authentication and administration overhead that a key adds, I'm not sure what is really gained. IMO, if users are abusing the service, they might not always need to be banned, just throttled – and if that's part of the terms of using the service, it shouldn't be a reactive process but should be built into the system: no single IP is allowed to make more than n requests per second/minute/hour. I think forcing people to join a mailing list is an unnecessary step if they just want to get some data. Of course, they should sign up so they're not surprised when the service becomes unavailable due to a scheduled outage, but that should be up to them. If people share a key, then tracking their use across multiple subnets/IPs is a benefit you'll get from keys, but you can easily get usage stats per IP from your logs.
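Just to make that concrete, the throttle doesn't need to be fancy – a per-IP fixed-window counter in front of the service would do. A rough sketch, with placeholder limits rather than recommendations:

```typescript
// Rough sketch of the "built into the system" throttling idea: a
// fixed-window request counter per IP, no API key required. The limit
// and window size are placeholders, not recommendations.
const WINDOW_MS = 60_000;  // one-minute window
const MAX_REQUESTS = 100;  // n requests per window per IP

const hits = new Map<string, { count: number; windowStart: number }>();

function allowRequest(ip: string, now = Date.now()): boolean {
  const entry = hits.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(ip, { count: 1, windowStart: now }); // start a fresh window
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS; // over the limit: throttle, don't ban
}
```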
My two cents.. 🙂