
June 17, 2010

Where next for resource licensing?

Five hours of presentations and discussion about scholarly resource licensing probably doesn't strike most people as a 'good day out' but, actually, yesterday's joint JIBS/Eduserv Where next for Resource Licensing? event was a surprisingly enjoyable and interesting experience.

My live-blogged notes of all the talks are available on eFoundations LiveWire. On that basis, I won't go into the details of any of the talks here. Rather, I'll focus on my overall impressions and thoughts (all of which are very much personal views)...

Firstly, the academic landscape is changing, both in terms of student expectations and in terms of the nature of university 'business' practice (e.g. greater intra-UK and international collaboration around course delivery). A number of the talks provided evidence for this. Now, of course, we already knew that the landscape was changing... but it doesn't do any harm to keep reminding ourselves of how (and how much), and it was particularly pleasing (for me) to see Owen Stephens (who gave the opening keynote) quoting a couple of the speakers (Paul Golding and Chris Sexton) at our recent symposium by way of evidence.

Secondly, there is something of a tension between wanting to grow the complexity of our resource licences (to take account of newly emerging business practices and user groups for example) and the desire to consolidate, and indeed grow, our existing use of a small number of 'model' licences. (Clearly, this is an area in which the Eduserv Licence Negotiation team has had a big impact over the last 10 to 15 years). In theory, the emerging technical possibility for machine-readable licences (Mark Bide of EDItEUR gave an interesting talk about ONIX-PL for example) means that we can leave software to deal with making access decisions based on a growing collection of different licences. Yet there seemed to be little appetite for this in the room. (Indeed, I'm not even sure such a scenario is really possible or effective for a variety of reasons). As a counterpoint, my colleague Martyn Jansen put forward some suggestions in the final talk of the day to simplify the existing standard Chest Agreement, both in terms of having a smaller number of classes of users and in terms of simplifying the types of use allowed. For my part, this feels like a sensible way forward.

Thirdly, the idea of allowing 'walk-in users' in the digital age was called into question. Owen Stephens referred to the whole notion as "stupid" in his opening talk, suggesting that we need to completely revisit what we are trying to achieve by it and, more importantly, talk to publishers about what we want to do. Sticking my neck out a little, my personal take on this is that in the age of the Web and widely implemented federated access management it is somewhat unreasonable of academic institutions to expect publishers to provide any access to digital resources by walk-in users. But perhaps I'm just being naive about the issues here?

Fourthly, there was some discussion around overseas students. Louise Cole of Kingston University noted, with some irony, that in some cases walk-in users with no affiliation to the institution can get a better deal in terms of access to resources than registered students of that institution who happen to be based overseas. Again, I'll stick my neck out with a personal view (quite possibly a view not shared by my colleagues here!). Geography has become irrelevant and should play no part in our licensing deals. A university with 6000 undergrads should be dealt with as a university of 6000 undergrads, irrespective of whether 3000 of them happen to be based overseas. If this gives publishers problems in terms of pricing across different geographic markets, get over it. The world is largely flat.

And finally, another personal view about something that didn't really come up during the day (at least until drinks in the pub afterwards!) but which increasingly struck me as the day progressed. We seem to be hitting something of a disconnect between theory and practice in this area - which is probably something that neither institutions nor publishers really like to acknowledge. On the one hand, we have relatively complex discussions around licensing terms and conditions, coupled potentially with relatively detailed ways of exchanging those licences in a machine-readable form. At the same time we have an over-arching emphasis on security and data protection in the way our access management federation is delivered (in a way that I've not really seen justified in terms of the risk of abuse of the resources being made available through that federation). Meanwhile, on the other hand (err... back in the real world?) Shibboleth and OpenAthens system administrators are nearly always just setting the simplest kind of "This person is a member of the institution" attribute, passing it to the service provider and having them gain access to the resource as a result.
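To make that concrete, the "just good enough" access decision amounts to little more than the check below. This is a hypothetical sketch, not real Shibboleth or OpenAthens code: the attribute name (eduPersonScopedAffiliation) is the standard one, but the scope value and the licensed-scope set are made up for illustration.

```python
# Sketch of the simplest attribute-based access decision: the identity
# provider releases a single scoped-affiliation value, and the service
# provider grants access if it matches a licensed institutional scope.
# All names and values here are illustrative.

LICENSED_SCOPES = {"example.ac.uk"}  # hypothetical subscribing institution

def grants_access(attributes: dict) -> bool:
    """Return True if any released affiliation is member@<licensed scope>."""
    for value in attributes.get("eduPersonScopedAffiliation", []):
        affiliation, _, scope = value.partition("@")
        if affiliation == "member" and scope in LICENSED_SCOPES:
            return True
    return False

# A typical release says nothing about role, purpose of use, or geography --
# just "this person is a member of the institution".
print(grants_access({"eduPersonScopedAffiliation": ["member@example.ac.uk"]}))  # True
print(grants_access({"eduPersonScopedAffiliation": ["alum@example.ac.uk"]}))    # False
```

Note how none of the 'new kinds of users and usages' being discussed in licence negotiations show up anywhere in this decision.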

Are we routinely comparing our technology choices against a measure of the risk we are dealing with? Are we joining up our discussions about new kinds of users and usages in our licences with the same constructs in our SAML attribute sets? And finally, are we taking note of whether people on the ground are actually acting in line with our somewhat theoretical technology-centric positions?

Or is the reality that the people doing the day job are getting by with a 'just good enough' approach and that, actually, publishers are perfectly happy with that provided the university pays the subscription fee?




Hi Andy, while, as I said, I think walk-in access is 'stupid' my underlying point was (intended to be anyway) that if libraries are to negotiate access to their resources for any people outside their 'core users', then to do so on the basis of the people who can get themselves to the physical location is stupid - this is a real legacy of the print world, and it doesn't make sense to me to perpetuate this now.

However, I don't agree that libraries are being 'somewhat unreasonable' to expect that they should be able to extend access to their collection to people who are not part of their core user group - although at heart I think this is all about who is paying how much and for what, so our views may not be that far apart. My view would be that for the amount libraries pay, and the (in my view) very low likelihood of any actual loss of income to the publishers, a small amount of use outside the 'core users' is not unreasonable. However, the thing about moving this from a 'walk-in' definition is that I'd argue we need to pin this down a bit more - how many users? for what purposes? etc. - which would essentially mean working out what was 'reasonable' anyway.

I think there are some other issues here about content 'rental' as opposed to ownership, and the situation libraries find themselves in with digital content vs print content - this needs more space and thought than a comment, but I suspect that arguing about 'walk-in' access is really a proxy for other issues related to content ownership.

Hi Andy

I think that there is something in what you say, but I am not sure whether it is about the level of security, but perhaps an overspecification of technology? After all, a username and password set declaring only that a user is a member of an institution is pretty low on authentication strength and identity assurance. So I wouldn't say the technology is too secure for the risk. I do think the lack of take-up of granular access clearly demonstrates a lack of requirement for these features in the scholarly publishing world - but we are seeing them used in other areas such as on student ticketing systems, blogs, recruitment platforms and so on.

I'm definitely interested in seeing more work on the question: do libraries actually need granular access for resource procurement, or is the baseline of Member@ simply 'good enough'?

As people will know from Twitter, I agree with Owen that 'walk-in' access as a concept is stupid in this day and age - you canna walk in to an electronic resource. Some sort of definition and extension of visitor access would be more appropriate... and perhaps this is the use case to push granular access (see above)?!

@Owen - as you say, we are probably not too far apart on this... perhaps just a different emphasis. For my part, for an academic library I'd start from a position where only members of the institution get access to stuff (your 'core' users) and then consider some well defined additions with well defined permissions and I'd expect institutions to pay for that.

I suggest that coming up with 'well defined additions' with 'well defined permissions' will not be easy, especially given the fact that we are talking about online access and the over-arching emphasis on non-commercial use. I suspect that there is a general view, for example, that any non-core users are only interested in getting access to resources for some kind of 'commercial' use.

Anyway... the emphasis, as you say, is on working out what is reasonable.

@Nicole - your phrase "overspecification of technology" is probably better than mine. My point, on the access management side of things, is that I sense that much of the activity in this area is really driven by techies - and techies (especially techies with an interest in security) tend to get over-excited about the technology at the expense of all other considerations. So at your FAM09 event, for example, I sensed an over-arching emphasis on security and privacy and much less emphasis on "is this what the users want?" and/or "is this appropriate given the risks associated with the content being accessed?". I don't understand the risks, so when someone says to me, "oh, we can't possibly use OpenID, we have to use SAML, because OpenID is far too insecure" I can't argue against it because I have nothing to set the benchmark against. (Note: I'm only using OpenID as an example here - I'm not arguing that we should be using it). But in an area where, historically, there has traditionally been a lot of "just good enough" activity going on (I won't say, "sharing of username and passwords" but that might be a typical example! :-) ) it feels like there is a danger of a bit of a disconnect with the way things really work on the ground in the way we are currently specifying the technology.

I certainly agree that 'good enough' is enough for many publishers, and that often their requirements are less onerous than we expect. The biggest education I got in this area was with music licensing around the PRIMO repository project a few years ago (http://primo.sas.ac.uk/). It's taking in recordings and performances of what is often contemporary music. One of the project reviewers said we would never negotiate the resultant licence minefield. In fact, a UK-wide licence from PRS turned out to be easy to obtain and relatively low-cost.

More relevant to this discussion is that they were happy for us to use source IP combined with services like GeoIP to determine whether someone is in the UK or not (much as the BBC iPlayer does). It's crude, it's easy to circumvent, but it did the job for them and us.
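That crude geo-check is easy to picture in code. The sketch below uses a tiny made-up table of 'UK' address ranges standing in for a real GeoIP database (real services like MaxMind map whole allocations to countries); the networks shown are documentation ranges, not actual UK allocations.

```python
# Illustrative sketch of a source-IP country check, iPlayer-style.
# The prefix table is hypothetical; a real deployment would query a
# GeoIP database rather than a hand-maintained list.
import ipaddress

UK_NETWORKS = [  # made-up example ranges, not real UK allocations
    ipaddress.ip_network("192.0.2.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def looks_like_uk(ip: str) -> bool:
    """Crude check: does the source IP fall inside a 'UK' range?

    Easy to circumvent with a proxy or VPN, but often 'good enough'."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in UK_NETWORKS)

print(looks_like_uk("192.0.2.42"))   # True  -- treated as in the UK
print(looks_like_uk("203.0.113.7"))  # False -- outside the licensed territory
```

The design trade-off is exactly the one described above: the check costs almost nothing and fails in well-understood ways, which both sides were content to live with.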
