
March 29, 2008

Open cultural heritage

JISC have announced five new digitisation projects, funded jointly with the US's National Endowment for the Humanities (NEH).

Looking at the announcement text, I am slightly worried about the licences under which the resulting digitised resources will be made available. Yes, I know I bang on about this all the time, but we seem to have a well-ingrained habit in this country (the UK more so than the US, I think) of publicly funding digitisation projects which result in resources being freely available on the Web, but not being open.  I, for one, would feel reassured if the licensing intentions were made more explicit.

Now, the word open is used in multiple ways, so I should explain.  I'm using it here as in open content (from Wikipedia):

[Open content is] any kind of creative work published in a format that explicitly allows copying and modifying of its information by anyone, not exclusively by a closed organization, firm or individual.

This usually implies the use of an explicit open content licence, such as those provided by Creative Commons.  Free content, on the other hand, is typically available only for viewing by the end-user, with copyright and/or other restrictions limiting other usage to 'personal educational' use at best.

Based on the minimal information provided about the five projects, only one explicitly mentions the use of Creative Commons, one mentions the development of open source software and one talks about results being freely available (though as mentioned above, being free and being open are two different things).

Why does this matter?  Well, it seems to me that whenever possible (and I accept that there may be situations in which it is not possible) publicly funded digitisation of our cultural heritage should result in resources that can be re-purposed freely by other people.  That means, for example, that any lecturer or teacher who wants to take the digitised cultural heritage resource and build it into a learning object in their VLE, or an exhibit in Second Life, or whatever, can do so freely, without needing to contact the content provider.

Open content is what makes the Web truly mashable, and we should look to the cultural heritage sector for our richest and most valued mashable content.  Free content is not sufficient.

There is probably a useful debate to be had around whether the cultural resources produced by publicly funded digitisation should be able to be re-used in commercial activities as well as non-profit ones.  My personal view is that anything that adds value is fair game, including commercial activities, but I accept that there are other views on this issue.  Whatever, re-use for non-profit purposes is an absolute minimum.

To conclude... I really hope that I'm wasting blog space here, and that the conditions of funding in this case mandated that the resulting resources be made open rather than just free.  And further, that such a condition is already (or rapidly becomes) the norm for publicly funded digitisation of our cultural heritage everywhere.  I'm keeping my fingers crossed.

March 28, 2008

Science Museum Library & Archives now open in Swindon

New facilities to house the Science Museum Library & Archives are now open in Swindon:

After several years of intensive work, the project to reconfigure the Science Museum Library & Archives across the two sites has been successful. The collections are now housed in better conditions than previously, and the newly-created and refurbished facilities are now open to researchers.

The BBC featured the library, and the significant science and technology collections based at Wroughton near Swindon, during its Inside Out West early evening local news programme today.  The National Museum of Science and Industry Inspire project hoped to build new visitor facilities on the Wroughton site but failed to win lottery funding last year.

March 25, 2008

OpenID review

The JISC are currently funding a review of OpenID, looking at its potential use in higher education.  The review will do so by:

  • Determining potential use cases through structured interviews with a representative sample of stakeholders throughout the academic community;
  • Evaluating the potential use cases by performing a risk assessment of them using the known security and trust properties of OpenID, in order to determine a set of valid use cases;
  • Building working demonstrators to ensure our understanding of the technology is robust, to allow the community to experiment with OpenID within the context of the UK Access Management Federation and, if possible, to address a sample of the valid use cases using federation-compliant Identity Providers (a minimal sketch of such a demonstrator follows this list);
  • Producing a final report describing our conclusions and recommendations for the future use of OpenID in the UK academic community.
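By way of illustration, the start of a relying-party (consumer) flow might look something like the following sketch, assuming the python-openid library; the provider, realm and return URLs are invented examples rather than anything the review itself will necessarily use:

    # Minimal sketch of the start of an OpenID relying-party (consumer) flow,
    # using the python-openid library. All URLs below are invented examples.
    from openid.consumer import consumer
    from openid.store.memstore import MemoryStore

    session = {}  # per-user session state; normally the web framework's session
    oidconsumer = consumer.Consumer(session, MemoryStore())

    # Discover the user's OpenID provider and build an authentication request.
    auth_request = oidconsumer.begin("https://openid.example.ac.uk/someuser")

    # The user's browser is then redirected to the provider to authenticate,
    # returning to the return_to URL with a signed response for verification.
    redirect_url = auth_request.redirectURL(
        realm="https://sp.example.ac.uk/",
        return_to="https://sp.example.ac.uk/openid/return",
    )
    print(redirect_url)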

Looks interesting.  The work is being led by Sandy Shaw at EDINA and is due to complete in June this year.

March 19, 2008

The 5 Ps of e-portfolios

I'm not sure whether this is helpful, and I'm possibly guilty of simply making up words for the sake of it, but having listened to Graham Attwell's excellent podcast on e-portfolio development yesterday I woke up this morning with 5 P-words in my head that try to capture what learners can do with their e-portfolio.  In no particular order:

plan
Graham refers to "personal development planning portfolios" in his podcast and it seems to me that this is one of the most important aspects of what an e-portfolio can enable.  Being able to assess where one is in a learning journey and, more importantly, being able to plan for what needs to come next is a critical learning skill and an e-portfolio is one of the tools that supports that process.
ponder
Such planning comes in part from being able to reflect on the learning that has already taken place.  I must admit that this P-word is probably the most contrived out of the five but it is no less important for that.  This reflective activity appears to fall within what Graham refers to as a "personal learning portfolio".
promote
There is a sense in which an e-portfolio becomes a self-promotion tool, functioning more or less like a curriculum vitae would do, either as part of getting a job, or during transition between different phases of education.  (Note: the P-word present, as in Graham's "presentation portfolio" would be an alternative here but for some reason I think that promote works better).
prove
Being able to prove that learning has taken place is an important function of the e-portfolio, either as evidence to support the assessment process (cf. Graham's "assessment portfolio") or as part of the promote function (cf. Graham's "presentation portfolio").
preserve
Finally, there is a life-long aspect to e-portfolios which, while it may not fall under a traditional interpretation of "digital preservation", seems to me to cover a long enough period to give us significant headaches about how we manage digital material for that length of time, especially given that we are largely talking about personally managed information.  An e-portfolio, and the systems around it, should help us to maintain a life-long record of our learning and, as I say, that is a non-trivial functional requirement to meet currently.

March 18, 2008

Response to grants call

The response to this year's grants call has been pretty overwhelming - 128 bids were received by the close of play on Friday (about 30 more than last year).  That gives us 256 sides of A4 to read and review.

It is clear that our three themes (online identity, the open social graph, and always-on Internet access and mobile computing) generated a lot of interest.

The next step is for us to ask about 15 of the bidders to come back to us with more detailed proposals, of which we'll interview about half, and fund 3 or 4.

On that basis can I just say, "apologies in advance" to the ~124 of you that don't get funded :-(.

e-Portfolio development and implementation

This is quite old I think (middle of 2007?), but none the worse for that and well worth sharing here...

On the face of it, this video by Graham Attwell of Pontydysgu (created as part of the European Mosep project I think) allows him to share his thoughts on the fairly narrow topic of the development and implementation of e-portfolios.  The reality though is much broader - and the result is a very nice, and quite general, overview of how the learning agenda is evolving.

I'm a firm believer that a picture is worth a thousand words, but my only minor quibble with the video lies with the diagrams that Graham uses towards the end, neither of which I found overly compelling (particularly not the first, which appears, at least at first glance, to position ELGG as a fairly central component of the learning landscape - not that I have anything against ELGG, you understand... I just don't get a sense that anything needs to be positioned so centrally - perhaps it is just a layout thing?).

Anyway, putting that to one side, the video is well worth watching if you are interested in such things and have 30 minutes or so to spare.

IMLS Study on the use of libraries, museums and the Internet

IMLS have released the results of a large-scale study looking at current use of libraries, museums and the Internet.

The study concludes that “the amount of use of the Internet is positively correlated with the number of in-person visits to museums and has a positive effect on in-person visits to public libraries.”

For an overview of the conclusions of the study, please see the Conclusions Overview [PowerPoint, 6.8 MB].

I guess that two of the key take-home messages here are that libraries and museums continue to "evoke consistent, extraordinary public trust among diverse adult users" and that the "amount of use of the Internet is positively correlated with the number of in-person visits to museums and has a positive effect on in-person visits to public libraries".  Having said that, I must admit that I'd like to know more about how that positive correlation plays out since, on its own, it's a kind of "well, duh..." statement.  Nonetheless, this looks like an interesting and useful report.

March 17, 2008

Hiding Magna Carta on the Web

The BL have made a digitised copy of the Magna Carta available on the Web:

Magna Carta is one of the most celebrated documents in history. Examine the British Library's copy close-up, translate it into English, hear what our curator says about it, and explore a timeline.

So says the introductory blurb.

Well... if it's so "celebrated" and important, can someone please explain why the digitised version has been hidden behind a Shockwave viewer that makes it pretty much impossible to do anything other than browse it on the BL's Web site?  Yes, there is a simple version, which does not require a browser plugin, but the copyright statement and complete lack of a CC licence (or anything remotely like it) make it clear that re-use wasn't high on the BL's agenda.

Shame on them.

Come on BL, you can spend our money better than this!

Technologies for open social networking

Over on ReadWriteWeb, Sean Ammirati provides a quick introduction to four of the key technologies that underpin open social networks - hCard, XFN, FOAF, OpenID, and OAuth... wait... that's five... five of the key technologies that underpin open social networks... NOBODY expects the Spanish Inquisition!
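For anyone who hasn't bumped into FOAF before, here's a minimal sketch of the sort of 'who knows whom' data it captures, assuming the rdflib library; the people and URIs are made up:

    # Minimal sketch of a FOAF description - the raw material of an open
    # social graph. People and URIs are invented; rdflib is assumed installed.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    alice = URIRef("http://example.org/alice#me")
    bob = URIRef("http://example.org/bob#me")

    g.add((alice, RDF.type, FOAF.Person))
    g.add((alice, FOAF.name, Literal("Alice Example")))
    g.add((alice, FOAF.knows, bob))
    g.add((bob, RDF.type, FOAF.Person))
    g.add((bob, FOAF.name, Literal("Bob Example")))

    # Serialise as RDF/XML, the form FOAF files are most often published in.
    print(g.serialize(format="xml"))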

March 16, 2008

Hack Day: and now for something completely different

Friday was Hack Day at Eduserv, an internal event that allowed some of our techies to take time away from their normal day-to-day activities in order to think about and work on something new.  The day was one part of a programme of activities intended to put innovation back at the heart of what Eduserv does.  I think it worked pretty well for a first attempt and I certainly hope we repeat it.  We had people working on things as diverse as integrating OpenID with an open source blogging system, Shibboleth with a commercial social networking tool, MyAthens with Windows Live Messenger, Google Maps to plot usage of SP resources, and a local positioning system for the Eduserv offices based on triangulating the wireless signal strengths from multiple wireless access points.

For my part, I spent some time investigating the possibility of exposing an RSS feed of the list of registered service providers within the UK Access Management Federation (UKAMF).  The point of choosing RSS was not to support news and alerting - rather that RSS is a good machine-readable format for anything that looks like a list of URLs.

Plan A was to take the UKAMF Metadata and transform it into RSS using a Yahoo Pipe, Perl script or XSL transformation.  Unfortunately, I quickly realised both that the metadata doesn't contain any information about the human-oriented services associated with the SAML end-points (to be fair, that is not its function) and that the XML file is so large that processing it in anything becomes rather difficult - it is certainly too big to process using Yahoo Pipes.  I must admit, it hadn't occurred to me before now what an odd architectural decision it is to store all the UKAMF metadata in a single XML file at a single point on the network - I suspect this will lead to significant scalability problems as the Federation grows.  The Federation must have been designed by the same person who came up with the Windows registry :-(.
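Roughly speaking, the sort of script I had in mind for Plan A looks something like the sketch below (in Python for illustration; the metadata filename is invented, and of course the entityIDs alone don't give you the human-oriented service descriptions):

    # Rough sketch of Plan A: list the Service Provider entityIDs found in a
    # local copy of the federation's SAML 2.0 metadata. The filename is invented.
    import xml.etree.ElementTree as ET

    MD_NS = "urn:oasis:names:tc:SAML:2.0:metadata"

    tree = ET.parse("ukfederation-metadata.xml")
    for entity in tree.getroot().iter("{%s}EntityDescriptor" % MD_NS):
        # An entity describes a Service Provider if it carries an SPSSODescriptor.
        if entity.find("{%s}SPSSODescriptor" % MD_NS) is not None:
            print(entity.get("entityID"))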

Plan B was less than ideal and involved tagging all the registered UKAMF services in del.icio.us, using the tag 'ukamfsp' as a unique key and a variety of more normal tags to indicate the subject matter of the services.  RSS is one of the main access mechanisms for del.icio.us content, so an RSS feed for the 'ukamfsp' resources is readily available.

Of course, this approach is nothing more than a proof of concept since I am not in a position to maintain the set of resources tagged in del.icio.us.  However, I'd encourage the UKAMF to maintain this RSS feed in some appropriate way, and using an external tool like del.icio.us brings with it some significant advantages.  Having got an RSS feed, writing a Perl, PHP or Ruby script to re-purpose it into XHTML is very easy to do.
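By way of illustration, here's a minimal sketch of that re-purposing step (in Python rather than Perl, PHP or Ruby; the feed URL and output filename are illustrative only):

    # Minimal sketch: turn the del.icio.us 'ukamfsp' tag feed into a simple
    # XHTML list of service providers. Feed URL and output filename are
    # illustrative; feedparser is a third-party library.
    from html import escape

    import feedparser

    FEED_URL = "http://del.icio.us/rss/tag/ukamfsp"

    feed = feedparser.parse(FEED_URL)

    items = [
        '  <li><a href="%s">%s</a></li>' % (escape(entry.link), escape(entry.title))
        for entry in feed.entries
    ]

    with open("ukamfsp-services.html", "w") as f:
        f.write("<ul>\n%s\n</ul>\n" % "\n".join(items))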

This activity raises a couple of other questions...

Firstly, who is likely to be interested in such a list?  Certainly not end-users, who are interested in the set of resources that they need to get a job done but who have no interest in how they are accessed.

Secondly, where and how are services best described?  At the moment the JISC is funding Intute, the IE Service Registry and the UKAMF, all of which contain some aspects of descriptive metadata about available services.  The metadata in each is different, so the split across the three catalogues/registries may be appropriate (though I must admit that I'm not totally convinced that it is).  However, what seems to be missing at the moment is a unique key to link the three bits of descriptive metadata together and an appropriate API in any of the services to allow client software to say, "tell me what you know about service X".

March 14, 2008

Yahoo search & the Semantic Web

There was a good deal of excitement yesterday at an announcement on the Yahoo! Search weblog that they will be introducing support in the Yahoo Search Monkey platform for indexing some data made available on the Web using Semantic Web standards or using some microformats. In yesterday's post, "Dublin Core" is mentioned as one of the vocabularies which will be supported; it also refers to support for both the W3C's RDFa and Ian Davis' Embedded/Embeddable RDF. (Aside: I've been starting to explore RDFa recently and I'm quite excited about the potential, but that should be the topic of a separate post.)

A post by Micah Dubinko provides some further detail in an FAQ style.

It is worth bearing in mind the note of caution from Paul Miller that such an approach brings with it the challenges of dealing with malicious or mischievous attempts to spam rankings, and, as I think Micah Dubinko's post makes clear, this is not going to be an aggregator of all the RDF data on the Web. But nevertheless it seems to represent a very significant development in terms of the use of metadata by a major Web search engine (after all the years I've spent having to break it to dismayed Dublin Core aficionados that the metadata in their HTML headers almost certainly wasn't going to be used by any of the global search engines, and that unless they knew of an application that was going to index/harvest it, they might wish to consider whether the effort was worthwhile!) - and for the use of Semantic Web technologies in particular.

March 11, 2008

Institutions, Web 2.0 and the shared service agenda

For the third and final thread of my UCISA talk on Thursday (see also thread 1 and thread 2) I want to talk about the shared service agenda, Web 2.0 and the potentially disruptive impact on institutional service provision that might result.  I'm basing this thread on the vague premise that there is some relationship between the shared service agenda and Web 2.0, though I have to confess I'm not 100% sure that I'm going down a useful or valid path here and I'm fully expecting people to tell me so if I'm not!

I think it can be argued that UK academia (particularly HE) has been pretty good at taking advantage of shared service approaches, thanks in large part to the JISC's coordinating role, and some high profile examples spring to mind - Athens, Chest, JISCMail, the national data centres and, not least, the JANET network infrastructure itself.  There are many others.  It should probably also be noted that this practice appears to have grown fairly naturally and organically out of the community itself - well in advance of any political agenda that said this is the best way to do things.

So, let's test my hypothesis a little and consider the similarities and differences between the shared service agenda and the use of external Web 2.0 services.  Note that I'm purposely using Web 2.0 in its broadest, and therefore fuzziest, sense here. 

The major similarity is that both the shared service agenda and the growing use of Web 2.0 applications result in services moving outside the institution - i.e. services that are either already delivered within the institution or that one might naturally expect to be delivered from within the institution will move to being delivered by external service providers.

On the other hand, the major differences lie in motivation and control.  The primary driver for shared services has tended to come from the providers (the institutions, sometimes by proxy thru the JISC) looking for the efficiency savings enabled by a shared approach and preventing the need for every institution to replicate every service in-house.  The primary driver for using Web 2.0 services tends to come largely from individuals, who are often attracted by the better user-experience on offer and the network effects that the use of external global services provide.  As a result, the use of Web 2.0 services tends to leave the institution much less in control of what is happening than they would be in a traditional outsourced 'shared service' approach.

To make this somewhat more concrete... as one can see from some of the responses to Brian Kelly's post about his part of the UCISA talk, it is now perfectly possible for individual members of an institution to move all their email and 'office' functionality out to an external provider like Google.  More significantly, it is not inconceivable that whole departments could make such a transition - see Google Apps for example.  I suspect that this is currently a theoretical concern, but it is certainly a possibility that those departments that have traditionally shied away from the centralised IT services offering, in favour of running their own email and Web services, will find outsourcing their own provision lock, stock and barrel to an external provider increasingly attractive.  I doubt this is happening yet, but one issue that IT services need to weigh up is whether the trickle we are currently seeing is the beginnings of a flood or just something that will remain a trickle.

It is also worth noting that this kind of transition isn't limited to the application layer.  It is similarly conceivable that individual members of an institution could go outside for their compute or storage infrastructure in the form of services such as Amazon S3.

I think that Brian is going to argue in his part of the talk that we can "learn to stop worrying about web 2.0".  I'm going to suggest more or less the opposite.  I think that we need to "learn why we should start worrying"!  I think we have to start by acknowledging that we are entering a period of disruption caused by the use of external Web 2.0 services.  I say "we" because the kind of disruption we are talking about affects the current generation of 'shared service' providers (including Eduserv) just as much as it affects institutions.

Rather than hiding our heads in the sand, we need to acknowledge what is happening, embrace the technology and try to understand our new place in the world.

At the same time, we need to remember that education and educational institutions have special requirements around teaching, learning and research - the core of what universities do - and that a generic discussion around outsourcing and shared services is not sufficient.  Those special requirements include supporting and ensuring high-quality research, encouraging scholarly communication, citation and effective peer-review, curating the scholarly record, the relationship between ICT and pedagogy, trust, maximising the impact of e-learning, adherence to QAA requirements, and so on.  All of these things bring with them special requirements that sit uncomfortably with the anarchy of Web 2.0.

And that brings me to my limited and perhaps somewhat unhelpful conclusions based on the three threads.  Imagine that we are on a roller-coaster in a darkened room.  Our eyes are beginning to adjust to the dark.  I think we need to move as close to the front as we can, partly to help see where we are going and partly because it'll be more fun!  How close we get depends on a judgement about how far we want to close the hype-curve gap between leading edge adoption and mainstream adoption.

More fundamentally, and as I said at the end of my previous thread, I think that IT Services need to see themselves not simply as 'service providers' but as 'service enablers' in the use of external Web 2.0 services.

March 08, 2008

Netskills information literacy workshops for schools

Netskills have announced several workshops in the area of plagiarism awareness and information skills aimed at the schools sector.  These workshops are being run as part of two projects funded by us (the Eduserv Foundation).

March 07, 2008

The man whose tweets were all exactly alike

Two significant Twitter-related things happened yesterday... I blogged the first, the release of CommonCraft's Twitter video tutorial.  The second was that the technology section of the UK Guardian ran a short piece about Twitter under the headline, Why are there no spam or trolls on Twitter?

This is not the first time that the Guardian has covered Twitter and it certainly won't be the last.  But it seems to me to be indicative of a gradual mainstreaming of Twitter as a tool, as is the CommonCraft video.

Mainstreaming will bring with it greater numbers of users.  That, in turn, will bring growing pressure to use it as a channel for spam and other less-than-desirable uses.

The thrust of the Guardian article is that Twitter has a natural immunity to spam-like problems because of the way it works.  I don't strongly disagree with this.  On the other hand, spammers are inventive people and if they can find a way to make the benefits outweigh the costs, they probably will.  We don't tend to see much of it at the moment because of the relatively low numbers of Twitter users and because most of them are currently tech-savvy people (err, geeks).

Mainstreaming will also bring with it self-inflicted issues - wanting to follow large numbers of other twits (twitterers), for example.  Email isn't a broken technology as such, it just didn't cope with scalability issues very well.  Will Twitter go the same way or is it genuinely protected from such a fate?

March 06, 2008

Options for joining the UK Access Management Federation

In their recent briefing paper about third party providers of federated access management solutions (see also my previous blog entry on the same topic) the JISC present three options for participating in the federation, as follows:

  1. Become a full member of the UK federation, using open source software with in-house technical support
  2. Become a full member of the UK federation, using open source software with paid-for support
  3. Subscribe to an ‘outsourced Identity Provider’ to work through the UK federation on the institution’s behalf

It strikes me that this is a rather unsatisfactory list for two reasons...

Firstly, options 1 and 2 are prefixed with the phrase "Become a full member of the UK federation" whereas option 3 is not.  Why?  The implication seems clear enough... if you choose to outsource your identity provision then you are not a "full member" of the federation, whatever that means.  This is an odd choice of wording, especially in a political and financial environment where institutions are generally being encouraged to consider shared service solutions as alternatives to doing everything in-house.

As I said in my previous blog entry, I am not particularly trying to promote outsourced solutions here - ours or anybody else's - institutions can make their own minds up about that.  But I see no good reason to give the impression that those institutions that choose to outsource their identity provision to a third party are any less members of the federation than those that do everything in-house.

Secondly, the list mixes up 'technology provision' and 'support arrangements' in a rather unhelpful way.  It would be more helpful to separate these, giving two lists as follows:

  1. In-house identity provision using open source software.
  2. In-house identity provision using commercially licensed software.
  3. Outsourced identity provision using an external service provider.

and

  1. In-house support
  2. External support

I appreciate that even these lists aren't perfect (we offer a partially outsourced identity provision option, for example) and that a matrix of the two may not be fully populated (the combination of in-house support and outsourced identity provision doesn't sound likely for a start!).  Despite that, I think it is a more accurate reflection of the options facing institutions than the three options currently presented by the JISC.

JISC briefing paper on third party suppliers of federated access management solutions - some clarification about OpenAthens

The JISC have released a briefing paper about third party suppliers of federated access management solutions:

aimed at UK higher (HE) and further (FE) education institutions that wish to adopt federated access management and join the UK Access Management Federation, either by using paid-for support or by subscribing to an 'outsourced Identity Provider'.

For the record, the briefing paper contains some presentational errors about our OpenAthens product suite (though I should acknowledge that I fully understand why, since our own messaging in this area has not been as clear as it might have been).  In particular, the phrase:

[OpenAthens] Interoperates with the UK federation via Gateways that are integral part of OpenAthens.

is somewhat misleading.

It is true that the current Athens service interoperates with the UK Federation via two gateways, one going from Athens to Shibboleth, the other going from Shibboleth to Athens.  However, the new OpenAthens identity provider (in both its Managed Directory (fully outsourced) and Local Authentication (partially outsourced) forms) offers a fully functional, federation-compliant, Shibboleth identity provider.  There is therefore no requirement for the Athens to Shibboleth gateway component as a separate entity on the network - it simply will not exist in the future.

The other gateway, going from Shibboleth to Athens, will remain for as long as it is necessary for institutions to gain access to Athens-only service providers - this gateway is needed by any Shibbolised institution wishing to gain access to such services via Athens, irrespective of how they have chosen to implement Shibboleth.  (Note that by offering OpenAthens SP we feel we are doing as much as reasonably possible to encourage service providers to move from Athens to Shibboleth - but, clearly, this gateway is likely to be required for some time).

So, to sum up... OpenAthens comprises three main components:

  • An identity provider (which comes in two forms, Managed Directory (i.e. fully outsourced) and Local Authentication (i.e. partially outsourced))
  • OpenAthens SP
  • the Shibboleth to Athens gateway.

Note that at this stage I'm not 100% sure that these are the formal product names that we will use for these components - apologies in advance if this blog entry confuses anyone because of this.  However, the point is not to worry too much about the names - the important thing is that these are the components we offer and that, as far as I know, all of them are compliant with the UK Federation and all come with a commitment from Eduserv to maintain that compliance and to adopt whatever mainstream access and identity standards come along in the future.

I should also add that the purpose of this blog entry is not to promote OpenAthens as the best way of joining the UK Access Management Federation - institutions will have to make up their own minds about which route is most appropriate for them.  I'm just trying to clarify the picture around OpenAthens a little so that institutions can make an informed choice.

8 March 2008: I've slightly revised this entry because colleagues at Eduserv felt that my use of 'OpenAthens IdP' gave the impression that this was an agreed product name, which it is not (at the time of writing).  Apologies for any confusion caused.  It is perhaps also worth noting that my characterisation of the Shibboleth to Athens gateway as a separate entity on the network is not a view shared by everyone at Eduserv.  Speaking only for myself, I think that continuing to refer to this particular gateway is helpful for understanding what OpenAthens comprises in the short term, though I completely accept that this may not be a useful way of describing our product suite in the longer term.

Cardiff University information literacy podcasts

Cardiff University have released a series of six podcasts focusing on improving essay writing for university students:

The podcast is called "Student Survival Guide to Writing a Good Essay" and has been created in conjunction with the University's student radio station.  The six short episodes feature interviews with students, academic staff and librarians on topics such as:

  • What makes a good essay?
  • Quality control: information to use and avoid
  • Going beyond the reading list: finding good web sites
  • Going beyond the reading list: discovering books and journals
  • Getting your references in order
  • Meeting the deadline

The podcasts, which are hosted on Xpress Radio, are currently organised by date rather than by topic, making individual episodes less easy to find than they might have been.  Apart from that fairly minor gripe, this looks like an interesting approach to raising information literacy skills.

Sharing, socialising and institutional IT service provision

The second theme for my UCISA presentation next week will be around 'sharing and socialising'...

It is clear that there is currently a huge interest in the management and disclosure of scholarly assets by institutions.  This is most visible in the open access repository movement, the growing interest in open data, and the push for open and re-usable learning objects.  The focus tends to be both on managing and preserving the content, and on sharing it openly on the Web with the aim of letting others re-use it in various ways.  And a large part of the policy agenda is concerned with institutional solutions, an approach that I've spent some time arguing against of late.

At the same time there is a whole spectrum of less formal sharing going on in the form of blogs, wikis and uploading content to Flickr, YouTube, Slideshare and so on, most of which tends to happen using Web 2.0 services outside of the institution.

Whilst the discussions around how best to openly share content on the Web are interesting, in the context of my UCISA talk I'm more interested in the social networks that grow up around these activities than I am in the sharing activity itself.  Learning and research are social activities and one of the things I'm interested in is how we build online social networks that support them most effectively.  Social networks are like gardens... they need a certain amount of care and attention and they tend to flourish best in the right environment, one facet of which is the concentration effect that Lorcan Dempsey has been talking about recently.  Large-scale globally concentrated social services bring with them network effects that are not possible in smaller-scale service scenarios.

Consider Slideshare as an example, a global Web 2.0 service that has rapidly become "the best way to share your presentations with the world".  It is hard to imagine that the kind of presentation sharing service we see in Slideshare today could have grown up around a set of institutional activities (however well coordinated they might have been) - the service works primarily because it is global in scale.  For similar reasons, social activity has built up around the presentations, both within the confines of the Slideshare service itself (tagging, favoriting, etc.) and beyond (by embedding presentations into other services).  As a result it has become a very compelling place to share presentations on the Web.

So what is the lesson here for institutions and institutional IT services?  I think they need to take note.  Whilst (in some cases) they may have the technical competence to build global social services, it is not typically part of their function to do so.  To put it bluntly, their business is to serve the institution, not to serve the world.  As a result, IT services have to begin seeing themselves as the enablers rather than the providers of such services.

This means more than simply providing the network pipe thru which the services are accessed.  There are functional requirements in the educational space that go beyond those catered for by external services directly - the need to preserve the scholarly record and comply with QAA requirements being two good examples.  I'm sure there are others.  I think there is an interesting debate to be had around what it means for institutional IT services to properly enable and support access to external Web 2.0 services.

I'll touch on this again in my third and final theme for the talk - the 'shared service' agenda.

Twitter by CommonCraft

For those of you that still don't get Twitter, maybe this will help:

Discovered via @sirexkat on Twitter (of course!).

March 05, 2008

JISC Information Environment blog announced

I note that the JISC Information Environment team have announced a new IE blog:

... the blog will be relevant to people involved or interested in the JISC programmes that fall under the Information Environment theme (a list of these programmes can be seen on the about page of the blog).

Concentration and diffusion - the two ways of Web 2.0

Lorcan Dempsey has now blogged his ideas around two key aspects of Web 2.0, concentration and diffusion, in The two ways of Web 2.0, which I referred to in my keynote at VALA 2008 but was unable to cite properly.

As I said in my talk, I think these two concepts are very helpful as we think about the impact of Web 2.0 on the kinds of online services we build and use in the education space.

March 04, 2008

LCCN Permalinks and the info URI scheme

Another post that has been on the back burner for a few days.... via catalogablog, I noticed recently that the Library of Congress announced the availability of what it calls LCCN Permalinks, a set of URIs using the http URI scheme which act as globally scoped identifiers for bibliographic records in the Library of Congress Online Catalog and for which the LoC makes a commitment of persistence.

I tend to think of two aspects of persistence, following the distinction that the Web Architecture makes  between identification and interaction. From reading the FAQ, I think that persistence in the LCCN Permalink case covers both of these aspects. So the LoC commits to the persistence of the identifiers as names by, for example, keeping ownership of the domain name and managing the (human, organisational) processes for assigning URIs within that space so that once assigned a single URI will continue to identify the same record (i.e. they observe the WebArch principles of avoiding collisions). And they also commit to serving consistent representations of the resources identified by those URIs (i.e. they observe the WebArch principles of providing representations and doing so consistently and predictably over time).

So for example, the URI http://lccn.loc.gov/2003556443 is a persistent identifier of a metadata record describing an online exhibit called "1492: an ongoing voyage". And in addition, for each URI of this form, a further three URIs are coined to identify that same metadata record presented in different formats: http://lccn.loc.gov/2003556443/marcxml (MARCXML), http://lccn.loc.gov/2003556443/mods (MODS), http://lccn.loc.gov/2003556443/dc (SRW DC XML). So in terms of the Web Architecture, we have four distinct, but related, resources here. And indeed the fact that they are related is reflected in the hypertext links in the HTML document served as a representation of the first resource, along the lines of the TAG finding, On Linking Alternative Representations To Enable Discovery And Publishing. It would be even nicer if that HTML document indicated the nature of the "generic resource"-"specific resource" relationship between those resources. But, really, it would be churlish to complain! :-) We now have a set of URIs which have the (attractive) characteristics that, first, they serve as globally scoped persistent names and, second, they are amenable to lookup using a widely used network protocol which is supported by tools on my desktop and by libraries for every common programming platform. Good stuff.
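To illustrate that second characteristic - lookup over plain HTTP - here's a minimal sketch that dereferences the four URIs mentioned above (the URIs are those from the post; there's no error handling or content negotiation, just straightforward GETs):

    # Minimal sketch: dereference an LCCN Permalink and its three
    # format-specific companion URIs over plain HTTP.
    from urllib.request import urlopen

    lccn = "2003556443"
    uris = [
        "http://lccn.loc.gov/%s" % lccn,          # HTML record
        "http://lccn.loc.gov/%s/marcxml" % lccn,  # MARCXML
        "http://lccn.loc.gov/%s/mods" % lccn,     # MODS
        "http://lccn.loc.gov/%s/dc" % lccn,       # SRW DC XML
    ]

    for uri in uris:
        with urlopen(uri) as response:
            body = response.read()
            print(uri, response.headers.get("Content-Type"), len(body), "bytes")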

However, it is interesting to note that this - or at least the first aspect, the provision of persistent names - was the intent behind the provision of the "lccn" namespace within the info URI scheme. According to the entry for the "lccn" namespace in the info URI registry:

The LCCN namespace consists of identifiers, one corresponding to every assigned LCCN (Library of Congress Control Number). Any LCCN may have various forms which all normalize to a single canonical form; only normalized values are included in the LCCN namespace.

An LCCN is an identifier assigned by the Library of Congress for a metadata record (e.g., bibliographic record, authority record).

Compare (from the first two questions of the LCCN Permalink FAQ):

1. What are LCCN Permalinks?

LCCN Permalinks are persistent URLs for bibliographic records in the Library of Congress Online Catalog. These links are constructed using the record's LCCN (or Library of Congress Control Number), an identifier assigned by the Library of Congress to bibliographic and authority records.

2. How can I use LCCN Permalinks?

LCCN Permalinks offer an easy way to cite and link to bibliographic records in the Library of Congress Online Catalog. You can use an LCCN Permalink anywhere you need to reference an LC bibliographic record in emails, blogs, databases, web pages, digital files, etc.

The issue with URIs in the info: URI scheme, of course, is that while they provide globally scoped, persistent names, the info URI scheme is not mapped to a network protocol to enable the lookup of those names. I understand that for info URIs, "per-namespace methods may exist as declared by the relevant Namespace Authorities", but "[a]pplications wishing to tap into this functionalitiesy (sic) must consult the INFO Registry on a per-namespace basis." (both quotes from the info URI scheme FAQ.)

The creation of LCCN Permalinks seems to endorse Berners-Lee's basic principle (which I mentioned in my post on Linked Data) that it is helpful for the users/consumers of a URI not only to have a globally-scoped name, but also to be able to look up those names - using an almost ubiquitous network protocol - and obtain some useful information. LoC have supplemented the use of a URI scheme that only supported the former with the use of a scheme which facilitates both the former and the latter. And with a recent post by Stu Weibel in mind, I'd just add that (a) the use of an http URI does not constitute an absolute requirement that the owner also serve representations - the http URIs I coin can be used quite effectively as names alone without my ever configuring my HTTP server to provide representations for those URIs (and if the LoC HTTP server disappears, an LCCN Permalink still works as a name); and (b) the serving of representations for http URIs is not - in principle, at least - limited to the use of the HTTP protocol (see "Serve using any protocol" in the draft finding of the W3C TAG, URI Schemes and Web Protocols).

Further, the persistence in LCCN Permalinks is a consequence of LoC's policy commitment to ensuring that persistence (in both aspects I outlined above): it is primarily a socio-economic, organisational consideration, not a technical one, and that applies regardless of the URI scheme chosen.

Indeed, it seems to me the creation of LCCN Permalinks suggests that there wasn't really much of a requirement for the creation of the "lccn" info URI namespace. And the co-existence of these two sets of URIs now means that consumers are faced with managing the use of two parallel sets of global identifiers - two sets provided by the same agency - for a single set of resources (i.e. URI aliases). Certainly, this can be managed, using, e.g. the capability provided by the owl:sameAs property to state that two URIs identify the same resource. But it does seem to me that it adds an avoidable overhead, with - in this case - little (no?) appreciable benefit. (Compare the case that I mentioned, also in the post on Linked Data, of URI aliases provided by different agencies, where the use of two URIs enables the provision of different descriptions of a single resource, and so does bring something additional to the table.)
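Just to make the owl:sameAs point concrete, the assertion would look something like the sketch below, assuming the rdflib library; the exact form of the info URI is illustrative, based on the LCCN used above:

    # Minimal sketch: state that the info URI and the LCCN Permalink are
    # aliases for the same resource, using owl:sameAs (rdflib assumed).
    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    g = Graph()
    g.add((
        URIRef("info:lccn/2003556443"),            # info URI form (illustrative)
        OWL.sameAs,
        URIRef("http://lccn.loc.gov/2003556443"),  # LCCN Permalink
    ))

    print(g.serialize(format="turtle"))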

Given the (commendable) strong commitment to persistence expressed by LoC for LCCN Permalinks, it seems to me that anyone using the URIs in the info URI "lccn" namespace could switch to citing the corresponding LCCN Permalink instead - though if only a proportion of the community makes the change, that still leaves services which work across the Web and which merge data from the two camps having to work with the two aliases.

Interestingly, the use of the http URI scheme in association with a domain which was supported by some organisational commitment is exactly the sort of suggestion made by several observers as a viable alternative to the info URI scheme when it was first being proposed. See for example a message by Patrick Stickler to the W3C URI and RDF Interest Group mailing lists (in October 2003!) which uses the LCCN case as an example.

Anyway, all in all, this is a very positive and exciting development. I look forward to the implementation of similar conventions using the http URI scheme by the owners of other info URI namespaces :-)

P vs. P in a user-centric world

I'm currently doing some thinking around the 3 or 4 themes that I want to pull together for a talk at the UCISA 2008 Conference in Glasgow next week.  (Brian Kelly recently blogged about the same talk - it is a joint effort - under the title "IT Services Are Dead – Long Live IT Services 2.0!").

One of the themes I want to touch on is our general move towards user-centricity (is that a word?) and in particular the use of the word 'personal' in both Personal Learning Environment (PLE) and Personal Research Environment (PRE).  I've been labouring under what turns out to be a misapprehension that the P in PLE is used differently from the P in PRE.  Why did I think this?  Well, when I first read the PLE article by Scott Wilson et al, Personal Learning Environments: Challenging the dominant design of educational systems, I must have particularly picked up on this paragraph:

While we have discussed the PLE design as if it were a category of technology in the same sense as the VLE design, in fact we envisage situations where the PLE is not a single piece of software, but instead the collection of tools used by a user to meet their needs as part of their personal working and learning routine. So, the characteristics of the PLE design may be achieved using a combination of existing devices (laptops, mobile phones, portable media devices), applications (newsreaders, instant messaging clients, browsers, calendars) and services (social bookmark services, weblogs, wikis) within what may be thought of as the practice of personal learning using technology.

At the same time I conveniently ignored the following paragraph:

However, for the design to reach equivalent or superior levels of efficiency to the VLE, as well as broader applicability, requires the further development of technologies and techniques to support improved coordination. Some initial investigations include the work of projects such as TenCompetence and the Personal Learning Environments work at the University of Bolton cited previously.

I really like the first of these two paragraphs: it sums up my view of the PLE as a way in which the learner can pick and mix from the wide range of [Web 2.0] services out there on the Web in order to get whatever task is at hand done most efficiently.

I tend to dislike the second, only because it puts one in mind of a portal-like approach, i.e. one where the learner uses some kind of institutional or desktop tool as an access point to the range of external services in which they are interested.  I'm afraid that I've had a somewhat unjustified hatred of the 'portal' word/concept ever since I used it in the early days of the JISC Information Environment work and then had to spend 4 or 5 years explaining that I didn't really mean what people thought I meant!

Anyway... it seems to me that the P in PRE does tend to be used very much in the sense of 'research portal' - a single point of activity that brings together whatever combination of things it is that a researcher needs to do in order to undertake their research.

A couple of days ago, I asked my Twitter followers a question: is a PLE an approach or a bit of software?

To his credit, Scott replied, summing up the PLE concept rather nicely in 140 characters or less as follows:

@andypowe11: environment (web,society,family)+tools(sw, hw, process, technique)+disposition = PLE

I used to have a (regularly broken) rule of thumb that if you can't write something in one side of A4 or less then you haven't thought about it hard enough.  Seeing Scott's reply made me wonder whether that should be downsized to 140 characters - i.e. if you can't tweet it, don't bother!

I remain slightly disappointed that the notion of a PLE has to include some aspect of a tool to aggregate things together (and typically an institutional tool at that) though I suppose I have to grudgingly concede that such a thing is necessary, at least in as much as one needs to tie together assessment-related information based on the learning being undertaken in the PLE.

In terms of the talk, the theme remains pertinent I think.  We are now quite used to using the term 'user-centric' in the context of identity management (particularly OpenID).  But, of course, this trend is more pervasive, covering all kinds of activities and including both learning and research.  Whether there is an in-house aggregation layer (a portal, or PLE, or PRE, or whatever one chooses to call it) to bring the outputs of distributed learning and research activities back together is largely a moot point.  The point is that those activities are increasingly likely to be carried out using services outside the institution and where the institution has varying degrees of control over service level agreements, data protection, and the like.

And despite my negativity, one of the advantages of having that in-house aggregation layer is that it gives the institution some way of pulling external content created by its members back inside the institution where it can be retained as part of the scholarly record or for QAA type purposes, or whatever.

JISC ITTs: lifelong identity management and the role of e-portfolios in assessment

The JISC have a couple of calls for funding out at the moment - on lifelong identity management and on the role of e-portfolios in assessment - which I mention here only because they are very relevant to our own areas of interest within the Eduserv Foundation.

Homework vs. network

British 15-to-19-year-olds admit spending significantly less time doing homework than they used to as a result of their use of social-networking sites such as Facebook, MySpace and Bebo, according to research published today.

So reported the Guardian yesterday, based on the findings of a study produced by Entertainment Media Research for media law firm Wiggin.  I think I could have told them that.  Just come round our house.

Similarly unsurprising: only 13% of men in the 45 to 54 age group (it's my birthday today and I'm rapidly approaching the middle of that group) "regularly browse social-networking sites", as opposed to 55% in the 15 to 19 age group.

But my favorite obvious statement from the report... 79% of people fast-forward thru the adverts in recorded TV shows "most" or "all" of the time.  Err, so what's wrong with the other 21%, can't they find Frank quickly enough?  (Frank->Frank Zapper->Zapper->TV remote control).  As a result, "some advertisers have been experimenting with adverts that make sense only when watched in fast forward". Lol.

On a slightly more serious note, the report claims that "70% of British 15-to-54-year-olds who have illegally downloaded copyrighted material would not do so again if they got an email or call from their ISP".  Well maybe... though I can't say that I'm totally convinced.

March 03, 2008

Second Life snapshot news

As we've mentioned before, the Foundation is currently funding John Kirriemuir to provide a series of rolling reports to update his initial survey of the take-up of Second Life within the UK Higher and Further Education sector, and to try to examine the impact that use of Second Life is having on teaching and learning. The third of John's reports will be available later this month, and to supplement the report itself, John is providing images of some of the SL spaces and resources developed by UK universities and colleges as a set on Flickr, and also a series of posts on his weblog providing up-to-date news of his current investigations in this area.

As John notes, participating in his study offers "an opportunity to promote what you’ve done, and also for like-minded academics to find you".
