
June 29, 2007

A couple of FRBR-related items

Via a post by William Denton on the FRBR Blog, I see that the Joint Steering Committee for Development of RDA has released a mapping of the RDA elements to FRBR (see also the other two documents which William highlights). This would seem to be a significant contribution to the work that was planned at the RDA Data Model Meeting held in London at the start of May, which Andy reported here.

Also this week, I came across a note by David Bigwood from Catalogablog (edit: now also posted on the FRBR Blog!) pointing to work on harmonising the FRBR model and the CIDOC Conceptual Reference Model (CIDOC CRM), an object-oriented model which, broadly speaking, I think aims to fulfil a similar role to FRBR within the cultural heritage domain. In the context of the Web, where resources are used and described by members of different communities with different "worldviews", reflected in models like FRBR and the CRM, the ability to "merge" or at least "map" those different "worldviews" is of considerable importance. I haven't looked at the FRBR-CRM work in detail at all, I hasten to add, but the stated aim of producing something which can 'be "plugged" into CIDOC CRM in order to result in an overall conceptual model for bibliographic and museum information' sounds exciting.

June 22, 2007

Precedings

I note that Nature have announced Precedings:

Nature Precedings is a place for researchers to share pre-publication research, unpublished manuscripts, presentations, posters, white papers, technical papers, supplementary findings, and other scientific documents. Submissions are screened by our professional curation team for relevance and quality, but are not subjected to peer review. We welcome high-quality contributions from biology, medicine (except clinical trials), chemistry and the earth sciences.

Interesting.  As one might expect, blog reaction is mixed... for example, the positive reception by David Weinberger draws some negative comment from those on the institutional repository side of the fence, who argue that repositories (despite the fact that they are largely empty!) already do all of this.

The announcement of Precedings echoes almost exactly the point I was trying to make in my talk at the JISC Repositories Conference and in subsequent posts - we need to stop thinking institutionally and instead develop or use naturally compelling services, such as Precedings, that position researchers directly in a globally social context.

Of course, it remains to be seen whether Nature have got Precedings right, but I think it is an interesting development that deserves close attention as it grows.

Urgent: Educational Material

My daughter has just started an OU course in something or other - I forget quite what!  A package arrived for her the other day by post and I happened to be in to sign for it.  The label on the box made me laugh.  Hey, this is learning... there's no time to waste! :-)

June 21, 2007

Second Life in 3600 seconds - University of Bath

I had originally planned to give this seminar at the University of Bath last week, but we cancelled it because of SL scheduled maintenance - so I'm now doing it next week instead.  Details are as follows:

     
Title: Second Life in 3600 seconds
Start time: Thursday 28 June 2007, 2:00 until 3:30PM
Location: 8 West 3.22, University of Bath
Description: This seminar, presented by Andy Powell, will be an opportunity to see Second Life in action. It will provide an overview of the features of Second Life, with an emphasis on its use in education, using a mix of slides, live demos and discussion.

Second Life (SL) is a 3-D virtual world, a.k.a. a metaverse, which is attracting quite a lot of interest from the global education community because of its potential use in e-learning. Educational applications include virtual delivery of seminars and lectures, collaborative exercises, tutorials and discussions, virtual archaeology, visualisations of architectural history, art and design applications, and so on.

Andy Powell is Head of Development at the Eduserv Foundation. Eduserv is a non-profit UK educational charity that works to realise the effective use of ICT for learners and researchers.

Contact: Artemis Cropper, 01225 386256, a.cropper@ukoln.ac.uk

Thanks to people at UKOLN, particularly Arte, for organising this.

OAI-PMH vs. Atom vs. Sitemaps

For some time now I've been meaning to write a blog entry summarising the functional capabilities of the OAI-PMH and then looking at whether and how the same functionality could be delivered using RSS, Atom or Sitemaps.

Jim Downing has beaten me to it.

I have one minor quibble - which may be to do with my lack of understanding about Atom - in that I don't fully understand what Jim means by:

I have a feeling that the resource representations in Atom / RSS feeds are unlikely to satisfy most repository clients’ needs. Isn’t a more resource-oriented approach to simply link to the resource and let the client negotiate with the resource for an appropriate representation?

That said, I certainly agree with the thrust of his post.
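To make the "resource-oriented" idea in Jim's quote a bit more concrete, here's a rough sketch of my own (the repository URI below is a made-up placeholder): the feed simply links to the resource, and the client dereferences that link and uses HTTP content negotiation to ask for whichever representation it needs.

    import urllib.request

    RESOURCE_URI = "http://repository.example.org/items/1234"  # made-up placeholder

    def fetch(uri, accept):
        # Ask the resource itself for a preferred representation via the Accept header.
        req = urllib.request.Request(uri, headers={"Accept": accept})
        with urllib.request.urlopen(req) as resp:
            return resp.headers.get("Content-Type"), resp.read()

    # A harvesting client might ask for machine-readable metadata...
    ctype, body = fetch(RESOURCE_URI, "application/rdf+xml")
    # ...while an end-user client asks for something human-readable.
    ctype, body = fetch(RESOURCE_URI, "text/html")

Whether the repository actually offers those representations is, of course, another matter - which is perhaps part of Jim's point.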

W3C TAG considering identification in Virtual Worlds

I just noticed while browsing the mailing list archives of the W3C Technical Architecture Group (which are always a good read, I should add) that one of the items currently under discussion is "Naming and Identification in Virtual Worlds". Actually, there are only a couple of posts on the topic at the moment (the thread starts here) but I imagine more will follow.

This relates to a point about the use of SLURLs which Andy discussed in a couple of posts over on ArtsPlace, and more generally one of my interests is in understanding how virtual worlds like Second Life integrate with the Web - with identification being a key part of that - so I'll be following the TAG discussion with interest.

Obsessive-compulsive disorder?

In a posting to the American Scientist Open Access Forum, Sally Morris notes:

It's one of the curious things about the 'Open Access movement' that uptake by the academics themselves (for whose benefit it is supposed to be) depends on compulsion.

I made a similar point, though I suspect for completely different reasons, in my recent posting about repositories:

Yes, we can acknowledge our failure to put services in place that people find intuitively compelling to use by trying to force their use thru institutional or national mandates.  But wouldn't it be nicer to build services that people actually came to willingly?

Stevan Harnad, in his response to Sally, notes that:

But if "compulsion" is indeed the right word for mandating self-archiving, I wonder whether Sally was ever curious about why publication itself had to be mandated by researchers' institutions and funders ("publish or perish"), despite its substantial benefits to researchers?

Touché.

I don't consider myself a real researcher [tm] so I probably shouldn't comment but I've always assumed that "publish or perish" resulted at least as much from social pressure as from policy pressure.  Self-archiving should be the same - it should be the expected norm because it is the obvious and intuitive thing for researchers to do to gain impact.

Powerpoint and glanceability

Tony Hirst talks about glanceability in the context of presentations and makes the point that Powerpoint (particularly within academia) is often mis-used - or at least not used as well as it might be.

In general, I'm guilty of over-egging the text on my slides - hey, it's hard to find suitable images when you are talking about metadata!  Interestingly (to me anyway), I think I often talk longest to the slides with the least text - or at least I'm getting better at doing so - so I don't think I'm guilty of simply reading stuff out.

I think we need to separate out 'slides as scaffolding for the talk itself' vs. 'slides as record of the event'. I tend to put enough text on my slides that they stand alone afterwards, and I'm regularly told off by the marketing people here for doing so.  But in the age of Slideshare, slides have a significant impact well beyond the 30 minutes and 50 people that you are speaking to at the actual event.

I picked one of Tony's Slideshare presentations at random and yes, it looks very nice.  (I genuinely wish I could put together presentations that looked like that).  But I have no real idea what points he was making in his talk - at least not in any detail.

Perhaps we just need to get better at sharing the 'record' in some other form - a text log, an audio/video recording, some slide notes, or something...?

I sometimes wonder if more slides, each with much less text, is a possible compromise?

efsym2007 revisited

On Tuesday we hosted a mini-event in Second Life, a panel discussion as a follow-up to the symposium held back in May. The aim was to provide an opportunity to continue and extend the discussions which had started in the symposium, and particularly to try to focus in on the questions - which perhaps we didn't quite get to grips with as fully as we would have liked in the symposium itself - of how Second Life is being used, or could/should be used, to deliver and support learning and research.

We were fortunate that five of the six speakers at the symposium - Jim Purbrick (Linden Lab), Roo Reynolds (IBM), Hamish MacLeod (University of Edinburgh), Joanna Scott (Nature Publishing Group) and Stephen Downes (National Research Council of Canada) - were able to participate as panelists. All participants were joining the event remotely and we hadn't set up any audio or video streaming for this event, so communication was entirely via in-world chat. I think there were about thirty-odd people in the audience plus the five panel members - enough for us to experience some degree of "lag", but I don't think it was severe enough to cause real problems (though Paul Walk notes his keystrokes being reduced to a crawl, and I think Stephen did lose his connection briefly.)

However, as Andy notes over on his ArtsPlace weblog, we had something of a, ahem, "learning experience" with the use of the PanelPod software which Andy has developed to provide "virtual chairing". (The PanelPod software manages queues of participants who wish to speak and provides prompts for them to start talking when they reach the front of the queue - simulating the role of a human panel chair in a physical meeting). We started the meeting using that system, but it soon became evident that the "structured" approach imposed by the system was inhibiting discussion, and working against our intent that the discussion should be relatively informal. So we switched it off, and the conversation seemed to flow more freely afterwards. (In a comment on Andy's post, David Tebbutt provides some statistics to support that! Thanks, David!)

In terms of the content of the discussion, Andy has posted the full chat log so I don't intend to try to summarise the whole thing here, but I'll try to highlight a few points which emerged:

  • The opening discussion picked up on a question raised by Diana Laurillard during the symposium of whether SL resulted in, or enabled, "new pedagogies". The consensus seemed to be that SL didn't in itself change pedagogical approaches, but that it did provide a new context and that change of context encouraged more thinking about how we teach and learn in that context.
  • In terms of specific practical applications, there was some agreement on the power of providing 3D visualisations in SL, e.g. Nature's work with molecular structures and IBM's with abstractions such as network architectures.
  • (It was round about this point that a question about the use of SL for discussion led to reflections on the use of the queueing system in this discussion, and the conversation switched into an open chat. Although there were a few moments where threads overlapped and possibly a few points were lost along the way, I think it worked out OK.)
  • This prompted a question of the "gaps" between an individual's conceptualisations and their ability to realise those conceptualisations in SL, e.g. the "learning curve" involved in becoming sufficiently proficient in building and scripting to realise some project - though as Algernon Spackler pointed out, that may be a problem with other software tools (or indeed with pen and paper!). And indeed Kimberley Pascal indicated that his experience was that students did take well to building in SL.
  • Algernon suggested that we should take opportunities to "[reach] students where they currently are (whether its Second Life, Facebook, or wherever)" rather than seeking to replicate such systems. This led into two questions: firstly, whether our students really are currently in SL (which I'm not sure we really addressed!), and secondly, how to ensure that their early engagement with SL in a learning context is a positive one. Babbage Linden acknowledged that probably only 10% of people "got" SL, which raised the question of whether there was a requirement "to make SL better" (improve the interfaces etc) or to acknowledge that for some people perhaps virtual worlds were not the most suitable tools. (Edit: between my drafting this post and publishing it, Andy has expanded on these questions over on ArtsPlace.)
  • The question of how to assess learning in SL was also raised, with the suggestion that assessment was more difficult in SL. I admit I'm not sure I quite grasped the points being made here, as I hadn't thought of the challenges of assessing learning carried out in SL as fundamentally different from those of assessment in Web-based learning environments.
  • The role of SL in encouraging a collaborative approach amongst learners, and more broadly the "social dimension" of SL and its "network effects" proved to be a point of debate, particularly between Babbage Linden and Labatt Pawpaw. Babbage suggested that SL facilitated people meeting other people with shared interests ("places and things provide the points around which communities form" and "you go to the space flight museum for the rockets and stay for the people"), and Labatt argued that this was not borne out by his own experience, where many SL places are relatively empty and large numbers are clustered only around venues like casinos.
  • Following in part from the discussion of socialising/networking, and in part from an earlier point about what made SL attractive/compelling, Art Fossett emphasised that one of the central attractions of SL for him was the ability to build (needless to say, I've noticed this from sitting across the desk for the last six months!), and also that building provided an important point of contact with others (e.g. striking up conversations with other builders in sandbox areas). This sparked some debate about the role that building might play within learning and teaching (Graham Mills: "Building is very demanding". Misha Writer: "most teachers & students will not be going to build". Magistra Clary: "A lot depends on the discipline...do law students like to build?".) Art did expand his comments to emphasise that he was adopting quite a broad notion of "building": "for me, the term 'build' is fairly wide - i include 'make a film', run a virtual courtroom, put on a play, etc." (Edit: again, more thoughts from Andy over on ArtsPlace.)

Overall, once discussion got going it did flow quite well, and it was an enjoyable conversation with contributions from a reasonable proportion of the audience (25, according to David's statistics!). However, on one last note, there was one question asked (also highlighted by Paul Walk) which did leave me wondering about how we had approached this particular event. The question (from avatar Lovely Day) was "Who in this room is reading the chat history? And who is looking at the people? And, if the chat history, why bother with SL?" 

For some of the time I was looking at avatars, but mainly because I was trying to capture some snapshots of the event (and at those points I wasn't really following the chat)! When I was looking at the chat log I certainly wasn't panning round the room to find out which avatar was "speaking" at the time. Having spent a few hours pulling together snapshots and assembling these notes, it does seem to me the questioner had a point. Couldn't we have had a similar discussion using IRC or some Web-based chat forum/message board? What did having the discussion in SL really add? Of course, I'm aware that it would have been perceived as rather odd, given the Foundation's recent activity in this area, if we had decided to hold a "virtual" discussion about SL outside of SL, but, OTOH, I still struggle to articulate exactly what holding the discussion in SL brought to Tuesday's exchanges. Which is not to say that I think SL has nothing to offer to such events - not at all, and indeed I found myself agreeing with some of the points made on Tuesday about visual and spatial cues - but that question made me realise that (with the possible exception of the fact that the panel were seated separately from the audience) I made almost no use of those aspects during the event itself. Hmmm.

Still, a useful discussion, I think. Thanks to all who participated.

Facebook and OpenID

There's a short post here, Facebook and OpenID, about the need for fb to support OpenID.  The important point being that it needs to support it as a consumer, not just as a provider, i.e. that it should allow people to use their existing OpenIDs.

Perhaps we need to start a 'we want OpenID' fb group to demonstrate demand? :-)

June 20, 2007

Telling stories

I spent today at the Telling More Stories conference in Wolverhampton, a one day conference about e-portfolios facilitated by Shane Sutherland. It was a good day, with some very positive, though some would say anecdotal, stories about the successful use of e-portfolios in practice.

I came away feeling very inspired.

Shane started the day by (re)defining e-portfolio firmly as a noun – an e-portfolio is a purposeful aggregation of digital items that functions as a representation of a person thru their work, ideas, achievements, reflections, qualifications and so on.  (Shane used slightly different wording to this, but I think the spirit is right).

I like this. I've never understood the view of e-portfolio as service. Sure, there are e-portfolio services that create, manage and consume e-portfolios, but those services are not, in themselves, the e-portfolio.

A good start.

Lawrie Phipps (JISC) gave us two stories about learners using e-portfolios and talked about the wider environment within which e-portfolios now have to sit – the Web 2.0 environment of Facebook, Flickr, Skype, Del.icio.us, Google and so on. Most importantly he presented the three-cornered model of e-portfolio space – formal space, social space and private space. I like this model, though I'm not a big fan of the 'private' label since almost nothing in e-portfolios is completely private - or so it seems to me – it's just that access to some stuff is more restricted than to other stuff.

It also became very clear that the interesting stuff happens in the middle of this triangle, at the intersection between institutional and personal activity, between learning and social, between restricted and public, and that students will want to expose representations of themselves using a mix of institutional e-portfolio systems and Web 2.0 services like Facebook.

I think that this has technical implications on our e-portfolio systems.

In the final plenary I argued that we, as a community, have a nasty habit of inventing heavyweight and complex technical interoperability solutions driven largely, in the case of elearning, by the data-sharing needs of institutions and related bodies – not by the needs of learners. Yet it was abundantly clear during the day that the success stories around e-portfolios centre almost exclusively on personal social interaction. Creating an e-portfolio is a social activity done for social reasons – just like blogging. We know that social tools are supported best by the use of persistent 'http' URIs (cool URIs) and RSS. These are the standards we should focus on if we want to take a learner-centric view of e-portfolio interoperability and, more importantly, if we want to embed e-portfolio systems firmly into the fabric of the social Web.

Most of the rest of the day was about the stories of learners (right across the spectrum of UK education). Anecdotal? Absolutely. But also very interesting.

Favorite quote: "you don't see that on paper", referring to the breadth of content found in e-portfolios but said by someone who had just explained that students in her institution needed to print out their e-portfolios in order to submit them for assessment! :-)

Minor gripe: all conferences should now have an agreed tag, so that blog entries, Flickr photos, etc. can be easily aggregated together after the event.

@everyone the subtleties of chat

Yesterday's near disaster… err, I mean yesterday's well thought thru learning experience… with the slow start of the symposium follow-up session (blogged here, here and here) reminded me that the simpler the communication medium, the more inventive we are with its use. Think SMS, which has more or less led to the invention of a new form of written English – OK, I exaggerate a little! Think plain text email, with its well accepted conventions for quoting and responding in discussion threads.

It's interesting to note that the introduction of HTML email, which should on the face of it have allowed for much richer forms of communication by email, has only succeeded in destroying a lot of those conventions. (Note: on the plus side, it has given corporate-types a way of putting their irrelevant disclaimers in a different font! :-) ).

Twitter (I assume it is Twitter?), as simple a way of doing things as you can get, has introduced a new convention (at least it is new to me) - that of starting a tweet with '@somebody', where somebody is a Twitter name, to indicate that the tweet is directed at a particular person.

We're already beginning to see usage of this outside of Twitter-space – e.g. in email messages and in-world chat. I suspect we'll see more of it.

June 19, 2007

Facebook application growth

Silicon.com has an interesting article describing the rate of growth of Facebook applications - this month's big must-have I guess.  I'd build one myself if I could think of anything useful to build!

In discussion on Brian Kelly's blog I tried (unsuccessfully) to argue that we need to be careful to develop applications that fit with the fb mindset (which broadly speaking is "all about me") and not try to turn it into YAP (yet another portal) just because we are desperate to put our applications (e.g. search the library catalogue) in front of people's eyeballs.

I don't want my fb page covered in hundreds of search applications thank you very much.

But I (reluctantly) concede that others might - and if so, who am I to discourage them :-).  I do think that libraries have a role in supporting and encouraging the "this is what I've been reading recently" type of fb application.

One of the most powerful aspects of fb is that the Facebook Platform allows it to go where its users want it to go.  People will vote with their feet so to speak.

June 18, 2007

Virtual worlds, real learning, revisited

We're running a short follow-up meeting to the symposium on Eduserv Island in Second Life tomorrow at 4.00pm (UK time) - that's 8.00am (Second Life time).  Registration not required.

We hope to relay all the chat from the session into RL using Twitter.  If you are a Twitter user, follow 'eduservisland' to see the discussion.

June 15, 2007

Repository Plan B?

"The most successful people are those who are good at Plan B."
-- James Yorke, mathematician

My post yesterday about real vs. fake sharing in the context of services like Facebook, coupled with my ongoing thinking about what is or isn't happening in the repositories space has made me begin to wonder.

Pete said to me yesterday while we were discussing my Facebook post that he feels very reluctant these days to put content into any service that doesn't spit it back out at you as an RSS or Atom feed.

I completely concur.  It is HTTP, the 'http' URI and the RSS feed (and more latterly the use of Atom) that are the really successful interoperability glue of the Internet. This is brought home most clearly in our daily use of services like Bloglines, Technorati, Facebook and the rest, in the ease with which one can aggregate stuff using the Blastfeeds and Yahoo Pipes of this world, in the way in which one can build whole Web sites simply by pulling together externally held content via their feeds.

But what does this mean for repositories?

Imagine a world in which we talked about 'research blogs' or 'research feeds' rather than 'repositories', in which the 'open access' policy rhetoric used phrases like 'research outputs should be made freely available on the Web' rather than 'research outputs should be deposited into institutional or other repositories', and in which accepted 'good practice' for researchers was simply to make research output freely available on the Web with an associated RSS or Atom feed.

Wouldn't that be a more intuitive and productive scholarly communication environment than what we have currently?
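To give a feel for how little machinery such a 'research feed' would actually need, here's a purely illustrative sketch (all the names, URIs and dates below are invented) that builds a minimal Atom feed of someone's research outputs using nothing but the Python standard library:

    from xml.etree.ElementTree import Element, SubElement, tostring

    ATOM = "http://www.w3.org/2005/Atom"

    def research_feed(author, feed_uri, outputs):
        feed = Element("feed", xmlns=ATOM)
        SubElement(feed, "title").text = f"Research outputs: {author}"
        SubElement(feed, "id").text = feed_uri
        SubElement(feed, "updated").text = max(o["date"] for o in outputs)
        SubElement(SubElement(feed, "author"), "name").text = author
        for o in outputs:
            entry = SubElement(feed, "entry")
            SubElement(entry, "title").text = o["title"]
            SubElement(entry, "id").text = o["uri"]
            SubElement(entry, "updated").text = o["date"]
            # The link points straight at the output itself, on the open Web.
            SubElement(entry, "link", rel="alternate", href=o["uri"])
        return tostring(feed, encoding="unicode")

    print(research_feed(
        "A. Researcher",
        "http://example.org/feeds/a-researcher.atom",
        [{"title": "A study of something or other",
          "uri": "http://example.org/papers/2007/a-study.pdf",
          "date": "2007-06-15T00:00:00Z"}],
    ))

The point isn't the code, of course - it's that the bar for "making research outputs freely available on the Web with a feed" is very low indeed.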

I completely accept the argument put forward by others (e.g. see Rachel's comments on my previous post or Jim Downing's recent blog entry) that a repository is about more than just discovery, sharing and access - it's about management (content management? :-) ), curation, preservation, whatever, ...  But until we foster, support and build on the social networking aspects of our learning and research communities properly, we simply will not have enough content in repositories to bother worrying about anything else.

This is not simply about saying we should give more prominence in our repository activities to supporting RSS than we do to supporting the OAI-PMH (though I happen to think that we should - for the purely pragmatic reason that it is more widely adopted) - it's about our whole attitude and approach to 'repositories' (if that is what we insist on calling them) as social tools.

In my JISC talk I positioned ArXiv as one of the first Web 2.0 services (somewhat tenuous I know, since it predates the Web).  Like almost all successful Web 2.0 services, ArXiv is global in nature - it positions and engages its users directly within a global context that means something to them as researchers.  Since then, we have largely attempted to position repositories as institutional services, for institutional reasons, in the belief that metadata harvesting will allow us to aggregate stuff back together in meaningful ways.

Is it working?  I'm not convinced.  Yes, we can acknowledge our failure to put services in place that people find intuitively compelling to use by trying to force their use thru institutional or national mandates.  But wouldn't it be nicer to build services that people actually came to willingly?

In short, do we need a repositories plan B?

Repositories roadmap, cont...

I wanted to respond to some of the comments made on my presentation to the JISC Repositories Conference, you know... the one where I waffled on about Web 2.0 and sadly concluded that we need to take a different approach :-)

In the comments to my original blog entry Herbert says:

I do not get the point you are making re "OAI becoming less important". OAI (I think you mean OAI-PMH) fits under the Resource Discovery category of this blog. Just like RSS, and Sitemaps do. Nothing more, nothing less. And I think resource discovery is and remains important and the OAI-PMH offers one approach to allow for batch discovery of resources. Unfortunately, not because of some flaw in OAI-PMH (I think), but rather because of ambiguities in unqualified Dublin Core (or in the implementation thereof) regarding referencing actual resources by means of their URIs, many OAI-PMH harvested records turn out to be of little use to the major search engines. As far as I understood from discussions with Google people, this is the major reason why they do not promote (do not read "do not use") the OAI-PMH as a way to discover resources. Replace unqualified Dublin Core by some more meaningful resource description approach (I am, for example, thinking OAI-ORE serializations of named graphs; see Pete's entry), and I think that OAI-PMH still has quite something to offer in the realm of resource discovery.

I'm a theoretical fan of the OAI-PMH, but the sad fact is that it hasn't changed the world in the way that RSS has.  I don't know quite why but I don't think that one can simply blame DC.  I suspect it has to do with complexity, not just at the protocol level, but also in terms of our inconsistent modeling of the objects in repositories and the way we use OAI to expose metadata about them.  The primary problems with the use of DC in repositories are to do with the fact that it is used to disclose metadata about different kinds of objects in different repositories - conceptual 'works' in some, digital 'items' in others, and so on.  Add the conceptually challenging notion of an OAI 'item' into the mix and you have a potential problem.  That's not a problem with DC it seems to me, but with an inconsistency in how we see the world we are dealing with.
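By way of a contrived illustration of that inconsistency (both records below are invented), consider two superficially similar simple DC records harvested from two repositories - one describing the conceptual 'work', the other describing a particular PDF. A harvester has no reliable way of telling which is which:

    # Repository A exposes a record about the 'work'; its identifier points at
    # a splash page rather than at any full text.
    record_from_repository_a = {
        "dc:title": "A study of something",
        "dc:creator": "A. Researcher",
        "dc:identifier": "http://repo-a.example.org/works/1234",
        "dc:type": "Text",
    }

    # Repository B exposes a record about a digital 'item'; its identifier
    # points at the PDF itself.
    record_from_repository_b = {
        "dc:title": "A study of something",
        "dc:creator": "A. Researcher",
        "dc:identifier": "http://repo-b.example.org/files/1234.pdf",
        "dc:format": "application/pdf",
    }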

I also tend to think that the fact that OAI-PMH adopts a service-oriented approach where RSS and Atom adopt a resource-oriented approach is probably significant.  It's about fitting in with the way the Web works.

Herbert continues:

Apart from this, I would like to also agree with Pete when he suggests that the eventual OAI-ORE approach will most likely not be complex. The theory may look complex at this moment, I think the practice should be relatively simple. I think we agree that simplicity is a major factor when it comes to getting buy-in for interoperability specs. We have learned lessons from the past.

I think we could argue about simplicity vs complexity for a long time - and we probably wouldn't strongly disagree.  From my point of view it kind of misses the point.  The key issue for me is that we haven't managed to build a set of repository services that support the social dimension of what we want to achieve - improved scholarly communication.  RSS, it seems to me, is one of the features that has allowed Web 2.0 services to flourish - something we are still struggling to achieve with repositories in any real sense.  I'll return to this in my next post.

Rachel said:

In my view (and this is what I have argued in the past) it is significant that institutional repositories are 'well-managed' and for this reason have a level of sustainability and trustworthiness over and above an individual academic's or even a department's Web site. There are a number of actors involved with 'repositories' - depositors of content; searchers and users of content; repository administrators (ultimately the institution). That the repository is 'well-managed' (sustainable, backed up by institutional mandates, trusted) is an important characteristic which should encourage in particular the depositor to populate the repository and the administrator to keep content safe.

To that extent I think the repository is a particular sort of 'Web site', it has institutional commitment to keep it up-to-date and high quality.

Over and above that characteristic, I would suggest the manner in which the 'repository' interfaces with both the depositor and the searcher (both of whom might be considered as consumers I think?) can be as much Web 2.0 as you like....

I don't strongly disagree with any of this - yes, stuff needs managing somewhere - except that I think it interprets 'Web 2.0' as having a technical/technology meaning, whereas I'm more interested in the social aspects - Web 2.0 as attitude.  For example, in my talk I described ArXiv as the first (academic) Web 2.0 service, even though it pre-dates the Web.  This attitude needs to pervade every aspect of the way we deploy repositories - it's not just a surface gloss that we layer on top of an existing bit of software.

I accept that I'm finding it hard to get my thoughts across here.  Part of the problem, I think, is that we all interpret Web 2.0 as meaning slightly different things. I'll try and clarify some of this in my next post...

June 14, 2007

Creative Commons, open licences and cultural heritage

We have agreed to fund Jordan Hatcher, formerly a Research Associate at the AHRC Research Centre for studies in Intellectual Property and Technology Law, to undertake a study into how open content licences are currently being used by cultural organisations in the UK.  Get in touch with Ed Barker if you want to know more.

June 13, 2007

Bashing in Bolton

At the end of last week I spent a couple of days at the 4th JISC CETIS CodeBash at the University of Bolton.

I think the CodeBash events are, or at least have been in the past, aimed primarily at those individuals developing and/or working with software tools which implement various specifications and standards used in the e-learning sphere. They provide an opportunity for some very concrete explorations of technical interoperability ("If my tool exports/exposes an instance of format XYZ, what happens when your tool imports/consumes it?", "How does a title search on my system A compare with a title search on your system B?", and so on.) Also, since several JISC CETIS staff are closely involved in the processes of developing specifications, they allow participants to provide quite detailed feedback on specifications - especially on versions that are under development.

There were about 20 people at the meeting in Bolton, and probably a slightly smaller number joined the event remotely at some point, using the Adobe Acrobat Connect/Macromedia Breeze facilities provided by SURF (JISC's counterparts in the Netherlands (roughly)).

As I readily admitted when I introduced myself to the other participants, I felt I was perhaps there "under false pretences" as it's probably twenty years since I've considered myself a "coder" (well, in my work time anyway - I occasionally dabble at weekends, mainly with PHP, but of late that has been in a fairly desultory fashion!). In part I was curious to see how the events worked, but I was also interested in finding out what were the areas of concern/interest for the e-learning developers working "at the coalface".

I suppose my broader interest is in trying to see when what may appear to be problems and challenges specific to a domain or community have similarities to more general problems and challenges, or those encountered in other domains. As Andy discussed in a couple of posts some time back, for example, an "item bank" is pretty much a specialised form of repository, and many of the approaches taken to providing functions in systems which call themselves repositories are probably equally applicable to systems which call themselves item banks. (In particular, I'd hope that specifications such as the Atom Publishing Protocol would find a wide adoption, as it seems to me it addresses some core functions that are common to many different types of systems which essentially manage "a collection of member resources".)
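To give a flavour of what I mean by that (a sketch of my own rather than an example from any real item bank - the collection URI below is a made-up placeholder), adding a member to such a collection under the Atom Publishing Protocol is just an HTTP POST of an Atom entry to the collection URI:

    import urllib.request

    COLLECTION_URI = "http://itembank.example.org/collections/questions"  # placeholder

    entry = b"""<?xml version="1.0"?>
    <entry xmlns="http://www.w3.org/2005/Atom">
      <title>Question 42</title>
      <content type="text">What is the airspeed velocity of an unladen swallow?</content>
    </entry>"""

    req = urllib.request.Request(
        COLLECTION_URI,
        data=entry,
        headers={"Content-Type": "application/atom+xml;type=entry"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # A conforming server responds 201 Created and says where the new
        # member resource now lives.
        print(resp.status, resp.headers.get("Location"))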

The event opened with some short presentations giving updates from groups working on specification development:

  • Wilbert Kraan from JISC CETIS reported on developments in the forthcoming version 1.2 of IMS Content Packaging and on progress in the IEEE RAMLET project. One item which prompted a fair amount of discussion was the indication of a plan to develop what I think Wilbert referred to as a "safe" profile of IMS CP - a very minimal feature set which a system should support.
  • Steve Lay from CARET, University of Cambridge, reported on work on version 2.1 of IMS Question & Test Interoperability (QTI).
  • Angelo Panar from ADL briefly discussed recent work around SCORM, including some of the suggested organisational changes intended to reflect the fact that the SCORM user community is considerably broader than the military community within which SCORM was initially developed. (As Wilbert notes in an earlier post, this is not an uncontroversial topic!)

I gathered from the conversations around me that in previous events in the CodeBash series, the emphasis had been mainly on exploring the various flavours of learning object metadata specifications and the IMS Content Packaging specification, whereas this time the topics of discussion were rather more wide-ranging - and indeed weren't at all limited to testing low-level interactions between applications. If there was an emphasis on some particular specification (or set of specifications), it was probably on IMS QTI, with representatives from a number of projects developing tools (authoring tools, item banks, delivery tools/"players") in that area.

I think a lot of the work at the event took place in exchanges between small groups of participants - or even just a couple of participants. And unless you were a party to the individual conversation, it wasn't easy to get a sense of what was going on - I entered one of the Breeze rooms on the Friday morning and discovered quite a long chat dialogue between a local participant and a remote participant that I hadn't been aware of. That's not meant to be a complaint - not at all! - just an observation on the nature of the meeting. And indeed I did do a certain amount of lurking and looking over people's shoulders to listen in on some of those conversations (in at least one of which the participants seemed to be on their way to developing a proposal for a short JISC project!)

In terms of practical/technical input, I'm not sure how much I actually contributed to the event, to be honest ;-) But from my viewpoint it was an interesting experience none the less, both in terms of seeing what sort of issues the learning technologists are grappling with and in terms of trying to fit that into a broader landscape.

Be aggregated as you would aggregate unto others

There should be a word for Web 2.0 services that are open in the sense that they allow you to aggregate content from other sources but that are closed in the sense that they don't expose content in forms suitable for downstream aggregation by others.

Perhaps the 'real sharing' vs. 'fake sharing' labels referred to a while back suffice?

Facebook is a good example...

Facebook is great for aggregating content from other sources, pulling it all together nicely in a single place.  For example, I have recently experimented with using Blastfeed to aggregate my two current blogs (this one and ArtsPlace SL) into a single feed that can be used as a source for my Notes area on Facebook.  (Note that this could probably be done more flexibly using Yahoo Pipes, something I'll experiment with in due course).
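For what it's worth, the aggregation step itself is simple enough to sketch locally in a few lines of Python - this is just an illustration using the feedparser library, and the feed URLs are placeholders rather than the real ones:

    import feedparser

    FEEDS = [
        "http://efoundations.example.org/atom.xml",   # placeholder
        "http://artsplace.example.org/atom.xml",      # placeholder
    ]

    entries = []
    for url in FEEDS:
        entries.extend(feedparser.parse(url).entries)

    # Most recent first, across both blogs.
    entries.sort(
        key=lambda e: e.get("updated_parsed") or e.get("published_parsed") or (0,),
        reverse=True,
    )

    for e in entries[:10]:
        print(e.get("title"), "-", e.get("link"))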

But, as Pete said in reply to my previous Facebook posting, Facebook appears to be pretty much useless if you want to expose any content you upload into it (photo albums, wall writings, notes, etc.) for aggregation by other services.  For example, I'd love to be able to 'read' Facebook using Bloglines, as a simple way of keeping track of what is happening.

Pete and I have recently joined the Atom/RSS Feeds for Facebook group in Facebook - a group dedicated to improving this situation.  I'm keeping my fingers crossed.

June 11, 2007

JISC Online Conference - Innovating e-Learning 2007

I've spent part of today taking part in this year's JISC Online Conference, not least because I'm running a 'showcase session', Barriers to mainstream adoption of Second Life for teaching, tomorrow.

The main themes of this year's conference are institutional transformation and supporting lifelong learning.  I have a slight dislike of the software used to deliver the conference (I mean, it's not terrible, but the handling of discussion threads is pretty weak compared to something like Gmail, and overall it doesn't feel very Web 2.0 to me - which is ironic given the prominence of discussions around Web 2.0), but that said, I'm very pleased to be taking part.  This is a novel experience for me and, as someone who has tried to build some thinking around our carbon footprint into the Eduserv Foundation strategy, something I want to do more of in the future, assuming it works.

Most importantly, the papers so far, and the discussions around them, have been very good.

Twitter... wot?

I have also been playing with Twitter.  Somewhat briefly admittedly... but I thought I'd try tweeting or chirping or whatever it is called for a few days to see how I got on.

I have to say that I'm a little bemused by what is going on.  I mean, I'm sure there are use-cases here that Twitter is good for, but my tiny brain hasn't spotted them yet I'm afraid.

Part of the problem might be that I only have 3 friends (sad but true).  If anyone wants to come and keep me company (assuming that I haven't completely lost interest by then), I'm http://twitter.com/andypowe11.  Give me a tweet!

Facebook - an academic tool?

I've just joined a Facebook group called "Dump LinkedIn and other networks in favour of Facebook".  The introductory blurb says:

Just a quick note to let you know that I won't be updating Linkedin any more. I now use Facebook as it shows the personality behind the person. It also allows you to integrate all other applications and social network activity such as Flickr, Twitter, Blog feeds etc.

Facebook also allows you to see who's in my network and it's easy to add them to yours if it's appropriate.

Makes a lot of sense to me.  As far as I can tell, everything that LinkedIn does, Facebook does better - and Facebook does a whole lot more besides.

I quite like Facebook.  It's very easy to use, the interface is nice, and the ability to integrate other applications and build your own tools is very powerful.

So... is it a useful tool for academics?  I dunno, but I think it could be.  Some things would need to be improved, I think.  Here are some initial suggestions:

  1. A simple way of incorporating lists of publications and presentations - probably via an extended RSS or Atom feed.
  2. Some additional types of "How do you know X?" options including "Collaborated on Y" and "Met at Conference Z".
  3. Easier citation of entries in boards.

There's probably a lot of other stuff as well, but that's what I've spotted so far.

BTW, my Facebook page is at http://www.facebook.com/profile.php?id=688401359.

June 08, 2007

Second Life: a personal view

I was asked by Eduserv's PR company for a couple of paragraphs about my views on Second Life.  They'd looked at this blog (and possibly at the ArtsPlace SL blog as well) and hadn't found anything suitable.  I realise now that they are right... despite blogging and speaking fairly often about Second Life, I've never been particularly up front about my views on it.

So here goes...

Well, firstly, I guess that it is pretty obvious (both from my blog entries, particularly those on ArtsPlace SL, and to those of you that can see me online thru the friends list in-world) that at a personal level I really like Second Life and that I'm spending a fairly decent amount of time in-world these days.  I try not to count the hours(!) but I admit that I tend to use Second Life most days, usually during the UK evening.  So, if you need to find me, that's where to look!

Why do I like it?  That's harder to say!  I've never been a big gamer - in fact I can only think of one occasion in my life where I've got hooked by any kind of gaming software (Tony Hawk's 1 on the PlayStation if you are interested, and even that didn't last long).  Before anyone screams, I know there's an issue about whether Second Life should be called a 'game' or not.  For what it is worth, I tend to think that it shouldn't but that it shares some superficial gaming characteristics.  One way or the other, it doesn't strike me as a major issue.

But I digress... I am a confirmed 'hacker' (I use that term in its positive sense) and have been more or less ever since I discovered the joy of programming sometime back in the late 70s.  I see hacking/programming as a craft (not as engineering, though I'm a Software Engineer by degree, and certainly not as a science) and, for me, it works best (i.e. is most enjoyable) when it can be combined with some level of design - whether that is interaction design or visual design or whatever.

Second Life, with its combination of building skills and programming skills, seems to me to bring these things together very nicely.  I think that's probably why I like it so much.

So, what about from a professional perspective?  Why has the Eduserv Foundation got interested in Second Life?  Why did we hold the symposium and fund Second Life projects in this year's round of grant making?

Well, it seems clear to me that there is significant interest within the education community in the use of 3-D virtual worlds in learning.  This interest was most visible to us from the reaction we got to the symposium and the grants call, both of which swamped us with responses in the area of Second Life.

You'll note that I seamlessly switched from the generic, 3-D virtual worlds, to the specific, Second Life, there without any problem.  This isn't surprising to me.  I made the point at the beginning of the symposium that Second Life is where most of the 3-D virtual world learning action is at the moment - so it makes perfect sense, to me, for us to focus our attention on it.  The Second Life brand is the Hoover of the 3-D virtual world space at the moment - or so it seems to me.

Whether this level of attention is justified is another matter of course.  As with the early days of the Web, what we are seeing at the moment is a lot of experimentation - with no-one being quite sure what works well and what doesn't.  We're seeing lots of people in the education sector getting excited, getting involved, getting in-world, and then trying to work out what the hell they are going to do when they get there.  Those people are usually operating alone or in small units - there is still little high-level strategic commitment to Second Life or 3-D virtual worlds.

I see the Foundation's role as helping to move our understanding forward in this area - helping to facilitate a debate about what works and what doesn't.  And if appropriate, helping to grow that ground-level excitement into something more permanent.

I have a gut feeling that 3-D virtual worlds are going to play a significant role in education in the future - but no more than that right now.  Part of my interest in helping with the debate is because I am genuinely interested in where things are going in this space.

On Second Life itself, I think it has strengths and weaknesses - but that is pretty much inevitable at this stage of the game.  I don't know whether the future lies with Linden Lab or not, and I don't really much care.  I don't mean that offensively - by and large I think that Linden Lab are doing a great job - I just mean that I see what people are doing now in Second Life as an important part of the learning process - an essential part of the debate I mentioned above - and it doesn't seem particularly critical to me whether we are still using Second Life itself in 2 or 5 or 10 years time or whether we are using something else.  Other environments will come and go and we, as a community, will adapt to them - and hopefully help them adapt to us!

At the symposium, there was a significant debate about the commercialisation of Second Life (and, indeed, of education itself).  I must admit, I don't buy the negative side of that debate - the side that says that Second Life can't be usefully used in education because it is a commercial enterprise that supports a commercial virtual world.  Perhaps I'm missing something, but to me, that feels like a non-issue - or at least, it feels like an issue that is already with us in almost every other aspect of education!  Perhaps I'll return to this in a future post.

Anyway, that's a quick summary of my views.  If you are reading this at the PR company, I hope it helps - if it doesn't, drop me a line and I'll have another go!

June 07, 2007

Good grief...

...somebody's got too much time on their hands! :-)

June 06, 2007

Refining ORE

I spent the early part of last week in New York for the second meeting of the OAI ORE Technical Committee, which - thanks to the participation in the project of Google's Rob Tansley - took place at the impressive New York offices of Google in downtown Manhattan. (Edit: Ooh, I just noticed that Tony Hammond of the Nature Publishing Group includes a rather cool photo of the view from the offices in his post about the meeting.) 

Since the first TC meeting in January, the group has held a number of telecons, and the ideas we discussed in January have been refined and expanded, and this has been reflected in the content of a paper authored by Carl and Herbert which has now gone through several iterations. The latest version of that paper "Compound Information Objects: An OAI-ORE Perspective" is now available. See also the announcement by Herbert to the oai-implementers mailing list in which he invites comments on that document (Comments on that document to ore at openarchives.org rather than here, please!)

One of the significant steps forward, I think, has been to conceptualise the "descriptions" of "compound objects", and of the relationships between those objects and their component resources, as graphs, and further, to recognise - drawing on work within the Semantic Web community, particularly the work on "Named Graphs" by Jeremy Carroll and colleagues - that those graphs are resources in their own right - resources which are related to, but distinct from, the resources referenced in the graph - and they can be identified and referred to, just like other resources.
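If it helps, here's a toy illustration of that idea - very much my own sketch rather than anything from the ORE documents: all the URIs and the 'aggregates'/'createdBy' properties are invented, and it uses the rdflib library. The point is simply that the graph describing the compound object is itself a resource with its own URI, about which further assertions can be made:

    from rdflib import ConjunctiveGraph, Namespace, URIRef

    EX = Namespace("http://example.org/terms/")

    compound = URIRef("http://example.org/objects/article-1")      # the compound object
    graph_uri = URIRef("http://example.org/graphs/article-1-map")  # the graph describing it

    store = ConjunctiveGraph()
    g = store.get_context(graph_uri)  # a named graph, identified by graph_uri

    # The compound object aggregates its component resources.
    g.add((compound, EX.aggregates, URIRef("http://example.org/objects/article-1/text.pdf")))
    g.add((compound, EX.aggregates, URIRef("http://example.org/objects/article-1/dataset.csv")))
    g.add((compound, EX.aggregates, URIRef("http://example.org/objects/article-1/video.mov")))

    # Because the graph has a URI of its own, we can say things about the graph
    # itself - who created this particular description, for example.
    g.add((graph_uri, EX.createdBy, URIRef("http://example.org/people/pete")))

    print(store.serialize(format="trig"))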

The document emphasises that there are issues still under discussion, and indeed for most of our time at the meeting last week, we grappled with the questions raised in Section 7 of that document, and particularly around questions of identity and referencing.

FWIW, I think the main arguments I made at the meeting were:

  • Given the very broad definition/description of what constitutes a "compound object" presented in the opening paragraph of the paper (particularly the example of "a scholarly publication that is aggregation of text and supporting materials such as datasets, software tools, and video recordings of an experiment"), it seems to me there is no fundamental difference between the two scenarios presented in section 7. A "composite" made up of several previously unrelated items, and created through some algorithmic selection process (Case 1), is every bit as much a "compound object" as a digitised book made up of a number of pages (Case 2). In both cases there is an "aggregate" resource which is distinct from, but related to, its "component" resources - and distinct also from a graph which describes the relationships between that composite and its components.
  • (In each case) the relationships between "compound object" and its "component" resources should be explicitly asserted (in one or more graphs).
  • The "compound object", its "component" resources, and the graph(s) describing those resources may be created by different agents at different points in time, given different names/labels/titles, associated with different conditions of use (etc etc), and it may be useful/necessary (particularly when dealing with issues of trust, authority, provenance) to make that information available. 
  • If it is necessary/useful to refer to a resource, then consideration should be given to assigning a URI to that resource (following the Web Architecture Good Practice note "Identify with URIs").
  • If distinct URIs have been assigned to distinct resources, then we must be consistent in our use of those URIs to refer to those resources. If the owner of URI X says that it identifies resource A, then it introduces ambiguity if we then use that same URI to refer to resource B (again, this reflects the principles of the Web Architecture: Constraint "URIs identify a single resource"). (This seems particularly important given the third point above.)

Ah, just the usual stuff I tend to bang on about, I suppose ;-)

As a footnote, I noticed that in his post on Monday, Andy referred to the OAI ORE approach as, potentially at least, "relatively complex". I suppose complexity is always relative, but I still hope (though readers of the preceding list of points may already be concluding that I am skating on thin ice with such an aspiration!) that whatever specifications emerge from the project will turn out to be relatively simple - simpler than the ePrints application profile, certainly - and that they will be firmly rooted in the principles of the Web. At the moment, I think it seems complex in part because we've been working through a process of arriving at a shared conceptualisation, and (as always during such processes?) that has involved a certain amount of (occasionally fraught!) "negotiation" as we tried to understand each others' perceptions and particularly get to grips with each others' use of terminology.

Also, as Peter Murray from OhioLINK, a fellow member of the TC, mentions in his recent post on the ORE work, the ideas that are presented in the current paper will almost certainly be further refined and will be subject to some further "repackaging for public consumption" (sorry, that's my paraphrasing, not Peter's words!).

June 05, 2007

Preliminary Programme for DC-2007 announced

DCMI has announced the publication of the preliminary programme for the DC-2007 conference, which this year is to be held in Singapore at the end of August. Both Andy and I were/are members of the Programme Committee.

The "theme" for the conference is "Application Profiles: Theory and Practice", and although my recent experience of DC conferences has been that in reality the theme is at best somewhat loosely adhered to, this year I think there will be some interesting contributions around this specific area. I'm particularly looking forward to hearing from Mikael Nilsson about his work on a model for a Description Set Profile and seeing how that is received by the conference audience. From looking at the programme, I think that is set to be the main subject of discussion in the two sessions related to the DCMI Architecture Forum on the afternoon of the first day of the conference.

As I argued a while ago, the absence of a formal specification of what constituted a DC metadata application profile has been slightly problematic because it has left the way open to a range of (sometimes incompatible) interpretations of that notion. This current work, it seems to me, will be a significant step towards putting in place a key "missing piece" of the jigsaw centred on the DCMI Abstract Model - and I'd hope that it will result in a very useful specification for DC implementers.

June 04, 2007

The Repository Roadmap - are we heading in the right direction?

I've been asked to provide the opening slot at the JISC "Digital repositories: Dealing with the digital deluge" conference in Manchester, starting tomorrow.  My slides are now up on Slideshare.

I'm going to start with a fairly boring overview of the Repositories Roadmap that Rachel Heery and I wrote for the JISC last year (you can have a lie in if you like!) followed by some discussion around the way that our environment is changing, largely because of Web 2.0.  The intention is to ask whether we need to adjust the roadmap to take account of those changes.

The roadmap is now one year old, and was written to paint a picture up until 2010.  So we are roughly 25% of the way there - a useful opportunity to look back and see how we are doing.  Re-reading the roadmap now, I think we did a pretty good job and there's not much that I would strongly take issue with.

Oddly though, the document makes precious little mention of the Web.  One might argue that there was no need to state the obvious - or that the Web was just a given.  But I'm not so sure.  One of the things I want to argue in the presentation (though I know that this is something that Rachel, my roadmap co-author, strongly disagrees with) is that, from the perspective of consumers, repositories are just Web sites.  Somehow, it almost feels like heresy to say so - I don't know why!?  But conceptualising them as such changes the emphasis, I think.  It pushes things like information architecture, the Web architecture, Google, accessibility, usability, URIs and so on to the fore - metadata, OAI and the like seem to become less important.  To me anyway. Perhaps I'm just strange! :-)

These are not straightforward issues, and I don't pretend to have any answers - but that doesn't make the question any less pertinent or interesting.  In fact, I'm very mindful of the tension between the relatively complex, essentially Semantic Web, metadata modeling issues being addressed by activities such as the OAI ORE project and the ePrints Application Profile work and the relatively simple, tag-based, approaches taken by Web 2.0 repository-like applications such as Slideshare and Scribd.

Unfortunately, I lean uncomfortably in both directions!

June 01, 2007

The REST book is out!

The eagerly anticipated (by me, anyway) O'Reilly book on REST by Leonard Richardson and Sam Ruby (as previewed by the authors here) has been published.

On a couple of occasions when I've talked about REST and "resource-oriented" approaches to developing Web applications at meetings or during presentations, a few people have responded along the lines of, "It's all very well talking about Roy Fielding's dissertation and the REST wiki and so-and-so's weblog, but, come on: if this stuff is so important, where's the O'Reilly book?"

I may continue to struggle with the questions about POST and PUT, but now at least I'll be able to answer that one.

I've just ordered a copy from Amazon. I'll report back properly when I've read it!

P.S. In his review, Jon Udell mentions that he has interviewed the authors for a future ITConversations broadcast, so it'll be worth keeping an eye out for that too.
