
July 26, 2007

The Open Library

What if there was a library which held every book? Not every book on sale, or every important book, or even every book in English, but simply every book—a key part of our planet's cultural legacy.

First, the library must be on the Internet. No physical space could be as big or as universally accessible as a public web site. The site would be like Wikipedia—a public resource that anyone in any country could access and that others could rework into different formats.

Second, it must be grandly comprehensive. It would take catalog entries from every library and publisher and random Internet user who is willing to donate them. It would link to places where each book could be bought, borrowed, or downloaded. It would collect reviews and references and discussions and every other piece of data about the book it could get its hands on.

But most importantly, such a library must be fully open. Not simply "free to the people," as the grand banner across the Carnegie Library of Pittsburgh proclaims, but a product of the people: letting them create and curate its catalog, contribute to its content, participate in its governance, and have full, free access to its data. In an era where library data and Internet databases are being run by money-seeking companies behind closed doors, it's more important than ever to be open.

So let us do just that: let us build the Open Library.

So says the About Us page on the Open Library demo site, which has been widely covered in the blogs recently.  It's hard to disagree with where they are coming from, and it'll be very interesting to see how they get on.  It will also be interesting to see how disruptive this proves to be to the library world in general.

Applying an open Wiki approach to the library catalogue is a neat idea, though it seems odd to me that the underlying metadata schema doesn't make more reference to FRBR?  As I understand it, library cataloguers describe the item in hand.  Trouble is, in a distributed world it is presumably hard to reach agreement about what the item in hand is - there are many hands, after all!  So it seems to me that one of the significant challenges facing any distributed and truly open approach to library cataloguing is getting contributors firstly to recognise when any two items are actually the same thing (same item) and secondly to understand the relationship between them when they aren't (same manifestation? same expression? same work?).

On the other hand, perhaps I'm just asking old fashioned questions?
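To make that matching challenge concrete, here's a minimal sketch (illustrative Python, not any real cataloguing system) of how two contributed records might be compared at different FRBR levels, using a normalised title/author pair as a crude stand-in for work matching and an ISBN as a stand-in for manifestation matching:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """One contributor's description of 'the item in hand'."""
    title: str
    author: str
    language: str   # distinguishes expressions (e.g. a translation)
    isbn: str       # distinguishes manifestations (a published edition)

def work_key(r: Record) -> tuple:
    # Same work: same underlying creation, however published or translated.
    return (r.title.casefold().strip(), r.author.casefold().strip())

def expression_key(r: Record) -> tuple:
    # Same expression: the same work realised in the same language/version.
    return work_key(r) + (r.language,)

def manifestation_key(r: Record) -> tuple:
    # Same manifestation: the same embodiment (here, simply the same ISBN).
    return (r.isbn,)

a = Record("Moby-Dick", "Herman Melville", "en", "978-0-14-243724-7")
b = Record("moby-dick ", "herman melville", "fr", "978-2-07-040066-4")

# Two contributors have described the same work, but different
# expressions (a translation) and different manifestations (editions).
print(work_key(a) == work_key(b))                    # True
print(expression_key(a) == expression_key(b))        # False
print(manifestation_key(a) == manifestation_key(b))  # False
```

Real matching is of course much messier than casefolded strings and ISBNs, which is rather the point: every contributor has to apply the same rules for any of these keys to agree.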

Dublin Core Description Set Profile model: draft available

A while ago I mentioned that I was looking forward to seeing the outcomes of Mikael Nilsson's work on a model for what is being called a Dublin Core Description Set Profile. Mikael has recently circulated an early draft of a specification for such a model (and for an XML format and RDF representation) to the DCMI Architecture Forum for comment.

As Mikael notes in his message to the mailing list, "this is a work in progress, still in the design phase", but this is a very important and useful piece of the DCMI jigsaw, I think, and I'd encourage anyone involved in the design/development of Dublin Core "application profiles" to have a look at the draft and to consider whether it does the job of capturing the sort of constraints which are typically described in such profiles.

Comments on the draft itself to the dc-architecture Jiscmail list, please, rather than here.

July 25, 2007

APP moves to Proposed Standard

Via Tim Bray, the announcement that the Atom Publishing Protocol has been given the status of an IETF Proposed Standard.

One of the comments on Tim's post, from Peter Keane, reflects, I think, what Andy was suggesting in a couple of recent posts and also emphasises the importance of advocacy and education to explain what the standard can enable, and the provision of tools which support it:

The question (I think) is whether folks will be able to recognize it as the 'right tool for the job'. It can be a simpler Dublin Core (base line metadata schema), a simpler WebDAV (transfer protocol), a simpler OAI_PMH (protocol for metadata harvesting). As an application developer I need those protocols to tie together increasingly distributed systems. If the tools and libraries (mod_atom +1) become ubiquitous, it ought to work. If on the other hand, it is seen as simply something for reading and writing to blogs, perhaps not.
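Part of APP's appeal as the 'right tool for the job' is how little there is to it: creating a resource is just an HTTP POST of an Atom entry to a collection URI. The sketch below builds a minimal entry and shows the shape of that request; the collection URL is hypothetical and nothing is actually sent:

```python
# Sketch of an APP create request: POST an Atom entry to a collection.
# A successful response is 201 Created with a Location header pointing
# at the new member resource. The host below is made up.
from xml.etree import ElementTree as ET

ATOM = "http://www.w3.org/2005/Atom"
ET.register_namespace("", ATOM)

entry = ET.Element(f"{{{ATOM}}}entry")
ET.SubElement(entry, f"{{{ATOM}}}title").text = "APP moves to Proposed Standard"
ET.SubElement(entry, f"{{{ATOM}}}id").text = "urn:example:entry-1"  # server may reassign
content = ET.SubElement(entry, f"{{{ATOM}}}content", type="text")
content.text = "The Atom Publishing Protocol is now an IETF Proposed Standard."

body = ET.tostring(entry, encoding="unicode")

request_lines = [
    "POST /collection HTTP/1.1",
    "Host: example.org",
    "Content-Type: application/atom+xml;type=entry",
    "",
    body,
]
print("\n".join(request_lines))
```

The same POST/GET/PUT/DELETE pattern against collections and members is what lets APP stand in for heavier deposit and harvesting protocols in simple cases.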

Downtime

Significant parts of the Web were down last night (UK time), causing yours truly to struggle to meet his allotted quota of time in Second Life :-)  Seems amazing that so much can be hit so easily?  Several people have pointed out that this was kinda predicted in the recent Onion News Network news story.  Funny.

Open identities

Stephen Downes links to Say Everything and raises some interesting issues about privacy and people's growing willingness to share almost everything about themselves online.  Whatever happened to those conversations we used to have in the early days of the Web - the "you can't put that online - someone will abuse it!" kind of thing?

The old fart (*) in me wants to say there are real dangers here.  And perhaps there are.  But I'd be wasting my breath.  Even my 11-year-old (Stan) wants his own Facebook page - something we've prevented him from getting so far.  But plenty of his mates (of the same age) are on Facebook despite the pretty clear acceptable use policies, so you can see the attraction for him.

As the article says:

It may be time to consider the possibility that young people who behave as if privacy doesn't exist are actually the sane people, not the insane ones...

(*) PS  The first band (I use that term loosely) I was in was called The BOFS, circa 1977 - which as far as I recall was an acronym used in Sounds at that time to refer to people not into punk.

Tracking OpenID, Shibboleth and CardSpace using Google Trends

Alan Levine on CogDogBlog pointed out the use of Google Trends to compare the rise and fall of key technologies (and dogs!).  I thought I'd try it to have a quick look at OpenID vs. Shibboleth vs. CardSpace.  Interesting... though pretty much what you'd probably expect.  I have no idea what is going on in Russia!  (Oh, wait... it looks like the apparent OpenID peak in Russia is just a glitch in the way the graphs are presented?)

July 24, 2007

Towards a European Infrastructure for e-Science Digital Repositories

There's an EC-funded survey about the future of "digital repositories for e-science in Europe!" (Their exclamation mark.)

This consultation seeks input from you as key stakeholders on digital repositories. Your responses will help identify needs, priorities and opportunities with regard to digital repositories and an e-infrastructure to support them and their use, and they will also provide important input into developing future policy initiatives.

Sounds reasonable.  Unfortunately the survey doesn't define what is meant by a digital repository, which is a bit of a shame - I wasn't sure if things like Slideshare were included or not.  The background information defines e-science repository as follows:

An e-science repository is a digital repository for science in the digital age – holding materials generated in e-science and/or which can be used in e-science

But that doesn't really help.  Sounds like a managed Web site to me :-)

July 23, 2007

New VTS tutorials announced

Intute have announced three new additions to the Virtual Training Suite - Internet for Dentistry and Oral Health, Internet Pharmacist and Internet for Allied Health.  Good stuff.

The tutorials offer advice on using the Internet for university research, including a guide to some of the key academic websites for each subject, tips on Internet searching, and guidance on critically evaluating information found online.  "Success Stories" in each tutorial illustrate how the Internet can be used effectively to support education and research.

Note that there is no connection between Eduserv and these tutorials - I just happen to think they are great.

July 21, 2007

MOO Stickers

These have just got to be useful in schools somehow... haven't they?

July 20, 2007

Life on Sram

I've just been skimming through the list of entries in the IWMW innovation competition.  There are some interesting ideas here.  The ones that I particularly like include Jeff Barr's Wiki-Powered Self-Serve Meeting Scheduling, which is a very simple idea (aren't all the best ones?) to use a Wiki to allow people to plan his trip schedule for him.  Kinda risky I would have thought, though perhaps the use of a Wiki self-selects people that it is worthwhile meeting? :-)  Graham Atwell and Einion Daffyd's mashup of the JISC SOA video is quite neat also.  But my favourite is Adrian Stevenson et al's Life on Sram, which offers clear evidence of the dangers of spending too much time in the pub at conferences.

IWMW 2007 in York - OpenID and all that

Richard Dunning and I ran a parallel session at the Institutional Web Managers Workshop in York earlier this week entitled Athens, Shibboleth, the UK Access Management Federation, OpenID, CardSpace and all that - Single sign-on for your Web site.  We'd co-opted a panel consisting of Andrew Cormack from JANET(UK) and Scott Wilson from CETIS.

I spoke first, trying to summarise the key words and phrases in the title of the session.  I'm not totally convinced that I did a good job – in part I blame tiredness, following a 7-hour journey up from Bath to York the day before, arriving in a heavy rainstorm just in time to miss out on going to the pub :-(.  Sorry... a pretty pathetic excuse, I know.  I find identity management a particularly hard area to talk about coherently for some reason.  My slides are up on Slideshare.  Luckily, Richard, Andrew and Scott jumped in at regular intervals to mop up the issues.

During the session I mentioned that there still seems to be a lot of confusion around the changing access and identity management landscape – particularly in terms of what institutions need to do to move to Shibboleth and what impact that has (or not) on what they are doing internally with single sign-on. 

What did we achieve?  In part I was looking for recommendations for what could/should be done next.  TechWatch offered to commission a study of technologies in this area – which seemed to be welcomed – though I have a slight concern that such studies tend to disappear without trace if we aren't careful.  Beyond that there didn't seem to be many suggestions being made.  For info, I am currently wondering about holding an autumn meeting looking in more detail at OpenID and CardSpace.  Watch this space!

On the way down to the station, Andrew suggested one conclusion – that no-one in an institution should ever again invent an ad hoc sign-on mechanism for anything they do on their Web site.  Identity management and single sign-on are of strategic importance to institutions and need to be addressed as such, with joined-up thinking and institution-wide solutions and approaches.  The JISC Access and Identity Management Roadmap is a pretty good place to start, though it doesn't include any discussion around the impact of OpenID and CardSpace.

There was some interesting discussion about the issues around users having multiple online identities.  While I accept this as pretty much inevitable for all sorts of reasons I still don't accept that we should enforce multiple identities (as we do now) just because people move between educational institutions and/or sectors.  Others weren't convinced by this.  I don't doubt that there is much more debate to be had in this and other areas around identity management.

Unfortunately, I could only stay at the event for the first day.  It was worthwhile though – even for a short stay.  There were two really interesting talks before lunch: the first by Steve Warburton, who talked about community, and in particular what 'communities of practice' means in the context of e-learning; the second by Alison Wildish, who talked about her experiences of letting the students do the talking (using Web 2.0 services, of course) at Edge Hill University.  This second talk in particular was inspirational, a great example of putting into practice what many of us only talk about doing.

Presentations later in the workshop included Jeff Barr from Amazon talking about Amazon Web Services, Drew McLellan from Yahoo on micro-formats, Peter Reader from the University of Bath on customers, community and communication, and many others.  Streamed video for most of the plenary talks is available.

[Image: Steve Warburton and Brian Kelly taking questions from the audience during the first session at IWMW 2007.]

UK academia Second Life snapshot

I'm very pleased to announce that a study by John Kirriemuir entitled July 2007 "snapshot" of UK HE and FE developments in SL is now available on the Eduserv Foundation Web site.

As John notes in the conclusion:

This report shows that a growing number of UK academic institutions, departments and groups are at different stages of SL development. It is, perhaps, presumptuous to conclude that UK Higher Education has reached a "tipping point" in terms of using and developing facilities in SL.  However, there has been a considerable increase in activity between March and July 2007, marked by the beginning and end of this survey. The appendix lists over 40 UK Universities and Colleges that have a building, land or island on the grid, many appearing in the last few weeks and not yet open for public visiting while they are being developed.

I know that John is still receiving some responses to his earlier survey questions on the SLED mailing list and elsewhere, so the raw data on which this report is based is still growing slightly - we hope to make that data available in some form (possibly via a wiki) in due course.  Nonetheless, the snapshot provides a valuable picture of where we are right now in the UK higher and further education community.

Ensuring that OpenID stays open

Phil Hunt at Oracle has an interesting post about the current IPR fuzziness around the OpenID spec.  I have absolute faith that the guys leading the OpenID charge are going to do the right thing, and there is already traffic on the relevant lists in response, acknowledging that something needs to be done... but clearly, this is a big issue - and one that needs to be resolved as soon as possible.

July 19, 2007

Modelling4All blog

Just a quick note to say that the Modelling4All project, which we recently funded under the 2007 grants call, now has a blog.

Modelling4All – Web services to enable non-programmers to collaboratively build and analyse computer models  

Computer modeling is playing an increasingly important role in fields as varied as sociology, epidemiology, zoology, economics, archaeology, ecology, climate, and engineering. This project, led by Ken Kahn and involving Howard Noble (both at the OUCS, University of Oxford), will attempt to make such modeling more widely accessible by developing easy to use Web 2.0 services for building, exploring and analysing models, encouraging the development of an on-line community where models and model components are shared, tagged, discussed, organised, and linked to other resources.  Furthermore, the project will explore the possibilities of providing an immersive first-hand experience of the execution of models within Second Life.

July 13, 2007

Making PURLs work for the Web of Data

An interesting snippet of news from OCLC:

OCLC Online Computer Library Center, Inc. and Zepheira, LLC announced today that they will work together to rearchitect OCLC's Persistent URL (PURL) service to more effectively support the management of a "Web of data."

(For more on Zepheira, see their own Web site and also the interview with Zepheira President Eric Miller by Talis' Paul Miller in the Talking with Talis series).

While it's good to see an emphasis on improving scalability and flexibility (I hope that will include improvements to the user interface for creating and maintaining PURLs - while the current interface is functional, I'm sure everyone would admit it could be made rather more user-friendly!), the most interesting (to me) aspect of the announcement is:

The new PURL software will also be updated to reflect the current understanding of Web architecture as defined by the World Wide Web Consortium (W3C). This new software will provide the ability to permanently identify networked information resources, such as Web documents, as well as non-networked resources such as people, organizations, concepts and scientific data. This capability will represent an important step forward in the adoption of a machine-processable "Web of data" enabled by the Semantic Web.

This is excellent news. The current functionality of the PURL server tends to leave me with a slight feeling of "so near yet so far" when it comes to implementing some of the recommendations of the W3C - for example, the re-direct behaviour recommended by the W3C TAG's resolution to the "httpRange-14 issue". The capacity to tell the PURL server when my identified resource is an information resource and when it is something else, and have that server Do the Right Thing in terms of its response to dereference requests (which is how I'm interpreting that paragraph above!) will mean that there's one less thing for me to worry about handling, and will generally make it easier for implementers to follow the W3C's guidelines.
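As a sketch of what that "Right Thing" looks like in practice, here's the httpRange-14 behaviour in miniature: a 2xx response for an information resource, and a 303 See Other redirect to a description when the URI names something like a person or a concept. The paths and registry here are made up purely for illustration:

```python
# Illustrative httpRange-14 dispatch: the kind of decision a future PURL
# server might make on dereference. Paths and targets are invented.

REGISTRY = {
    # purl path          -> (is_information_resource, target)
    "/docs/report.html": (True,  "/docs/report.html"),
    "/people/alice":     (False, "/docs/about-alice.html"),
}

def dereference(path: str) -> tuple:
    is_doc, target = REGISTRY[path]
    if is_doc:
        # An information resource: serve a representation directly.
        return (200, target)
    # A non-information resource (person, concept): redirect to a
    # document *about* the thing, per the TAG's httpRange-14 resolution.
    return (303, target)

print(dereference("/docs/report.html"))  # (200, '/docs/report.html')
print(dereference("/people/alice"))      # (303, '/docs/about-alice.html')
```

Having the PURL server hold the "is this a document or a thing?" flag, rather than every publisher hand-configuring redirects, is exactly the convenience the announcement seems to promise.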

Good stuff. I look forward to hearing about developments.

July 12, 2007

OpenID and education

In a post to the openid-general mailing list, Evan Prodromou proposes a simple rule of thumb about OpenID:

If your current registration validation system consists of email address verification or less, then OpenID is probably fine for you.

I think this rule of thumb covers well north of 95% of publicly-accessible Web sites. You can block individual bad behavers on a case-by-case basis, and you can block bad-boy servers that give out IDs to bad behavers (or that try to exploit weaknesses in OpenID implementations) in whole.

In response, I noted that the use of OpenID by educational institutions seems to be an interesting middle ground: in general, formal educational systems (i.e. those delivered within the campus or by external suppliers with whom there is a contractual relationship) fall outside the 95% covered by email-based registration validation, but we're seeing increasing use by both students and staff of Web 2.0-type services that are inside the 95%.

I recently initiated (somewhat unintentionally it has to be said) a discussion on the jisc-middleware-development mailing list about the trust issues in a scenario where a lecturer sets a student a task of maintaining a blog which the student undertakes on an external blogging service using their institutionally-provided OpenID.  The question caused some debate (more debate than I was expecting).  By the end, I wasn't really sure that I was much the wiser.  I summed up the discussion as follows:

I posted a scenario that involved a lecturer (setting and assessing a task), a student (undertaking that task), an institution (acting as OpenID Provider and wanting to ensure the validity of any assessed work) and an external Web 2.0 blog service (where the task is actually performed).

I think this is a perfectly valid scenario, and one that will become significantly more common in the future.  I was at the Telling More Stories e-portfolio conference in Wolverhampton recently where a lot of the reported case studies around e-portfolios included scenarios very much like this.  I also think it is an area where a Shibboleth approach is weak, because of its lack of penetration into mainstream services outside the education sector.

I asked if using an institutional OpenID to sign into an external blogging service gives us sufficient confidence in whether a given student is submitting a given bit of work to be a viable way forward for institutions, given 'quality assurance' and other types of issues.

I think I heard both (implicitly) "yes, OpenID is OK in this scenario" and (explicitly) "no, don't touch OpenID with a bargepole, it isn't worth the plastic it's written on" type responses.

I'm still struggling to weigh up these responses.  I'm still struggling to understand if OpenID is useful/sensible in this scenario or not.

Note that my scenario in this case only goes part way towards what I think we'll actually see in the future, which is that students will turn up at university with an existing OpenID that they want to use (rather than using a university-provided OpenID).  But I think that the trust issues in that scenario are significantly more complicated, so I didn't want to raise it at this stage.

I'd be interested in people's views on the scenario presented above and more fundamentally on the question: Does OpenID provide an identity infrastructure that meets the needs of the education community?

Answers on a postcard please...

Journal articles, metadata formats and woes

In a post on his Digital Library Technology Jester weblog, Peter Murray of OhioLINK points to an XML format developed by the Directory of Open Access Journals (DOAJ) for representing descriptions of journal articles.

First, I think I'd qualify Peter's point that

Prior to this addition the only scheme available was Dublin Core, which as a metadata schema for describing article content is woefully inadequate. (Dublin Core, of course, was never designed to handle the complexity of the description of an average article.)

I think the reference here to "Dublin Core" is really to the specific "DC application profile" (or description set profile, as we are starting to refer to these things) commonly known as "Simple DC", i.e. the use of (only) the 15 properties of the Dublin Core Metadata Element Set with literal values, for which the oai_dc XML format defined by the OAI-PMH spec provides a serialisation. On that basis, I'd be inclined to agree that the Simple DC profile is not the tool for the task at hand: the Simple DC profile is intended to support simple, general descriptions of a wide range of resources, and it doesn't in itself offer the "expressiveness" that may be required to support all the requirements of individual communities, or more detailed description specific to particular resource types.
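To illustrate the limitation being described, here's a sketch of a Simple DC (oai_dc) description of a journal article. The article itself is invented; the point is that with only the 15 DCMES elements and literal values, citation detail such as journal, volume, issue and pages has to be flattened into an opaque string:

```python
# Build an oai_dc record for a (hypothetical) journal article.
from xml.etree import ElementTree as ET

OAI_DC = "http://www.openarchives.org/OAI/2.0/oai_dc/"
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("oai_dc", OAI_DC)
ET.register_namespace("dc", DC)

record = ET.Element(f"{{{OAI_DC}}}dc")
for name, value in [
    ("title",   "An Example Article"),
    ("creator", "Doe, Jane"),
    ("type",    "Text"),
    # Everything below collapses into one literal: Simple DC has no
    # structured fields for journal title, volume, issue or pages.
    ("source",  "Journal of Examples, Vol. 3, No. 2 (2007), pp. 10-20"),
]:
    ET.SubElement(record, f"{{{DC}}}{name}").text = value

print(ET.tostring(record, encoding="unicode"))
```

A richer profile can instead treat the journal, the issue and the article as related described resources, which is precisely the extensibility discussed next.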

However, the framework provided by the DCMI Abstract Model provides the sort of extensibility which enables communities to develop other profiles to meet those requirements for richer, more specific descriptions.

I guess DCMI still has its work cut out to try to convey the message that "Dublin Core" doesn't begin and end with the DCMES.

But perhaps more specifically pertinent to the topic of the DOAJ format is the fact that the work carried out last year on the ePrints DC Application Profile, led by Andy and Julie Allinson of UKOLN, applied exactly this approach for the area of scholarly works, including journal articles. From the outset, the initiative recognised that the Simple DC profile was insufficient to meet the requirements which had been articulated, and shifted their focus to the development of a new profile, based on applying a subset of the FRBR entity-relational model to the "eprint" domain.

I haven't yet compared the DOAJ format and the ePrints DCAP closely enough to say whether the latter would support the representation of all the information represented by the former. I guess it's quite likely that the two initiatives were simply not aware of each other's efforts. Or it may be that the DOAJ folks felt that the ePrints DCAP was more complex than they needed for the task at hand.

But it does seem a pity that we seem to have ended up with two specs, developed at almost the same time, and applying to pretty much the same "space", leaving implementers harvesting data from multiple providers with the probability of needing to work across both.

(Hmmm, it occurs to me that a quick spot of GRDDL-ing might make that less painful than it appears... Watch this space.)

Write blog postings, not articles :-)

I have to say that I'm somewhat disappointed by Jakob Nielsen's Write Articles, Not Blog Postings blog entry (that is a blog that Nielsen writes isn't it? - albeit one by another name :-) ).  Disappointed because I've read Alertbox on and off for the last 10 years or so and always found it to be spot on.  But this 'article' seems to fall foul of some of the things he tells us not to do, not least in appearing to start from the assumption that all blog entries take the form of a "short comment on somebody else's work".

Yes, he tries to claw his way back from that position by saying:

Obviously, I am referring to the user experience and to the style of the content in this analysis; not to the technology used to serve up this content. Thus, what I call "articles" might be hosted on a weblog service. What matters is that the user experience is that of immersion in comprehensive treatment of a topic, as opposed to a blog-style linear sequence of short, frequent postings commenting on the hot topic of the day. It doesn't matter what software is used to host the content, the distinctions are:

  • in-depth vs. superficial
  • original/primary vs. derivative/secondary
  • driven by the author's expertise vs. being reflectively driven by other sites or outside events

but it's hard to find this statement compelling when he has used the word 'blog' as short-hand for 'superficial', 'derivative' and 'being reflectively driven by other sites' in the title of the piece.

Blogs and blog entries come in all shapes, sizes and forms.  Even those that are primarily 'derivative' are not necessarily 'superficial'.  Blog entries are part of a debate, as any article should be.  Yes, by now there are probably hundreds of blog entries that are 'comments' on Nielsen's work - but so what?  Does that make all of them shallow or superficial?  I think not.

It's a bit like that moment when you are growing up and you realise that despite giving the impression of knowing everything about everything, your dad is actually talking complete bollocks most of the time.  (Note: I am not saying this of Nielsen... but perhaps I'll read his articles a bit differently from now on).

On a slightly different tack, and with reference to the title of this entry, I was recently asked to write something for a peer-reviewed journal.  Now, impact means different things to different people, but for me, as a non-researcher (i.e. as someone who doesn't have to worry about impact factors and the RAE), writing something for a peer-reviewed journal that won't see the light of day for another year or so doesn't make a lot of sense.  I'm happy with the impact of this blog, thank you very much.  There are times when it does seem to make sense, to me, to write for something with a quicker turn-around - Ariadne for example - but I must admit that it isn't 100% clear to me exactly when that makes sense and when it is sufficient to simply put something in the blog.

Web 2.0 vs. the e-Framework - ding ding, seconds out, round one...

I've had concerns for some time now about the relationship between Web 2.0 and the e-Framework for Education and Research.  (Despite the title of this post, I am aware that it isn't a simple contest! :-) ).

I recently had cause to go back and watch Michael Wesch's The Machine is Us/ing Us, a video that I first watched some time ago but one that is still very watchable.

The video speaks primarily about Web 2.0 as an attitude and the cultural fallout that results from that.  But from a purely technical perspective, and reading between the lines a little, what it is talking about is the power of 'structured data' (and in particular, the 'feed') and the 'hypertext link'.  That's my take on it anyway.  More than anything else, it is those constructs that make the Web (read Web 2.0 if you like) so powerful.

The pertinent question, for me, is "does the e-Framework support the Web (again, read Web 2.0 if you like) mindset, does it fight against it, or is it neutral?".

My concern is that, in practice, it fights against it.  What I'd like to see is some good reassurance that it doesn't.

July 11, 2007

LAMS European Conference

As previously reported, I attended the 2007 LAMS European Conference in London last week.  (The Eduserv Foundation was one of the conference sponsors).

I was a little disappointed with the turnout, which for something billed as a European conference struck me as a bit on the low side.  I guess that holding the conference in Greenwich, a place surprisingly difficult to get to for a London location, might have had something to do with it?

Having said that, it was quite enjoyable.  Diana Laurillard got the day off to a good start by reminding us that being able to capture, replicate and build on examples of good pedagogic practice is an essential part of moving towards excellence in teaching and learning.  I think one can argue about whether the Learning Design specification and tools like LAMS are an essential part of that process - my personal view is that they probably will be at some point in the future but that we are not quite there yet - but I don't think that many people would disagree with the underlying point - that sharing good pedagogy is a fundamentally good thing [tm].

Most of the rest of the day was spent in parallel sessions - one of the problems with the relatively low turnout being that some of these didn't have many attendees.  I gave my talk on the potential integration of LAMS with Second Life - the more I think about this the more I tend to conclude that the right way to do this is to piggyback on the existing, though separate, bits of work to integrate LAMS and Moodle and Moodle and Second Life (SLoodle) (for which we are providing some funding).

James Dalziel gave the closing keynote - providing a summary of where LAMS has got to and where it is going in the future.  He used the development of music notation as a nice analogy for the need for a representation framework for learning design.  He noted the fact that we can still play pieces of music written by composers hundreds of years ago (well, some of us can! :-) ) because of the ability to write down and share what the composer intended.  But also that the written score doesn't capture absolutely everything about the piece - that music has to be interpreted by the performer(s).  It's a nice analogy I think - though I'm hopeful that reaching agreement on a representation framework for learning design won't take quite as long as it did for music!

As an aside, we are also currently funding the NEUMES project at the University of Oxford - a project that is developing an XML encoding standard for Western medieval and Byzantine chant manuscripts and an associated digital library.

Neumes are the basic elements of Western and Eastern systems of musical notation prior to the invention of five-line notation. The earliest neumes were inflective marks which indicated the general shape but not necessarily the exact notes or rhythms to be sung. [Wikipedia]

Hey, we're nothing if not eclectic at the Foundation! :-)

July 10, 2007

Putting them with the SWORD

I didn't make it to the recent meeting of the JISC CETIS Metadata and Digital Repositories SIG in Glasgow, but prompted by Sheila MacNeill's meeting summary and some enthusiastic comments from David Davies, I noticed that Julie Allinson gave a report (slides from Slideshare) on the work of the SWORD project.

SWORD (Simple Web-service Offering Repository Deposit - heheh, even as a resource-oriented sort of chap, I admit that's a neat acronym!) has been working on a specification for adding items to the collection(s) managed within a repository, and they have settled on using a profile of the Atom Publishing Protocol. From the introduction:

This Profile specifies a subset of elements from the APP for use in depositing content into information systems, such as repositories. The Profile also specifies a number of element extensions to APP, defined to adhere to the extensions mechanism outlined in APP. This profile also makes use of the Atom Syndication Format (ATOM) as used in APP, with extensions.

The current SWORD draft (0.4) specifies two levels of compliance, and I noticed that even "Level 0" seems to require an extension to APP, but the project still has some work to do and that may yet change before the spec is finalised.
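Just to give a flavour of what an APP-style deposit looks like on the wire, here's a rough sketch in Python.  The collection URI is made up, and this shows plain APP rather than the SWORD profile itself (which layers its own extensions and compliance levels on top) - so treat it as illustrative only:

```python
# Sketch of an Atom Publishing Protocol style deposit.  The collection
# URI is hypothetical, and this is plain APP rather than the SWORD
# profile (which adds its own extension elements on top).
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "http://www.w3.org/2005/Atom"


def build_atom_entry(title: str, author: str, summary: str) -> bytes:
    """Build a minimal Atom entry document for POSTing to a collection."""
    ET.register_namespace("", ATOM_NS)
    entry = ET.Element(f"{{{ATOM_NS}}}entry")
    ET.SubElement(entry, f"{{{ATOM_NS}}}title").text = title
    author_el = ET.SubElement(entry, f"{{{ATOM_NS}}}author")
    ET.SubElement(author_el, f"{{{ATOM_NS}}}name").text = author
    ET.SubElement(entry, f"{{{ATOM_NS}}}summary").text = summary
    return ET.tostring(entry, encoding="utf-8", xml_declaration=True)


def deposit(collection_url: str, entry_xml: bytes) -> None:
    """POST the entry to a repository collection (hypothetical endpoint).

    In APP a successful deposit returns 201 Created, with a Location
    header pointing at the newly created member resource.
    """
    req = urllib.request.Request(
        collection_url,
        data=entry_xml,
        headers={"Content-Type": "application/atom+xml;type=entry"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.status, resp.headers.get("Location"))
```

The appeal of building on APP is exactly this: the deposit is just an HTTP POST of an Atom entry to a collection URI, so generic Atom tooling gets you most of the way there.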

In posts here, I think we've occasionally questioned the approaches of initiatives within the digital library or e-learning communities which seem either to "reinvent the wheel" or to adopt approaches that don't really mesh well with the way the Web works. So it's good to be able to redress that balance now and again - and to show I'm not always grumbling and nitpicking ;-) - and highlight work which is seeking to build squarely on an existing general-purpose solution which itself is rooted in the principles of the Web.

July 09, 2007

Portfolio (n) ...

Amongst other things, Wordnet defines 'portfolio' as:

  • (n) portfolio (a large, flat, thin case for carrying loose papers or drawings or maps; usually leather) "he remembered her because she was carrying a large portfolio"
  • (n) portfolio (a set of pieces of creative work collected to be shown to potential customers or employers) "the artist had put together a portfolio of his work"; "every actor has a portfolio of photographs"

Interestingly, most dictionaries don't seem to include the second of these - the set of things in the case - which seems odd?  I guess we'd all be surprised to be handed someone's portfolio, only to find it was an empty case :-)

It seems logical then (to me at least) that any definition of 'e-portfolio' should build on this, albeit acknowledging the necessary 'evidence of learning' and 'digital' aspects - something like:

a digital collection of creative work, designed to show evidence of learning and/or ability.

This topic has recently surfaced again on the CETIS-PORTFOLIO@JISCMAIL.AC.UK mailing list (threads beginning here and here).  I have to confess to remaining somewhat bemused by why this seems such a contentious issue.  References on the list to 'e-portfolio functionality' hint (to me) of continued confusion between the 'thing' and 'services on the thing'.  A cash-machine is not cash, and does not offer cash functionality!

Six basic truths of free APIs

Nat Torkington in Six Basic Truths of Free APIs (with commentary here, here and here) reminds us that there is no such thing as a free lunch in the area of Web 2.0 APIs - or not very often anyway. My own advice (which is pretty mundane it has to be said) is to code to the lowest common denominator API - RSS - whenever possible.  That way you can pretty much throw away the service giving you the feed and replace it with something else, should the need arise, without a significant requirement to re-develop your code.
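To illustrate what I mean, here's a sketch (in Python, against a made-up sample feed) of a consumer that touches only the core RSS 2.0 elements - the service producing the feed can be swapped for another without any change to the consuming code:

```python
# A minimal feed consumer that relies only on "lowest common
# denominator" RSS 2.0 elements (item/title/link), so the service
# producing the feed can be swapped without touching this code.
import xml.etree.ElementTree as ET


def items_from_rss(rss_xml: str) -> list:
    """Return (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    return [
        (item.findtext("title", default=""), item.findtext("link", default=""))
        for item in root.iter("item")
    ]


# Made-up sample feed, standing in for whatever service supplies it.
SAMPLE = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>eFoundations</title>
  <item><title>Putting them with the SWORD</title>
        <link>http://example.org/sword</link></item>
  <item><title>Simulated Ants!</title>
        <link>http://example.org/ants</link></item>
</channel></rss>"""

for title, link in items_from_rss(SAMPLE):
    print(title, "->", link)
```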

July 06, 2007

Simulated Ants!

Amongst the projects recently funded by the Foundation under its Research Grants programme is the Modelling4All project, led by Ken Kahn and Howard Noble (OUCS, University of Oxford).

The project plans to develop services for creating, exploring and analysing models, and also to try to build up a community around those services where models and their components are shared and discussed. In addition, they are interested in exploring how such models can be run in shared virtual spaces such as Second Life. I'm quite intrigued by this (especially their aspirations to offer "an alternative to the 'god's eye' view" when a model is run - playing the role of a fish in a school and so on), so I'm looking forward to seeing how the project progresses.

Via a post on the O'Reilly Radar weblog, I came across this YouTube video of a simulation of ant feeding behaviour running in Second Life. While I don't think this example really encapsulates the sort of features that the Modelling4All team suggest, it nevertheless hints at some of the potential for using Second Life for this sort of application. And it's quite a fun video!

July 04, 2007

Learning activity management for avatars

I'm giving a presentation entitled When worlds collide: learning activity management for avatars at the 2007 European LAMS Conference tomorrow in London.

When I originally agreed to give this talk I was, naively, hoping that I'd have done some real work experimenting with how LAMS and Second Life might be integrated.  But I haven't, so tomorrow's talk will be somewhat more theoretical than I would have liked.

No matter - I still think it's a potentially interesting area.

I posted a message to the Second Life Educators (SLED) list a few days back, asking if anyone was doing any work in this area.  It seems that not much is being done.  Peter Miller from the University of Liverpool responded with his ideas about how

learning spaces could be rapidly rezzed from transparent prefab sculpties incorporating the necessary seating/gadgets to support a particular activity/stage in a learning sequence/design. These could be arranged LAMS-like in a sequence but students have the option of walking through walls as well as following the pre-determined path.

There's also the work going on with SLoodle, some of which we are now funding.  It is clear that there is a lot of potential for Open Source collaboration in the area of Moodle/SLoodle/LAMS integration with Second Life.

Ignoring the biosphere?

I really should resist the urge to post about documents which I haven't digested fully, but earlier today I saw the announcement of a draft version of a document with the rather intriguing title of An ecological approach to repository and service interactions, by R. John Robertson, Mahendra Mahey and Julie Allinson of the JISC Repositories Research Team.

The report uses the analogy of ecology and ecosystems "to inform the task of understanding and articulating the interactions between users, repositories, and services and the information environments in which they take place."

In section 5.2, various "scales" or "levels" of ecological system are introduced, namely

  1. organism
  2. population: group of interacting and interbreeding organisms
  3. community: different populations living together and interacting
  4. ecosystem: organisms and their physical and chemical environments together in a particular area
  5. biome: large scale areas of similar vegetation and climatic characteristics
  6. biosphere: thin film on the surface of the Earth in which all life exists, the union of all of the ecosystems

And then a mapping is suggested between these levels and various "levels" of entity related to repositories: a person is an organism; a repository is mapped to a population; a community based around several repositories is a community in the ecological sense; and an information environment (such as the JISC Information Environment) is mapped to an ecosystem.
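Purely as an aide-memoire, that suggested mapping (with the level I'd argue is missing tacked on the end) might be jotted down like this - an illustrative jotting only:

```python
# The report's suggested repository-to-ecology mapping, restated as a
# simple lookup table.  The final entry is my own suggestion from this
# post, not something the report proposes.
ECOLOGY_MAPPING = {
    "person": "organism",
    "repository": "population",
    "community around several repositories": "community",
    "information environment": "ecosystem",
    "the Web": "biome (or even biosphere?)",
}

for entity, level in ECOLOGY_MAPPING.items():
    print(f"{entity} -> {level}")
```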

The authors note their intention to focus on the "organism" to "ecosystem" levels. However, from a very quick skim through, the thing which stands out for me here is the absence (and it was a quick skim, so I may have missed it!) of any reference to the global information system within which repositories (and indeed our other information systems) are operating, i.e. the Web. Following the mapping above, I'd be tempted to suggest that the Web is analogous at least to a "biome" if not even the "biosphere".

The document notes:

it is important to remember that they are a particular localised view of the wider information environment (ecosystem) and will inherit environmental influences from that level.

So, I suppose the question that I found myself asking is: if the levels below the "ecosystem" are influenced by characteristics of the "ecosystem", then aren't all those levels from "organism" to "ecosystem" also influenced by the characteristics of the "biome" and the "biosphere"? And if that is the case, shouldn't we include those levels in such an analysis?

However, as I think Andy has suggested in various earlier posts, in the digital library and e-learning domains we've sometimes - not always, but sometimes - made pretty much that mistake: we've developed specifications, and systems based on those specifications, without really taking into account the nature of the Web and the principles of interaction underpinning it. And those systems may well "work" within a community that has agreed to operate according to those specifications. But uptake beyond those boundaries has turned out to be limited - perhaps, in part at least, because of the conflicts and contradictions with the more general principles that other communities are following.

Or to pursue the ecology metaphor (and I'm conscious that here I'm probably guilty of using rather loosely some of the concepts and terminology which the authors of the report deploy rather more carefully and precisely!), populations, communities and ecosystems may emerge based on principles of interaction which are to some greater or lesser extent in contradiction with those of the biome or biosphere. If those populations, communities and ecosystems thrive, then the biome or biosphere may itself evolve or adapt to reflect that success. Alternatively, those populations, communities and ecosystems may grow so far, then reach some sort of crisis point at which the conflict with the constraints of the biome or the biosphere limits any further growth (or even sends them into decline).

(I was tempted to try to apply that analogy to the current debate around service-oriented and resource-oriented approaches to Web applications, but I think I should leave that as an exercise to the reader!)

In short: don't ignore the biosphere!

P.S. I should add that, in spite of my comments here, the ecology metaphor does seem quite a compelling and useful one, and the report looks like a stimulating read.

Nature Network - a thinking person's social network?

Writing in the Education Guardian, Jessica Shepherd discusses the rise of Nature Network (NN) - a social network for "scientists to gather, talk and find out about the latest scientific news and events".

Comparisons with Facebook (Fb) are obvious (a "Facebook for professors, postdocs and PhDers in the sciences"), but it is perhaps worth thinking a bit about the similarities and differences:

  • Fb mixes up academic and social use in quite an interesting way - interesting in the sense that some people don't mind these things being mixed up, whereas others absolutely hate them being mixed up.  Do students want their lecturers in the same social network as them, for example?  Do lecturers want to be in the same social network as their students!?  Do researchers want to mix up their research (work) with their social (private) activities?  I suspect that these are questions with no clear-cut answers.
  • Fb leans towards the social, but we are seeing some work-related use.  NN leans towards work, but (in the blogs at least) we see some social-related use.
  • Fb is a platform, meaning that people can develop their own Fb applications and then share them with others.  NN is not (as far as I can tell).  I think this is a significant issue because it more easily allows Fb to morph from its original purpose into whatever its users want it to be.
  • Fb allows one to pull stuff in from elsewhere.  Oddly (I think) NN doesn't appear to do this.  It is more closed than Fb in some senses.
  • Fb is completely global.  NN is limited to some areas of science.  I've joined for example, but I don't expect to get much out of it because I'm not a scientist or a researcher.  I think that it will be interesting to see whether researchers prefer global social networking tools or discipline-specific ones.

It is also interesting to ponder why Nature are doing this kind of thing.  I think it is because they (rightly) recognise that the Internet is fundamentally changing the nature of scholarly communication.  Communication that used to happen primarily thru the peer-reviewed, published article and the conference paper is now beginning to happen in other ways.  Of course, there is some resistance to this - a scholarly communication momentum that needs to be overcome - a sort of peer (review) pressure I guess :-)  But as Timo Hannay from Nature notes: "We are increasingly seeing the online world with its informal rapid communications complement the slower, more formal communications of academic journals".

Blogs and social tools like Fb are beginning to change how scholarly communication takes place, and NN is part of this.  Note the discussions in NN about how to cite blog entries, whether it is acceptable to share Powerpoint conference slides, what the relationship is between this kind of activity and the RAE, and so on.

My personal view is that the capabilities that the Internet affords us in terms of immediacy of communication, ease of 'publication' (I use that term in its most general sense), instant citation, the 'wisdom of the crowds' (by which I mean crowds of researchers!) and so on will have an impact on how scholarly communication happens.  I don't know how quick or drastic any resulting changes will be - but it seems certain that there will be changes.  Nature, as a publisher, have to think about their role in the new world.  So do all other academic publishers.

July 03, 2007

A brief history of OA

Stevan Harnad has posted a nice summary of the key milestones in the development of the Open Access movement to the American Scientist Open Access Forum.

Towards the end he says:

The OA way of the present and future is for researchers to deposit their articles in their own Institutional Repositories.

Is this the one true OA way?  I'm not convinced.  Let's focus on what is important, the 'open' and the 'access' - and let the way of the future determine itself based on what actually helps to achieve those aims.

July 02, 2007

You sharin'? You askin'?

I spent a couple of days at the JISC offices in London last week in meetings talking about shared services. 

The first was organised by the Strategic e-Content Alliance (SEA) and focussed on the role of registries, but in particular registries of collections and services, in the UK information landscape.

The second was a slightly more general workshop looking at shared infrastructural services (SIS) in the context of the JISC Repositories and Preservation Programme.

I'm not going to blog in any detail about the specifics of these meetings because reports from both of them will be forthcoming.  But there were a couple of common themes across both days which struck me as potentially interesting.

  1. Registries and the Web architecture (yes, that Web architecture!).  Registries, it seems to me, have an uncomfortable fit with the architecture of the Web because they surface information about resources (i.e. they surface representations of resources) at places on the network that are unrelated to their URI.  Does that matter?  Yes, unfortunately it does.  In the first meeting we were talking about the need for registries of collections and services to surface their 'very useful' content thru Google ("we need to put our high quality content in places where end-users will find it" is how the registry owner might put it).  Quite right too.  Unfortunately, simply letting the Google robots in to index stuff is only part of the problem - one also needs other people to link to the registry content, in order to bump up its Google-juice.  But end-users don't want to link to the registry content - they want to link directly to the resources described by the registry.  The registry ends up getting in the way.  Ultimately, it sucks Google-juice away from the very resources that it is trying to promote - a problem made even worse in the scenario (a typical scenario, as it turns out) where multiple registries agree to share their records and all surface them at multiple points on the network :-(
  2. Beta as a virtue.  In JISCworld, activities tend to get given labels such as 'project', 'service in development', 'service' and so on (I've probably got the labels wrong here, but you get the idea).  The problem is that these labels implicitly tend to flag non-service activities as somehow worse than service ones.  In Web 2.0, beta is a virtue - the perpetual beta and all that - whereas in JISCworld there is a danger that it is seen as problematic.
  3. The unhelpfulness of labels.  My JISC IE architecture diagram has stood the test of time, but not without some problems.  One problem is that the rigidity of the diagram doesn't reflect the messiness of the real world.  The truth is that there are no service boxes like the ones on the diagram in RL.  It's a myth - a sometimes helpful myth, but still a myth.  This becomes most problematic when whole JISC programmes take on a label from the diagram - the Portal Programme for example.  In the meeting on the first day we tried to come up with a reasonable definition of the word 'registry' but failed to come up with any form of words that couldn't equally be applied to the word 'catalogue'.  Yet JISC funds cataloguing activity (e.g. Intute) as though it was completely separate from and different to registry activity.
  4. Technology vs. users as architectural drivers.  One of the other problems with the JISC IE diagram is that it was largely technology driven.  In particular, in the area of shared services there hasn't been much work done on deriving the services we think we need based on real end-user requirements gathering.  At the meeting on the second day we were tasked with brainstorming ideas for services and applications that would make use of the current set of shared infrastructural services.  My group had no trouble in coming up with lots of service ideas (I took it as a personal challenge to think of at least one idea involving Facebook, Twitter and Second Life) but getting them to involve the current set of shared services was pretty contrived.  Perhaps it is time to take a fresh look - analyse the services and applications we want, then see what shared services are required?

eFoundations is powered by TypePad