
December 15, 2006

Grants call 2006/2007 announced

I'm pleased to announce that we have made our call for funding for 2006/2007 available - a little later than I originally hoped, but in time for Christmas nonetheless!  See the call Web pages for full details but essentially we are looking for projects in the areas of:

  • Teaching and learning opportunities in 3-D virtual worlds such as Second Life
  • New approaches to learning based on Web 2.0
  • Access and identity management in e-learning

Projects must be led by UK academic institutions.  Initial proposals should be 2 sides of A4 in length (or equivalent) and should be with us by 29 January 2007.  Note that we don't have a huge pot of money to give away unfortunately - but we hope to spend about £400,000 from this call on 3 or 4 substantial projects.

December 13, 2006

SLDevU, London event

I went along to a SLDevU event in London earlier today.  The event was run by Linden Lab and was targeted primarily, I think, at advertising and PR agencies.  Much of the content of the day focused on how such agencies, and the organisations they represent, can make use of Second Life (SL).

I guess the key messages from the day were along the lines of:

  • think in new, in-world ways - don't simply try to replicate what is done in real life (RL)
  • exploit the technology and functionality of SL, but don't exploit its community of residents
  • offer something constructive to the SL community - don't see SL just as a new one-way delivery medium

The day started with a general overview by Glenn Fisher (Director of Marketing Programs at Linden Lab) of what SL is about - including a live tour of some SL venues.  This talk was perhaps a little superfluous since almost everyone in the room (well over 100 people I would guess) claimed to be a resident.

The opening presentation was followed by a demonstration of how to create content.  Unfortunately, it was marred by technical glitches - and the moment when the presenter's SL Mac client crashed probably caused him to wish that this particular bit of reality was slightly more virtual!  In questions later on in the day, these kinds of technical problems were picked up on to a certain extent, with inevitable questions about whether Linden Lab have got the basics right - like client and server resilience, searching, teleporting, etc. - particularly in the face of the huge growth in their user base.

As well as emphasising that SL is not a game, the introductory session highlighted three features of SL that they claim set it apart from other, similar systems:

  • community - the avatar forming the basis for real social interaction.  (As an aside, some research work was noted that showed that people tend to give their avatars more or less exactly the same amount of personal space in SL as they give themselves in RL).
  • user-created content - all content in SL is user-created
  • marketplace - transactions between residents driving a thriving economy

From my experience it does strike me that these are the defining features of SL.  Within a day of getting an SL account I had created a tee-shirt which I was theoretically able to sell and/or give away to friends. (I say theoretically only because I haven't actually managed to sell any of my Library 2.0 and Learning 2.0 tee-shirts yet). Within a week I had created the Eduserv MeetingPod and had a functional Eduserv office area. Within two weeks I had built ArtsPlace SL from the ground up and populated it with the OpenStudio exhibition, with a slow trickle of visitors (and an even slower trickle of L$ donations).

There were complaints that building stuff in SL is too hard, but I tend to disagree. The combination of simple building tools and the LSL scripting language is incredibly powerful, or so it seems to me. However, there are very real limitations with how external Web content can be embedded into the SL environment.  Based on my conversations with people, it strikes me that if you like SL, you really like it (I won't use the addicted word here!) and if you don't, then you tend towards really hating it.  It's almost a religious issue I guess. One of the presenters at the meeting today said that for those people who get over the initial orientation hurdle, SL grabs both hearts and minds, and I think there is something in that. 

Finally, a very interesting presentation by Justin from Rivers Run Red covered some of the higher profile brands that are visible in SL, including the BBC One Big Weekend event, Adidas, Mrs Jones, Duran Duran and others.

There was a lot of discussion in the question and answer session about how open SL is (or isn't), and the level of trust that the SL community is placing in Linden Lab in terms of long-term commitment. It was noted that the plan to make the client open source is now public knowledge. But that seems to me to miss the point - it's the SL servers that are important.

A comparison between SL and the early days of the Web was used several times during the day... and I can see that there are some similarities (certainly in terms of excitement).  However, at no stage in the history of the Web was it completely hosted on servers owned by one company.  The whole point of the Web was (and is) that it is completely open - anyone can build and run a Web server.  That is currently not the case with SL.  Yes, people own the IPR in any content they build in SL.  Yes, there are examples where that IPR has been realised into real money in RL.  But by and large you can't yet export the objects that you build in SL to the outside world - for importing into other 3-D systems for example.  And you certainly can't run your own 'sim'.

Having said that, some very positive noises were made at the end of the session about Linden Lab's long-term desire to see SL made more open, with other players being able to offer both support and technology within the SL infrastructure.  That seems like good news to me.  People talk about SL being the new 3-D Web, but that will only happen when I am able to run my own SL server in a way that integrates my 'sim' into an open network of 'sims' offered by other people.  Until that day, we are all completely at the mercy of Linden Lab's business model and success.

December 09, 2006

OpenStudio exhibition - upstairs at ArtsPlace SL

For info... I'm now tending to blog about my Second Life experiences over at the ArtsPlace SL blog.

ArtsPlace SL is a small gallery that I've built, intended primarily to provide a space where librarians, museum curators, artists and like-minded people can create and display new kinds of digital works.  At the moment I'm hosting a small exhibition of work from the MIT OpenStudio.  The show runs until December 23rd.

The other night I attended a kind of opening/welcome meeting for the residents of Cybrary City.  A slightly strange affair, with Paul Miller of Talis being dubbed Mayor of Cybrary City (and being given a chain and cat for his trouble)!  Following the meeting I met up with several people, including the first, and probably only(?), Athens administrator in SL.

December 08, 2006

Access Management in UK schools

I attended Becta's Federated access management showcase in London earlier this week.  The day was primarily targeted at the UK schools sector and was intended to raise awareness of the UK Access Management Federation for Education and Research (UKAMF) and to encourage the UK's Regional Broadband Consortia (RBC), Local Authorities (LA) and individual schools to think about the issues related to the roll-out of federated access management (currently in the form of Shibboleth).

Despite some good introductory presentations from a variety of perspectives, the day ended on a slight low note, with very few questions being asked of the assembled panel.  Either this stuff is felt to be so straightforward that no questions are necessary, or people are still at the stage where they don't even know what questions to ask!

Two related thoughts struck me as I travelled home.  Firstly, the word 'portal' was used several times during the day - primarily, I think, in the context of portals that point people to the resources available to them and that are therefore natural places where part of the access management process can take place.

Now, I don't know if it is just me, but whenever I hear the word portal these days I tend to make the assumption that it comes from a mindset stuck somewhere around two or three years ago.  Yes, I know that one can effectively build portals in the Web 2.0 era - but nobody talks like that anymore do they?

In a similar vein, there seemed to be no recognition, or at least none was articulated, of the general move towards Web 2.0, Learning 2.0, or whatever one wants to call it in the education space.  I blogged a while back about the Web site of the primary school with which I'm involved, which is now built almost entirely out of Web 2.0 services: images on Flickr, bookmarks on Del.icio.us, blogs on Blogger, ...
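By way of illustration (and only that), here's the sort of glue that holds such a site together: a few lines of Python that pull a school's public Flickr, del.icio.us and Blogger feeds and merge the most recent items into one list. The feed URLs and account names are made up for the example, and I'm assuming the feedparser library is available.

```python
# A minimal sketch, assuming the feedparser library is installed and that the
# school exposes public RSS feeds. The account names and URLs are illustrative.
import feedparser

FEEDS = {
    "photos": "https://www.flickr.com/services/feeds/photos_public.gne?id=SCHOOL_ID&format=rss_200",
    "bookmarks": "http://del.icio.us/rss/school-account",
    "news": "http://school-blog.blogspot.com/feeds/posts/default?alt=rss",
}

def recent_items(limit=5):
    """Merge the most recent entries from each feed into one list."""
    items = []
    for source, url in FEEDS.items():
        feed = feedparser.parse(url)
        for entry in feed.entries[:limit]:
            items.append({
                "source": source,
                "title": entry.get("title", ""),
                "link": entry.get("link", ""),
            })
    return items

if __name__ == "__main__":
    for item in recent_items():
        print(f"[{item['source']}] {item['title']} - {item['link']}")
```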

For schools like that, it is just as important that any move towards single sign-on encompasses those kinds of services as it is that it covers the more formally published learning content that currently appears to be the focus of attention.  Yet as far as I can tell, those kinds of Web 2.0 services are highly unlikely ever to be Shibbolised.

This is not to suggest that the current activities around the UKAMF are wrong in any way.  But in the context of access and identity management it is important not to lose sight of the bigger picture, the wider perspective of services out there in the wild which are becoming increasingly important in the way that learning is delivered.

December 06, 2006

It's all in the context

I spent the first part of this week at a workshop entitled "Contextual Metadata and the Teaching and Learning Context" organised by the MURLLO project, which is funded by the Eduserv Foundation and led by the eLanguages team based in the Centre for Language Study at the University of Southampton.

MURLLO is examining (amongst other things) the significance of "context-rich" metadata in supporting the discovery and selection of learning objects. The project's literature review, by Ann Jeffery, provides an overview of the topic and highlights a tension between, on the one hand, an approach to learning objects which emphasises "de-contextualisation" - a separation between the object and its context(s) of use (intended or actual) - with the intent of facilitating greater reuse, and, on the other, some evidence that the availability of information about the context(s) of use of an object is vital in enabling a potential user to find and choose objects suitable for some learning activity.

Of course this rather begs the question of what we mean by "context", and that was the topic of our opening discussion. And to be honest, while there seemed to be general acceptance that we could distinguish information about a resource which was "contextual" from information which was "context-independent", I'm not sure we really articulated very clearly what we really did mean by "context"! (Having pondered a bit more on the train back to Bath, I think the best definition I could come up with - and I think this would more or less fit retrospectively with our deliberations during the workshop - would be something like, "A set of circumstances in which a learning object is used or may be used.") There seemed to be an acceptance that contexts may be intended/projected/"designed for" (i.e. the provider of a learning object might specify that it is intended or designed to support some particular purpose and/or audience) or they may be actual (a teacher or learner makes use of a learning object in some real-world situation). Ideally there would be some overlap between those two categories!

A number of participants argued that some specification of context is an essential characteristic of a learning object: without some association with a context, the resource is not a learning object. But a single learning object may be deployed within several different contexts, some of which may be anticipated by the creator or provider, some of which may be quite unforeseen at the time the object is created or published.

This fairly open-ended notion of what "context" is perhaps inevitably leads to a somewhat fuzzy view of what might constitute "contextual metadata". It was suggested that any of the following (and this isn't intended to be an exhaustive list!) might be considered contextual metadata:

  • information about the purpose or objectives associated with a learning object
  • information about the instructional methods associated with a learning object
  • ratings and reviews based on the use of the learning object
  • structural relationships between the object and other resources, e.g. sequence in a learning design etc.
  • data derived from tracking the use of the object (numbers of downloads, time spent reading/using/playing/interacting with the object etc)
  • information about the accessibility of the object (I think the contextual element here is probably in the requirements/preferences of different users, or of a single user working in different environments; the characteristics of the resource against which those user preferences are matched are, it seems to me, not context-specific)
  • information about the role of the user
  • information about "user state" during their use of the object, from their previous learning experiences through to aspects of their physical environment

A full description of context, then, might well involve the description of several different resources and the relationships between them.
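To make that slightly more concrete, here's a minimal sketch (the class names, properties and URIs are entirely invented, not drawn from any particular metadata standard) of how one "context of use" might be modelled as a small set of related resources rather than as a flat record:

```python
# A rough sketch only: the URIs, properties and class names are invented
# for illustration, not taken from any particular metadata standard.
from dataclasses import dataclass, field

@dataclass
class Resource:
    uri: str                      # global, persistent identifier
    type: str                     # e.g. "LearningObject", "Activity", "Person"
    properties: dict = field(default_factory=dict)

@dataclass
class ContextOfUse:
    """One set of circumstances in which a learning object is (or may be) used."""
    learning_object: Resource
    activity: Resource
    participants: list
    setting: dict                 # e.g. course, level, language, delivery mode

example = ContextOfUse(
    learning_object=Resource("http://example.org/lo/123", "LearningObject",
                             {"title": "Past tense exercise"}),
    activity=Resource("http://example.org/activity/456", "Activity",
                      {"method": "pair work", "purpose": "revision"}),
    participants=[Resource("http://example.org/person/t1", "Person",
                           {"role": "teacher"})],
    setting={"course": "Beginners' Spanish", "mode": "face-to-face"},
)
```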

The workshop was fairly informal and was structured to emphasise discussion rather than presentations, and with the small number of invited participants, the format worked well - thanks also to some very effective facilitation by Hugh Davis and Dave Millard from Southampton. There were a few short presentations of which I'll mention only two:

  • Erik Duval (Katholieke Universiteit Leuven) contributed remotely (initially over a somewhat crackly Skype connection, and then over a considerably clearer - but probably rather more expensive! - mobile phone line) and gave a short description of his current work on attention metadata. The approach focusses on capturing data reflecting what users do with digital resources, both from the logs of server-side applications and from desktop tools. Erik pointed to the successful LastFM service as an example of what can be achieved through such approaches. (LastFM aggregates information provided by its members about the music they play using plug-ins in their desktop MP3 players, and then uses that information as the basis of personal/group histories and a filtering/recommendation system. My LastFM profile is here!) Potentially very large amounts of low-level/fine-grained data can be collected, and the analysis of such data may provide the basis for a better understanding of resource usage and, in turn, enhanced retrieval methods based on inferences drawn from that data. A couple of years ago, I saw Erik give a presentation in which he exhorted us to consider the proposition that "Forms [for the entry of metadata] must die!" and this time he threw out the suggestion that "If you can't automate it, it won't work!" While I'm not sure I would go that far - after all, it does seem that in some contexts, people are motivated to put considerable effort into, say, writing reviews for services such as Amazon or Rate Your Music - I certainly agree that we should use tools to exploit useful data that can be gleaned efficiently from the environment. (There's a rough sketch of the attention-metadata idea after this list.)
  • Christoph Richter (University of Hagenberg) summarised his approach to the question of context, illustrating how a single object might be used in the context of different learning activities. Although Christoph didn't have time to expand in detail on the role-based model he had developed, I found his graphics particularly helpful in clarifying some of the distinctions which other participants had been discussing from perhaps slightly different perspectives.
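As promised above, here's a rough sketch of the attention-metadata idea as I understood it - not Erik's actual system, which I haven't seen - showing low-level usage events being captured and then aggregated into the kind of per-resource summary a recommender might work from. The event fields and resource identifiers are invented for the example.

```python
# A minimal sketch of the attention-metadata idea: log what users do with
# resources, then aggregate. Event fields and resource IDs are invented.
from collections import Counter, defaultdict
from datetime import datetime, timezone

events = []  # in practice harvested from server-side logs and desktop tools

def record_event(user, resource, action, seconds=0):
    """Capture one low-level attention event."""
    events.append({
        "user": user,
        "resource": resource,
        "action": action,        # e.g. "open", "download", "play"
        "seconds": seconds,
        "when": datetime.now(timezone.utc).isoformat(),
    })

def usage_summary():
    """Aggregate events into per-resource counts and total time spent."""
    counts = Counter(e["resource"] for e in events)
    time_spent = defaultdict(int)
    for e in events:
        time_spent[e["resource"]] += e["seconds"]
    return counts, dict(time_spent)

record_event("alice", "http://example.org/lo/123", "open", seconds=240)
record_event("bob", "http://example.org/lo/123", "download")
print(usage_summary())
```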

Given this broad notion of what "contextual metadata" might cover, it becomes clear that this sort of metadata does not fit well within a framework where "the learning object metadata" is conceived as something which is generated once, typically by the provider or distributor of the learning object (or a cataloguer working on their behalf) as some sort of complete, authoritative and more or less static "document" or "record". On the contrary, this information is, almost by definition, provided from multiple independent and possibly quite diverse sources, using different technological systems, and over an undefined period of time. In such a scenario, the capacity to capture and disclose information about who (or what, in the case of algorithmically inferred data) is providing such information - the provenance of this metadata - becomes significant.

Further, any effective aggregation and merging of this distributed data depends on the consistent use of resource identifiers which are global in scope and persistent through time. It seems to me that the diversity of the information itself - information about activities/events, people, places, etc - and of the potential metadata providers requires a metadata architecture which is designed to support flexibility, extensibility and distributed metadata creation - and indeed Mikael Nilsson and his colleagues wrote about exactly these challenges for learning object metadata some four years ago.
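A crude way of picturing this (purely illustrative, and not any particular schema): each statement about a learning object arrives as a separate assertion carrying its own provenance, and aggregation is simply a matter of grouping assertions by the object's global identifier.

```python
# Sketch only: shows why global identifiers and per-assertion provenance
# matter when metadata comes from many independent sources over time.
from collections import defaultdict

assertions = [
    # (object URI,                property,    value,             provider,              date)
    ("http://example.org/lo/123", "title",     "Past tense quiz", "provider-catalogue",   "2005-10-01"),
    ("http://example.org/lo/123", "rating",    4,                 "teacher-review-site",  "2006-03-14"),
    ("http://example.org/lo/123", "downloads", 312,               "repository-logs",      "2006-11-30"),
]

def merged_description(uri):
    """Group all assertions about one object, keeping who said what, and when."""
    description = defaultdict(list)
    for subject, prop, value, provider, date in assertions:
        if subject == uri:
            description[prop].append({"value": value, "provider": provider, "date": date})
    return dict(description)

print(merged_description("http://example.org/lo/123"))
```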

Having said all this, I'm also conscious of Scott's recent note of caution about competencies and "using ontologies and schemas to try to pin down human capability in all its dimensions". One challenge is that of determining/assessing what contextual metadata may be required to deliver useful functions to the learner and teacher. This emerged from the workshop as an area worthy of further investigation - perhaps by using some of the contextual data that the eLanguages team have already accrued in the course of this and related projects to observe how teachers and learners actually use that data, whether and how it helps them in the discovery and selection process, and what other data might usefully be acquired/provided.

Thanks to Kate Dickens and the MURLLO team for an enjoyable event and a stimulating couple of days.

December 05, 2006

UKAMF revisited

At least one reader felt that my previous posting about the UK Access Management Federation was overly negative about the JISC.

That was not my intention, and if that is how people interpreted it, then I apologise.  Sure, I have some minor grumbles along the way, but as a long-term supporter of open standards I completely endorse the current move to Shibboleth within UK academia and I take my hat off to the JISC, and the projects that they have funded so far, for what has been achieved to date.

But that doesn't change my fundamental point, which is that the move to open standards inevitably opens up the marketplace to commercial competition (a good thing), meaning that suppliers like Eduserv have to take decisions about technology and product branding somewhat differently to how they did it in the past. We need to find our position in the new space as much as anyone else - and as part of doing that we need to be careful not to confuse Athens the technology with Athens the product.

The case for OpenID

There's a nice little introduction to why OpenID is important by Netmesh's Johannes Ernst and VeriSign's David Recordon on ZDNet - The case for OpenID.  The piece highlights the simplicity of the OpenID spec and argues, quite convincingly I think, that successful initiatives in this area must start small and simple and grow in complexity only when the need demands it.

As they put it: "Some have told us they consider the OpenID community to lack a clear process or structure, to not solve the 'real' problems in identity (yet?), or to be only applicable for low-end problems. They are probably right; however, we think of it as the early days of Internet-scale innovation in action, where these characteristics are desirable, not detrimental. The arguments are the same that were made against the Web in its early days, and the problems either were fixed or turned out not to be problems at all. There is no reason to believe it should be different for OpenID."
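To give a flavour of how lightweight the starting point is, here's a rough sketch - mine, not the article's - of the very first step an OpenID 1.x consumer performs: fetch the claimed identity URL and look for the openid.server (and optional openid.delegate) links in the page. Everything else (the redirect to the server, association, signature verification) is omitted, and real implementations are rather more careful than this regular expression.

```python
# A minimal sketch of OpenID 1.x discovery: the relying party fetches the
# claimed identity URL and reads the openid.server / openid.delegate links.
# Real implementations do much more (redirects, association, signature checks).
import re
import urllib.request

LINK_RE = re.compile(
    r'<link[^>]+rel=["\']openid\.(server|delegate)["\'][^>]+href=["\']([^"\']+)["\']',
    re.IGNORECASE)

def discover(identity_url):
    """Return the OpenID server (and delegate, if any) declared by an identity page."""
    with urllib.request.urlopen(identity_url) as response:
        html = response.read().decode("utf-8", errors="replace")
    found = {rel: href for rel, href in LINK_RE.findall(html)}
    return found.get("server"), found.get("delegate")

# e.g. server, delegate = discover("https://example.org/andy")
```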

December 01, 2006

UK Access Management Federation

The JISC yesterday announced the launch of the UK Access Management Federation.  I, and others at Eduserv, wish them well in this venture and look forward to working with them, UKERNA and others within this new framework to help realise the benefits that it should bring to the community.

But I wish the JISC would stop talking on Eduserv's behalf in their announcements about the Federation. By moving to a SAML-based federated access management landscape, the JISC has essentially opened up the marketplace - enabling institutions to develop their own solutions but also allowing third parties to sell their outsourced solutions where that is appropriate. This has got to be a good thing for the community overall.

Eduserv welcome that marketplace and we have already announced our intention to offer an outsourced identity provision service of our own. This will allow institutions to continue to take advantage of the benefits of a managed access management solution, as they have done for many years.  But this service, while incorporating existing Athens features, will also provide new functionality – so the option we offer is not simply as the JISC suggests 'to continue with Athens' but rather to have the benefits both of the current Athens functionality and of participation in the UK Access Management Federation (and indeed other potential federations) by using Eduserv as an outsourced identity provider.
