
November 30, 2007

Running a university open day - a punter's perspective

This is way off topic for this blog but it's the weekend and I couldn't resist...  I'm recently back from yet another university open day, I won't say which, and I feel the need to pass on my top five tips for making them more compelling - from the point of view of the punter.

None of us particularly likes to admit it, but when I walk thru the doors of your university on the open day I'm a prospective customer, expecting to spend a fair old chunk of my annual salary over the next few years and I'm therefore someone whose concerns include things like value-for-money.  I want to think that my pride and joy is going to get a good standard of education (yes, 'good' is probably good enough - you and the government will probably call it 'excellent', but I'll know what you mean), is going to have fun, and is going to stand some chance of getting a half-decent job at the end of it.

Selling a university is a bit like selling a house... I'm not suggesting that you bake bread and make fresh coffee before I arrive (though fresh coffee would be nice) but look and feel is at least as important as substance.

So... what are my tips?

Firstly, you'll no doubt be spending some time talking to me and the other parents about the university and the course.  Give this job to your best lecturer, make sure that the technology they use to do the presentation actually works smoothly and work on their slides so that the rather dull content comes across as vaguely sexy (or at least modern in style).  With a bit of luck I won't realise that you've done this especially for the open day... I'll just assume that all lectures in the university will be like this.  Don't give the job to someone who hasn't been on the PowerPoint course and doesn't know where the projector 'on' switch is just because they happen to be head of department.  And don't make this person wear a suit.  We've all seen far too many Open University programmes on BBC2 to fall for it.  If you're a bloke, wear whatever the current equivalent of a paisley shirt with big collars is.  Corduroy is fine (hey, how many times is it possible to say that!?).  Come across as comfortable and approachable.

And while we're on the subject of image, make the place look lived in.  Don't make me wait outside a cold breeze block lecture theatre in a cold breeze block corridor with nothing to do.  Put up some current student work and research posters for people to look at.  Have some art work on the walls.

Secondly, make as much use of students as you can.  Students are great... they're generally easy to get on with and it's easier to believe that they're telling you the truth.  Get one or two to help with the presentation - don't try and show me a video of them.  Ask them to be honest about what they like and dislike about the course and university.  It might be risky, but it'll pay off.

Thirdly, sort out transport on the day.  We all know that the public and/or university transport between campuses and to/from the city centre will totally suck once term starts but at least let us believe for one day that the system stands half a chance of working.

Fourthly, catering.  Re-read the above paragraph substituting 'catering' for 'transport'! :-)

Finally, sort out the weather.  OK, I know I'm being grossly unfair and that this is totally beyond your control but I'd be interested to know if there's any kind of correlation between numbers of applications in a given academic year and the number of hours of sunshine on the previous year's open days.  Even the worst kind of sixties architecture can look half decent on the outside when the sun shines.  Throw in some rain or, worse, horizontal sleet instead and wild horses won't drag me back to your campus no matter how good your results are.

Yours tongue-in-cheekily, ...

November 28, 2007

Reflections on a DIY streaming experience

As mentioned here and here, I spent Monday in Birmingham at UKOLN's Exploiting The Potential Of Blogs And Social Networks workshop in order to video-stream the event live onto the Web and into Second Life.

I want to use this blog entry to summarise what we did, why we did it, what worked and what didn't.  It wasn't a total success but I think there were some useful lessons, which I'll try and come back to at the end.

So, what were we trying to do and why?  Well, in discussion with Brian Kelly, who was organising the event, I agreed that it would be useful to investigate how easy it is to video-stream live meetings onto the Web and into Second Life at little or no cost.  Investigate in the sense of actually trying it for real, as opposed to simply theorising about what technology is now available.  The reason this is of interest, for me at least, is the whole agenda around virtual meetings - both for environmental and widening participation reasons.

So, we started with 3 basic requirements:

  • low cost technology
  • streaming both onto the Web (for viewing in a Web browser) and into Second Life
  • using chat facilities to encourage active participation between real and virtual delegates.

The solution we agreed on included:

  • a basic Web-cam and a podcasting kit
  • two laptops, one for the streaming and one for Second Life (a very well spec'ed single laptop might have sufficed for both tasks but one wasn't available and using two felt like the safest option)
  • the newly announced Veodia streaming service
  • the Virtual Congress Centre venue on Eduserv Island in Second Life
  • a Moodle chat room (hosted at sloodle.org) and a Sloodle chat-logger object to link in-world chat in Second Life to the Moodle chat room
  • Slideshare, to host a copy of the slides being shown in the venue (for those delegates viewing the video-stream on the Web).

I am very grateful to Tom Blossom at Veodia for upgrading our free account for the day and to Dan Livingstone and Peter Bloomfield at the University of Paisley for help with the Moodle and Sloodle tools.

I arrived early at the venue to get set up.  We had separate wired Internet connections for the two laptops and the venue support staff allowed me to take an audio feed direct from their PA system into my podcast kit audio mixer.  A Second Life connection was quickly established.  Phew.  So far, so good.

Next, I tried a quick streaming test.  Veodia is very easy to use... navigate to the Veodia home page, sign in, start a new broadcast, name and describe it, select your camera and microphone, then go.  Bang.  Done.  Couldn't be easier.  (Note that there are also facilities to pre-schedule broadcasts in your own 'channel', though I have to confess that I found the interface to this somewhat confusing so didn't bother using it.)

Once the stream is up and running it is possible to cut-and-paste the Quicktime-compatible stream URL from the Veodia Web page into the media tab on a land parcel in Second Life.  I pasted the URL into the Virtual Congress Centre land parcel and viewed the feed.  Everything seemed OK.  I began to relax.

Time for a quick coffee.

Next I checked that the slides that I'd previously loaded into the screen (see the left-hand screen in the picture) in the Virtual Congress Centre worked OK.  Yup.  Note that this needs doing in advance for any sizable presentation.  In this case, about 130 textures had to be uploaded into Second Life.  At L$10 per texture, that's about £2.00 in real money!  I also checked that the Sloodle/Moodle chat room link up was working OK.

By this time, the real venue, the virtual venue, and the Moodle chat room were starting to fill up.

Note that we had three audiences for this event... those in the room (some of whom were beginning to make use of the venue's wireless network), those in the Virtual Congress Centre in Second Life, and those watching on the Web.  As far as I could tell, we had about 100 delegates in the venue, 15 or so in Second Life (at least at the start of the day) and 5 or 6 watching on the Web.  I think it is worth noting that we hadn't promoted the virtual side of this event too hard, luckily as it turned out, so we weren't expecting too many more virtual delegates than this.  We'd previously announced a Wiki page for the streaming and this was kept updated with information about what members of the three different audiences should do to take part in the streaming experiment.

Brian introduced the day with a short presentation.  I started a new video stream, plugged the URL into the Virtual Congress Centre land parcel, announced the URL in the Moodle chat room and kept my fingers crossed.

My avatar (Art Fossett) was also in-world, able to chat with the virtual audiences and keep the in-world slide-show in step with Brian's slides in the venue.

Everything was going smoothly.  Too smoothly as it happened!  After 10 minutes or so the virtual delegates started to complain that the sound was breaking up.  This got so bad during Brian's talk that by the end of it I decided to stop the stream and start it again.  Bad move.  Trying to start it again simply resulted in repeated errors from Veodia saying that there wasn't enough upstream bandwidth to push the stream up to the Veodia servers.  I tried repeatedly to restart it, but even on the few occasions it started, the sound was so poor as to be of little use to the virtual delegates.

I should stress that this was not a fault with Veodia... simply a lack of upstream bandwidth in the venue.  I think what had happened was that as soon as the delegates in the venue took their seats, got out their laptops and started doing whatever delegates do online while they are supposed to be listening to speakers talk, the available bandwidth for streaming got significantly reduced. I tried Ustream.tv, an alternative free video-streaming service, but had similar problems - not enough bandwidth.  Note that unlike Veodia, Ustream.tv does not support Second Life, which is why we hadn't used it in the first place, but I was getting desperate!

OK... realising that I was going to look a complete twit if I didn't do something, I took the decision to switch to audio-streaming on the basis that the bandwidth requirements for audio would be greatly reduced.  Unfortunately, I hadn't planned well enough for this.  I had to spend valuable time installing a copy of Winamp, buying a Shoutcast server plan on Viastreaming and generally faffing about.  It was the final talk of the morning session before I got the audio stream up and running.

However, once it was running things went pretty well.  I stepped thru the slides in Second Life as before (significantly harder than with the Veodia stream I should add, since the Viastreaming server introduced a delay of about 2 or 3 minutes) and got some positive comments from the in-world delegates.

Now, I think it is worth noting that an interesting thing happened while I was messing around trying to set up the audio stream.  I expected the virtual audience in Second Life to drift away, bored with the lack of anything to see or hear.  But they didn't.  They started talking (i.e. chatting) to each other.  They introduced themselves to each other, saying who they were and where they were from.  This wasn't prompted in any sense... just natural chit-chat between a group of people stuck in a venue with nothing to do.  Except they weren't stuck in a venue in any real sense... they were in a virtual venue.  I remember that at one point, probably while I was waiting for Winamp to install or something, I jokingly remarked, "That's right, talk amongst yourselves :-)".

It was an interesting phenomenon, reinforcing, for me at least, the sense of presence and community you get from a virtual world like Second Life.  This is much more than simply being in a chat-room together.  I'd be very interested in comments from the virtual delegates on this point.

There were two talks after lunch, both of which were audio-streamed without any problems.  Unfortunately, by this stage many of the virtual delegates had gone - I'd half suspected this might happen anyway - and we were left with only 4 or 5 delegates in Second Life.  One of the problems with being a virtual delegate is that you don't get lunch or any of the socialising that goes with it.

Similarly, when the delegates in the venue broke into groups for their discussion session, the virtual delegates were left with nothing to do.  We could, perhaps, have had a discussion group of our own but unfortunately, I hadn't prepared properly for that.  Again, this is one of the things that needs thinking about when planning a hybrid RL/SL event.

So, what did we learn?

  • Never attempt video streaming without understanding the network environment within which you are working and in particular without checking the upstream bandwidth in whatever venue you are using.  Speakeasy offer a natty bandwidth tester which can help with this (a rough DIY check is also sketched below, after this list).  As far as I know Veodia requires a guaranteed 200Kbps upstream, but having more than that obviously helps - having it through a dedicated line that other people aren't sending traffic over is a good idea as well!
  • A combination of audio-streaming, Second Life and in-world slides is very effective as an alternative to video streaming.  Now that Second Life supports voice, one could use that as the mechanism for streaming the audio - we didn't do this on the day because we'd decided in advance that we wanted to support delegates on the Web as well as those in Second Life.  With hindsight, I wonder if we shouldn't have bitten the bullet and only supported Second Life.  If we had, then I suspect we would have had audio working much more quickly.
  • I'm convinced that the use of Second Life brings a sense of presence that is missing from many other forms of virtual conferencing.
  • The Sloodle chat-logger worked very reliably for linking Second Life chat to a Moodle chat room and this was definitely used for communication between the virtual delegates on the Web and those in Second Life.  I don't know how much interaction we got with delegates in the real-life venue - not much I suspect - though at least one person came into Second Life using the venue's wireless network.  This possibly could have been improved by better publicising the chat facilities to the real-life delegates on the day.
  • Preempt problems by having alternatives in mind.  I'd thought about using audio-streaming as an alternative but hadn't prepared for it by installing the required software, etc.  This cost valuable time on the day.  When you are streaming a live event, you can't ask the speakers to wait while you sort out problems.
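
For what it's worth, here's the kind of rough DIY upstream check I have in mind, as a minimal Python sketch.  The upload endpoint is hypothetical - you'd need a server somewhere willing to accept and discard a dummy POST - and a purpose-built tester like Speakeasy's will give you better numbers.

```python
import time
import urllib.request

def estimate_upstream_kbps(url, n_bytes=250_000):
    """Time an HTTP POST of n_bytes of dummy data and derive a rough
    upstream rate in kilobits per second.  'url' is a hypothetical
    endpoint that accepts (and discards) POSTed data."""
    payload = b"0" * n_bytes
    req = urllib.request.Request(
        url, data=payload,
        headers={"Content-Type": "application/octet-stream"})
    start = time.time()
    urllib.request.urlopen(req).read()
    elapsed = time.time() - start
    return (n_bytes * 8 / 1000.0) / elapsed

# Veodia reportedly wants a guaranteed 200Kbps upstream, so leave headroom:
if estimate_upstream_kbps("http://example.org/upload-sink") < 400:
    print("Upstream looks too thin for reliable video streaming")
```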

Despite the problems, I'm convinced that this kind of video-streaming technology is now well within reach at little or no cost.  (Note that costs will probably depend in part on the number of people you want to stream to - for example, I think that Veodia only supports up to 5 simultaneous streams on their free package).  Overall it was an interesting experience and I hope this report has been useful.  I learned a lot about what not to do and I plan to do better next time.

November 27, 2007

On the road again

Both Pete and I have been on the road a lot over the last few days, hence the lower than usual number of blog entries... for which, apologies.

My travels started last week with the JISC CETIS conference in Birmingham and my somewhat abortive attempt at a video blog entry (see previous blog entry).  My original plan was to video blog both days, but the blunt realisation that some people would rather not have their photos made available online (even without any association with their name) and the ensuing gap since the conference finished mean I won't bother.  I don't think you are missing much to be honest (and even I have to confess that I'm already bored by the photo transitions available on Animoto!).

The conference was very enjoyable and it was particularly good to meet Sarah Robbins and Mark Bell who had come over from the US to speak at the event, both of whom I had only previously met in Second Life.  It was very nice to be able to meet with virtual friends in a real-life pub and warm beer kind of way.  Both gave very interesting presentations in the virtual worlds session at the event (as did Dan Livingstone, who spoke in the same session), my only major comment being that it was a shame that the audience for both was relatively small.  It is also worth noting that, as far as I could tell, the network at the conference venue did not support Second Life connections, so no live demoing was possible.

My other lasting thought (I confess that I only brought away a few scrappy notes, so any kind of detailed blog is out of the question) was the apparent gulf between the somewhat conservative computing services view of the world, as presented by Iain Stinson (University of Liverpool), and what I perceived to be the rather more cutting edge view of the conference more generally.  I don't mean that in a derogatory way to either viewpoint, since we probably need some of both... but the gap between the two struck me as pretty startling and I think that ultimately we have to find ways of bringing them together to take any kind of sensible path forward.

The following day I traveled to London to speak at the UKSG event, Caught up in Web 2.0? I had been asked to speak about Second Life, something I'm always happy to do, though in this particular case I spent some time explaining what I saw as the similarities and differences between SL and Web 2.0.  It is also worth noting that I'd arrived armed only with a very thin presentation, expecting to be able to demo Second Life live to the assembled masses.  Unfortunately, the venue's firewall prevented this from happening, meaning that I had to spend the first two talks re-purposing a previous set of slides :-(.  Despite that distraction, I found the other presentations on the day very interesting.

There's a small theme emerging here... Second Life is demanding enough, technically, that being able to use it in any given venue is not guaranteed.  It was therefore with some trepidation that I went back to Birmingham yesterday for UKOLN's workshop on blogs and social networks which I had, somewhat madly, agreed to try streaming into Second Life with no real knowledge of what kind of network was going to be available.

I'll blog the event separately on the grounds that there are some useful lessons to be learned, but suffice to say that things went less smoothly than they might have, though not necessarily for the reasons I was concerned about before I went!

November 21, 2007

JISC CETIS conference - day 1

A short 'video' blog of day one of the JISC CETIS conference, using the photos I took during the opening plenaries in the morning and the MUVE session after lunch, peppered with words and phrases that I noted popping up...

2007-11-21: Video link removed temporarily.  A delegate asked me not to make their photo available on the Web and I have no sure way of knowing yet whether their image was in one or more of the audience shots that I used in the video.  I've therefore taken it down again.  Apologies to all concerned.

2007-11-27: OK, I've re-instated the video, having spent some time checking thru the images it contains...

November 20, 2007

Semantic structures for teaching and learning

I'm attending the JISC CETIS conference at Aston University in Birmingham over the next couple of days.  One of the sessions that I've chosen to attend is on the use of the semantic Web in elearning, Semantic Structures for Teaching and Learning.  A couple of days ago a copy of all the position papers by the various session speakers came thru for people to read - hey, I didn't realise I was actually going to have to do some work for this conference! :-)

The papers made interesting reading, all essentially addressing, from a variety of viewpoints and perspectives, the question of why the semantic Web hasn't had as much impact on elearning as we might have hoped a few years back.

Reading them got me thinking...

Some readers will know that I have given a fair few years of my recent career to metadata and the semantic Web, and to Dublin Core in particular.  I've now stepped back from that a little, partly to allow me to focus on other stuff... but partly out of frustration with the lack of impact that these kinds of developments seem to be having.

Let's consider the area of resource discovery for a moment, since that is probably what comes to mind first and foremost when people talk about semantic Web technologies.  Further, let's break the world into three classes of people - those who have content to make available, those who want to discover and use the content provided by others, and those who are building tools to put the first two groups in touch with each other.  Clearly there are significant overlaps between these groups and I realise that I'm simplifying things significantly but bear with me for a second.

The first group is primarily interested in the effective disclosure and use of their content.  They will do whatever they need to do to ensure that their content gets discovered by people in the second group, choosing tools supplied by the third group that they deem to be most effective and balancing the costs of their exposure-related efforts against the benefits of what they are likely to enable in terms of resource discovery.  Clearly, one of the significant criteria in determining which tools are 'effective' has to do with critical mass (how many people in the second group are using the tool being evaluated).

It's perhaps worth noting that sometimes things go a bit haywire.  People in the first group put large amounts of effort into activities related to resource discovery where there is little or no evidence of tools being provided by the third group to take advantage of it.  Embedding Dublin Core metadata into HTML Web pages strikes me as an example of this - at least in some cases.  I'm not quite clear why this happens, but suspect that it has something to do with policy drivers taking precedence over the natural selection of what works or doesn't.

People in the second group want to discover stuff and are therefore primarily interested in the use of tools developed by the third group that they feel are most useful.  Their choices will be based on what they perceive to work best for resource discovery, balanced against other factors such as usability.  Again, critical mass is important - tools need to be comprehensive (within a particular area) to be deemed effective.

The third group need users from the other groups to use their tools - they want to build up a user-base.  The business drivers for why they want to do this might vary (ad revenue, subscription income, preparing for the sale of the business as a whole, kudos, etc.), but, often, that is the bottom line.  They will therefore work with the first group to ensure that users in the second group get what they want.

Now, when I use the phrase 'work with' I don't mean in a formal business arrangement kind of way - as a member of the first group, I don't 'work with', say, Google in that sense.  But I do work within the framework given to me by Google (or whoever) to ensure that my content gets discovered.  I'll optimise my content according to agreed best-practices for search-engine optimisation.  I'll add my content to del.icio.us and similar tools in order to improve its Google-juice.  I'll add a Google site map to my site.  And so on and so forth...

I'll do this because I know that Google has the attention of people in the second group.  The benefits in terms of resource discovery of working within the Google framework outweigh the costs of what I have to do to take part.  In truth, the costs are relatively small and the benefits relatively large.

Overall, one ends up with a loosely coupled cooperative system where the rules of engagement between the different parties are fairly informal, are of mutual benefit, evolve according to natural selection, and are endorsed by agreed conventions (sometimes turning into standards) around best-practice.

I've made this argument largely in terms of resource discovery tools and services but I suspect that the same can be said of technologies and other service areas.  The reasons people adopt, say, RSS have to do with low cost of implementation, high benefit, critical mass and so on.  Again, there is a natural selection aspect at play here.

So, what about the Semantic Web?  Well, it suffers from a classic chicken and egg problem.  Not enough content is exposed by members of the first group in a form suitable for members of the third group to develop effective tools for members of the second group.  Because the tools don't exist, the potential benefits of 'semantic' approaches aren't fully realised.  Members of the second group don't use the tools because they aren't felt to be good or comprehensive enough.  As a result, members of the first group perceive the costs of exposing richer Semantic Web data to outweigh any possible benefits because of lack of critical mass.

Can we break out of this cycle?  I don't know.  I would hope so... and Eduserv continue to put work into Semantic Web technologies such as the Dublin Core on the basis that we will.  On the other hand, I've felt that way for a number of years and it hasn't happened yet!  In rounding up the position papers in her blog, Lorna Campbell quotes David Millard, University of Southampton:

the Semantic Web hasn’t failed, it just hasn’t succeeded enough.

That's one way of looking at it I suppose and it's probably a reasonable view for now.  That said, I'm not convinced that it is a position that can reasonably be adopted forever and, with reference to my earlier use of the phrase "natural selection", it hardly makes one think of the survival of the fittest!?

What do I conclude from this?  Nothing earth shattering I'm afraid.  Simply that for semantic approaches to succeed they will need to be low cost to implement, of high value, and adopted by a critical mass of parties in all parts of the system.  I suspect that means we need to focus our semantic attention on things that aren't already well catered for by the very clever but essentially brute-force approaches across large amounts of low-semantic Web data that work well for us now... i.e. there's no point in simply inventing a semantic Web version of what Google can already do for us.  One of the potential problems with activities based on the Dublin Core is that one gets the impression that that is what people are trying to do.

Again, I'm not trying to argue against the semantic Web, metadata, Dublin Core or other semantic approaches here... just suggesting that we need to be clearer about where their strengths lie and how they most effectively fit into the overall picture of services on the Web.

November 19, 2007

OpenID & FOAF

Another question emerging from the OpenID event, which I think Scott may have mentioned in passing, but I can't recall anyone discussing in detail, is raised by Mike Ellis here: how does OpenID fit in with the "social graph" and the various specifications which deal with "personal profiles" and other aspects of my "social network", things like the Friend of a Friend (FOAF) RDF vocabulary and XFN?

While I don't claim to be in a position to give a full answer to that question, it's worth noting that the FOAF-folk have recently introduced a property foaf:openid, which is designed to express the relationship between an agent and an OpenID. A couple of points about the FOAF approach to OpenID (a short code sketch follows the list):

  • the rdfs:range of the foaf:openid property is the class foaf:Document, so the URI used as the object of a triple with a foaf:openid predicate - i.e. the OpenID URI - denotes a document, not the agent itself. However, the property is defined as - in OWL terms - an inverse functional property, meaning that anything that is the foaf:openid of something, is the foaf:openid of no more than one thing. If I find two separate triples each associating some unidentified agent with the same OpenID URI, I can conclude that they are talking about the same agent. So in effect, the OpenID URI becomes an "indirect identifier" of the agent.
  • the rdfs:domain of the foaf:openid property is the class foaf:Agent, not the class foaf:Person. Since the class foaf:Agent includes not only persons but also "organisations" and "groups", this allows for the scenario in which a single OpenID URI is indeed shared by several individuals constituting a single foaf:Agent.
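
To make the inverse functional property point concrete, here's a minimal sketch in Python using rdflib, "smushing" agents that share a foaf:openid value.  The input filenames are hypothetical.

```python
from rdflib import Graph, Namespace

FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.parse("alice_profile.rdf")   # hypothetical FOAF documents
g.parse("other_source.rdf")

# Because foaf:openid is declared inverse functional, two otherwise
# unidentified agents that share a foaf:openid value can be concluded
# to be the same agent - the OpenID URI acts as an indirect identifier.
agents_by_openid = {}
for agent, openid in g.subject_objects(FOAF.openid):
    if openid in agents_by_openid and agents_by_openid[openid] != agent:
        print(f"{agents_by_openid[openid]} and {agent} denote "
              f"the same agent (shared OpenID {openid})")
    agents_by_openid[openid] = agent
```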

And for a rather nice example building on this and using FOAF and OpenID in tandem, see Dan Connolly's piece, "FOAF and OpenID: two great tastes that taste great together", where the "social graph" obtained from FOAF data is used as the basis of a "whitelist" for authorisation choices:

... you can comment on our blog if:

  1. You can show ownership of a web page via the OpenID protocol.
  2. That web page is related by the foaf:openid property to a foaf:Person, and
  3. That foaf:Person is
    1. listed as a member of the DIG group in http://dig.csail.mit.edu/data, or
    2. related to a dig member by one or two foaf:knows links.
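
As a rough illustration of how a relying party might evaluate rules 2 and 3 against aggregated FOAF data, here's a hedged rdflib sketch.  The group URI is a made-up stand-in, and rule 1 (proving ownership of the page via the OpenID protocol) is assumed to have already happened.

```python
from rdflib import Graph, Namespace, URIRef

FOAF = Namespace("http://xmlns.com/foaf/0.1/")
# Hypothetical URI for the DIG group described in http://dig.csail.mit.edu/data
DIG_GROUP = URIRef("http://dig.csail.mit.edu/data#DIG")

def may_comment(g, openid_page):
    """Check rules 2 and 3 of the whitelist quoted above against graph g;
    openid_page is the URI whose ownership was proved via OpenID (rule 1)."""
    members = set(g.objects(DIG_GROUP, FOAF.member))
    # Rule 3.2: widen the whitelist to anyone within one or two
    # foaf:knows links of a DIG member.
    whitelist = set(members)
    for _hop in range(2):
        whitelist |= {friend
                      for person in list(whitelist)
                      for friend in g.objects(person, FOAF.knows)}
    # Rule 2: is there a foaf:Person whose foaf:openid is that page?
    return any(person in whitelist
               for person in g.subjects(FOAF.openid, openid_page))
```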

OpenID and metadata in URIs

As Andy reports, at the Eduserv Foundation OpenID event, there was a good deal of discussion around the question of the levels of trust which "relying parties" - service providers - might ascribe to different OpenID identity provider services, with some of the participants from educational institutions suggesting that, as relying parties, they might trust only OpenIDs provided by their own institution (as an OpenID identity provider), or at least by some named set of "friends". (See, for example, the comments by Sean Mehan and Scott Wilson). I guess this becomes particularly relevant when you consider that there are OpenID identity providers like Anonymous OpenID which issue me with an OpenID (and respond to OpenID requests from a "relying party") without requesting any authentication on my part.

I think this touches on an important issue related to the use of URIs generally, and the use of URIs in OpenID in particular. i.e. I think it's important here that we take care to avoid falling into the trap of conflating what are two quite distinct assertions:

_:person hasOpenID some-URI-with-"xyz.ac.uk"-as-domain.

and

_:person isCurrentlyAffiliatedWith the-academic-institution-which-owns-the-"xyz.ac.uk"-domain .

or if not conflating those two assertions, then assuming that the latter can be inferred from the former.  And I'm certainly not suggesting that either Scott or Sean were doing this, I hasten to add - but I'll labour the point because I have seen discussions elsewhere where I think this has been a problem.

Take a concrete example. Until eighteen months or so ago, I was an employee of the University of Bath, and if the University had assigned OpenIDs to their staff, I might have ended up with an OpenID of something like http://openid.bath.ac.uk/petejohnston. And I could have used that URI as my OpenID when authenticating both to services provided by the University and also to services provided by other agencies.

And when I left the University, I would have expected to be able to continue using my University-supplied OpenID as part of my authenticating to various services. My authorisation to use the resources licensed by the University for current staff and students of the University would have been revoked, certainly. But I would have expected to continue to use that OpenID when I authenticated to services that I was still authorised to use (because my access to them is not conditional on my membership of the University of Bath): I don't want to lose access to my Magnolia account just because I changed jobs.

And conversely, of course, I currently have OpenIDs supplied by a range of OpenID identity providers, many of whom I've had no other relationship with except to sign-up to obtain an OpenID: I have no other affiliation with the organisations that provide those services and own the domain names on which those URIs are based.

So it would be an error for a "relying party" to make decisions about my affiliation to the University of Bath (and issues of authorisation dependent on that) purely on the basis of the fact that my OpenID URI was issued by the University of Bath. If such access is dependent on current institutional affiliation, those relying parties need some piece of information other than my OpenID URI alone to assess whether I am currently authorised to access resources.
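
To put the same point in code terms: the only thing a relying party can mechanically read off an OpenID URI is its syntax - the issuing host, say - and that tells it who minted the identifier, not who currently employs its owner.  A trivial sketch, re-using the hypothetical Bath URI from above:

```python
from urllib.parse import urlparse

def openid_host(openid_uri):
    """Return the host serving this OpenID URI.  This identifies the
    issuer of the identifier - nothing more."""
    return urlparse(openid_uri).hostname

# 'openid.bath.ac.uk' says the University issued the identifier; it does
# not follow that its holder is currently affiliated with the University.
print(openid_host("http://openid.bath.ac.uk/petejohnston"))
```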

I think this is really a case of the "metadata in URIs" question which is the topic of a finding of the W3C Technical Architecture Group. Essentially that document says that it is quite legitimate (and indeed may be useful) for URI owners to encode some metadata about the (unchanging attributes of) identified resources in the URIs they assign to those resources, and those URI owners may describe the conventions they use in specifications they publish. The users/consumers of URIs assigned by others, however, should - in the absence of such an explicit licence to infer metadata from a URI - be cautious in the inferences they make based on URIs they encounter.

November 17, 2007

Animoto

Animoto is a web application that automatically generates professionally produced videos using patent-pending Cinematic Artificial Intelligence technology and high-end motion design. Each video is a fully customized orchestration of user-selected images and music. Produced in widescreen format, Animoto videos have the visual energy of a music video and the emotional impact of a movie trailer.

So sayeth the blurb...

In practice Animoto is a very neat application that takes a bunch of your images and mashes them together with a sound-track to produce a very appealing video.  Free accounts are limited to 30 second shorts...  $30 a year allows you to create an unlimited number of longer videos.

At the moment I'm mainly using this to make my fairly crap holiday snaps more visually appealing :-).  Here, for example, is a friend's recent birthday party.  But I also suspect that there'll be interesting work-related applications of this tool.  Here is a video summary of the virtual RICH 2007 conference that we hosted on Eduserv Island earlier today.  Here is a video trailer for UKOLN's forthcoming Exploiting the potential of blogs and social networks workshop (though you'll have to excuse the slightly poor quality images). And finally, the short video below uses some of the photos that I took at our OpenID event (with apologies to the afternoon speakers who didn't fit within the 30 seconds):

You get the general idea...

OpenID - every student should have one

Our OpenID event took place in London last Thursday and various materials from it are now available (photos, slides and blog entries).  (Note that at the time of writing Gavin Bell's presentation is missing in action - we are working on making it available as soon as possible).

I found the day very interesting and worthwhile, if a little stressful because of having to act as chair (not my favourite activity if I'm honest).  The presentations were all very good and we had a lot of interesting discussion, both in-between talks and in the panel session at the end.

I started with some scene setting.  My original plan was to do this from the somewhat theoretical perspective of how learning and research are changing in academic institutions.  However, in preparing my talk, I realised that most of the pertinent issues would be covered by later speakers.  Instead, I chose to get personal, describing the ways in which I see Web 2.0 changing the way my family live their lives.  The point was to show that the management of our online identities is increasingly a user-centric and lifelong activity - it doesn't start and stop at the system-induced transition points of our lives (going to school - leaving school, going to uni - leaving uni, getting a job - leaving a job, etc.).  In consequence, there is a danger of us offering a poor fit to our users' requirements if the approaches to identity management that we adopt are too rooted within particular sectors or phases of sectors.

David Recordon (Six Apart) (pictured) gave a very nice overview of where OpenID is now and where it is going in the future and I recommend looking through his slides for anyone not there on the day.  I'm hugely grateful to David for stopping off en route between Berlin and California to give this presentation.  But I should also apologise - we were hoping to record audio for the whole event and turn the slides into slidecasts.  Unfortunately, a combination of trying to both chair the day and look after the recorder meant that I screwed up rather badly :-(

David's presentation covered so much ground that it is hard to summarise here.  However, it was interesting that he noted that OpenID, as a technology, is still too visible in the user's experience of the Web.  He predicted that, over the next year or so, we will begin to see tools such as Web browsers getting much better at hiding away some of the complexity of OpenID transactions.

It seems to me that this is a good example of why the education community stands to gain by going with more mainstream approaches such as OpenID.  Mainstream technologies get embedded into the fabric of the tools we all use - community-specific technologies do not, or if they do it takes much longer.

Gavin Bell (Nature Publishing Group) talked about the way that academic research and scholarly communication are changing, the impact that the Web in general and social tools in particular are having on that, and what role OpenID might play in that space.  He noted that however good the current approaches to identity and access management in the education space are, they haven't enabled personal interactions with Web services on the wider Internet beyond education.  Once again, the point is that we have tended to do things differently in academia and we need to go more mainstream.

He put forward some interesting analogies during his talk.  Firstly that OpenID is a bit like the Oyster Cards in use on the London Underground in the sense that it provides an additional way of getting into the system, layered over the existing facilities, but without compromising them in any sense, or requiring them to be updated or removed.  Secondly that there are similarities between the players in the OpenID business and the players in the fish & chip shop business - there are providers, there are consumers, and the providers are almost always also consumers! :-)

He went on to suggest that every student should have an OpenID, possibly thru some sort of central, trusted OpenID provider within (or closely aligned with) academia.  Various members of the audience questioned the approach, arguing that any centralisation of services brings with it the danger of a single point of failure and that institutions are well placed to administer OpenIDs themselves.  This is a discussion still to be had (it surfaced several times during the day) but I don't think anyone disagreed with the central proposition, that some kind of trusted provision of OpenIDs to all members of the education sector seems like an obvious next step.

He suggested that OpenID, as an identifier, might provide an important and necessary addition to the scholarly communication infrastructure - in the form of identifiers for people.  That said, he also noted the significant security and privacy issues brought by the use of a single identifier (or small number of identifiers), potentially allowing information that is currently held separately to be pulled together and aggregated into a single body of knowledge about the individual.

Gavin was followed by Nicole Harris (JISC), who talked about the wider issues around Identity 2.0 and what JISC is doing.  Nicole noted the significant level of interest within the community around OpenID at the moment (part of the reason why we set up the meeting of course) leading to JISC initiating various activities in this area.  She also noted the fit between Shibboleth and OpenID, describing user-centric identity as a natural progression from what we have now.  I very much agree with this.

However, she also noted the difference between managing identity as a way of "controlling access to protected resources" and managing identity as a way of sharing information "about me".  She questioned whether individuals can be trusted to manage their online identities, noting in particular the cavalier approach we have to things like installing new Facebook applications and accepting out of date certificates in our Web browsing sessions.  She suggested that institutions need to review their role as identity and service providers, in particular suggesting that the role of the institution is different when it has funded access to particular resources than when it hasn't.  Like Gavin, Nicole also noted the potential value that OpenID brings to initiatives working towards people identifiers, such as the JISC-funded Names project.

Interestingly, she said that we will be seeing institutions signing up for the UK Access Management Federation using OpenID rather than Shibboleth in the relatively near future.  I must admit that I hadn't realised that was on the cards yet.

Nicole ended with a quick summary of what JISC is doing in this area, including various reviews and studies, but noting in particular that the infrastructural nature of 'identity' means that it will potentially touch on many of JISC's current programmes.

After lunch we had presentations from Sean Mehan (UHI) and Scott Wilson (Institute for Educational Cybernetics at the University of Bolton and JISC CETIS). These presentations provided two institutionally-oriented perspectives on OpenID, Sean speaking primarily from the point of view of computing services and Scott giving us a rather more academic perspective on the role of OpenID in elearning.

Sean reminded us that the ICT expectations of staff and students entering higher education are changing rapidly - they want to make use of externally provided services and there is little that institutions can do to stop them.  OpenID can be seen as an access point into these services, allowing externally-held content to be re-integrated into institutional service provision.  This requires a significant change of mindset for institutions, not least because it will feel like they are giving up a lot of control.  Such integration might include pulling external content back into the institution for the purpose of preservation, audit trails and assessment.

There are risks in this approach of course but Sean argued that there are always risks and that the use of OpenID does not make things significantly worse.  He suggested that institutions should become OpenID providers for their members (or delegate that responsibility to someone else on their behalf).  Furthermore he argued that for trust and operational reasons, institutions are unlikely to allow their members to use OpenIDs from non-institutional providers.  Allowing people to use other OpenIDs will "be a step too far" for most institutions.

Scott gave us a systems perspective on education, arguing that one of the fundamental tensions within institutions is coping with the fact there are relatively large numbers of students and relatively small numbers of teaching staff.  This tension is resolved in two ways - firstly, through resource bargaining, either within the institution or between the institution and its funders (leading to adaptations of the system in one way or another) and secondly, through the development of informal student peer-support mechanisms (leading to less demand on the formal parts of the system).

Scott argued that only 40% of learning happens within the formal systems of education - the rest happening through informal social networks.  However, institutional approaches to learning have historically developed in an environment where the institution owned all the technology and the student very little.  Clearly, this situation no longer applies. If 60% of a student's learning happens outside of the formal systems of the institution using technology that isn't owned or controlled by the institution then we need new approaches.  Institutions need to recognise that they can only be viable if they give up trying to manage everything. 

How does identity management relate to this?  Well, institutionally or nationally provided identifiers fit well with resource bargaining, controlling entitlements, accreditation and the other aspects of formal systems within the institution.  On the other hand, OpenID (or other user-centric approaches to identity management) fits well with the informal parts of the system.  More importantly, OpenID offers a useful axis through which the formal and informal parts of the system can be coordinated.

Echoing Sean's suggestion that institutions should become OpenID providers for their members, Scott suggested that another reason why institutions will be reluctant to give up their role as identity providers is because their own organisational identity is to a large extent built on the collective identities of their members.

Scott closed by summing up what he liked about OpenID - the fact that it is not an authentication system (echoing a comment by one of the earlier speakers that it is "just a pipe"), that it doesn't "verify identity" or "identify the user" (OpenID is just a useful "proxy for the user"), that it doesn't assume policy alignment or trust, and that it is not a provisioning system.  All it does is to provide "a means for asserting a relationship between an agent (not necessarily a person) and a URL, how cool is that?".

Scott's presentation was a good place to end the talks.  It was followed by a panel session in which all the panelists (the speakers were joined by David Orrell of Eduserv for the panel) gave their views on where OpenID is likely to go over the next couple of years.  There was some interesting discussion.

Overall, the day was a good one.  I'm not sure we met all our objectives and there were certainly unresolved issues that need further discussion - around trust for example.  It was also noted that OpenID remains largely in the realm of the techies at the moment.  It needs to move beyond that, to become usable and understandable by ordinary people.  I would also have liked us to unpick further the issues around OpenID as a "pipe" vs OpenID as an "identifier".  That said, one message came through loud and clear - that institutions should begin thinking about offering all their members an institutional OpenID.

November 16, 2007

Use of open content licences by cultural heritage organisations - report now available

The study that Jordan Hatcher has been working on for us is now available.  The report looks at the current usage of Creative Commons and other open content licences by cultural heritage organisations in the UK.

Note that this report, and the survey on which it is based, only reflects those individuals that participated (107 respondents in all), and does not purport to represent the entire sector.  That said, it mildly surprises me that about half of those completing the survey hadn't heard of Creative Commons or Creative Archive licences.  It also struck me as interesting to note that only about half the respondents have "an in-house legal department or designated person that deals with copyright issues" and that a similar proportion do not have "a copyright policy publicly stated on its website".

I've argued before that it is too hard to re-use cultural heritage content in the UK for anything other than personal educational use (particularly in comparison with the US).  Moving towards making copyright and licensing terms explicit would be a big step in the right direction.

November 15, 2007

UniProt, URNs, PURLs

Stu Weibel - yes, that Stu Weibel, the notorious Facebook transgressor ;-) - made a post yesterday in which he responds to a comment questioning OCLC's motivation in providing the PURL service. What caught my attention, however, was Stu's mention of the fact that:

Evidence of success of the strategy may be found in the adoption of PURLs for the identification of some billion URI-based assertions about proteins in the UniProt database, an international database of proteins intended to support open research in the biological sciences. In the latest release of UniProt (11.3), all URIs of the form:

urn:lsid:uniprot.org:{db}:{id}

have been replaced with URLs of the form:

http://purl.uniprot.org/{db}/{id}

Some "live" examples:

http://purl.uniprot.org/uniprot/P12345

http://purl.uniprot.org/taxonomy/9606

http://purl.uniprot.org/pdb/1BRC
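
The mapping itself is mechanical.  Here's a minimal Python sketch of it, assuming (as the quoted pattern suggests) that all the URNs in question take the urn:lsid:uniprot.org:{db}:{id} form:

```python
import re

def lsid_urn_to_purl(urn):
    """Rewrite a uniprot.org LSID URN as the equivalent PURL,
    following the pattern quoted above."""
    m = re.fullmatch(r"urn:lsid:uniprot\.org:([^:]+):(.+)", urn)
    if not m:
        raise ValueError(f"not a uniprot.org LSID: {urn}")
    db, ident = m.groups()
    return f"http://purl.uniprot.org/{db}/{ident}"

print(lsid_urn_to_purl("urn:lsid:uniprot.org:uniprot:P12345"))
# -> http://purl.uniprot.org/uniprot/P12345
```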

I gather that this change by UniProt was announced some time ago, so it isn't really news, but it does look to me like a very nice example of an adoption of the approaches advocated in the draft W3C Technical Architecture Group finding URNs, Namespaces and Registries, which

addresses the questions "When should URNs or URIs with novel URI schemes be used to name information resources for the Web?" and "Should registries be provided for such identifiers?". The answers given are "Rarely if ever" and "Probably not". Common arguments in favor of such novel naming schemas are examined, and their properties compared with those of the existing http: URI scheme.

Three case studies are then presented, illustrating how the http: URI scheme can be used to achieve many of the stated requirements for new URI schemes.

Or as Andy paraphrased it a few months ago (actually, a year and a bit now - crikey, time flies): "New URI schemes: just say no" ;-)

November 14, 2007

Exploiting The Potential Of Blogs and Social Networks

Brian Kelly at UKOLN is running a full-day workshop in Birmingham on the 26th November entitled, Exploiting The Potential Of Blogs and Social Networks.  In a moment of madness a while ago I offered to help stream the presentations from the workshop onto the Web so that they could be seen and/or heard live by people who are not able to attend for one reason or another.  I have no idea why I did this - I have no experience of doing this kind of thing and my last attempt at recording the audio of an event (at our recent OpenID meeting) failed miserably.

Oh well...  I think I can cope with the pressure!

The plan (and this is my big excuse when/if it all goes horribly wrong) is to demonstrate the possibilities for video-streaming live meetings using cheap or free equipment and services.  Video will be captured using an ordinary Web-cam or a fairly basic digital camera (I haven't decided which yet).  Audio will be captured using a low-end podcasting kit.  The resulting stream will be fed to Veodia, where it will be streamed onto the Web and into the Virtual Congress Centre in Second Life.  Second Life delegates will be able to chat to each other in-world and to other virtual delegates and those people using the wireless network in the venue via Twitter or IRC (again, I haven't decided which yet).

Sounds complex?  Probably.  Do-able?  I think/hope so.  It'll be interesting to see how things work out.

If you want to attend as a virtual delegate there is no registration as such, but it would help me think about numbers in Second Life and elsewhere if you could let me know by email (andy.powell@eduserv.org.uk - how 20th century!) or in-world IM (to Art Fossett) if you are interested in attending.

Repositories as Web sites (again)

There's a recurring (though somewhat occasional) theme on this blog about the need for us to reconceptualise the software components we currently refer to as repositories as being Web content management systems or just Web sites.  This is more than a change of label - it changes the way we think about them and the kinds of questions we ask about their deployment.

By way of example I note that there is currently a thread of discussion on one of the UK repository discussion lists stemming from the question "Do you know if it is possible to store metadata about an article, and hide that from public access, so that members of the public won't see it?".  Not that there's anything fundamentally wrong with the question you understand, but the way it is phrased and the tone of the ensuing discussion leave me somewhat cold these days.

In terms of Web content management I suspect the same kind of question would be along the lines of "How do I configure my system to expose different levels of information on the article Web page, depending on how public it is?".  This is a subtle difference in phraseology but the consequences in terms of how we approach the problem space are significant.  The focus would shift away from metadata (and, by implication, the OAI-PMH) towards Web site design, usability and information architecture.  And, in more general terms, this change of emphasis would re-focus repository discussions on things like accessibility, cool URIs, REST, Google sitemaps, search engine optimisation, microformats, tagging, RSS and/or Atom feeds, etc., etc.

Exactly the same topics the rest of the world talks about when they are dealing with making information available on the Web!

Now, you may argue that the way repositories work is totally wrapped up in the metadata they contain and what use is made of it - and I would completely agree with you.  That is true of any Web-based system, more or less.  But one doesn't think first and foremost about metadata when one is dealing with, say, Flickr.  One is much more interested in the functionality of the Flickr site and the way that images can be integrated with other Web services.  Yet in every respect Flickr is a repository service that manages my content and exposes it on the Web.  Yes, it is a service that is fundamentally based on metadata but the primary focus of our attention around it lies elsewhere.

Interestingly, I don't think I have ever seen a repository-oriented discussion about search engine optimisation - yet, surely, that is how most repository-held content is discovered?
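
By way of a concrete starting point for that kind of discussion, here's a minimal sketch of generating a Google-style sitemap for repository item pages - one of the simplest things a repository can do for search engine discovery.  The repository URLs are hypothetical.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def repository_sitemap(record_urls):
    """Build a minimal sitemap.xml listing repository item pages,
    ready to be advertised to search engines."""
    urlset = Element("urlset",
                     xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for u in record_urls:
        url = SubElement(urlset, "url")
        SubElement(url, "loc").text = u
    return tostring(urlset, encoding="unicode")

print(repository_sitemap([
    "http://repository.example.ac.uk/items/1234",
    "http://repository.example.ac.uk/items/1235",
]))
```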

JISC CETIS gets a new look Web site

JISC CETIS (I think that's what I'm now supposed to call them - though I have to say that I much prefer plain ol' CETIS) have announced a new look Web site:

The new site http://jisc.cetis.ac.uk/ gives us the flexibility  to select and publish news and features for a variety of audiences through the front page and Domain pages. This  is enabled by a system of rss aggregation accompanied by an administrative tool, built in-house, which provides lots of editorial controls.

More on the detail and background to this change is available from Sarah Holyfield and Sam Easterby-Smith.

November 13, 2007

Heard the one about the Radio 1 DJ, the comedian and Wikipedia?

In general I try to avoid BBC Radio One but in a houseful of teenagers this is sometimes difficult to achieve and, secretly, I do enjoy some of the stuff.  Yesterday morning on the Chris Moyles Breakfast Show one of the guests was Alan Carr, a UK stand-up comedian.  During the show the discussion turned to Wikipedia and the fact that it was possible to change Alan's Wikipedia entry to say whatever they liked.  Various suggestions were made and members of the studio team began making changes.  This quickly escalated with listeners joining in.  I captured the page (as shown here) very soon afterwards.

It seems to have become fashionable in the mainstream media to knock Wikipedia as an unreliable source of information - for example, I've noticed it being used as the butt of jokes in a couple of recent UK TV programmes.  However, despite the obvious issues, I remain a strong believer in the user-generated approach it adopts.

So... how did Wikipedia stand up to this limited attempt at content disruption?  Pretty well actually.  Within about 10 minutes the page was beginning to get back to normal.  Within the hour, the page had been locked and had reverted to what looked like its steady state.  By today, everything was back to normal.

Now, I'm happy to admit that Alan Carr's Wikipedia entry isn't an information resource that many of us worry about on a day to day basis.  But this scenario serves as a useful example of how the system self-heals quite nicely.  Yes, there are problems - but the benefits far outweigh them, at least IMHO.

Strangling creativity

I've mentioned the TED talks before on this blog and I think it is true to say that all the ones I've watched in the series have been excellent.  The recently announced talk by Lawrence Lessig, How creativity is being strangled by the law, is no exception:

The Net's most adored lawyer brings together John Philip Sousa, celestial copyrights, and the "ASCAP cartel" to build a case for creative freedom. He pins down the key shortcomings of our dusty, pre-digital intellectual property laws, and reveals how bad laws beget bad code. Then, in an homage to cutting-edge artistry, he throws in some of the most hilarious remixes you've ever seen.

This presentation works on a number of levels - it is thought-provoking, inspirational and very funny - and it is delivered in a style that makes it a joy to watch.  Well worth the 30 minutes or so that it will take to view.

Meanwhile, over on the Guardian Unlimited Technology blog, Cory Doctorow pokes fun at the National Portrait Gallery in Warhol is turning in his grave, highlighting the irony of putting on an exhibition of pop art - an art movement that to a large extent celebrated "nicking the work of others, without permission, and transforming it to make statements and evoke emotions never countenanced by the original creators" - in an environment adorned with copyright-induced restrictions.

Does this show - paid for with public money, with some works that are themselves owned by public institutions - seek to inspire us to become 21st century pop artists, armed with cameraphones, websites and mixers, or is it supposed to inform us that our chance has passed and we'd best settle for a life as information serfs who can't even make free use of what our eyes see and our ears hear? 

November 10, 2007

Facebook - if you are going to invade my privacy, please do so quietly

Nicholas Carr on Rough Type makes the point that the problem is not so much that there is a privacy issue with Facebook Beacon, but that Facebook has chosen to make the privacy issue so obvious!  It seems like a slightly odd argument to me but I can understand where he is coming from.  As the recent OCLC report tells us:

Less than a third of the total general public surveyed consider most information searching, browsing or buying activities as extremely or very private.

Perhaps we really don't care, as long as we're not made to care.  Facebook Beacon is no worse, I guess, than the kind of things Google has been doing for a good while now - and most of us still seem happy to use Google umpteen times a day.

Facebook Beacon gives external services a way of writing to your Facebook profile.  On the face of it, your permission is always requested before this happens.  The worry is that even when you say 'no', the data still flows into Facebook; it just isn't displayed for public consumption.

If you are concerned, Nate Weiner on the Idea Shower has some practical suggestions for preventing the 44 Beacon partner sites (many of which are listed by Om Malik on GigaOM) - or any other external service for that matter - from sharing any data with Facebook.  That said, I strongly suspect most of us won't bother! :-(
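For what it's worth, one blunt approach - my guess at the mechanics, not necessarily what Nate suggests - is simply to block the Beacon JavaScript in your browser.  In Adblock Plus that would be a filter something like the following, assuming the Beacon script really does sit under a /beacon/ path on facebook.com:

  facebook.com/beacon/

Blocking at that level should stop partner-site pages loading the script at all - though given the 'data flows even when you say no' worry above, who knows what still leaks by other routes.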

November 09, 2007

OpenID event - brief update

One of us will do a longer and more thoughtful post on Monday but for now I just wanted to note that presentations from our OpenID event yesterday (slides only) are starting to become available on Slideshare tagged as efopenid2007.  Alternatively, you can view them on the event's 'Programme' page.

November 06, 2007

Aggregating your social network presence

The b-side blog provides a useful review of 20 social network aggregators on the grounds that

social networks aren't helping us organize; since all of them require different credentials to log in, they're just adding to the noise.

I have to confess that it's not clear to me, so far, whether the aggregators aren't just adding to the problem - spreading our existing social noise even further afield!  Perhaps I'm doing something wrong?

New Dublin Core in X/HTML spec available for comment

DCMI has announced the availability for public comment of a proposal for a new version of the specification which describes how to encode DC metadata using the <meta> and <link> elements of HTML and XHTML. Comments should be sent to the DC-ARCHITECTURE mailing list.

This document is based explicitly on the DCMI Abstract Model, so it specifies

  • a subset of the constructs and components of the DCAM description model which the syntax supports
  • how each of the supported constructs and components of the DCAM description set is "encoded" using HTML/XHTML elements and attributes

Also, the document is an X/HTML metadata profile document, and it contains a link to an XSLT transform which acts as a "profile transformation" in the sense specified by the W3C GRDDL recommendation, and outputs an RDF/XML representation of the metadata.  (The XSLT is currently work-in-progress and liable to change.)

So a GRDDL-aware processor can extract RDF data from any XHTML document which uses the profile URI http://dublincore.org/documents/2007/11/05/dc-html/.  See, for example, one of the examples in the document:

http://dublincore.org/documents/2007/11/05/dc-html/ex26/

from which the W3C GRDDL service generates

http://www.w3.org/2007/08/grddl/?docAddr=http://dublincore.org/documents/2007/11/05/dc-html/ex26/
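To give a flavour of the encoding, here is a minimal hand-rolled sketch of DC metadata in an XHTML document head.  It is based on the familiar DC-in-HTML conventions rather than copied from the draft, so check the specification itself for the normative details:

  <html xmlns="http://www.w3.org/1999/xhtml">
    <head profile="http://dublincore.org/documents/2007/11/05/dc-html/">
      <title>An example document</title>
      <!-- declare the vocabularies in use via 'schema' links -->
      <link rel="schema.DC" href="http://purl.org/dc/elements/1.1/" />
      <link rel="schema.DCTERMS" href="http://purl.org/dc/terms/" />
      <!-- literal values are encoded using meta -->
      <meta name="DC.title" content="An example document" />
      <meta name="DC.creator" content="A. N. Other" />
      <meta name="DCTERMS.issued" scheme="DCTERMS.W3CDTF" content="2007-11-05" />
      <!-- values identified by URI are encoded using link -->
      <link rel="DCTERMS.references" href="http://example.org/some/other/document" />
    </head>
    <body>...</body>
  </html>

Point a GRDDL-aware processor (such as the W3C service above) at a document like this and, via the profile transformation, out should come the corresponding RDF/XML.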

TwitterPoster - mapping UK usage of Twitter (almost)

Via a post from Alan Levine I noticed TwitterPoster and in particular TwitterPosterUK.  (No, I wasn't simply looking for myself (*) - how very dare you!)

TwitterPoster "provides a visual representation of the degree of influence of the Twitter users", based simply on the number of followers that a person has (I think).  It also makes quite a nice picture.  The mouse-over text for each Twitter user's image also shows their geographic location (based on the information held in that user's profile on Twitter).  Quickly scanning the poster suggested, superficially, that Brighton seemed to feature quite highly in the UK list.

I wrote a quick Perl script to look at where the most influential UK Twitterers are based - it'd be interesting to plot this info on a Google map I guess - though it's worth noting that the data is quite dirty.  Here's the result (tidied up slightly by hand with a bit of guesswork):

London: 207
Brighton: 40
Cambridge: 23
Birmingham: 17
Bristol: 15
Edinburgh: 12
Glasgow: 12
Oxford: 11
Newcastle: 8
Liverpool: 7
Southampton: 6
Nottingham: 6
Leeds: 6
Hull: 4
Reading: 4
Kingston: 4
Portsmouth: 4
Chester: 4
Milton Keynes: 4
Bath: 3

I stopped at Bath, UK - it being my home town.  Where's Manchester?  It's also interesting to note that @bbcbrasil appears to be more highly followed than any other BBC Twitter channel.
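In case anyone wants to repeat the exercise, the tallying itself is trivial - something along the lines of the sketch below.  (The title-attribute format in the regex is a guess at TwitterPoster's markup rather than a statement of fact, so it will doubtless need adjusting.)

  #!/usr/bin/perl
  # Tally the locations that appear in the mouse-over (title) text
  # of a saved copy of the TwitterPosterUK page, most popular first.
  # NB: the assumed format, title="user (1234 followers) - London",
  # is a guess at the markup - adjust the regex to match reality.
  use strict;
  use warnings;

  my %count;
  while (<>) {
      while (/title="[^"]*-\s*([^"]+)"/g) {
          (my $loc = $1) =~ s/\s+$//;   # trim trailing whitespace
          $count{$loc}++;
      }
  }

  printf "%s: %d\n", $_, $count{$_}
      for sort { $count{$b} <=> $count{$a} } keys %count;

Run it over a saved copy of the page and it prints a list much like the one above.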

(*) I appear mid-table if you are really interested :-)

November 05, 2007

OpenID - questions for the community

Some readers will know that we are running an OpenID event in London later this week, OpenID - online identity for the social network generation of learners and researchers.  We have a great line-up of speakers and I'm really looking forward to it.  The stated aims of the day are to:

  • raise awareness (why is OpenID of interest?);    
  • discuss issues (what are the problems with OpenID?, how can it be implemented?);    
  • help to influence practice and inform policy at both institutional and national levels.

We are ending the day with a panel session and we have built plenty of time into each of the speaking slots to allow for questions and discussion.  Delegates be warned... we want this day to be as interactive as possible!

To that end, we've been thinking about the kinds of questions and issues that we'd like to see discussed during the day.  Here's our list, grouped into 4 broad areas:

Understanding the landscape

  • How does OpenID complement our existing 'identity technology' approaches?  Does OpenID meet any requirements that are unmet by other technologies – are there gaps in our current service provision that OpenID might help fill?
  • Do we have specific educational community requirements or challenges (from the perspective of both service providers and users) that are different to other users/uses of OpenID?
  • What are the strengths/weaknesses of in-house vs. outsourced service provision in this area?

Ways forward

  • What should funding bodies (JISC, Eduserv, etc.) be doing in this space?
  • How should institutions and service providers respond to the growing interest in OpenID?
  • What more do institutions and service providers need to know before deciding whether to make use of OpenID technology?
  • Is there a need for 'comparative' guidance around alternative 'identity technologies'?  If so, how should that guidance be developed and maintained?

Influencing standards

  • Does the educational community need to directly influence the development of OpenID specifications in the future and, if so, how?
  • Does the UK education community need a more formal way of interacting with the OpenID Foundation?
  • Do we need a 'UK Identity Focus' (similar to UK Web Focus)?

Sharing good practice

  • Are there recommendations that we can make now for institutions considering deploying OpenID for their users?
  • Are there recommendations that we can make now for service providers considering deploying OpenID for their services?  (For a feel of what is involved on the relying-party side, see the sketch below.)
  • How should such recommendations be developed and maintained in the future?

Note that these questions are not intended to restrict discussion - just give some helpful suggestions for the kinds of areas we might touch on during the day.
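On that last point, and to show how low the technical barrier to entry can be for a relying party, here is a rough sketch of the first leg of an OpenID login using Brad Fitzpatrick's Net::OpenID::Consumer module from CPAN.  The parameter details are from memory rather than the documentation, and the example.ac.uk URLs are placeholders, so treat the whole thing as indicative:

  # Sketch of an OpenID relying party: take the user's OpenID URL and
  # redirect them to their identity provider to authenticate.
  use CGI;
  use LWPx::ParanoidAgent;
  use Net::OpenID::Consumer;

  my $cgi = CGI->new;
  my $csr = Net::OpenID::Consumer->new(
      ua              => LWPx::ParanoidAgent->new,  # guards against hostile redirects
      args            => $cgi,
      consumer_secret => 'a-long-random-secret',
      required_root   => 'http://www.example.ac.uk/',
  );

  # The user has typed their OpenID URL into a login form...
  my $claimed = $csr->claimed_identity($cgi->param('openid_url'))
      or die $csr->err;

  # ...so send them off to their identity provider to log in.
  my $check_url = $claimed->check_url(
      return_to  => 'http://www.example.ac.uk/openid-return',
      trust_root => 'http://www.example.ac.uk/',
  );
  print $cgi->redirect($check_url);

The second leg - verifying the response when the user arrives back at the return_to URL - is not much longer.  The point is simply that none of this requires institution-scale infrastructure.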

Whether you are attending or not, I'd be interested in your thoughts on what the day should produce in terms of outcomes.

Sharing, Privacy and Trust in Our Networked World

An advance copy of OCLC's latest membership report, Sharing, Privacy and Trust in Our Networked World, dropped heavily on my desk a while back (in recognition of the fact that I was one of the people interviewed during its preparation).  I'm not 100% sure it was technically an advance copy by the time it had travelled across the Atlantic but anyway...

This OCLC membership report explores this web of social participation and cooperation on the Internet and how it may impact the library’s role, including:

  • The use of social networking, social media, commercial and library services on the Web
  • How and what users and librarians share on the Web and their attitudes toward related privacy issues
  • Opinions on privacy online
  • Libraries’ current and future roles in social networking.

As with previous OCLC membership reports, there is a lot of very useful material here, not just the results of surveying over 6000 people, but the contextual information and opinion that goes with it.  It's a weighty document and will take a while to assimilate.  Note that the Web version comes in separate sections (PDF files) which may help if you want to read it on a plane or you suffer from a bad back! :-)

November 02, 2007

OpenSocial - initial thoughts

Mike Nolan over on the Edge Hill Web Services blog is one of many who have written recently about the OpenSocial API.  See Marc Andreessen's Open Social: a new universe of social applications all over the web for a good overview.

There is no doubt that this is a really interesting development and one that those of us who are interested in developing, using or deploying Facebook-style social applications will need to take note of over the coming months.

Several thoughts occur to me at this stage - note that by 'at this stage' I mean 'before I've actually looked at the spec in any detail'! :-)

Firstly, I'm going to ignore the issue of what impact this spec has directly on Facebook - Facebook can look after itself, and its users (including yours truly) will take their own decisions about whether to stay with it or go elsewhere.  However, as a very small-scale application developer within Facebook (Second Friends if you are interested) I would like to note one thing... a Facebook application is a Facebook application because FBML, the markup language used to create applications, enforces a look and feel and a set of application conventions that encourage a reasonably consistent approach to usability across the site as a whole.  My suspicion is that this consistency of approach, albeit fairly superficial, is one of the things that makes Facebook compelling to the end-user.  It will be interesting to see whether OpenSocial's use of JavaScript and HTML offers a less rigid (i.e. more flexible) framework and, if so, whether that - combined with the demand for applications that work inside multiple frameworks - ultimately leads to a less homogeneous overall experience for the end-user.

Secondly, and with more of a focus on what has been happening within the education community over the last decade, I'm interested in what impact the availability of OpenSocial will have on the kinds of containers (to use Marc's terminology) we are more used to dealing with inside institutions - i.e. those offered by VLEs such as Blackboard and Moodle and portals such as uPortal (does anyone still do uPortal BTW!?).  Now, I'm sure someone is going to tell me that those are different kinds of containers!  Maybe so.  But it seems to me that there is potential for some useful convergence here.

Farewell to Rachel

John Kirriemuir recently blogged an old photo of UKOLN staff circa 1996, Wear your tie neatly if you want to get on..., reminiscing about the good old days and noting what has happened to some of those pictured.  In particular John commented that Wednesday was Rachel Heery's last day at UKOLN.  This is very sad in many ways.  Rachel is leaving UKOLN for all the wrong reasons - ill-health taking its toll on one of the nicest people I know.  I, for one, already miss the contribution that she is able to make to the digital library space.

Rachel was on my interview panel, such as it was - more like an informal chat with Rachel and Lorcan actually - when I first joined UKOLN.  It was also Rachel who gave me away when I left, something I very much appreciated.  1996 seems like a long time ago now but since that time Rachel and I have worked on more projects and activities than I care to remember.  Whilst we often disagreed on technical and other matters - UKOLN staff will testify to that - our arguments were what drove our thinking forward.  To be completely honest, it was a big part of what made the job at UKOLN fun and I don't think we have ever actually fallen out over anything in any kind of serious way!

Rachel, good luck in the future.  Have a great retirement, look after yourself and, if you have the time and inclination, don't feel shy about joining our digital library conversations from afar.  You'll be more than welcome.

The Second Life of UK academics

John Kirriemuir has an article in the current Ariadne based on the series of snapshot studies that we are currently funding him to undertake for us - looking at how Second Life is being used in UK education and, hopefully, what impact that usage is having.
