April 08, 2011

Scholarly communication, open access and disruption

I attended part of UKSG earlier this week, listening to three great presentations in the New approaches to research session on Monday afternoon (by Philip Bourne, Cameron Neylon and Bill Russell) and presenting first thing Tuesday morning in the Rethinking 'content' session.

(A problem with my hearing meant that I was very deaf for most of the time, making conversation in the noisy environment rather tiring, so I decided to leave the conference early Tuesday afternoon. Unfortunately, that meant that I didn't get much of an opportunity to network with people. If I missed you, sorry. Looking at the Twitter stream, it also meant that I missed what appear to have been some great presentations on the final day. Shame.)

Anyway, for what it's worth, my slides are below. I was speaking on the theme of 'open, social and linked', something that I've done before, so for regular readers of this blog there probably won't be too much in the way of news.

With respect to the discussion of 'social' and its impact on scholarly communication, there is room for some confusion because 'social' is often taken to mean, "how does one use social media like Facebook, Twitter, etc. to support scholarly communication?". Whilst I accept that as a perfectly sensible question, it isn't quite what I meant in this talk. What I meant was that we need to better understand the drivers for social activity around research and research artefacts, which probably needs breaking down into the various activities that make up the scholarly research workflow/cycle, so that we can build tools that properly support that social activity. That is something that I don't think we have yet got right, particularly in our provision of repositories. Indeed, as I argued in the talk, our institutional repository architecture is more or less in complete opposition to the social drivers at play in the research space. Anyway... you've heard all this from me before.

Cameron Neylon's talk was probably the best of the ones that I saw and I hope my talk picked up on some of the themes that he was developing. I'm not sure if Cameron's UKSG slides are available yet but there's a very similar set, The gatekeeper is dead, long live the gatekeeper, presented at the STM Innovation Seminar last December. Despite the number of slides, these are very quick to read through, and very understandable, even in the absence of any audio. On that basis, I won't recap them here. Slides 112 onwards give a nice summary: "we are the gatekeepers... enable, don't block... build platforms, not destinations... sell services, not content... don't think about filtering or control... enable discovery". These are strong messages for both the publishing community and libraries. All in all, his points about 'discovery deficit' rather than 'filter failure' felt very compelling to me.

On the final day there were talks about open access and changing subscription models, particularly from 'reader pays' to 'author pays', based partly on the recently released study commissioned by the Research Information Network (RIN), JISC, Research Libraries UK (RLUK), the Publishing Research Consortium (PRC) and the Wellcome Trust, Heading for the open road: costs and benefits of transitions in scholarly communications. We know that the web is disruptive to both publishers and libraries but it seemed to me (from afar) that the discussions at UKSG missed the fact that the web is potentially also disruptive to the process of scholarly communication itself. If all we do is talk about shifting the payment models within the confines of current peer-review process we are missing a trick (at least potentially).

What strikes me as odd, thinking back to that original hand-drawn diagram of the web done by Tim Berners-Lee, is that, while the web has disrupted almost every aspect of our lives to some extent, it has done relatively little to disrupt scholarly communication except in an 'at the margins' kind of way. Why is that the case? My contention is that there is such significant academic inertia to overcome, coupled with a relatively small and closed 'market', that the momentum of change hasn't yet grown sufficiently - but it will. The web was invented as a scholarly device, yet it has, in many ways, resulted in less transformation there than in most other fields. Strange?

Addendum: slides for Philip Bourne's talk are now available on Slideshare.

December 10, 2010

A standards-based, open and privacy-aware social Web

One of the things we did with our last tranche of Eduserv Foundation project funding (a couple of years ago now) was to fund Harry Halpin of Edinburgh University to work on what became the W3C Social Web Incubator Group. The result of that group's work has recently been published, A Standards-based, Open and Privacy-aware Social Web:

The Social Web is a set of relationships that link together people over the Web. The Web is a universal and open space of information where every item of interest can be identified with a URI. While the best known current social networking sites on the Web limit themselves to relationships between people with accounts on a single site, the Social Web should extend across the entire Web. Just as people can call each other no matter which telephone provider they belong to, just as email allows people to send messages to each other irrespective of their e-mail provider, and just as the Web allows links to any website, so the Social Web should allow people to create networks of relationships across the entire Web, while giving people the ability to control their own privacy and data. The standards that enable this should be open and royalty-free. We present a framework for understanding the Social Web and the relevant standards (from both within and outside the W3C) in this report, and conclude by proposing a strategy for making the Social Web a "first-class citizen" of the Web.

This is a great piece of work, not just in terms of the final document but also in the building of a community around it. Edited by Harry Halpin and Mischa Tuffield (Garlik), the document itself covers a broad sweep of social Web activity and standards, including areas such as identity, profiles, social media, privacy and activity (outlining scenarios, issues and standards related to each) and also addressing accessibility and business considerations before making a series of recommendations for further work that needs to be undertaken.

Well worth reading. I'm proud to say that Eduserv funding helped bring it to fruition.

November 26, 2010

Tools for sharing - Posterous

There is latent value to others in what we are reading. I say latent because, often, knowledge about what we are reading is either not shared at all or is shared in ways that don't necessarily have much obvious impact. Value also comes at different levels. In some cases, reading something will result in a blog post in response. In others, an "I am reading X" tweet suffices. Indeed, some people seem to make almost exclusive use of Twitter for this purpose - and it's arguably quite effective. And then there's the middle ground of stuff where you want to make a comment on what you are reading but you don't have the time or inclination to write a blog post and the 140 character limit of Twitter is too limiting to get your point across.

With that middle ground in mind, I've been playing with Posterous, channelled through my personal hosting at aggregate.andypowe11.net. Nothing unusual in that I know... but it's taking me a while to figure out where the correct balance between Twitter, Posterous and this blog lies. Ditto the balance between personal and corporate. Oh, and then there's also Del.icio.us to think about just to keep things interesting!

My plan, such as it is, is to use Posterous as a place to lodge things that will eventually become full-blown blog posts. Hence the name - Aggregate is what you need to make eFoundations... get it! To date, that hasn't happened - the act of writing a one line comment for Posterous has been sufficient to get the thing out of my system.

We'll see... it may come to nothing.

October 13, 2010

What current trends tell us about the future of federated access management in education

As mentioned previously, I spoke at the FAM10 conference in Cardiff last week, standing in for another speaker who couldn't make it and using material crowdsourced from my previous post, Key trends in education - a crowdsource request, to inform some of what I was talking about. The slides and video from my talk follow:

As it turns out, describing the key trends is much easier than thinking about their impact on federated access management - I suppose I should have spotted this in advance - so the tail end of the talk gets rather weak and wishy-washy. And you may disagree with my interpretation of the key trends anyway. But in case it is useful, here's a summary of what I talked about. Thanks to those of you who contributed comments on my previous post.

By way of preface, it seems to me that the core working assumptions of the UK Federation have been with us for a long time - like, at least 10 years or so - essentially going back to the days of the centrally-funded Athens service. Yet over those 10 years the Internet has changed in almost every respect. Ignoring the question of whether those working assumptions still make sense today, I think it certainly makes sense to ask ourselves about what is coming down the line and whether our assumptions are likely to still make sense over the next 5 years or so. Furthermore, I would argue that federated access management as we see it today in education, i.e. as manifested through our use of SAML, shows a rather uncomfortable fit with the wider (social) web that we see growing up around us.

And so... to the trends...

The most obvious trend is the current financial climate, which won't be with us for ever of course, but which is likely to cause various changes while it lasts and where the consequences of those changes, university funding for example, may well be with us much longer than the current crisis. In terms of access management, one impact of the current belt-tightening is that making a proper 'business case' for various kinds of activities, both within institutions and nationally, will likely become much more important. In my talk, I noted that submissions to the UCISA Award for Excellence (which we sponsor) often carry no information about staff costs, despite an explicit request in the instructions to entrants to indicate both costs and benefits. My point is not that institutions are necessarily making the wrong decisions currently but that the basis for those decisions, in terms of cost/benefit analysis, will probably have to become somewhat more rigorous than has been the case to date. Ditto for the provision of national solutions like the UK Federation.

More generally, one might argue that growing financial pressure will encourage HE institutions into behaving more and more like 'enterprises'. My personal view is that this will be pretty strongly resisted, by academics at least, but it may have some impact on how institutions think about themselves.

Secondly, there is the related trend towards outsourcing and shared services, with the outsourcing of email and other apps to Google being the most obvious example. Currently that is happening most commonly with student email but I see no reason why it won't spread to staff email as well in due course. At the point that an institution has outsourced all its email to Google, can one assume that it has also outsourced at least part of its 'identity' infrastructure as well? So, for example, at the moment we typically see SAML call-backs being used to integrate Google mail back into institutional 'identity' and 'access management' systems (you sign into Google using your institutional account) but one could imagine this flipping around such that access to internal systems is controlled via Google - a 'log in with Google' button on the VLE for example. Eric Sachs, of Google, has recently written about OpenID in the Enterprise SaaS market, endorsing this view of Google as an outsourced identity provider.
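To make the 'log in with Google' idea slightly more concrete, here is a minimal sketch of the first step of an OAuth-style authorisation-code flow: the institutional system (the VLE, say) redirects the user to an external identity provider with a handful of query parameters. The endpoint, client ID and other values below are illustrative placeholders rather than real Google values; an actual deployment would use the provider's registered credentials and documented endpoints.

```python
from urllib.parse import urlencode

# Hypothetical relying party (e.g. an institutional VLE) handing
# authentication off to an external identity provider. The endpoint
# and client_id below are placeholders, not real provider values.
AUTH_ENDPOINT = "https://accounts.example.com/o/oauth2/auth"

def build_login_url(client_id, redirect_uri, state):
    """Construct the URL behind a 'log in with ...' button.

    The relying party sends the user's browser here; the identity
    provider authenticates the user and redirects back to
    redirect_uri with a code that can be exchanged for an identity
    assertion.
    """
    params = {
        "response_type": "code",   # authorisation-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid email",   # ask only for identity, not data
        "state": state,            # anti-CSRF token, echoed back
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

url = build_login_url("vle-demo", "https://vle.example.ac.uk/cb", "xyz123")
```

The point of the sketch is how little the relying party needs to know: once the identity provider is external, the institution's own directory is no longer in the loop for that login.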

Thirdly, there is the whole issue of student expectations. I didn't want to talk to this in detail but it seems obvious that an increasingly 'open' mashed and mashable experience is now the norm for all of us - and that will apply as much to the educational content we use and make available as it does to everything else. Further, the mashable experience is at least as much about being able to carry our identities relatively seamlessly across services as it is about the content. Again, it seems unclear to me that SAML fits well into this kind of world.

There are two other areas where our expectations and reality show something of a mis-match. Firstly, our tightly controlled, somewhat rigid approach to access management and security is at odds with the rather fuzzy (or at least fuzzily interpreted) licences negotiated by Eduserv and JISC Collections for the external content to which we have access. And secondly, our over-arching sense of the need for user privacy (the need to prevent publishers from cross-referencing accesses to different resources by the same user, for example) is holding back the development of personalised services and runs somewhat counter to the kinds of things we see happening in mainstream services.

Fourthly, there's the whole growth of mobile - the use of smart-phones, mobile handsets, iPhones, iPads and the rest of it - and the extent to which our access management infrastructure works (or not) in that kind of 'app'-based environment.

Then there is the 'open' agenda, which has various aspects to it - open source, open access, open science, and open educational resources. It seems to me that the open access movement cuts right to the heart of the primary use-case for federated access management, i.e. controlling access to published scholarly literature. But, less directly, the open science movement, in part, pushes researchers towards the use of more open 'social' web services for their scholarly communication where SAML is not typically the primary mechanism used to control access.

Similarly, the emerging personal learning environment (PLE) meme (a favourite of educational conferences currently), where lecturers and students work around their institutional VLE by choosing to use a mix of external social web services (Flickr, Blogger, Twitter, etc.), again encourages the use of external services that are not impacted by our choices around the identity and access management infrastructure and over which we have little or no control. I was somewhat sceptical about the reality of the PLE idea until recently. My son started at the City of Bath College - his letter of introduction suggested that he create a Google Docs account so that he could do his work there and submit it via email or Facebook. I doubt this is college policy but it was a genuine example of the PLE in practice, so perhaps my scepticism is misplaced.

We also have the changing nature of the relationship between students and institutions - an increasingly mobile and transitory student body, growing disaggregation between the delivery of learning and accreditation, a push towards overseas students (largely for financial reasons), and increasing collaboration between institutions (both for teaching and research) - all of which have an impact on how students see their relationship with the institution (or institutions) with whom they have to deal. Will the notion of a mandated 3 or 4 year institutional email account still make sense for all (or even most) students in 5 or 10 years time?

In a similar way, there's the changing customer base for publishers of academic content to deal with. At the Eduserv Symposium last year, for example, David Smith of CABI described how they now find that having exposed much of their content for discovery via Google they have to deal with accesses from individuals who are not affiliated with any institution but who are willing to pay for access to specific papers. Their access management infrastructure has to cope with a growing range of access methods that sit outside the 'educational' space. What impact does this have on their incentives for conforming to education-only norms?

And finally there's the issue of usability, and particularly the 'where are you from' discovery problem. Our traditional approach to this kind of problem is to build a portal and try and control how the user gets to stuff, such that we can generate 'special' URLs that get them to their chosen content in such a way that they can be directed back to us seamlessly in order to login. I hate portals, at least insofar as they have become an architectural solution, so the less said the better. As I said in my talk, WAYFless URLs are an abomination in architectural terms, saved only by the fact that they work currently. In my presentation I played up the alternative usability work that the Kantara ULX group have been doing in this area, which it seems to me is significantly better than what has gone before. But I learned at the conference that Shibboleth and the UK WAYF service have both also been doing work in this area - so that is good. My worry though is that this will remain an unsolvable problem, given the architecture we are presented with. (I hope I'm wrong but that is my worry). As a counterpoint, in the more... err... mainstream world we are seeing a move towards what I call the 'First Bus' solution (on the basis that in many UK cities you only see buses run by the First Group (despite the fact that bus companies are supposed to operate in a free market)) where you only see buttons to log in using Google, Facebook and one or two others.

I'm not suggesting that this is the right solution - just noting that it is one strategy for dealing with an otherwise difficult usability problem.
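For what it's worth, the WAYFless URL trick mentioned above can be sketched in a few lines: the link into the service provider carries the identifier of the user's home identity provider, so the 'where are you from' discovery step is skipped entirely. The parameter names here (entityID, target) follow a common Shibboleth SP convention, but all the hostnames are illustrative.

```python
from urllib.parse import urlencode

def wayfless_url(sp_login_endpoint, idp_entity_id, target_resource):
    """Build a WAYFless URL: a link into a service provider's login
    handler that pre-selects the user's home identity provider, so
    the user never sees a 'where are you from' discovery page."""
    params = {
        "entityID": idp_entity_id,     # the user's home IdP
        "target": target_resource,     # where to land after login
    }
    return sp_login_endpoint + "?" + urlencode(params)

# Hypothetical publisher platform and institutional IdP:
url = wayfless_url(
    "https://content.example.com/Shibboleth.sso/Login",
    "https://idp.example.ac.uk/shibboleth",
    "https://content.example.com/journal/article123",
)
```

The architectural fragility is visible even in this sketch: the URL hard-codes both the service provider's login endpoint and the identity provider's entityID, so a change at either end silently breaks every such link that has already been distributed.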

Note that we are also seeing some consolidation around technology as well - notably OpenID and OAuth - though often in ways that hides it from public view (e.g. hidden behind a 'login with google' or 'login with facebook' button).

Which essentially brings me to my concluding screen - you know, the one where I talk about all the implications of the trends above - which is where I have less to say than I should! Here's the text more-or-less copy-and-pasted from my final slide:

  • ‘education’ is a relatively small fish in a big pond (and therefore can't expect to drive the agenda)
  • mainstream approaches will win (in the end) - ignoring the difficult question of defining what is mainstream
  • for the Eduserv OpenAthens product, Google is as big a threat as Shibboleth (and the same is true for Shibboleth)
  • the current financial climate will have an effect somewhere
  • HE institutions are probably becoming more enterprise-like, but they are still not wholly like commercial organisations; they tend to occupy an uncomfortable space between the ‘enterprise’ and the ‘social web’, driven by different business needs (cf. the finance system vs. PLEs and open science)
  • the relationships between students (and staff) and institutions are changing

In his opening talk at FAM10 the day before, David Harrison had urged the audience to become leaders in the area of federated access management. In a sense I want the same. But I also want us, as a community, to become followers - to accept that things happen outside our control and to stop fighting against them the whole time.

Unfortunately, that's a harder rallying call to make!

Your comments on any/all of the above are very much welcomed.

September 28, 2010

An App Store for the Government?

I listened in to a G-Cloud web-cast organised by Intellect earlier this month, the primary intention of which was to provide an update on where things have got to. I use the term 'update' loosely because, with the election and change of government and what-not, there doesn't seem to have been a great deal of externally visible progress since the last time I heard someone speak about the G-Cloud. This is not surprising I guess.

The G-Cloud, you may recall, is an initiative of the UK government to build a cloud infrastructure for use across the UK public sector. It has three main strands of activity:

The last of these strikes me as the hardest to get right. As far as I can tell, it's an idea that stems (at least superficially) from the success of the Apple App Store though it's not yet clear whether an approach that works well for low-cost, personal apps running on mobile handsets is also going to work for the kinds of software applications found running across government. My worry is that, because of the difficulty, the ASG will distract from progress on the other two fronts, both of which strike me as very sensible and potentially able to save some of the tax-payer's hard-earned dosh.

App stores (the real ones I mean) work primarily because of their scale (global), the fact that people can use them to showcase their work and/or make money, their use of relatively small micro-payments, and their socialness. I'm not convinced that any of these factors will have a role to play in a government app store so the nature of the beast is quite different. During the Q&A session at the end of the web-cast someone asked if government departments and/or local councils would be able to 'sell' their apps to other departments/councils via the ASG. The answer seemed to be that it was unlikely. If we aren't careful we'll end up with a simple registry of government software applications, possibly augmented by up-front negotiated special deals on pricing or whatever and a nod towards some level of social engagement (rating, for example) but where the incentives for taking part will be non-obvious to the very people we need to take part - those people who procure government software. It's the kind of thing that Becta used to do for the schools sector... oh, wait! :-(

For the ASG to work, we need to identify those factors that might motivate people to use it (other than an outright mandate) - as individuals, as departments and as government as a whole. I think this will be quite a tricky thing to get right. That's not to say that it isn't worth trying - it may well be. But I wonder if it would be better unbundled from the other strands of the G-Cloud concept, which strike me as being quite different.

Addendum: A G-Cloud Overview [PDF, dated August 2010] is available from the G-Digital Programme website:

G-Digital will establish a series of digital services that will cover a wide range of government’s expected digital needs and be available across the public sector. G-Digital will look to take advantage of new and emerging service and commercial models to deliver benefits to government.

July 02, 2010

Now don't tell me I've nothin' to do


Clay Shirky gave a polished performance at the Watershed in Bristol the other night for his talk, Our Cognitive Surplus: Creativity and Generosity in a Connected Age, given as part of the Bristol Festival of Ideas. One would expect nothing less of course.

The basic premise of the talk was that a combination of free time, talent, goodwill (our 'cognitive surplus') and the social Web are now allowing things to happen in ways that were previously not possible. The talk was peppered with anecdotal evidence for the kinds of changes being wrought by new technology and social media, from struggles for women's rights in India through to changes of government policy on the environment (specifically car-sharing) in Canada and, yes, even to our use of Lolcats.

The individual examples were all new to me, though I've seen the general theme being covered several times before, using different examples of much the same thing. For me, there was a certain sense of, "Well, yes... but so what?" - perhaps I missed something? - though, oddly, that didn't detract from a very enjoyable evening.

Listening to the talk though did cause me to question my own use of social networks, something that I actually find quite hard to justify in any rational sense.

Here's an example...

For the last 574 days I have taken a photograph every day and put it on Blipfoto.com along with a few words of text. Blipfoto is a photo-blogging site - a social network, at least at the level of the number of "Wow... nice image" type comments that get exchanged, though it probably comes closer to the Lolcats end of the spectrum than the 'changing the planet' end. I probably spend somewhere between 30 minutes and an hour and a half on each photo - by the time I've taken the photo, edited it, uploaded it, written some text and so on. That probably represents something like 400 hours of my life over the last couple of years. Boggle!

To which one might sensibly ask, "Why?". And I don't think I'd be able to give you a coherent answer to such a question.

It's the closest thing I have to an artistic outlet I guess - which is certainly not a bad thing. My photography is getting better... maybe? There's a slight competitive element to it, both in the sense of forcing oneself to do something every day and in the sense of getting good comments and ratings. And there's the "Woo hoo... this is me... I'm over here" type of thing going on as well I suppose (something that is present in all social networks). But beyond that I'm not sure I can offer any rationalisation that will convince either you or me about why I am doing it. I'm certainly not making the world a better place with my time, whereas I could be. I could use that time to be a governor of a school again. Or use it to edit Wikipedia. Or to spend additional time working on my local school's website. Or to campaign on environmental issues. Or any number of other things. I could even do some private consultancy and make some money!

But I don't do any of those things... instead, I spend my time faffing around with a camera and a website in the vain hope of getting one or two positive comments from people that I've never met and who I will probably never meet.

Or as the Statler Brothers put it:

Countin' flowers on the wall
That don't bother me at all
Playin' solitaire till dawn with a deck of fifty-one
Smokin' cigarettes and watchin' Captain Kangaroo
Now don't tell me I've nothin' to do
[Photo created using Autostitch on an iPhone 3G]

May 20, 2010

Audiences and chairing events in a 'social media' world

This is the first of two blog posts about the recent Eduserv Symposium 2010: The Mobile University, which took place last Thursday at the Royal College of Physicians in London.

My next post will take a look at the content of the day, including my take on what it all meant. For this post I want to think more about mechanics - not of the "did the streaming and wifi work?" kind (actually, we did have some problems with the streaming early on in the day but Switch New Media, our streaming partner, and the venue's networking staff acted swiftly and, by and large, resolved them, for which I am very grateful) but thinking about my role as chair of the event.

Before doing so, let's think a little bit about the nature of conferences, and conference audiences, in the new 'social media' world (I'm using social media here as a shorthand for the use of those technologies that allow people to collaborate online in a real-time, relatively open, and social way with their peers, colleagues and friends - I'm including both the live-streaming of the event and tools like Twitter).

Let's start by partitioning delegates at conferences into three broad groups:

  • Firstly, there is the local physical audience - the people who are in the venue, watching and listening live to all the talks, asking questions, collaring speakers after their talks, and drinking the coffee at the breaks but who are, critically, not taking part in any digital activity during the event. This is what you might call the 'traditional' audience I guess.
  • Secondly, there is the local virtual audience - those people who, like the first group, are physically in the venue but who are also using their mobile devices and social networking services (such as Twitter) to discuss what is going on in the room. This discussion is typically referred to as the 'conference back-channel' though it is worth noting that it might start well before the event ("I'm on the train") and continue well after it ("presentation slides are now available"). In my experience, this group is usually smaller than the first group (often much smaller) and is often mis-understood or unrecognised by the people in the first group. It is perhaps also worth noting that this group tend to create a disproportionately large amount of the wider online buzz around an event.
  • Finally, there is the remote virtual audience - the people watching the live video stream from their office or home and who are typically also an active part of the event's back-channel.

This is not a perfect partitioning of the audience, and the names aren't quite right, but bear with me for a moment...

Increasingly, I think that event organisers need to strive to bring these three groups together, i.e. to maximise the interaction that takes place in the middle of the diagram above. That responsibility can be shared of course. For example, at the symposium this year, my colleague Mike Ellis had primary responsibility for encouraging the two virtual groups to gel effectively. However, I also think that the chair of the event increasingly has to be fully engaged with all three groups in order to properly do his or her job... and that, in my experience at least, is not an easy thing to do well. In short, it's not enough just to 'chair' what is going on in the room.

It is interesting that we use the term 'back-channel' for the virtual groups above (the right-hand side of the diagram), which implies there is also a 'front-channel' (the left-hand side). The labels 'front' and 'back' seem to me to be somewhat pejorative of what I'm labelling 'virtual' and I tend to think that, for all sorts of reasons, we need to get over this. I also think there are some barriers that currently get in the way of maximising the interaction between the three groups and it is perhaps worth outlining these briefly.

For those people physically in the room there are some very practical issues around the growth of 'virtual' activity - ownership of appropriate mobile devices, availability of power outlets (still a regular issue at events), good 3G coverage, and confidence that the wifi will be good enough spring immediately to mind. There are also problems of 'attitude' to the virtual activity. How many events still ask people to turn off their mobile devices at the start of the day? At this year's symposium we offered a quiet area for those delegates who did not want to sit next to someone who was using their laptop and, as reported previously, this was reasonably popular. My suspicion is that those people who don't use mobile devices and social networks at events see them only as a distraction, as being somewhat trivial ("oh, they're just reading email"), or perhaps even as being rude to the speakers on the day. Clearly, these views would not be shared by those people who see great value in a vibrant back-channel. There is a cultural shift going on here... and such shifts take time and happen at different rates across different parts of the population and I think we are still in the relatively early stages of this particular one.

For those people in the back-channel (both local and remote) I think there is generally a good 'coming together' of the two groups and Mike's work on the day helped this to happen at this event. Clearly though, those people who are actually in the room are able to engage directly with the speakers (they can put up their hand or interrupt or whatever) in a way that remote delegates cannot. Remote delegates can usually only engage with speakers via an intermediary. Admittedly, there are some speakers who do appear to be able to stay on Twitter even as they speak but these are still few and far between and so, for the most-part, the lack of direct engagement by remote participants remains. For our symposia, we channel questions from remote delegates through a designated person in the room (Mike Ellis in this case) but for this to work properly the chair has to give that person special attention and I think that, by and large, I failed to do so on the day this time round. Even where such attention is given, it still feels like something of a second-class experience for those delegates that choose to make use of it.

There is also the cognitive barrier of doing two things at once (perhaps it's just me?) - i.e. listening to the speaker and engaging in the back channel. This is partly device dependent I think. I can live-blog an event without difficulty using my laptop - indeed I strongly suspect that doing so actually improves the way I listen to the speaker - but I can't do the same on my iPhone (largely because the soft keyboard is too fiddly for me to use without thinking).

Finally then, there's the intersection between the local physical audience (who are not using the back-channel) and the remote virtual audience (who are). It seems to me that these two groups are least engaged in any real sense. For those people who are remote, there is some sense of shared presence with those in the room by virtue of the shots of the physical audience being shown as part of the live stream. (Incidentally, this is the main reason why I actually quite like having such shots included in the stream, though this is not a view shared by some of my colleagues here, nor by part of the audience.) On the other hand, for those people in the room, it is probably quite hard to remember that there even is a remote audience (let alone the fact that such an audience might actually be bigger than the one in the room - this year, 691 visitors from 7 countries, in 93 cities, in 153 organisations watched the live stream).

The result is something of a disconnect between the two groups.

Interestingly, I think this might currently leave the local virtual group in the role of bridging the two other groups. I don't think this is done in an explicit or intentional way but it is interesting to note it nonetheless. Of course, it is also part of the event organiser's and chair's roles to bring these two groups together in some way.

Thinking back to our 3D virtual world symposium a few years ago, we overcame the 'local audience not being aware of the remote audience' problem to a certain extent by actually showing the virtual audience to the real audience during the day. (As an aside, one of the advantages of hybrid real and virtual world events is the greater sense of presence that is generated for delegates in the virtual world.)

For this year's (non-3D virtual world) symposium, one way of highlighting the remote virtual delegates would have been to show the Twitter stream live during the talks. We took the decision (I think rightly) not to do so because of the distraction this might cause to the in-room audience. We did however try to achieve some of the same effect by displaying the event Twitter stream in the lunch/coffee/tea room. My suspicion is that this didn't work - the single screen which we used was probably too small and people were too busy doing other things to notice it.

So... a couple of recommendations (essentially in the form of notes to self for next year!):

Event chairs should engage as much as possible with all three groups above (preferably actively - i.e. by tweeting or whatever - but at least passively). At my age, this means having a screen in front of me for most of the day, showing me what is happening in the back-channel. This doesn't have to be projected for everyone else but trying to do it on an iPhone screen is too difficult with anything less than 20:20 eyesight!

Event chairs should speak directly to the remote audience as often as possible and should explicitly acknowledge the back-channel in their communication with speakers and audience. Oddly, I felt that I've done this better in previous years than I did this year. I'm not sure why, though the time that I gave myself to introduce the day at the start of this year's event, coupled with the fact that we had some early teething problems with the streaming, meant that I wasn't properly able to introduce the remote audience and back-channel as I would have liked.

To sum up then, a chair's role in this new 'social media' world is to actively engage with the whole audience, not just with those sitting in the room in front of him or her. This is not easy to do and I suspect it requires a slight change of mindset. The chair's role is quite complex at the best of times, at least in my experience, a situation made worse by the new environment. For this reason, I'm not convinced that it can easily be combined with other tasks (like keeping one eye on other mechanics of the event or preparing a final summing up). Such tasks are better handled by other people.

To a certain extent, the chair's role becomes rather like that of David Dimbleby hosting BBC's Question Time. The bulk of his time is spent focusing on the local audience and speakers but the remote audience watching the TV is the real reason why the programme is being made at all and every so often he will speak explicitly to camera to address that audience.

Note that this post is not intended to be negative in any sense. I think this symposium was our best yet and I'm really pleased with the way it went both in terms of the coherence of the overall theme and individual speakers and in terms of the mechanics of the day itself. I also think that our decision to limit the back-channel to Twitter-only was the right one and actually resulted in less confusion about what should be discussed where - though there is a proviso that 140 characters is probably too short for asking serious questions (so this is something we will have to think about for next year). But one can always do things better and that only starts by acknowledging where there were areas of weakness. When I woke up the morning after the event I was concerned that I could, and possibly should, have done a much better job of embracing the true 'hybrid' nature of the symposium in my role as chair for the day.

And a final thought... I've written this post with a particular focus on the chair's role within an event. The reality is that embracing the hybrid nature of events is incumbent on us all. We are going through a cultural shift that requires the development of new social norms, not just in the digital space but in the hybrid space where physical meets digital. My suspicion is that the groups above will remain for some time to come (probably for ever) and that we will all have to work to bring these groups together as best we can - chairs, speakers and delegates - even if that just means remembering that the other groups exist!

March 24, 2010

'Slide talks' as contemporary theatre

I turned 50 a few weeks back. There's no reason for me to tell you that other than that someone gave me a copy of David Byrne's Bicycle Diaries as a present. It looks like a good read though I haven't started it yet - but flicking through it earlier today (it's not a novel so I think that's allowed!) I noticed the following passage on PowerPoint which I quite liked:

A History of PowerPoint

I do a talk about the computer presentation program PowerPoint at the University of California in Berkeley for an audience of IT legends and academics. I have, over a couple of years, made little "films" in this program normally used by businesspeople or academics for slide shows and presentations. In my pieces I made the graphic arrows and the corny backgrounds dissolve and change without anyone having to click on the next slide. These content-less "presentations" run by themselves. I also attached music files---sound tracks---so the pieces are like little abstract art films that play off the familiar (to some people) style of this program. I removed, or rather never included, what is usually considered "content," and what is left is the medium that delivers that content. In a situation like this one here in Berkeley one is usually asked to talk about one's work, but rather than do that I have decided to tell the history of the computer program itself. I tell who invented it and who refined it and I offer some subjective views on the program---my own and those of its critics and supporters.

I am terrified. Many of the guys that originally turned PowerPoint into a software program are present. What are they going to think of what I did with their invention? Well, couldn't they just get up to talk about it? They could call me out and denounce me!

Luckily, I'm not talking about the details of the programming but about the ubiquity of the software and how, because of what it does and how it does it, it limits what can be presented---and therefore what is discussed. All media do this to some extent---they do certain things well and leave other things out altogether. This is not news, but by bringing this up, reminding everyone, I hope to help dispel the myth of neutrality that surrounds many software programs.

I also propose that a slide talk, the context in which this software is used, is a form of contemporary theater---a kind of ritual theater that has developed in boardrooms and academia rather than on the Broadway stage. No one can deny that a talk is a performance, but again there is a pervasive myth of objectivity and neutrality to deal with. There is an unspoken prejudice at work in those corporate and academic "performance spaces" that performing is acting and therefore it's not "real." Acknowledging a talk as a performance is therefore anathema. I want to dispel this myth of authenticity somewhat, in an entertaining and gentle way.

The talk goes fine. I can relax, they're laughing. Bob Gaskins, Dennis Austin, and Peter Norvig are all here. Bob Gaskins was one of the guys who refined the original program and realized its potential. Bob declined to be introduced, so I stick with a picture of a concertina when I mention his name. (He's retired and buys and sells antique concertinas now.) That gets a laugh. He tells me afterward that he likes the PowerPoint-as-theater idea, which is a relief. I mean, there is a lot of hatred for this program out there, and a lot of people laugh at the mere mention of bullet points, so he must feel kind of vulnerable.

In working on these pieces, and others, I have become aware that there is a pyramid of control and influence that exists between text, image, and sound. I note that today we give text a preferential position: a label under an image "defines" that image even if it contradicts what we can see. I wonder, in a time before text became ubiquitous, was image (a symbol, a gesture, a sign) the most influential medium? Did sound---singing, chanting, rhythm---come in second, and text, limited as it might have been thousands of years ago, come in third? Was text once a handmaiden to image and sound and then gradually managed to usurp their places and take control? Did the pyramid of communicative power at some point become inverted?

Wittgenstein famously said, "The limits of my language are the limits of my mind. All I know is what I have words for." I am a prisoner of my language.

This presupposes that conscious thought cannot happen without verbal or written language. I disagree. I sense a lot of communication goes on nonverbally---and I don't mean winks and nods. I mean images get ahold of us, as do sounds. They grab and hold us emotionally. Smells too. They can grip in a way that is hard to elucidate verbally. But maybe for Ludwig it just wasn't happening. Or maybe because he couldn't express what sounds, smells, and images do in words he chose to ignore them, to deny that they were communicating.

I like the notion of "presentation as performance" and have increasingly come to see things that way myself. One of the reasons I don't consider myself to be a particularly good presenter is that I'm not a particularly good performer.  I get too nervous and am typically not able to marshal my thoughts clearly or consistently enough.  That said, it is certainly the case that the presentations of mine that I consider to be my best are those where I was able to lose myself in the material - when stuff just flows, something cuts in and takes over.

Role models for good performances/talks are also rather more in our faces than they used to be. (This can be seen as being both good and bad since it highlights how good others can be and probably raises expectations across the board a little.)  I'm thinking particularly of TED talks here.  It strikes me that the Ten Commandments for TED speakers are a good indication of the general move from 'talk' to 'performance'.

And thinking specifically about PowerPoint, it seems to me that the role of the 'slides' in the performance is now a little confused.  I don't know when Byrne wrote this piece but I suspect it was before (or in the absence of knowledge about) the rise of Slideshare.  My slides are certainly part of my 'performance' (such as it is) - and the whole trend for and discussion about the use of image-heavy rather than text-heavy slides is part of that - but if I give my performance in front of a room of, say, 50 people, but then have my slides viewed 5000 times on Slideshare, where is the major impact taking place?  Do I design my slides for the 'performance' or for the 'record'?  Do I create a separate set of slides for each?

I think one now has to acknowledge that the slides live on (in a very significant way) after the performance has been given and design accordingly.

PS.  Noting Byrne's use of the phrase content-less "presentations" - I've seen plenty of those where the slides have lots of text on them! :-)

March 16, 2010

We met, we tweeted, we archived... then what?

We're all getting increasingly used to using Twitter as a back-channel at events. Indeed, it is now relatively uncommon to turn up for an event at which there isn't both a pre-announced hashtag and an active circle of twitterers already in attendance.

We also recognise that Twitter doesn't leave our tweets lying around for very long in the Twitter search engine and that if we want some kind of a more persistent and accessible record of Twitter activity at an event then we need to arrange for a copy of all the tweets to be archived somewhere. Normally, in my experience at least, TwapperKeeper is currently used to create that archive.

So far, so good... but then what? Offering a vanilla view of a few thousand tweets is potentially useful for those who want to delve into the detail, but it hardly provides an easy-to-grasp summary of the event.  How can we present a view of the Twitter archive such that a summary is offered without the need to read every tweet?

There are some obvious simple things that can be done with the RSS feed of tweets offered by TwapperKeeper, and I've knocked together a quick demonstrator to show the possibilities...

Firstly, we can count up the total number of tweets, twitterers, hashtags and URLs tweeted during the event. That gives us an overall feel for how 'significant' the use of Twitter was.

Secondly, we can display a list of the people who tweeted and were @replied the most (in Twitter parlance, an @reply is a tweet that directly mentions another Twitter user). We can also see who was involved in most 'conversations' (exchanges of @replies between any two Twitter users). That gives us a feel for who was tweeting the 'loudest'.

Thirdly, we can look at what hashtags and URLs were tweeted the most. That gives us a feel for the topics and resources most related to the topic of the event.

And finally, we can unpick the individual words used in the Twitter archive, providing a kind of 'word cloud' for the event.
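To make the steps above concrete, here is a rough sketch (in Python, which is not necessarily what my demonstrator uses) of how such summary counts might be pulled out of a TwapperKeeper-style RSS archive. The feed structure, the 'user: text' convention in item titles, and the regular expressions are all assumptions for the purposes of illustration; conversation-pairing between users is omitted for brevity.

```python
import re
from collections import Counter
from xml.etree import ElementTree


def summarise_tweets(rss_xml):
    """Produce simple summary counts from an RSS archive of event
    tweets. Assumes each <item>'s <title> has the form 'user: text'
    (an assumption - real feeds may differ)."""
    root = ElementTree.fromstring(rss_xml)
    tweets = []
    for item in root.iter("item"):
        title = item.findtext("title") or ""
        user, _, text = title.partition(": ")
        tweets.append((user, text))

    hashtags, urls, mentions, words = Counter(), Counter(), Counter(), Counter()
    for user, text in tweets:
        hashtags.update(t.lower() for t in re.findall(r"#\w+", text))
        urls.update(re.findall(r"https?://\S+", text))
        mentions.update(m.lower() for m in re.findall(r"@\w+", text))
        # crude word-cloud input: runs of 4+ letters, lower-cased
        words.update(w.lower() for w in re.findall(r"[a-zA-Z']{4,}", text))

    return {
        "tweets": len(tweets),                      # total tweets
        "twitterers": len({u for u, _ in tweets}),  # distinct users
        "top_hashtags": hashtags.most_common(5),
        "top_urls": urls.most_common(5),
        "most_mentioned": mentions.most_common(5),  # @reply targets
        "word_cloud": words.most_common(20),
    }
```

Nothing sophisticated, but it yields the per-event totals, the 'loudest' twitterers and most-mentioned users, and the raw material for a word cloud in a few lines.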

None of which is rocket science... but it is potentially useful nonetheless. Here are such summaries for the Repositories and the Cloud meeting that we recently organised with the JISC, for the JISC Dev8D event, and for the National Digital Inclusion 2010 conference (based on the associated TwapperKeeper archives for each of the events).

In a follow-up post to the NDI10 event, After the event, and a subsequent message to the UK Government Data Developers Google Group, Alex Coley suggests going further:

I wondered if a flash based tool could be used to map sentiment by session/topic by giving positive/negative meanings to words and applying this to tweet traffic. Perhaps some real meaning and value could come out of conferences that anyone can access and use.

Sounds interesting, though I have no idea how to implement it!

Dave Challis of the Southampton ECS Web Team has also written up a couple of blog posts following Dev8D, A first look at the dev8d twitter network and Dev8D twitter network, part 2, in which he discusses the analysis of Twitter to see how people's social networks evolve during an event. Fascinating stuff!

January 22, 2010

On the use of Microsoft SharePoint in UK universities

A while back we decided to fund a study looking at the uptake of SharePoint within UK higher education institutions, an activity undertaken on our behalf by a team from the University of Northumbria led by Julie McLeod.  At the time of the announcement of this work we took some stick about the focus on a single, commercially licensed, piece of software - something I attempted to explain in a blog post back in May last year.  On balance, I still feel we made the right decision to go with such a focused study, and I think the popularity of the event that we ran towards the end of last year confirms that to a certain extent.

I'm very pleased to say that the final report from the study is now available.  As with all the work we fund, the report has been released under a Creative Commons licence so feel free to go ahead and make use of it in whatever way you find helpful.  I think it's a good study that summarises the current state of play very nicely.  The key findings are listed on the project home page so I won't repeat them here.  Instead, I'd like to highlight what the report says about the future:

This research was conducted in the summer and autumn of 2009. Looking ahead to 2010 and beyond the following trends can be anticipated:

  • Beginnings of the adoption of SharePoint 2010
    SharePoint 2010 will become available in the first half of 2010. Most HEIs will wait until a service pack has been issued before they think about upgrading to it, so it will be 2011 before SharePoint 2010 starts to have an impact. SharePoint 2010 will bring improvements to the social computing functionality of My Sites, with Facebook/Twitter style status updates, and with tagging and bookmarking. My Sites are significant in an HE context because they are the part of SharePoint that HEIs consider providing to students as well as staff. We have hitherto seen lacklustre take up of My Sites in HE. Some HEIs implementing SharePoint 2007 have decided not to roll out My Sites at all, others have only provided them to staff, others have made them available to staff and students but decided not to actively promote them. We are likely to see increasing provision and take up of My Sites from those HEIs that move to SharePoint 2010.
  • Fuzzy boundary between SharePoint implementations and Virtual Learning Environments
    There is no prospect, in the near future, of SharePoint challenging Blackboard’s leadership in the market for institutional VLEs for teaching and learning. Most HEIs now have both an institutional VLE, and a SharePoint implementation. Institutional VLEs are accustomed to battling against web hosted applications such as Facebook for the attention of staff and students. They now also face competition internally from SharePoint. Currently SharePoint seems to be being used at the margins of teaching and learning, filling in for areas where VLEs are weaker. HEIs have reported SharePoint’s use for one-off courses and small scale courses; for pieces of work requiring students to collaborate in groups, and for work that cannot fit within the confines of one course. Schools or faculties that do not like their institution’s proprietary VLE have long been able to use an open source VLE (such as Moodle) and build their own VLE in that. Now some schools are using SharePoint and building a school specific VLE in SharePoint. However, SharePoint has a long way to go before it is anything more than marginal to teaching and learning.
  • Increase in average size of SharePoint implementations
    At the point of time in which the research was conducted (summer and autumn of 2009) many of the implementations examined were at an early stage. The boom in SharePoint came in 2008 and 2009, as HEIs started to pick up on SharePoint 2007. We will see the maturation of many implementations which are currently less than a year old. This is likely to bring with it some governance challenges (for example ‘SharePoint sprawl’) which are not apparent when implementations are smaller. It will also increase the percentage of staff and students in HE familiar with SharePoint as a working environment. One HEI reported that some of their academics, unaware that the University was about to deploy SharePoint, have been asking for SharePoint because they have been working with colleagues at other institutions who are using it.
  • Competition from Google Apps for the collaboration space
    SharePoint seems to have competed successfully against other proprietary ECM vendors in the collaboration space (though it faces strong competition from both proprietary and open source systems in the web content management space and the portal space). It seems that the most likely form of new competition in the collaboration space will come in the shape of Google Apps which offers significantly less functionality, but operates on a web hosted subscription model which may appeal to HEIs that want to avoid the complexities of the configuration and management of SharePoint.
  • Formation of at least one Higher Education SharePoint User Group
    It is surprising that there is a lack of Higher Education SharePoint user groups. There are two JISCmail groups (SharePoint-Scotland and YH-SharePoint) but traffic on these two lists is low. The formation of one or more active SharePoint user groups would seem to be essential given the high level of take up in the sector, the complexity of the product, the customisation and configuration challenges it poses, and the range of uses to which it can be put. Such a user group or groups could support the sharing of knowledge across the sector, provide the sector with a voice in relation to both Microsoft and to vendors within the ecosystem around SharePoint, and enable the sector to explore the implications of Microsoft’s increasing dominance within higher education, as domination of the collaboration space is added to its domination of operating systems, e-mail servers, and office productivity software.

On the last point, I am minded to wonder what a user group actually looks like in these days of blogs, Twitter and other social networks? Superficially, it feels to me like a concept rooted firmly in the last century. That's not to say that there isn't value in collectively being able to share our experiences with a particular product, both electronically and face-to-face, nor in being able to represent a collective view to a particular vendor - so there's nothing wrong with the underlying premise. Perhaps it is just the label that feels outdated?

December 03, 2009

On being niche

I spoke briefly yesterday at a pre-IDCC workshop organised by REPRISE.  I'd been asked to talk about Open, social and linked information environments, which resulted in a re-hash of the talk I gave in Trento a while back.

My talk didn't go too well to be honest, partly because I was on last and we were over-running so I felt a little rushed but more because I'd cut the previous set of slides down from 119 to 6 (4 really!) - don't bother looking at the slides, they are just images - which meant that I struggled to deliver a very coherent message.  I looked at the most significant environmental changes that have occurred since we first started thinking about the JISC IE almost 10 years ago.  The resulting points were largely the same as those I have made previously (listen to the Trento presentation) but with a slightly preservation-related angle:

  • the rise of social networks and the read/write Web, and a growth in resident-like behaviour, means that 'digital identity' and the identification of people have become more obviously important and will remain an important component of provenance information for preservation purposes into the future;
  • Linked Data (and the URI-based resource-oriented approach that goes with it) is conspicuous by its absence in much of our current digital library thinking;
  • scholarly communication is increasingly diffusing across formal and informal services both inside and outside our institutional boundaries (think blogging, Twitter or Google Wave for example) and this has significant implications for preservation strategies.

That's what I thought I was arguing anyway!

I also touched on issues around the growth of the 'open access' agenda, though looking at it now I'm not sure why because that feels like a somewhat orthogonal issue.

Anyway... the middle bullet has to do with being mainstream vs. being niche.  (The previous speaker, who gave an interesting talk about MyExperiment and its use of Linked Data, made a similar point).  I'm not sure one can really describe Linked Data as being mainstream yet, but one of the things I like about the Web Architecture and REST in particular is that they describe architectural approaches that have proven to be hugely successful, i.e. they describe the Web.  Linked Data, it seems to me, builds on these in very helpful ways.  I said that digital library developments often prove to be too niche - that they don't have mainstream impact.  Another way of putting that is that digital library activities don't spend enough time looking at what is going on in the wider environment.  In other contexts, I've argued that "the only good long-term identifier, is a good short-term identifier" and I wonder if that principle can and should be applied more widely.  If you are doing things on a Web-scale, then the whole Web has an interest in solving any problems - be that around preservation or anything else.  If you invent a technical solution that only touches on scholarly communication (for example), who is going to care about it in 50 or 100 years?  Answer: not all that many people.

It worries me, for example, when I see an architectural diagram (as was shown yesterday) which has channels labelled 'OAI-PMH', 'XML' and 'the Web'!

After my talk, Chris Rusbridge asked me if we should just get rid of the JISC IE architecture diagram.  I responded that I am happy to do so (though I quipped that I'd like there to be an archival copy somewhere).  But on the train home I couldn't help but wonder if that misses the point.  The diagram is neither here nor there, it's the "service-oriented, we can build it all", mentality that it encapsulates that is the real problem.

Let's throw that out along with the diagram.

December 01, 2009

An increasingly common Twitter/OAuth scenario...

Twitter/OAuth challenge

An application would like to connect to your (Twitter) account?

Yeah, I know, I just clicked on the link, right?

The application _blah_ by _blah_ would like the ability to access and update your data on Twitter.

Err... OK. But why does it need access to update my data?

This application plans to use Twitter for logging you in in the future.

That's what I figured! But I still don't understand why it needs access to update my data? I think I'll pass... I'm not sure I want random applications being able to tweet on my behalf.

End of story :-(

The point is that there is a trust issue here and I don't think that current implementations are helping people to make sensible decisions. Why does the application need to update my data on Twitter? In this case, there appears to be a perfectly valid reason as far as I can tell, but even so...

  • What kinds of updates is it going to make?
  • How often is it going to make them?
  • Are any updates going to be under my control?

I just want to have some indication of these kinds of things before I click on the 'Allow' button. Thanks.

November 18, 2009

Where does your digital identity want to go today?

This morning I had cause to revisit an identity-related 'design pattern' that I originally worked on during a workshop back in January, in readiness for a follow-up workshop tomorrow.

The pattern is concerned with the way in which personal information can be aggregated, shared and re-used between social networking sites and other tools and the moral and legal rights and responsibilities that go with that kind of activity.

I don't want to write in detail about the pattern here, since it is the topic for the workshop tomorrow and may well change significantly.  What I do want to note, is that in thinking about this 'aggregating' scenario I realised that there are three key roles in any scenario of this kind:

  • the subject - the person that the personal information is about
  • the creator - the person that has created the personal information
  • the aggregator - the person aggregating personal information from one or more sources into a new tool or service.

In any given instance, an individual might play more than one of these roles.  Indeed, in the original use-case which I provided to kick-start the discussion I played all three roles.  But the important thing is that in the general case, the three are often different people, each having different 'moral' and legal rights and responsibilities and different interests in how the personal information is aggregated and re-used.

To illustrate this, here is a simple, and completely fictitious, case-study:

Amy (the subject) uses Twitter to share updates with both colleagues and friends.  Concerned about cross-over between the two audiences, Amy chooses to use two Twitter accounts, one aimed at professional colleagues and the other aimed at personal friends.  Amy uses Twitter's privacy options to control who sees the tweets from her personal account.

Ben (the creator) is both a friend and colleague of Amy and is thus a follower of both Amy's Twitter accounts. On seeing a personal tweet from Amy that Ben feels would be of wider interest to his professional colleagues, Ben retweets it (thus creating a new piece of personal information about Amy), prefixing the original text with a comment containing the name of Amy's company.

Calvin (the aggregator) works for the same company as Amy and looks after the company intranet.  He decides to use a Twitter search to aggregate any tweets that contain the company name and display them on the intranet so that all staff can see what is being said about the company.

Amy's original 'private' tweet thus appears semi-publicly in front of all staff within the company.

Depending on the nature of the original private tweet, the damage done here is probably minimal but this scenario serves to illustrate the way that personal information (i.e. information that is part of Amy's digital identity) can flow in unexpected ways.

One can imagine lots of similar scenarios arising from unwanted tagged Flickr or Facebook images, re-used del.icio.us links, forwarding of private emails, and so on.

Who, if anyone, is at fault in this scenario?  Perhaps 'fault' is too strong a word?

Well, Amy is probably naive to assume that anything posted anywhere on the Internet is guaranteed to remain private.  Ben clearly should not have retweeted a tweet from Amy that was intended to remain somewhat private but in the general to-and-fro of Twitter exchanges it is probably understandable that it happened. Note that the Web interface to Twitter displays a padlock next to 'private' tweets but this is not a convention used by all Twitter clients. In general therefore, any shared knowledge that some tweets are intended to be treated more confidentially than others has to be maintained between the two people concerned outside of Twitter itself.  Calvin is simply aggregating public information in order to share it more widely within the company and it is thus not clear that he could or should do otherwise.

On that basis, any fault seems to lie with Ben.  Does Amy have any moral grounds for complaint? Against Ben... yes, probably, though as I said, the mistake is understandable in the context of normal Twitter usage.

The point here is to illustrate that currently, while many social networking tools have mechanisms for adjusting privacy settings, these are not foolproof and the shared knowledge and conventions about the acceptable use of personal information (i.e. digital identity) typically have to be maintained outside of the particular technology in use.  Further, the trust required to ensure that things don't go wrong relies on both the goodwill and good practice of all three parties concerned.

November 12, 2009

Where Next for Digital Identity?

In collaboration with the three 'digital identity' projects that we funded last year, we have organised a day-long event looking at the future of digital identity.  The day will feature an invited talk by Ian Brown of the Oxford Internet Institute, followed by talks from each of the three projects (Steven Warburton, Shirley Williams and Harry Halpin), followed by an afternoon of discussion groups.

The event will be held at the British Library in January next year. Places are limited to 50.

November 05, 2009

Write to Reply

I've noted before how much I like the Write to Reply service, conceived and developed by Tony Hirst and Joss Winn:

A site for commenting on public reports in considerable detail. Texts are broken down into their respective sections for easier consumption. Rather than comment on the text as a whole, you are encouraged to direct comments to specific paragraphs.

On that basis, I am very pleased to announce that we have made available a small amount of funding for the service, initially covering the website hosting costs for the next 6 months but with a commitment to do so in some form for 2 years (whether that be through a continuation of the current hosting arrangement or by moving the content to Eduserv servers or elsewhere).

It strikes me that Write to Reply has already demonstrated its value in various fields, notably in the areas of education and government policy, and I'm sure it will continue to do so. It's one of those ideas that is rather simple and obvious in hindsight, yet very powerful in practice - give people a public space in which they can make comments on important documents and make it social enough that commenting feels more like having a conversation than simply annotating a text.  Good stuff.  Long may it continue.

October 22, 2009

This is me – now what was the question?

I note that the call for papers for the TERENA Networking Conference (TNC) 2010 is now out. Given that the themes focus (in part) on network lifestyle and identity issues I wondered about putting in something based on Dave White's visitors vs residents work (yeah, that again!). Something like the following:

The Web used to be seen as a tool to get various jobs done – booking a holiday, finding a train time, reading email, catching up on lecture notes, checking a bank account, and so on. The people using such tools adopted a largely visitor mentality – they fired up their Web browser, undertook a task of some kind, and left, leaving little or no trace behind.

Over the past few years the Web has changed significantly. It is now a social space, as much a part of people's lives as going down the pub, going to work, or turning up for lectures. As a result, many people are now increasingly adopting a resident mentality – cohabiting a social networked environment with others and intentionally leaving a permanent record of their activities in that space.

In a world of visitors, the principal reason for asserting identity (“this is me”) is so that the particular tool being used can determine what an individual's access rights are. But in a world of residents, that is only part of the story. They are more likely to assert their identity as part of a “this is who I am”, “this is what I’ve done”, “this is who I know” transaction with other people in their social space.

The functional requirements of the identity infrastructure are therefore very different for residents than they used to be for visitors. SAML is geared to meeting the needs of visitors and the tools they wish to access. OpenID caters much more to a ‘resident’ way of thinking.

If we believe that the Web is changing us (as it certainly is), and particularly if we believe that the Web is changing learning and research, then we have to be prepared to change with it and adopt technologies that assist in that change.

Does that resonate with people?  I'd be interested in your thoughts.

SharePoint in UK universities event

We've just announced an event (in London on 25 November 2009) based on the work that's been done by Northumbria University (and others) as part of the Investigation into the Uptake and use of Microsoft SharePoint by HEIs study that we funded a while back.

  • Do you want to learn about how and why HEIs are using SharePoint? What worked well, lessons learned?
  • Do you want to hear from some HEIs about their experience of implementing SharePoint?
  • Do you want the opportunity to network and learn about real experiences with SharePoint in HEIs and benchmark yourself?

The event will provide a chance to hear from the project team about their findings, as well as from 4 university-based case-studies (Peter Yeadon, UWE, University of Glasgow, and University of Kent).

Please go to the registration page to sign-up - places are limited.

October 14, 2009

Open, social and linked - what do current Web trends tell us about the future of digital libraries?

About a month ago I travelled to Trento in Italy to speak at a Workshop on Advanced Technologies for Digital Libraries organised by the EU-funded CACOA project.

My talk was entitled "Open, social and linked - what do current Web trends tell us about the future of digital libraries?" and I've been holding off blogging about it or sharing my slides because I was hoping to create a slidecast of them. Well... I finally got round to it and here is the result:

Like any 'live' talk, there are bits where I don't get my point across quite as I would have liked but I've left things exactly as they came out when I recorded it. I particularly like my use of "these are all very bog standard... err... standards"! :-)

Towards the end, I refer to David White's 'visitors vs. residents' stuff, about which I note he has just published a video. Nice one.

Anyway... the talk captures a number of threads that I've been thinking and speaking about for the last while. I hope it is of interest.

October 06, 2009


FOTE (the Future of Technology in Education conference organised by ULCC), which I attended on Friday, is a funny beast.  For two years running it has been a rather mixed conference overall but one that has been rescued by one or two outstanding talks that have made turning up well worthwhile and left delegates going into the post-conference drinks reception with something of a buzz.

Last year it was Miles Metcalfe of Ravensbourne College who provided the highlight.  This year it was down to Will McInnes (of Nixon/McInnes) to do the same, kicking off the afternoon with a great talk, making up for a rather ordinary morning, followed closely by James Clay (of Gloucestershire College).  If this seems a little harsh... don't get me wrong.  I thought that much of the afternoon session was worth listening to and, overall, I think that any conference that can get even one outstanding talk from a speaker is doing pretty well - this year we had at least two.  So I remain a happy punter and would definitely consider going back to FOTE in future years.

My live-blogged notes are now available in a mildly tidied up form.  This year's FOTE was heavily tweeted (the wifi network provided by the conference venue was very good) and about halfway through the day I began to wonder whether my live-blogging was adding anything to the overall stream.  On balance, and looking back at it now, I think the consistency added by my single-person viewpoint is helpful.  As I've noted before, I live-blog primarily as a way of taking notes.  The fact that I choose to take my notes in public is an added bonus (hopefully!) for anyone that wants to watch my inadequate fumblings.

The conference was split into two halves - the morning session looking at Cloud Computing and the afternoon looking at Social Media.  The day was kicked off by Paul Miller (of Cloud of Data) who gave a pretty reasonable summary of the generic issues but who fell foul, not just of trying to engage in a bit of audience participation very early in the day, but of trying to reduce issues that everyone already understood to be fuzzy and grey to shows of hands that required black-and-white, yes/no answers.  Nobody fell for it, I'm afraid.

And that set the scene for much of the morning session.  Not enough focus on what cloud computing means for education specifically (though to his credit Ray Flamming (of Microsoft) did at least try to think some of that through and the report by Robert Moores (of Leeds Met) about their experiences with Google Apps was pretty interesting) and not enough acknowledgment of the middle ground.  Even the final panel session (for which there was nowhere near enough time by the way) tried to position panelists as either for or against but it rapidly became clear there was no such divide.  The biggest point of contention seemed to be between those who wanted to "just do it" and those who wanted to do it with greater reference to legal and/or infrastructural considerations - a question largely of pace rather than substance.

If the day had ended at lunchtime I would have gone home feeling rather let down.  But the afternoon recovered well.  My personal highlights were Will McInnes, James Clay and Dougald Hine (of School of Everything), all of whom challenged us to think about where education is going.  Having said that, I think that all of the afternoon speakers were pretty good and would likely have appealed to different sections of the audience, but those are the three that I'd probably go back and re-watch first. All the video streams are available from the conference website but here is Will's talk:

One point of criticism was that the conference time-keeping wasn't very good, leaving the final two speakers, Shirley Williams (of the University of Reading, talking about the This is Me project that we funded) and Lindsay Jordan (of the University of Bath/University of the Arts), with what felt like less than their allotted time.

For similar reasons, the final panel session on virtual worlds also felt very rushed.  I'd previously been rather negative about this panel (what, me?), suggesting that it might descend into pantomime.  Well, actually I was wrong.  I don't think it did (though I still feel a little bemused as to why it was on the agenda at all).  Its major problem was that there was only time to talk about one topic - simulation in virtual worlds - which left a whole range of other issues largely untouched.  Shame.

Overall then, a pretty good day I think.  Well done to the organisers... I know from my own experience with our symposium that getting this kind of day right isn't an easy thing to do.  I'll leave you with a quote (well, as best as I can remember it) from Lindsay Jordan who closed her talk with a slightly sideways take on Darwinism:

in the social media world the ones who survive - the fittest - are the ones who give the most

October 05, 2009

SharePoint in UK universities - literature review

We are currently funding the University of Northumbria to undertake some work for us looking at the uptake of Microsoft SharePoint in UK universities.  As part of this work we have just published a literature review [PDF] by James Lappin and Julie McLeod:

SharePoint 2007 has spread rapidly in the Higher Education (HE) sector, as in most other market sectors. It is an extraordinarily wide-ranging piece of software and it has been put to a wide variety of different uses by different UK Higher Education Institutions (HEIs). This literature review is based upon what HEIs have been willing to say about their implementations in public.

Implementations range from the provision of team sites supporting team collaboration, through the use of SharePoint to support specific functions, to its use as an institutional portal, providing staff and/or students with a single site from which to access key information sources and tools.

By far the most common usage of SharePoint in UK HEIs is for team collaboration. This sees SharePoint team sites replacing, or supplementing, network shared drives as the area in which staff collaborate on documents and share information with each other.

September 14, 2009

Flocking behaviour - why Twitter is for starlings, not buzzards

Byrdes of on kynde and color flok and flye allwayes together.

William Turner, 1545

Brian Kelly has posted a light analysis of Twitter usage around the ALT-C 2009 conference in Manchester last week. He notes that there were "over 4,300 tweets published in a week" using the (conference-endorsed) #altc2009 hashtag (summary), and a further "128 tweets [...] from 51 contributors" using the alternative (but not endorsed) #altc09 hashtag (summary). Pretty impressive I think.

Looking at the summaries for the two hashtags I note that @HallyMk1 was by far the highest user of the 'wrong' tag - 41 tweets - making him one of the more prolific individual tweeters at the conference I suspect.

The trouble is, in my experience at least, using a Twitter search for a particular hashtag has become the most common way to keep up to date with what is going on at a given event. On that basis, if you don't tweet using the generally agreed tag you are effectively invisible to much of the conference audience - in short, you aren't part of the conversation in the way you are if you use the same tag as everyone else.

Tags emerge naturally as part of the early 'flocking behaviour' in the run up to an event (with and without the help of conference organisers). I would argue that in general it pays to go with the flow, even if you have good reason for thinking an alternative hashtag would have been a better choice (because it is shorter for example). As I noted to @HallyMk1 on Twitter this morning, to do otherwise makes you "either a slow learner or very stubborn" :-)

July 22, 2009

What's a tweet worth?

One of the successful aspects of Twitter is its API and the healthy third-party 'value-add' application environment that has grown up around it. This environment has seen the development not just of new clients but of all sorts of weird and wonderful, serious and trivial, applications for enhancing your Twitter experience.

In the good old days, third-party applications gave you the option of tweeting your followers about how wonderful you thought their shiny new application was.  The use of such an option was typically left to your discretion and no incentives were given to encourage you to do so - other than that you thought the information might be useful to those around you.  Such an approach kind of worked when we all had relatively low numbers of followers and there were relatively few apps.

More recently I've noticed a new 'business model' emerging on Twitter which can be summed up as, "spam all your followers with a single tweet about us and we'll reward you in some way".  The rewards vary but might include a free entry into a prize draw, or money off the full subscription rate for the application in question.

Unfortunately, in a twitterverse where lots of people follow lots of other people, every person's "single tweet" quickly turns into a "deluge of tweets" for those people who follow a reasonably large number of other twitterers.

One recent example of this (in my Twitter stream at least) was people's use of the #moonfruit hashtag in order to enter into a prize draw for a Macbook, leading to a collective series of tweets that quickly became very annoying.

More recently I've noticed a similar thing, though so far much less widespread, arising from the BackupMyTweets service.  This service is somewhat more interesting than the Moonfruit example.  For a start, it isn't as mainstream as Moonfruit (I don't suppose that most people give two hoots about whether their tweets are backed-up or not!) and therefore hasn't given rise to the same level of problem.  Conversely, being more academic in nature notionally gives people a more credible reason to tweet about it.

The trouble is... BackupMyTweets is offering one year's free subscription to their service if you send one tweet about them (and they offer a facility to make doing so very easy with a stock set of phrases about how useful they are).  One year's free service is worth (US)$10 so that's quite an incentive.

However, users of this facility (and others like it) need to remember that the real cost of tweeting about it (even if that tweet is intended genuinely) lies in the trust people place in their future tweets.  If I know that someone is willing to tweet about how good something is just because they are getting paid to do so, what does that tell me about their future recommendations?

Does one such tweet have any impact on someone's credibility?  No, of course not.  But if there is a trend towards this kind of thing (as I suspect there is) then it will become more of an issue.  This is particularly true where it is a more 'corporate' Twitter account sending the tweet (as, for example, the Institutional Web Management Workshop Twitter account did yesterday).  People follow such accounts on the basis that they want to keep up to date with an event or organisation - they don't want to see them used to send spam about other people's tools and services.

Assuming this trend continues I guess we'll soon start to see the addition of a 'block more tweets like this' button in Twitter clients, followed (presumably) by some kind of Twitter equivalent of RBL?  Maybe I'm making a mountain out of a molehill here, though people probably thought the same in the early days of email spam?  Remember, the only reason these kinds of approaches work is because we so easily fall into the trap of using them.  The problem is ours and can be fixed by us.
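A 'block more tweets like this' feature could plausibly start from something as simple as spotting the same text posted by many distinct accounts. A minimal sketch, with all account names, tweet texts and the service name invented:

```python
# Hypothetical sketch of a crude 'block more tweets like this' filter:
# flag any text posted verbatim by several distinct authors.

def flag_campaign_tweets(tweets, threshold=3):
    """Return the set of tweet texts posted by at least `threshold`
    distinct authors -- a rough signal of an incentivised campaign."""
    authors_per_text = {}
    for author, text in tweets:
        authors_per_text.setdefault(text.lower(), set()).add(author)
    return {text for text, authors in authors_per_text.items()
            if len(authors) >= threshold}

stream = [
    ("u1", "Backing up my tweets with ExampleBackup!"),
    ("u2", "Backing up my tweets with ExampleBackup!"),
    ("u3", "Backing up my tweets with ExampleBackup!"),
    ("u4", "Heading to the conference now"),
]
flagged = flag_campaign_tweets(stream)
print(len(flagged))  # 1
```

Exact-match counting like this is obviously easy to evade (the stock-phrase tools could randomise wording), which is partly why email spam filtering moved from simple blocklists to statistical methods - the same arms race would presumably play out here.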

June 25, 2009

Twitter for idiots

I'm just back from giving a 30 minute "Twitter for idiots" tutorial for one of our senior management team here at Eduserv.  Note that the title isn't intended to be offensive - in fact, he chose it - but it certainly sums up the level of what I had to say.  It reminds me that yesterday I tweeted rather negatively about the fact that CILIP are offering a Twitter for Librarians training course:

good grief... do #cilip really run a twitter course? - http://tinyurl.com/mxabo3 - speaks volumes methinks

Phil Bradley, who is running the course, quite rightly came back at me with a challenge to explain what, and who, it "speaks volumes" about.

So... two things. Firstly, it was an off the cuff remark - essentially a joke - but like all such things I guess there is a serious point behind it. The idea of running a half-day course to teach people how to tweet just struck me as funny! It's an anachronism. In that sense, it says something about both the library community and CILIP I guess. Paying to sit in a room in order to find out how to create "a good, rounded and effective Twitter profile", for example, smacks of a '1980s-style mainframe user-support application training programme' mentality that just doesn't sit comfortably with the way the Web works today. IMHO.

That doesn't mean that there aren't learning needs and opportunities around our use of Twitter, by the way - I think there probably are - but I also think that people have to get Twitter before even thinking about such things, and I'm not totally sure that you can teach people to get Twitter. People get Twitter by using it.

Secondly (and very much related to the last point), there is a visitors vs. residents issue here (to borrow David White's categorisation of online users). Twitter is a tool for residents. It's about people being immersed. It's about people "living a percentage of their life online". When visitors get hold of Twitter they see it as a tool to get a job done when the need arises - to push out an occasional marketing message for example. This is when things have the potential to go badly wrong (as seen recently with Habitat's use of Twitter). Again, the real issue here is whether you can teach/train visitors to become residents.

Note that I am not using the resident vs visitor divide in a judgemental way here. I'm happy to accept that the world is split into two types of people (no, not those who divide the world into two types of people and those who don't!) and I'm happy to accept that both approaches to the world are perfectly valid. But they are different approaches and I don't know how often people cross from one to the other, nor whether such changes come as the result of attending a course or workshop?

June 22, 2009

Influence, connections and outputs

Martin Weller wrote an interesting blog post on Friday, Connections versus Outputs - interesting in the sense that I strongly disagreed with it - that discussed a system for assessing an individual's "prominence in the online community of their particular topic" by measuring their influence, betweenness and hubness (essentially their 'connectedness' to others in that community). Martin had used the system to assess the prominence of people and organisations working in the area of 'distance learning', suggesting that it might form a useful basis for further work looking at metrics for the new forms of scholarly communication that are enabled by the social Web. The algorithm adopted by the system was not available for discussion so one was left reacting to the results it generated.

I reacted somewhat negatively, largely on the basis that the system ranked Brian Kelly's UK Web Focus blog 6th most influential in that particular subject area. This is not a criticism of Brian (who is clearly influential in other areas), but the fact remains that Brian's blog contains only three posts where the phrase 'distance learning' appears, two of which are in comments left by other people and one of which is in a guest post - hardly indicative of someone who is highly influential in that particular subject area?

In passing, I note that Brian has now also commented on this and Martin has written a follow-up post.

Why does Brian's blog appear in the list? Probably because he is very well connected to people who do write about distance learning. Unfortunately, that connectedness is not sufficient, on its own, to draw conclusions about his level of influence on that particular topic, so the whole process breaks down very quickly.
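The failure mode is easy to demonstrate with a toy calculation. In this sketch (all names, follower links and post counts are invented), the best-connected author scores highest on a pure connectedness measure while having written nothing at all on the topic:

```python
# Toy illustration: connectedness alone mis-ranks topical influence.
# All names and numbers below are invented for illustration.

# Who follows whom (source -> set of accounts they follow).
follows = {
    "brian": {"alice", "carol", "dave", "erin"},
    "alice": {"brian", "carol"},
    "carol": {"brian", "alice", "dave"},
    "dave":  {"brian", "carol"},
    "erin":  {"brian"},
}

# How many posts each author has actually written on the topic.
posts_on_topic = {"brian": 0, "alice": 12, "carol": 9, "dave": 7, "erin": 3}

# In-degree centrality: count inbound follower links per author.
in_degree = {name: 0 for name in follows}
for source, targets in follows.items():
    for target in targets:
        in_degree[target] += 1

by_connectedness = max(in_degree, key=in_degree.get)
by_output = max(posts_on_topic, key=posts_on_topic.get)
print(by_connectedness, by_output)  # brian alice
```

The two rankings disagree: "brian" tops the connectedness measure with zero topical output, which is essentially the UK Web Focus situation in miniature. More sophisticated measures (betweenness, hubness) inherit the same blind spot unless topical content is factored in.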

My concern is that if we present these kinds of rather poor metrics in any way seriously in counterpoint to more traditional (though still flawed) metrics like the REF we will ultimately do harm in trying to move forward any discussion around the assessment of scholarly communication in the age of the social Web.

To cut a long story short (you can see my fuller comments on the original post) I ended by suggesting that if we really want to develop "some sensible measure of scholarly impact on the social Web" then we have to step back and consider three questions:

  • what do we want to measure?
  • what can we measure?
  • how can we bring these two things close enough together to create something useful?

To try and answer my own questions I'll start with the first. I suggest that we want to try and measure two aspects of 'impact':

  • the credibility of what an individual has to say on a topic,
  • and the engagement of an individual within their 'subject' community and their ability to expose their work to particular audiences.

These two are clearly related, at least in the sense that someone's level of engagement in a community (their connectedness, if you like) increases the exposure of their work and is also indicative of the credibility they have within that community.

Having said that, my gut feeling is that credibility, at least for the purposes of scholarly communication, can only really be measured by some kind of a peer-review (i.e. human) process. Of course, on the Web, we are now very used to inferring credibility based on the weighted number of inbound links that a resource receives, not least in the form of Google's PageRank algorithm. This works well enough for mainstream Web searching but I wouldn't want it used, at least not at any trivial level, to assess scholarly credibility or impact. Why not? Well, a few things immediately spring to mind...

Firstly, a link is typically just a link at the moment, whether it's a hyperlink between two resources or the link between people in a social network. The link carries no additional semantics. If paper A critiques paper B then we don't want to link between them to result in paper B being measured as having more credibility/impact than it otherwise would have done had the critique not been written.  (This is also true of traditional citations between journal articles of course, except that peer review mechanisms stop (most of) the real dross from ever seeing the light of day.  On the Web, everything is there to be cited.)
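The point is easy to make concrete. Here is a simplified power-iteration sketch of a PageRank-style calculation (a toy version of the idea, not Google's actual implementation, run over an invented citation graph): a link from a critique boosts the target exactly as much as a link of praise.

```python
# Simplified PageRank-style power iteration over a toy citation graph.
# Paper names are invented; this is an illustration, not Google's algorithm.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = set(links)
    for targets in links.values():
        nodes.update(targets)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new = {node: (1.0 - damping) / n for node in nodes}
        for source, targets in links.items():
            if targets:
                share = damping * rank[source] / len(targets)
                for target in targets:
                    new[target] += share
            else:
                # Dangling node: spread its rank evenly over all nodes.
                for node in nodes:
                    new[node] += damping * rank[source] / n
        rank = new
    return rank

links = {
    "paper_a": ["paper_b"],            # suppose A *critiques* B...
    "paper_c": ["paper_b"],            # ...while C and D cite B approvingly
    "paper_d": ["paper_b", "paper_c"],
    "paper_b": [],
}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # paper_b
```

Whatever the intent behind each link, paper_b accumulates the most rank, because the calculation sees only the link, never its meaning.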

Secondly, if we just consider blogging for a moment, the way a blog is written will have a big impact on how people react to it and that, in turn, might affect how we measure it. Blogs written in a more 'tabloid' style for example might well result in more commenting or inbound links than those written in a more academic style. We presumably don't want to end up measuring scholarly impact as though we were measuring newspaper circulation?

Thirdly, any metrics that we choose to use in the future will ultimately influence the way that scholarly communication happens. Take blog comments for example. A comment is typically not a first class Web object - comments don't have URIs for example. One can therefore make the argument that writing a comment on someone else's blog post is less easily measurable than writing a new blog post that cites the original. One might therefore expect to see less commenting and more blog-post writing (under a given set of metrics). While this isn't necessarily a bad thing, it seems to me that our behaviour should be driven by what works best for 'scholarly communication' not by what can be most easily 'measured'.

As I said in my first comment on Martin's post, "connectedness is cheap". On that basis, we have to be very careful before using any metrics that are wholly or largely based on measures of connectedness. The point is that the things we can measure easily (to return to the second part of my question above) are likely to be highly spammable (i.e. they can be gamed, either intentionally or by accident). Yes, OK... all measures are spammable, but some are more spammable than others! If we want to start assessing academics in terms of their engagement and output as part of the social Web then I think we need to start by answering my questions above rather than by showcasing rather poor examples of what can be automated now, except as a way of saying, "look, this is hard"!

June 02, 2009

JISCMail and social bookmarking

JISCMail have announced that they will offer support for social bookmarking services from June 9th:

From Tuesday 9th June, every list homepage (https://www.jiscmail.ac.uk/yourlistname) and every posting stored on the JISCMail online archives will include a bookmark/share button which will have links to a selection of social bookmarking/sharing sites.

Social Bookmarking allows you to share, store, organise, search, tag and manage webpages you would like to be able to revisit in the future, or share with others. For example if a posting is made to a JISCMail list that you know will be of interest to someone else you can email a link to that person using our button. Alternatively you can choose one of the social networking sites you are registered with, e.g. Twitter or Facebook, to share the link with a group of people. You might use the sharing button to bookmark a link to your list homepage or a particular posting on a list that you can revisit at a later date on a site such as Delicious.

I suppose this is progress, though one might argue that it is about 2 or 3 years too late (I had to go back and double-check that I wasn't reading an old announcement from a few years ago)? But, hey, with this new realisation that people might actually want to more easily share, cite and re-use JISCMail discussions on the Web, perhaps they'll offer half-decent 'cool' URIs and allow Google in to index the contents of individual messages?

Or perhaps we've all forgotten about mailing lists as a forum for discussion anyway and it's completely irrelevant what JISCMail does?

May 18, 2009

Symposium live-streaming and social media

We are providing a live video stream from our symposium again this year, giving people who have not registered to attend in person a chance to watch all the talks and discussion and to contribute their own thoughts and questions via Twitter and a live chat facility (this year based on ScribbleLive).

Our streaming partner for this year is Switch New Media and we are looking forward to working with them on the day.  Some of you will probably be familiar with them because they provided streaming from this year's JISC Conference and the JISC Libraries of the Future event in Oxford.

If you plan on watching all or part of the stream, please sign up for the event’s social network so that we (and others) know who you are.  The social network has an option to indicate whether you are attending the symposium in person or remotely.

Also, for anyone tweeting, blogging or sharing other material about the event, remember that the event tag is ‘esym09’ (‘#esym09’ on Twitter).  If you want to follow the event on Twitter, you can do so using the Twitter search facility.

May 14, 2009

The role of universities in a Web 2.0 world?

Brian Kelly, writing about the Higher Education in a Web 2.0 World report, ends by referring to the recommendation to "explore issues and practice in the development of new business models that exploit Web 2.0 technologies" (Area 3: Infrastructure), suggesting that it has to do with "best practices for institutional engagement (or not) with Web 2.0". I don't know what the report intended by this statement but, to me at least, it seems like business models are a pretty fundamental issue... potentially much more fundamental than Brian's interpretation.

I noted a similar issue in the CILIP2 discussions of a few weeks ago. Asking "how should CILIP use Web 2.0 to engage with its members?" ignores the more fundamental question, "what is the role of an organisation like CILIP in a Web 2.0 world?". It's a bit like asking an independent high-street bookshop to think about how it uses Web 2.0 to engage with its customers, ignoring the fact that Amazon might well have just trashed its business model entirely!

Luckily for universities there isn't (yet?) the equivalent of an Amazon in the HE sector so I accept that the situation isn't quite the same. Indeed, there are strong hints in the report that aspects of the traditional university, face to face tutor time for example, are well liked by their customers (I know many people hate the term 'customers' but it strikes me that is increasingly what the modern HE student has become). Nonetheless, I think that particular recommendation would be better interpreted as having more to do with "what is the role for universities in a Web 2.0 world?" than with "how do universities best use Web 2.0 to enhance their current practice?".

Or, to put it a different way, if Web 2.0 changes everything, I see no reason why that doesn't apply as much to professional bodies and universities as it does to high street bookshops.

May 13, 2009

Identity in a Web 2.0 world?

In the flurry of Twitter comments about the Higher Education in a Web 2.0 World report yesterday I noticed the following tweet from Nicole Harris at the JISC:

#clex09 disappointed by lack of attention to identity issues in the report-despite it being included in the definition IDM hardly mentioned

I have to say that I share Nicole's disappointment.  Having now read through the whole report, I can find little reference to identity or identity management.  Identity doesn't appear in the index, nor in the list of critical issues.

This seems very odd to me.  The management of identity (in both a technology sense and a political/social sense) is one of the key aspects of the way that the social Web has evolved, witness the growth of OpenID, OAuth, Google OpenSocial and Friend Connect, Facebook Connect and the rest.  If the social Web is destined to have a growing influence on teaching and learning (and research) in HE then we have to understand what impact that has in terms of identity management.

There are two aspects to this.  I touched on the first yesterday, which is that understanding identity forms a critical part of digital literacy.  It therefore worries me that the report seems to focus more heavily on information literacy, a significantly narrower topic.  The second has to do with technology.

Let me give you a starter for 10... identity in a Web 2.0 world is not institution-centric, as manifest in the current UK Federation, nor is it based on the currently deployed education-specific identity and access management technologies.  Identity in a Web 2.0 world is user-centric - that means the user is in control.

Now... I should note two things.  Firstly, that Nicole and I might well have parted company in terms of our thinking at this point but I won't try to speak on her behalf and I don't know what lay behind her tweet yesterday.  Secondly, that user-centric might mean OpenID, but it might mean something else.  The important point is that learners (and staff) will come into institutions with an existing identity, they will increasingly expect to use that identity while they are there (particularly in their use of services 'outside' the institution) and that they will continue using it after they have left.  As a community, we therefore have to understand what impact that has on our provision of services and the way we support learning and research.  It's a shame that the report seems to have missed this point.

May 12, 2009

HE in a Web 2.0 world?

The Higher Education in a Web 2.0 World report, which is being launched in London this evening, crossed my horizon this morning and I ended up twittering about it on and off for most of the day.

Firstly, I should confess that I've only had a chance to read the report summary, not the full thing, so if my comments below are out of line, I apologise in advance.

It strikes me that the report has a rather unhelpful title because it doesn't seem to me to be about "higher education" per se.  Rather, it is about teaching and learning in HE. For example, there's nothing in it about research practice as far as I can tell. Nor is it really about "Web 2.0" (whatever that means!).  It is about the social Web and the impact that social software might have on the way learning happens in HE.

The trouble with using the phrase "Web 2.0" in the title is that it is confusing, as evidenced by the Guardian's coverage of the report which talks, in part, about universities outsourcing email to Google.  Hello... email is about as old skool as it gets in terms of social software and completely orthogonal to the main thrust of the report itself.

And, while I'm at it, I have another beef with the Guardian's coverage.  Why, oh why, does the mainstream media insist on making stupid blanket statements about the youth of today and their use of social media?  Here are two examples from the start of the article:

The "Google generation" of today's students has grown up in a digital world. Most are completely au fait with the microblogging site Twitter...

Modern students are happy to share...

I don't actually believe either statement and would like to see some evidence backing them up.  Students might well be happy to share their music?  They might well be happy to share their photos on Facebook?  Does that make them happy to share their coursework?  In some cases, possibly...  but in the main? I doubt it.

I'm nervous about this kind of thing only because it reminds me of the early days of HE's interest in Second Life, where people were justifying their in-world activities with arguments like, "we need to be in SL because that's where the kids are", a statement that wasn't true then, and isn't true now :-(

Anyway, I digress... despite the naff title, I found the report's recommendations to be reasonably sensible. I have a nagging doubt that the main focus is on social software as a means to engender greater student/tutor engagement and/or as a pedagogic tool whereas I would prefer to see more emphasis on the institution as platform, enabling student to student collaboration and then dealing with the consequences.  In short, I want the focus to be on learning rather than teaching I suppose.  However, perhaps that is my mis-reading of the summary.

I also note that the report doesn't seem to use the words "digital literacy" (at least, not in the summary), instead using "information literacy" and "web awareness" separately. I think this is a missed opportunity to help focus some thinking and effort on digital literacy. I'm not arguing that information literacy is not important... but I also think that digital literacy skills, understanding the issues around online identity and the long term consequences of material in social networks for example, are also very important and I'm not sure that comes out of this report clearly enough.

Anyway, enough for now... the report (or at least the summary) seems to me to be well worth reading.

April 07, 2009

OKCon 2009

While I probably do spend longer than is healthy in front of a PC on a typical weekend, I have to admit to a fairly high level of resistance to attending "work-related" events at weekends, especially if travel is involved. My Saturdays are for friends, footy, films, & music, possibly accompanied by beer, ideally in some combination.

But (in the absence of any proper football) I temporarily suspended the SafFFFM rule the weekend before last and attended the Open Knowledge Conference, held at UCL. The programme was a mix of themed presentation sessions and an "Open Spaces" session based on contributions from attendees.

The morning session featured three presentations from people working in the development/aid sector. Mark Charmer talked about AKVO, and its mission to facilitate connections between funders and projects in the area of water and sanitation, and to streamline reporting by projects (through support for submissions of updates by SMS). Vinay Gupta described the use of wiki technology to build Appropedia, a collection of articles on "appropriate technology" and related aid/development issues, including project histories and detailed "how-to"-style information. The third session was a collaboration between Karin Christiansen, on the Publish What You Fund campaign to promote greater access to information about aid, and Simon Parrish on the work of Aidinfo to develop standards for the sharing of such information.

One recurring theme in these presentations was that of valuable information - from records of practical project experience "on the ground" to records of funding by global agencies - being "locked away" from, or at least only partially accessible to, the parties who would most benefit from it. The other fascinating (to me, at least) element was the emphasis on the growing ubiquity of mobile technology: while I'm accustomed to this in the UK, I was still quite taken aback by the claim (I think, by Mark) that in the near future there will be large sections of the world's population who have access to a mobile phone, but not to a toilet.

The main part of the day was dedicated to the "Open Spaces" session of short presentations. Initially, IIRC, these had been programmed as two parallel sessions in which the speakers were allocated 10 minutes each. On the day, the decision was taken to merge them into a single session with (nearly 20, I think?) speakers delivering very short "lightning" talks. We were offered the opportunity to vote on this, I hasten to add, and at the time avoiding missing out on any of the contributions seemed like a Good Idea. But with hindsight, I'm not sure it was the right choice: it led to a situation in which speakers had to deliver their content in less time than they had anticipated (and some adjusted better than others), there was little time for discussion, and the pace and diversity of the contributions (some slightly technical, but mostly focusing on social/cultural aspects) made it rather difficult for me to identify common threads.

The next slot was dedicated to the relationship between Open Data and Linked Data and the Semantic Web, with short, largely non-technical, presentations by Tom Scott of the BBC, Jeni Tennison, and Leigh Dodds of Talis. Maybe it was just because I was familiar with the topic, but it felt to me that this part of the day worked well, and the cohesive theme enabled speakers to build on each other's contributions.

I thought Tom's presentation of the BBC's work on linked data was one of the best I've seen on that topic: he managed to cover a range of technical topics in very accessible terms, all in fifteen minutes. (I see Tom has posted his slides and notes on his weblog.) Jeni described her work with RDFa on the London Gazette. Leigh pursued an aquatic metaphor for RDF - triple as recombinant molecule - and semantic web applications, and also announced the launch of a Talis data hosting scheme which they are calling the Talis Connected Commons, under which public domain datasets of up to 50 million triples can be hosted for free on the Talis Platform. (I noticed this also got an enthusiastic write-up on Read Write Web).

Although I quite enjoyed the linked data talks, it's probably true to say that - Leigh's announcement aside - they didn't really introduce me to anything I didn't know already - but there again, I probably wasn't the primary target audience.

The day ended with a presentation by David Bollier, author of Viral Spiral, on the "sharing economy". Unfortunately, things were over-running slightly at that point, and I only caught the first few minutes before I had to leave for my train home - which was a pity as I think that session probably did consolidate some of the issues related to business models which had been touched on in some of the short talks.

Overall, I suppose I came away feeling the event might have benefited from a slightly tighter focus, maybe building around the content of the two themed sessions. Having said that, I recognise that the call for contributions had been explicitly very "open", and the event did attract a very mixed audience, many probably with quite different expectations from my own! :-)

W3C launches Social Web Incubator Group

The W3C has launched a Social Web Incubator Group, chaired jointly by Dan Appelquist (Vodafone), Dan Brickley (Vrije Universiteit) and Harry Halpin (W3C Fellow from the University of Edinburgh) and I'm very pleased to note that Harry Halpin's contribution to this activity is supported by Eduserv through the Assisting the W3C in opening social networking data project funding that we made available late last year.

The group's mission is to "understand the systems and technologies that permit the description and identification of people, groups, organizations, and user-generated content in extensible and privacy-respecting ways".

March 20, 2009

Unlocking Audio

I spent the first couple of days this week at the British Library in London, attending the Unlocking Audio 2 conference.  I was there primarily to give an invited talk on the second day.

You might notice that I didn't have a great deal to say about audio, other than to note that what strikes me as interesting about the newer ways in which I listen to music online (specifically Blip.fm and Spotify) is that they are both highly social (almost playful) in their approach and that they are very much of the Web (as opposed to just being 'on' the Web).

What do I mean by that last phrase?  Essentially, it's about an attitude.  It's about seeing being mashed as a virtue.  It's about an expectation that your content, URLs and APIs will be picked up by other people and re-used in ways you could never have foreseen.  Or, as Charles Leadbeater put it on the first day of the conference, it's about "being an ingredient".

I went on to talk about the JISC Information Environment (which is surprisingly(?) not that far off its 10th birthday if you count from the initiation of the DNER), using it as an example of digital library thinking more generally and suggesting where I think we have parted company with the mainstream Web (in a generally "not good" way).  I noted that while digital library folks can discuss identifiers forever (if you let them!) we generally don't think a great deal about identity.  And even where we do think about it, the approach is primarily one of, "who are you and what are you allowed to access?", whereas on the social Web identity is at least as much about, "this is me, this is who I know, and this is what I have contributed". 

I think that is a very significant difference - it's a fundamentally different world-view - and it underpins one critical aspect of the difference between, say, Shibboleth and OpenID.  In digital libraries we haven't tended to focus on the social activity that needs to grow around our content and (as I've said in the past) our institutional approach to repositories is a classic example of how this causes 'social networking' issues with our solutions.

I stole a lot of the ideas for this talk, not least Lorcan Dempsey's use of concentration and diffusion.  As an aside... on the first day of the conference, Charles Leadbeater introduced a beach analogy for the 'media' industries, suggesting that in the past the beach was full of a small number of large boulders and that everything had to happen through those.  What the social Web has done is to make the beach into a place where we can all throw our pebbles.  I quite like this analogy.  My one concern is that many of us do our pebble throwing in the context of large, highly concentrated services like Flickr, YouTube, Google and so on.  There are still boulders - just different ones?  Anyway... I ended with Dave White's notions of visitors vs. residents, suggesting that in the cultural heritage sector we have traditionally focused on building services for visitors but that we need to focus more on residents from now on.  I admit that I don't quite know what this means in practice... but it certainly feels to me like the right direction of travel.

I concluded by offering my thoughts on how I would approach something like the JISC IE if I was asked to do so again now.  My gut feeling is that I would try to stay much more mainstream and focus firmly on the basics, by which I mean adopting the principles of linked data (about which there is now a TED talk by Tim Berners-Lee), cool URIs and REST and focusing much more firmly on the social aspects of the environment (OpenID, OAuth, and so on).

Prior to giving my talk I attended a session about iTunesU and how it is being implemented at the University of Oxford.  I confess a strong dislike of iTunes (and iTunesU by implication) and it worries me that so many UK universities are seeing it as an appropriate way forward.  Yes, it has a lot of concentration (and the benefits that come from that) but its diffusion capabilities are very limited (i.e. it's a very closed system), resulting in the need to build parallel Web interfaces to the same content.  That feels very messy to me.  That said, it was an interesting session with more potential for debate than time allowed.  If nothing else, the adoption of systems about which people can get religious serves to get people talking/arguing.

Overall then, I thought it was an interesting conference.  I suspect that my contribution wasn't liked by everyone there - but I hope it added usefully to the debate.  My live-blogging notes from the two days are here and here.

January 12, 2009

Mapping Me

Last Thursday I attended the workshop on digital identity co-ordinated by members of the three new projects funded by Foundation research grants this year (Rhizome, This is Me, and Assisting the W3C in opening social networking data).

Ahead of the event, moved partly by thinking about the day (and by Andy's earlier post) and partly by a post by Botgirl Questi I happened across the other day, I thought it might be interesting to try to sketch out a "mind map" of the principal digital sources where I create (or created) content which contributes in some way to the representation of my "digital identity".

(To be honest, I did this mostly for my own purposes, just so that I could visualise what that landscape looked like, but as my posts here have been somewhat thin on the ground (mainly because I don't feel I've had much of interest to say of late, to be honest - I did half-draft a post on that topic, but it was getting too depressing!), I thought I'd share it here.)


I've included only those sources where I've identified myself by my birth name or a nickname/userid that I frequently associate with it (usually "PeteJ" or "PeteJo" or something similar) - my "work-related" identity, if you like - even if the content isn't always directly related to my work activity, it is associated with the identity under which I perform that activity. In at least some of those sources, I've actually posted very little content, so there may be little more than a minimal "profile" page, but I guess even the presence of that minimal page "says" something about work-me in that it indicates that at some point I had sufficient interest to register for a service. On some other services, my main input has been comments on, or ratings of, or maybe just subscriptions to, the contributions of others, rather than any new "primary content" of my own.

The resulting "map" probably looks fairly complex, but I was mildly surprised that it was relatively limited in extent. And kinda pleased too, because over recent months I have been making some efforts to "prune" back some of the content I've put "out there" over the years - content which has left me slightly uncomfortable about just how much information about myself I have disclosed - and to "take firmer control" of other bits. I've deleted a few accounts (Orkut, LinkedIn) which I wasn't making any real use of but which nevertheless disclosed a fair amount of information, and I've restricted access to content on others (notably by switching to "protected" status on Twitter). (Though, yes, I know, caches like Google's probably have some of it.)

I keep thinking of things I've missed: I've got some accounts with other virtual worlds which I used only once or twice; I've certainly registered on dozens of other "Web 2.0" services, played around for 15 minutes, and forgotten about them by the following day....

January 05, 2009

The future of social networking?

I note that the position papers for the W3C Workshop on the Future of Social Networking are now available. There are 73 in all so there's a lot of new year reading to be done if you are interested.  In the meantime, here is a quick Wordle of the aggregated text (the creation of which wasn't helped by the lack of an RSS feed for the papers and the fact that most have been submitted as PDF... boo!).
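For what it's worth, the aggregation behind a Wordle-style cloud boils down to something like the sketch below: pull the text out of each paper, tokenise, drop stopwords, and count. The stopword list and tokenisation here are my own crude assumptions, not what Wordle actually does (and the PDF-to-text step is left out entirely).

```python
from collections import Counter
import re

# A tiny, illustrative stopword list - a real one would be much longer.
STOPWORDS = frozenset({"the", "and", "of", "a", "to", "in", "for", "on"})

def word_frequencies(texts, stopwords=STOPWORDS):
    """Aggregate a collection of documents into the word counts a
    tag cloud would be drawn from."""
    counts = Counter()
    for text in texts:
        for word in re.findall(r"[a-z']+", text.lower()):
            if word not in stopwords and len(word) > 2:
                counts[word] += 1
    return counts

# Hypothetical paper titles standing in for the extracted full text.
papers = ["Social networking and identity on the web",
          "The future of social networking"]
print(word_frequencies(papers).most_common(3))
```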

Two of the papers have been written by people we are currently funding, one by Shirley Williams, Pat Parslow and Karsten Oster Lundqvist on behalf of the University of Reading and one by Harry Halpin entitled Ten theses on the Future of Social Networking.  Good stuff.


December 18, 2008

The @ crowd

Writing at life under electronic conditions, Benedikt Koehler discusses Networks that matter on Twitter: the @-Crowd, suggesting that there are three kinds of networks at play: your direct network of followers/followees, a wider indirect network of their followers/followees, and your so called '@-crowd', the people you are actively in conversation with using the @andypowe11 mechanism of directed tweets.  He cites a very interesting paper by Bernardo A. Huberman, Daniel M. Romero and Fang Wu called, Social networks that matter: Twitter under the microscope which provides some analysis of this last network and suggests that:

the driver of [Twitter] usage is a sparse and hidden network of connections underlying the "declared" set of friends and followers.

The paper ends with:

In conclusion, even when using a very weak definition of “friend” (i.e. anyone who a user has directed a post to at least twice) we find that Twitter users have a very small number of friends compared to the number of followers and followees they declare. This implies the existence of two different networks: a very dense one made up of followers and followees, and a sparser and simpler network of actual friends. The latter proves to be a more influential network in driving Twitter usage since users with many actual friends tend to post more updates than users with few actual friends. On the other hand, users with many followers or followees post updates more infrequently than those with few followers or followees.

I sense an (unwritten) assumption in the paper that the use of this sparser network somehow has more impact than the wider one. Perhaps I'm being unfair? Speaking personally, I would hesitate before suggesting that people who have more "friends" (using the definition from the paper above) are somehow getting more impact out of their use of Twitter than those with fewer. It's not hard to think of cases where lots of directed posts are used to share complete drivel between people - equally where a one-way feed of undirected tweets can be a powerful alerting mechanism. Nonetheless, it's very interesting to see this kind of analysis taking place.

Other than that, I have two very minor gripes with the paper. Firstly, it defines "friend" in a very particular way (see above) whereas that term has traditionally been used by Twitter to mean 'a person that you follow'.  The paper introduces 'followee' for this which I quite like. (Note: although 'friend' is no longer used in that way in the Twitter Web interface, the word 'friend' still appears in the URL for the list of people that you follow). Secondly, the paper doesn't acknowledge that Twitter can also be used to send private 'direct messages' (DMs), the use of which surely forms part of this sparser network. Clearly, such usage is difficult to measure in an automated way, since it is private and not exposed through the Twitter API.
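As a concrete illustration, the paper's weak definition of "friend" (anyone a user has directed a post to at least twice) can be computed mechanically from a user's tweet stream. The sketch below is my own simplification: it treats only tweets that begin with an @username as "directed", which is narrower than counting every @mention.

```python
from collections import Counter
import re

def actual_friends(tweets, min_mentions=2):
    """Apply the paper's weak 'friend' definition: anyone this user
    has directed a post to at least min_mentions times."""
    counts = Counter()
    for text in tweets:
        # Treat a tweet as 'directed' if it starts with @username.
        m = re.match(r"@(\w+)", text)
        if m:
            counts[m.group(1).lower()] += 1
    return {user for user, n in counts.items() if n >= min_mentions}

# Hypothetical tweet stream for illustration.
tweets = [
    "@alice thanks for the link!",
    "@alice see you at the workshop",
    "@bob interesting paper",
    "just thinking out loud...",
]
print(actual_friends(tweets))  # → {'alice'}
```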

If you are interested in playing with this stuff, Benedikt Koehler's TwitterFriends application lets you see how your network of "friends" (as defined in the paper) shapes up.

December 01, 2008

What do you call a device you can use on the run with one hand?

I installed Ocarina by Smule on my iPhone the other day. Nothing stunning about that I suppose... well, apart from the fact that it's the first time I've ever turned my mobile phone into a social musical instrument! 

But that's the weird thing about the iPhone - it isn't really a phone at all. It's a ... a ... - see, the trouble is, as Stefan Fountain noted at FOWA in London, we haven't got a word for what the iPhone is.

Via @ajcann I note that 100,000 applications have been added to the iPhone App Store in the last 142 days.  That's impressive isn't it? Mine is continually in use (if not by me then by my kids, who love all the games that can be installed) yet in the 3 months or so that I've owned it, I've only used about 6 hours call time - that's about 4 minutes a day (on average). Have I been getting my money's worth? Of course. The real value comes from all the other things I can do with it - not from the fact that it is notionally a 'phone'.

Apple haven't got everything right of course. The choice of O2 as the only UK network provider isn't great (IMHO). The fact that my kids seem to ignore the mute button, turning the volume right down instead, thus causing me to not hear the ring tone every so often (a usability issue?). The somewhat closed nature of the Apple iTunes App Store (meaning that 'jailbreaking' is required for some uses). But on balance, the iPhone gets a lot more right than it gets wrong and I, for one, could never go back.

So, what's the definition of a mobile phone? In the FOWA presentation above, the following definition is suggested:

a device you can use on the run with one hand

I think that definition is slightly broken, since it appears to include the iPod Touch, which doesn't fall into my mental model of a 'phone' (despite the fact that it presumably supports VoIP over wireless). But, to be honest, I can't think of a simple definition of 'mobile phone' that doesn't rule too much out or too much in so maybe the one above is good enough. And perhaps that's the point - convergence is about the blurring of things that used to be separate and as a result, the clear-cut names we used to use no longer apply cleanly.

Well... better get used to it I guess since the situation is almost bound to get worse rather than better - or do I mean better rather than worse!?

Facebook in HE

A quickie... and one that I meant to write a while back actually, in response to a short debate I watched happening on one of the Higher Education Academy mailing lists about the use of Facebook in UK universities.  Unfortunately, as with many of my potential blog posts, it got forgotten at the time.  Then, more recently, I noticed that Brian Kelly had posted on the subject, What is the Evidence Suggesting About Facebook?, leading to several comments and a response by Paul Walk, Why I suppose I ought to become a Daily Mail reader.

The problem with Facebook in HE is that we tend (not always I'll admit, but often enough to be worth noting) to approach it with questions like, "how can we use Facebook in universities to allow us to engage with them?" - where 'us' is the lecturers and 'them' is the students. And this approach tends to degenerate into the kind of, "oh, but Facebook is their space not our space" or, "is it OK for me to have a student as a Facebook 'friend'?" debates that we see so regularly.

If, instead, we approached it with questions like, "how can we use Facebook in universities to facilitate students/prospective students/alumni talking to other students/prospective students/alumni?" - as, for example, Ruth Page does in Facebook Fresher's group: Success story - I think we'd be on firmer ground.

Basically, it's about using Facebook (or any other social network for that matter) to facilitate conversations in spaces that 'we' are not necessarily part of.

November 28, 2008

SWORD Facebook application & "social deposit"

Last week, Stuart Lewis of Aberystwyth University announced the availability of his Facebook repository deposit application, which makes use of the SWORD AtomPub profile. Stuart's post appeared just a day before a post by Les Carr in which he includes a presentation on "leveraging" the value of items once they are in a repository, by providing "feeds" of various flavours and/or supporting the embedding of deposited items in other externally-created items.

Stuart describes the SWORD Facebook application as enabling what he calls "social deposit":

Being able to deposit from within a site such as Facebook would enable what I’m going to call the Social Deposit. What does a social deposit look like? Well, it has the following characteristics:

  • It takes place within a social networking type site such as Facebook.
  • The deposit is performed by the author of a work, not a third party.
  • Once the deposit has taken place, messages and updates are provided stating that the user has performed the deposit.
  • Friends and colleagues of the depositor will see that a deposit has taken place, and can read what has been deposited if they want to.
  • Friends and colleagues of the depositor can comment on the deposit.

So the social deposit takes place within the online social surroundings of a depositor, rather than from within a repository. By doing so, the depositor can leverage the power of their social networks so that their friends and colleagues can be informed about the deposit.
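For the curious, a SWORD 1.x deposit is at heart an AtomPub-style HTTP POST of a packaged item to a repository collection URI. The sketch below builds (but deliberately doesn't send) such a request; the collection URI, credentials and packaging identifier are placeholder assumptions of mine, not details of Stuart's application.

```python
import base64
import urllib.request

def build_sword_deposit(collection_uri, package_bytes, filename,
                        username, password,
                        packaging="http://purl.org/net/sword-types/METSDSpaceSIP"):
    """Build an HTTP POST request for a SWORD 1.x deposit of a
    packaged item to a repository collection URI (not sent here)."""
    req = urllib.request.Request(collection_uri, data=package_bytes,
                                 method="POST")
    req.add_header("Content-Type", "application/zip")
    req.add_header("Content-Disposition", "filename=" + filename)
    # SWORD 1.x conveys the package format via an X-Packaging header;
    # the value above is one commonly cited packaging identifier.
    req.add_header("X-Packaging", packaging)
    token = base64.b64encode(
        ("%s:%s" % (username, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req  # send with urllib.request.urlopen(req)

# Placeholder endpoint for illustration only.
req = build_sword_deposit("http://repo.example.org/sword/collection",
                          b"...zip bytes...", "paper.zip", "user", "pass")
```

The interesting point for the "social deposit" discussion is that nothing in this exchange is social: it is a plain client-to-repository transfer, and all the notification happens around it.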

It occurred to me it would be interesting to compare the approach Stuart has taken in the SWORD Facebook app with the approach taken in "deposit" tools typically used with - highly "social" - "repositories" like Flickr (e.g. the Flickr Uploadr client) or the approach sometimes used with weblogs (e.g. blogging clients like Windows Live Writer).

The actions of posting images to my Flickr collection or posting entries to my weblog are both "deposit" actions to my "repositories". As a result of that "deposit", the availability of my newly deposited resources - my images, my weblog posts - is "notified" - either through some mechanism internal to the target system, or (as Les's presentation illustrates) through approaches based on feeds "out of" the repository - to members of my various "social network(s)":

  • my "internal-to-Flickr" network of Flickr contacts;
  • the network of people who aren't my Flickr contact but subscribe to my personal Flickr feed, or to tag-based or group-based Flickr feeds I add to;
  • the network of people who subscribe to my weblog feed, or to one of my pull-my-stuff-together aggregation feeds.

And so on....

The point I wanted to highlight here is - as Stuart notes above - that the "social" aspect isn't directly associated with the "deposit" action: the Flickr uploader (AFAIK) doesn't interact with my Flickr contact list to ping my contacts; Windows Live Writer doesn't know anything about who out there in the blogosphere has subscribed to my weblog. Using these tools, deposit itself is an "individual" rather than a "social" action, if you like. Rather, the social aspect is supported from the "output"/"publication" features of the repository.

In contrast, if I understand Stuart's description of the Facebook deposit app correctly, the "social" dimension here is based on the context of the "deposit" action. Here, the "deposit" tool - Stuart's Fb app - is "socially aware", in the sense that it, rather than the target repository, is responsible for creating notifications in a feed - and the readership of that feed is shaped by the context of the deposit action rather than by the context of "publication": it's my network of Fb friends who see the notifications, not my network of Flickr contacts.

Though of course it may be that the repository I target using the Fb deposit app also enables all the sort of personal-/tag-/group-based output feed functionality I describe above for the Flickr/weblog cases. And I may well take my personal repository feed and "pipe it in to" a social network service - if I still bothered with Facebook (I don't, but that's another story!), I might be using a Flickr Fb app or a weblog app to add notifications to my Fb news feed! So these scenarios aren't exclusive, by any means.

I'm not sure I have any real conclusions here, tbh, and just to be clear, I certainly don't mean to sound negative about the development. Quite the contrary, it provides a very vivid example of how the different aspects of repository use can straddle different application contexts and how the SWORD protocol can be deployed within those different contexts. I think it also provides an illustration of Paul Walk's point about separating out some of our repository concerns (though I note that Paul's model does see the "source repository" as a provider of feeds).

It's certainly worth exploring the different dimensions of the "sociality" of the two approaches. I guess I'm arguing that (to me) "social deposit" isn't a substitute for the socialness that comes with the sort of "output" features Les describes - but it may well turn out to be a useful complement.

November 07, 2008

Some (more) thoughts on repositories

I attended a meeting of the JISC Repositories and Preservation Advisory Group (RPAG) in London a couple of weeks ago.  Part of my reason for attending was to respond (semi-formally) to the proposals being put forward by Rachel Heery in her update to the original Repositories Roadmap that we jointly authored back in April 2006.

It would be unfair (and inappropriate) for me to share any of the detail in my comments since the update isn't yet public (and I suppose may never be made so).  So other than saying that I think that, generally speaking, the update is a step in the right direction, what I want to do here is rehearse the points I made which are applicable to the repositories landscape as I see it more generally.  To be honest, I only had 5 minutes in which to make my comments in the meeting, so there wasn't a lot of room for detail in any case!

Broadly speaking, I think three points are worth making.  (With the exception of the first, these will come as no surprise to regular readers of this blog.)


There may well be some disagreement about this but it seems to me that the collection of material we are trying to put into institutional repositories of scholarly research publications is a reasonably well understood and measurable corpus.  It strikes me as odd therefore that the metrics we tend to use to measure progress in this space are very general and uninformative.  Numbers of institutions with a repository for example - or numbers of papers with full text.  We set targets for ourselves like, "a high percentage of newly published UK scholarly output [will be] made available on an open access basis" (a direct quote from the original roadmap).  We don't set targets like, "80% of newly published UK peer-reviewed research papers will be made available on an open access basis" - a more useful and concrete objective.

As a result, we have little or no real way of knowing if we are actually making significant progress towards our goals.  We get a vague feel for what is happening but it is difficult to determine if we are really succeeding.

Clearly, I am ignoring learning object repositories and repositories of research data here because those areas are significantly harder, probably impossible, to measure in percentage terms.  In passing, I suggest that the issues around learning object repositories, certainly the softer issues like what motivates people to deposit, are so totally different from those around research repositories that it makes no sense to consider them in the same space anyway.

Even if the total number of published UK peer-reviewed research papers is indeed hard to determine, it seems to me that we ought to be able to reach some kind of suitable agreement about how we would estimate it for the purposes of repository metrics.  Or we could base our measurements on some agreed sub-set of all scholarly output - the peer-reviewed research papers submitted to the current RAE (or forthcoming REF) for example.
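Once a denominator is agreed, the metric itself is trivial to compute; the point is the agreement, not the arithmetic. A sketch with purely illustrative figures:

```python
def oa_coverage(open_access_papers, estimated_total_published):
    """Percentage of the (estimated) published corpus available open access."""
    if estimated_total_published <= 0:
        raise ValueError("need a positive estimate of the total corpus")
    return 100.0 * open_access_papers / estimated_total_published

# Purely illustrative figures: 30,000 OA papers against an agreed estimate
# of 120,000 UK peer-reviewed papers published in the same period.
coverage = oa_coverage(30_000, 120_000)   # 25%, well short of an 80% target
```

The hard part, of course, is everything hidden inside the second argument.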

A glass half empty view of the world says that by giving ourselves concrete objectives we are setting ourselves up for failure.  Maybe... though I prefer the glass half full view that we are setting ourselves up for success.  Whatever... failure isn't really failure - it's just a convenient way of partitioning off those activities that aren't worth pursuing (for whatever reason) so that other things can be focused on more fully.  Without concrete metrics it is much harder to make those kinds of decisions.

The other issue around metrics is that if the goal is open access (which I think it is), as opposed to full repositories (which are just a means to an end) then our metrics should be couched in terms of that goal.  (Note that, for me at least, open access implies both good management and long-term preservation and that repositories are only one way of achieving that).

The bottom-line question is, "what does success in the repository space actually look like?".  My worry is that we are scared of the answers.  Perhaps the real problem here is that 'failure' isn't an option?

Executive summary: our success metrics around research publications should be based on a percentage of the newly published peer-reviewed literature (or some suitable subset thereof) being made available on an open access basis (irrespective of how that is achieved).

Emphasis on individuals

Across the board we are seeing a growing emphasis on the individual, on user-centricity and on personalisation (in its widest sense).  Personal Learning Environments, Personal Research Environments and the suite of 'open stack' standards around OpenID are good examples of this trend.  Yet in the repository space we still tend to focus most on institutional wants and needs.  I've characterised this in the past in terms of us needing to acknowledge and play to the real-world social networks adopted by researchers.  As long as our emphasis remains on the institution we are unlikely to bring much change to individual research practice.

Executive summary: we need to put the needs of individuals before the needs of institutions in terms of how we think about reaching open access nirvana.

Fit with the Web

I've written and spoken a lot about this in the past and don't want to simply rehash old arguments.  That said, I think three things are worth emphasising:


Global discipline-based repositories are more successful at attracting content than institutional repositories.  I can say that with only minimal fear of contradiction because our metrics are so poor - see above :-).  This is no surprise.  It's exactly what I'd expect to see.  Successful services on the Web tend to be globally concentrated (as that term is defined by Lorcan Dempsey) because social networks tend not to follow regional or organisational boundaries any more.

Executive summary: we need to work out how to take advantage of global concentration more fully in the repository space.

Web architecture

Take three guiding documents - the Web Architecture itself, REST, and the principles of linked data.  Apply liberally to the content you have at hand - repository content in our case.  Sit back and relax. 

Executive summary: we need to treat repositories more like Web sites and less like repositories.
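One concrete reading of "treat repositories like Web sites": give each item a single stable URI and serve humans HTML and machines RDF from it via content negotiation. A deliberately naive sketch (the media types and matching logic are illustrative assumptions, not a recipe):

```python
def choose_representation(accept_header,
                          supported=("text/html", "application/rdf+xml")):
    """Pick a representation of an item URI from an HTTP Accept header.

    Naive on purpose: the first supported media type mentioned in the
    Accept header wins, falling back to HTML, the sensible Web default.
    """
    for media_type in supported:
        if media_type in accept_header:
            return media_type
    return "text/html"

# A browser gets HTML, a linked-data client gets RDF -- same URI either way.
browser = choose_representation("text/html,application/xhtml+xml")
ld_client = choose_representation("application/rdf+xml;q=0.9")
```

A production server would parse q-values properly, but the architectural point stands: one resource, one URI, many representations.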

Resource discovery

On the Web, the discovery of textual material is based on full-text indexing and link analysis.  In repositories, it is based on metadata and pre-Web forms of citation.  One approach works, the other doesn't.  (Hint: I no longer believe in metadata as it is currently used in repositories).  Why the difference?  Because repositories of research publications are library-centric and the library world is paper-centric - oh, and there's the minor issue of a few hundred years of inertia to overcome.  That's the only explanation I can give anyway.  (And yes, since you ask... I was part of the recent movement that got us into this mess!). 

Executive summary: we need to 1) make sure that repository content is exposed to mainstream Web search engines in Web-friendly formats and 2) make academic citation more Web-friendly so that people can discover repository content using everyday tools like Google.
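The first point is mostly plumbing: tell crawlers where the item pages live. A sketch of a sitemaps.org-style sitemap built from hypothetical repository item URLs:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Serialise a list of item URLs as a sitemaps.org <urlset> document."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    ET.register_namespace("", ns)  # serialise with a default namespace
    urlset = ET.Element("{%s}urlset" % ns)
    for url in urls:
        entry = ET.SubElement(urlset, "{%s}url" % ns)
        ET.SubElement(entry, "{%s}loc" % ns).text = url
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical repository item pages, one per eprint.
xml = build_sitemap([
    "https://repository.example.ac.uk/eprint/101",
    "https://repository.example.ac.uk/eprint/102",
])
```

Point the search engines at this (plus HTML item pages with the full text linked) and the mainstream discovery machinery does the rest.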

Simple huh?!  No, thought not...

I realise that most of what I say above has been written (by me) on previous occasions in this blog.  I also strongly suspect that variants of this blog entry will continue to appear here for some time to come.

November 06, 2008

Web Development Conference 2008 sponsorship

I'm very pleased to announce that we are sponsoring, and will hopefully be attending, the Web Development Conference being held at the Watershed in Bristol next week:

The Web Developers Conference is an event designed for students of the Web Design degree course at the University of the West of England.

The BSc (Hons) in Web Design is a course intended to give students all the skills they need to build applications on the web. It has everything from building databases to designing user interfaces, from back end programming to carrying out usability testing. The teaching team intend for students leaving the course to be able to join in the Web 2.0 world as anything they want, from designer through to developer.

The conference is the chance for new and current students to meet people from the Industry. Stories are told, tricks shared and maybe even the chance for students to get those all important industry placements.

We agreed to the sponsorship a while back but it kind of got forgotten about (my fault, not theirs) so there's been a bit of a frantic last minute exchange of cheques and so on.  We are sponsoring primarily because it looks like an interesting event run by a local university but obviously, if any students happen to read this and are interested in working for a local educational charity, particularly in the area of Web development, we'd be more than happy to talk to you.

Note that the conference is sold out, so if you haven't got a place, you've missed your chance. Sorry about that!

November 03, 2008

2008 grants - social networking and online identity

We have been very slow in bringing you news of our grant funding for 2008.  Sorry about that.  The delay is basically down to getting all of the projects fully signed off by all parties.  Anyway, enough of the excuses...

...we are very pleased to be supporting three projects this year, representing, in total, over £200,000 of project funding.  The projects, conducted by the University of Edinburgh, King's College London, and the University of Reading, all focus on issues associated with social networking and digital identity.

Assisting the W3C in opening social networking data
This two-year project, undertaken by Harry Halpin at the University of Edinburgh, aims to explore the power and utility of royalty-free standards for extensible open social data. This project will help investigate and generate work proposals for opening social data at the Web's foremost standards body, the World Wide Web Consortium (W3C).
Rhizome: exploring strands of online identity in learning, teaching and research
A fourteen month project, led by Dr Steven Warburton of King's College London. The project will use narrative inquiry and scenario mapping to explore the key technical and social elements that impact on the construction of online identities. The work will build a framework for understanding the tools, literacies, and practices needed to create and manage individuals' digital self-representations.
This is me
An eight month project, led by Shirley Williams of the School of Systems Engineering at the University of Reading, will investigate how individuals can be made more aware of their digital identity and how such identities can be developed and enhanced. The project will produce a set of Web-based resources designed to be of use both within the University of Reading and by the wider UK HE community.

October 14, 2008

Thoughts on FOWA

I spent Thursday and Friday last week at the Future of Web Apps Expo (FOWA) in London, a pretty good event overall in retrospect and certainly one that left me with a lot to think about.  I'm not going to write up any of the individual talks in any kind of detail - videos of all the talks are now available, as is my (lo-fat) live-blogging from the event - but I do want to touch on several thoughts that occurred to me while I was there.

Firstly, the somewhat mundane issue of wireless access at conferences...  I say mundane because one might expect that providing wireless access to conference delegates should have become pretty much routine by now - a bit like making sure that tea and coffee are available?  But that didn't seem to be the case at this event.  My (completely unscientific and non-exhaustive) experience was that everyone with a Mac in the venue had no trouble with the wifi network but that everyone with a PC seemed to have little or no connectivity.  (Actually, that's not quite true, I did find one person with a PC laptop who had no problem using the wifi).  Whatever... my poor little brand new EeePC didn't get on the network for any significant period of time at any point in the two days :-(

So, OK, we all know that Macs are better than PCs in every way but I was amazed at the stark difference that seemed to be in evidence during this particular event.

The lack of wifi connectivity was of particular annoyance to yours truly, since I was hoping to live-blog the whole event.  In the end, I used the mobile interface to Coveritlive via my iPhone over a 3G connection to cover some of the sessions - not an easy thing to do given the soft-keyboard but actually an interesting experiment in what is possible with mobile technology these days.  By day 2 of the conference my typing on the soft-keyboard was getting pretty good - though not always very accurate.

The conference had quite a young and entrepreneurial feel to it - I'm not saying that everyone there was under 30 but there were a lot of aspects to the style of the conference that were in stark contrast to the rather more... err... traditional feel of many 'academic' conferences.  I don't want to argue that age and attitude are necessarily linked (for obvious reasons) but the entrepreneurial thing is particularly interesting I think because it is something that has a non-obvious fit with how things happen in education.  Being an entrepreneur is about taking risks - risks with money more than anything I guess.  I don't quite know how this translates into the academic space but my gut feeling is that it would be worth thinking about.  Note that I'm not thinking about money here - I'm thinking about attitude.  What I suppose I mean is our ability to break out of a conservative approach to things - our ability to overcome the inertia associated with how things have been done in the past.

I realise that there are plenty of startups in the education space - Huddle springs to mind as a good current example of a company that seems to have the potential to cross the education/enterprise divide - my concern is more about what happens inside educational institutions.  A 24 year-old can run the world's biggest social network yet we don't see similar things happening in education... do we?  Calling all 24 year old directors of university computing services...

Is that something we should worry about?  Is it something we should applaud?  Does it matter?  Is it an inevitable consequence of the kinds of institutions we find in education?

Funding by JISC, Eduserv and the like should be about encouraging an entrepreneurial approach to the use of ICT in education but I'm not sure it fully succeeds in doing that.  Project funding is by its nature a largely low risk activity - except at the transition points between funding.  There are exceptions of course - there are people that I would say are definitely educational entrepreneurs (in the attitude sense) but they tend to be the exception rather than the rule overall and even where they exist I think it is very difficult for them to have a significant impact on wider practice.

The entrepreneurial theme came out strongly in several sessions. Tim Bray's keynote for example, my favorite talk of the conference, where he focused on what startups need to do to react to the current economic climate.  And in a somewhat contrived debate about 'work-life balance' where Jason Calacanis argued that "it's ok to be average but not in my company" - ever heard that in the education sector?  I'm not saying that his was the right attitude, and to a large extent he was playing devil's advocate anyway, but these are the kinds of issues that we tend to be pretty shy about even discussing in education.

Unfortunately, the whole entrepreneurial thing brings with it a less positive facet, in that there tends to be a "it's not what you know, but who you know" kind of attitude.  This comes out both face-to-face (people looking over your shoulder for a more interesting person to talk to - yes, I know I'm a boring git, thank you!) and in people's use of social networks.  The people I'd unfollow first on Twitter are those who spend the most time tweeting who they are meeting up with next. Yawn.

Much of FOWA was split into two parallel tracks - a developer track and a business track.  I spent most time in the former.  Overall I was slightly disappointed with this track and found the talks that I went to in the business track slightly better.  It's not that there weren't a lot of good talks in the developer track - just that they didn't seem like good developer talks.  My take was that many of them would have been more appropriate for managers who wanted to get up to speed on the latest technology-related issues and thinking.  It didn't seem to me that real developers (of which I'm not one) would have got much from many of those talks - they were too superficial or something.

Now, clearly, running a developer track aimed at 700-odd delegates is not an easy task - I certainly wouldn't be able to do any better - but more than anything you've got to try and inspire people to go away and learn about and deploy new technology, not try and teach it directly during the conference.  For whatever reason, it didn't feel like there was much really new technological stuff to get inspired about.  This is not the conference organiser's fault - just timing I guess.  The business track on the other hand had plenty to focus on, given the current economic climate.

As you'd expect, there was also a lot about the cloud over the two days.  Most of it positive... but interestingly (to me, since it was the first time I'd heard something like this) there was an impassioned plea from the floor (during the joint important bits of cloud computing slot by Jeff Barr and Tony Lucas) for consumers of cloud computing to band together in order to put pressure on suppliers for better terms and conditions, prices, and the like.

Overall then... FOWA was a different kind of event to those I normally attend and to be honest it was a very last-minute decision to go at all but I did so because there were some interesting looking speakers that I wanted to see.  It wasn't a total success (hey, what is!?) but on balance I'm really glad I went and I got a lot out of it.

Two final mini-thoughts...

Firstly, virtual economies came up a couple of times.  Once in the Techcrunch Pitch at the end of the first day, where one of the panel (sorry, I forget who) suggested that virtual economies would increasingly replace subscriptions as the way services are supported.  I think he was referring to services outside the virtual world space where these kinds of economies are regularly found - Second Life being the best known example of a virtual world economy - though I must confess that I don't really understand how it might work in other contexts.  Then again in Tim Bray's talk where he noted the sales of iPhone applications at very low unit costs (e.g. 59p a time) - a model that will become increasingly sustainable and profitable because of the growing size of the mobile market.  (I appreciate that these two aren't quite the same - but think they are close enough to be of passing interest).

Secondly, I had my first chance to play on a Microsoft Surface - a kind of table-sized iPhone multi-user touch interface.  These things are beautiful to watch and interact with, and the ability to almost literally touch digital content is amazing, with obvious possibilities in the education and cultural sectors, as well as elsewhere.  Costs are prohibitive at the moment of course - but that will no doubt change.  I can't wait!

And finally... to that Mark Zuckerberg interview at the end of day 2.  I really enjoyed it actually.  Despite being well rehearsed and choreographed I thought he came across very well.  He certainly made all the right kinds of noises about making Facebook more open though whether it is believable or not remains to be seen!

It's easy to knock successful people - particularly ones so young.  But at the end of the day I suspect that many of us simply wish we could achieve half as much!?

October 03, 2008

eFoundations LiveWire

eFoundations LiveWire will get its first proper outing later today (wireless network permitting) with a live-blog from the Future of Technology in Education (FOTE) event at Imperial College in London.

LiveWire is a slightly different kind of blog, more like a container for a collection of live-blogs really, and you may have to bear with us while we work out how best to squeeze the live-blog format into the new container in the most effective way. There will probably be issues with date-stamps, URLs and so on.

We are hosting it on Typepad - a choice made largely for consistency with how we host this blog rather than because it is necessarily the best way of organising things - and we'll initially use CoveritLive as the live-blogging engine. We've also pre-populated it with a number of older live-blogs from the last 6 months or so.

Feel free to drop by every so often. Keep an eye on the LiveWire RSS feed if you are interested - we'll try and announce new live-blogs about a week in advance of any meeting we are covering.

September 30, 2008

Open Science

Via Richard Akerman on Science Library Pad I note that a presentation made to a British Library Board awayday (on 23rd Sept), The Future of Research (Science and Technology), by Carole Goble is now available on Slideshare:

The presentation looks at the way in which scientific and technology-related research is changing, particularly thru the use of the Web to support open, data-driven research - essentially enabling a more immediate, transparent and repeatable approach to science.

The ideas around open science are interesting.  Coincidentally, a few Eduserv bods met with Cameron Neylon yesterday and he talked us thru some of the work going on around blog-driven open labbooks and the like.  Good stuff.  Whatever one thinks about the success or otherwise of institutional repositories as an agent of change in scholarly communication there seems little doubt that the 'open' movement is where things are headed because it is such a strong enabler of collaboration and communication.

Slide 24 of the presentation above introduces the notion that open "methods are scientific commodities".  Obvious really, but something I hadn't really thought about.  I note that there seem to be some potential overlaps here with the approaches to sharing pedagogy between lecturers/teachers enabled by standards such as Learning Design - "pedagogies as learning commodities" perhaps? - though I remain somewhat worried about how complex these kinds of things can get in terms of mark-up languages.

The presentation ends with some thoughts about the impact that this new user-centric (scientist-centric) world of personal research environments has on libraries:

  • We don’t come to the library, it comes to us.
  • We don’t use just one library or one source.
  • We don’t use just one tool!
  • Library services embedded in our toolkits, workbenches, browsers, authoring tools.

I find the closing scenario (slide 67) somewhat contrived:

Prior to leaving home Paul, a Manchester graduate student, syncs his iPhone with the latest papers, delivered overnight by the library via a news syndication feed. On the bus he reviews the stream, selecting a paper close to his interest in HIV-1 proteases. The data shows apparent anomalies with his own work, and the method, an automated script, looks suspect. Being on-line he notices that a colleague in Madrid has also discovered the same paper through a blog discussion and they Instant Message, annotating the results together. By the time the bus stops he has recomputed the results, proven the anomaly, made a rebuttal in the form of a pubcast to the Journal Editor, sent it to the journal and annotated the article with a comment and the pubcast. [Based on an original idea by Phil Bourne]

If nothing else, it is missing any reference to Twitter (see the MarsPhoenix Twitter feed for example) and Second Life! :-).  That said, there is no doubt that the times they are a'changing.

My advice?  You'd better start swimming or you'll sink like a stone :-)

September 26, 2008

Losing it

I spent much of yesterday in what felt like a time warp - sorry, I can't think of a nicer way of putting it.

I was at the JISC Services Skills event, Illuminating Event Management, a day that was intended to "explore all aspects of Event Management, from traditional 'Dressing a Stand' through to new and novel methods such as using web 2.0 to enhance your event".  Unfortunately, on the day, the event felt far more "traditional" than "novel" - since when did a 'skills' day involve listening to presentations that wouldn't have been out of place 10 years ago?

I'm not being critical of the organisers here - on paper they looked to have pulled together an interesting set of sessions covering event management, getting the most from your conference stand, the use of online conferencing tools, the impact of Web 2.0 and Second Life and so on.  No... it was just the way the day panned out I think, in part because the scheduled speaker on Web 2.0 (Matt Jukes) was unable to attend.  As a result, the day lacked some of the balance that it might otherwise have had.

You can get a feel for the day by reading my live-blog for the event on eFoundations LiveWire - but note that I was pretty despondent by the end and not typing much :-(  Look, I know it's important to label the vegetarian options correctly at lunchtime - 't was ever thus - and I accept that we don't always do it successfully at our Eduserv events (despite having a vegetarian on the team) but did we really need that level of information from a 'skills' day?  JISC is supposed to be about innovation... right?

Where was the stuff about the amplified conference?  About using tags successfully?  About streaming options?  About Flickr and Crowdvine and blogging and live-blogging and Slideshare and ... oh, you get the picture.  I'd expect these things to be at the forefront of every event manager's thinking these days?  In our sector at least.  This stuff isn't that cutting edge after all... look at this paper by Brian Kelly et al. from 2005.

Instead, the closest we got to the Web during the first presentation were some URLs for venue searches (very useful BTW) and a suggestion that you need to get all your presenters to sign a bit of paper saying they are happy for you to put their slides on the Web (as PDF - OMG!).  I was desperate to do a James Clay - leaping up with my iPhone streaming live to qik.com to ask the speaker if she'd like me to ask her to sign a bit of paper.  This stuff is out there - get used to it.  In many cases, it's not even happening over our networks anymore.

Grace Porter of the JISC was up second.  She spoke about her event manager's toolkit - essentially a wiki (to which people in the community are invited to contribute).  This was more like it!  Good stuff. I've always thought that there was space for a social network of some kind for event managers - sharing reviews about venues, information about streaming providers, sample budget templates and the like.  This sounds spot on to me and I'll certainly try and get the guys here involved.  Grace also talked about making events greener, again a useful and timely contribution.

Then there was a talk about getting the most out of your conference exhibition stand.  My innovative side wondered if we'd hear something about using an ARG to get people to your stand.  Maybe something about Moo cards at the very least.  Alas, no - just advice about dress codes, setting 'new contact' targets for staff on the stand and remembering to shower before turning up!  Hmmm...

Accessibility seemed to feature very highly in the day - I'm not quite sure why?  Not that I have anything against accessibility you understand.  But two presentations, one about 'accessible email'  - surely that was over the top (even just as a way to demonstrate some remote presentation software)?

Then in the afternoon we had presentations about using online conferencing systems - particularly focussing on Elluminate and Wimba.  This was much more on target (for me at least) and it was interesting to see the tools in action.

Is it just me that hates the use of Java in systems like this?  I know these tools are now the accepted norm but I find Java applications pretty much unbearable!  I tried to construct a question around this in terms of accessibility but all I got back was assurance that they were fully accessible (whatever that means).  I didn't make myself clear enough.  Accessibility is about inclusion - it's a social thing more than a technical thing.  Java applications aren't inclusive because they're bloody horrible.  I guess it's just a personal thing...

So what else did I learn?

That Networkshop attendees don't like people typing on their laptops while they are listening to presentations - at least not according to the evaluation forms.  Hmmm... all that proves is that luddites are at least as loud on evaluation forms as evangelists.  The reality is probably somewhere in the middle?  And if the loudness of typing really is a problem, how about putting all your mains sockets in one area of the auditorium, thus naturally pulling all the live-bloggers together in one place and letting everyone else sleep peacefully.

Oh... and that delegates to virtual conferences can sometimes be stupid enough to want to tell you their dietary requirements! Lol.

So, there was some stuff I found useful and some stuff I didn't and for some reason I allowed the latter to get the better of me.  The straw that broke the camel's back (for me) was a question from the audience about whether the DPA allows JISC services to keep lists of email addresses to which spam about future events can be emailed.  I kinda lost it at that point... pointing out that spamming people by email might not be the best approach to sharing information about events, even if it turns out to be legal.

My comments were misplaced and I probably went too far.  Everyone uses email and there are target audiences for whom it is the only option.  In my defence, I'd say that my interjection did at least cause a nice bit of discussion.  When I started with, "I probably live on a different planet to everyone else, but ..." about 80% of the room nodded cheerfully!  And when the next questioner referred to me as "passionate", everyone in the room knew that what he really meant was, "why did you just completely lose it, you *@#%ing idiot"! :-)

On balance and after some reflection, I think it was a useful day for me.  It's good to be reminded that we don't all live in a world where blogging and live-blogging and Twitter and Slideshare and the rest are the norm - in fact, for many people, they are not even on the horizon.  This is a shame... and part of the JISC's role is to encourage people to think about these things.  I'm absolutely sure they will continue to do so.  But I guess they also have to be mindful of where people actually are.

Oh, and I nearly forgot...  I was at the event to give a talk about Second Life and how it can be used for events.  I was up last.  What can I tell you?  Getting wound up and pissing off the majority of the audience just before your own presentation probably doesn't feature in most 'presentation skills' good-practice guides but I think I got away with it.  I did the whole session in-world, with a virtual audience as well as the real audience.

I'll blog the details of my session separately, probably over on ArtsPlace SL, but suffice to say that this is a much more stressful way of giving a presentation than usual, since you have two sets of people and the technology to worry about.  In many ways, it is a whole new way of giving a presentation - one that I think will grow in popularity and one that I hope I'm getting a bit better at each time I do it (but I'll have to let the two audiences be the judge of that).

If I offended anyone yesterday I apologise - I think it's better to be honest and upfront about stuff even if it can be painful at times.  I also know that I'm at one end of a spectrum and other people are, rightly, elsewhere.  If you want to respond to this post, positively or negatively, please do so - and I'm happy to be called an idiot, because I know I act like one some of the time.  Yesterday being a case in point.

September 18, 2008

Worlds apart together

Sometimes things just seem to come together in odd ways!

Take this afternoon for example...

On the one hand, the jisc-repositories mailing list came briefly to life with a discussion about the legality of storing images of people without having explicitly gained their permission.  A variety of viewpoints came forth, both for and against, which I would broadly categorise (very unfairly!) as common sense vs. legal sense.

Meanwhile, at almost exactly the same time in another corner of the universe, James Clay was waving his mobile phone/video camera around indiscriminately during question time at the MoLeNET conference, broadcasting all and sundry live to qik.com and challenging (in quite an "in your face" way) the assembled panel to comment on the impact of mobile technology on the delivery of learning in FE. 

The sound isn't brilliant throughout, but it's worth watching.

I don't know what point I'm making here other than to note the obvious - that nothing is straight-forward and that the 'net continues to change, and change us, in quite fundamental ways.

Residents and visitors

My dislike of the terms 'Google generation', 'digital native' and 'digital immigrant' is on record so I was interested to see (via Twitter) Dave White, writing at TALL Blog, proposing an alternative to the latter pair, Not ‘Natives’ & ‘Immigrants’ but ‘Visitors’ & ‘Residents’.

I like the notion of 'residents' and 'visitors' much better:

The resident is an individual who lives a percentage of their life online. The web supports the projection of their identity and facilitates relationships. These are people who have a persona online which they regularly maintain.


The Visitor is an individual who uses the web as a tool in an organised manner whenever the need arises. They may book a holiday or research a specific subject. They may choose to use a voice chat tool if they have friends or family abroad. Often the Visitor puts aside a specific time to go online rather than sitting down at a screen to maintain their presence at any point during the day.

It seems to me that this is a much better characterisation of what is going on than the somewhat pejorative, often ageist, use that is made of 'immigrant' and 'native'.  What distinguishes people's use of the Web (and technology more generally) is their attitude, not their age demographic.

September 17, 2008

Thoughts on ALT-C 2008

A few brief reflections on ALT-C 2008, which took place last week.

Overall, I thought it was a good event.  Hot water in my halls of residence rooms would have been an added bonus but that's a whole other story that I won't bother you with here.

I particularly enjoyed the various F-ALT sessions (the unofficial ALT-C Fringe), which were much better than I expected.  Actually, I don't know why I say that, since I didn't really know what to expect, but whatever... it seemed to me that those sessions were the main place in the conference where there was any real debate (at least from what I saw).  Good stuff and well done to the F-ALT organisers.  I hope we see better engagement between the fringe and the main conference next year because this is something that has the potential to bring real value to all conference delegates.

I also enjoyed the conference keynotes, though I think all three were somewhat guilty of not sufficiently tailoring their material to the target audience and conference themes.  I also suspect that my willingness to just sit back and accept the keynotes at face value, particularly the one by Itiel Dror, shows what little depth of knowledge I have in the 'learning' space - I know there were people in the audience who wanted to challenge his 'cognitive psychologist' take on learning as we understand it.

I live-blogged all three, as well as some of the other sessions I attended:

I should say that I live-blog primarily as a way of keeping my own notes of the sessions I attend - it's largely a personal thing.  But it's nice when I get a few followers watching my live note taking, especially when they chip in with useful comments and questions that I can pass on to the speakers, as happened particularly well with the "identity theft in VLEs" session.

I should also mention the ALT-C 2008 social network which was delivered using Crowdvine and which was, by all accounts, very successful.  Having been involved with a few different approaches to this kind of thing, I think Crowdvine offers a range of functionality that is hard to beat.  At the time of writing, over 440 of the conference's 500+ delegates had signed up to Crowdvine!  This is a very big proportion, certainly in my experience.  But it's not just about the number of sign-ups... it's the fact that Crowdvine was actively used to manage people's schedules, engage in debates (before, during and after the conference) and make contacts that is important.  I think it would be really interesting to do some post-conference analysis (both quantitative and qualitative) about how Crowdvine was really used - not that I'm offering to do it you understand.  The findings would be interesting when thinking about future events.

The conference dinner was also a triumph... it was an inspired choice to ask local FE students to both cater for us and serve the meal, and in my opinion it resulted in by far the best conference meal I've had for a long time.  Not that the conference meal makes or breaks a conference - but it's a nice bonus when things work out well :-).  Thinking about it now, it seems to me that more academic/education conferences should take this kind of approach - certainly if this particular meal was anything to go by - not just in terms of the meal, but also for other aspects of the event.  How about asking media students to use a variety of new media to make their own record of a conference, for example?  These are win-win situations it seems to me.

Finally, the slides from my sponsor's session are now available on Slideshare:

As I mentioned previously, the point of the talk was to think out loud about the way in which the availability of notionally low-cost or free Web 2.0 services (services in the cloud) impacts on our thinking about service delivery, both within institutions and in community-based service providers such as Eduserv.  What is it that we (institutions and service providers 'within' the community) can offer that external providers can't (sustainability, commitment to preservation of resources, adherence to UK law, and so on)?  What do they offer that we don't, or that we find it difficult to offer?  I'm thinking particularly of the user-experience here! :-) How do we make our service offerings compelling in an environment where 'free' is also 'easy'?

In the event, I spent most time talking about Eduserv - which is not necessarily a bad thing since I don't think we are a well understood organisation - and there was some discussion at the end which was helpful (to me at least).  But I'm not sure that I really got to the nub of the issue.

This is a theme that I would certainly like to return to.  The Future of Technology in Education (FOTE2008) event being held in London on October 3rd will be one opportunity.  It's now sold out but I'll live-blog if at all possible (i.e. wireless network permitting) - see you there.

September 02, 2008

ALT-C, Crowdvine and (social) tagging

The Crowdvine social network for next week's ALT-C Conference is now available and delegates are signing up apace.

One of the interesting things about Crowdvine is its use of social tagging (solicited through a conference-specific set of profile questions) to show delegates' various areas of interest, expertise, etc.  The idea is to help people get in touch with each other and, like any tagging system, it works as well as the tags it is built on.

For a community like ALT-C, the approach to tagging, and the resulting tags, makes for quite an interesting case study.  Here's a couple of examples...

1) '(e-)learning' - As a human reader, I understand where this tag is coming from.  It's trying to tell me that the tagger is interested in both learning and e-learning without needing to create two tags.  Brilliant... if saving bits was the point of the exercise! :-)  Unfortunately, it completely fails as a tag because clicking on it shows that no-one else is using it - everyone else uses one or more of 'learning', 'e-learning' and 'elearning'.  Which brings me nicely to my second example...

2) 'elearning' vs. 'e-learning' - Both are in use.  Clicking on the tags (at the time of writing) shows 18 people interested in 'e-learning' and 9 people in 'elearning' (there may be some cross-over).  I'll go out on a limb and suggest that all these people are actually interested in the same thing!  One is therefore tempted to ask why the 9 people chose to use the less popular tag?  Actually, I can guess the answer so please don't tell me - whilst I accept that such action is completely understandable, it is also non-optimal.

There are probably other examples.

The point is that social tagging is a social activity, so you have to look at what other people are doing to get the most out of it - not just when you first assign your tags, but subsequently as the community grows. 

Hyphens may well offend your tagging sensibilities but if that's what most other people are choosing, it pays to go with the crowd.
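That "go with the crowd" heuristic is easy enough to automate. Here's a minimal sketch of what a tagging system might do to nudge people towards the majority spelling - the normalisation rule (lowercase, drop hyphens and spaces) and the tag counts are purely illustrative assumptions, not how Crowdvine actually works:

```python
from collections import Counter

def canonicalise(tag: str) -> str:
    """Normalise a tag for comparison: lowercase, drop hyphens and spaces."""
    return tag.lower().replace("-", "").replace(" ", "")

def suggest_tag(new_tag: str, existing_tags: list[str]) -> str:
    """Suggest the most popular existing spelling of an equivalent tag.

    Falls back to the new tag itself if no one has used a variant yet.
    """
    counts = Counter(existing_tags)
    variants = [t for t in counts if canonicalise(t) == canonicalise(new_tag)]
    if not variants:
        return new_tag
    # Prefer the spelling the most people already use.
    return max(variants, key=lambda t: counts[t])

# Illustrative data: 18 people tagged 'e-learning', 9 tagged 'elearning'.
community = ["e-learning"] * 18 + ["elearning"] * 9 + ["learning"] * 5
print(suggest_tag("eLearning", community))  # prints "e-learning"
```

Of course, a human can always override the suggestion - the point is simply to make the popular variant visible at the moment of tagging, rather than leaving each tagger to guess.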

August 16, 2008

Social media and the emerging technology hype curve

I've noticed two behavioural changes in myself over the last while...

Firstly, I'm trying to do less work at home outside of normal office hours.  Yes, this blog post indicates I'm not being totally successful at this - written on a Saturday morning as it is - but I'm not intending to be totally dogmatic about it, it's just a general trend.  Me, I quite like spreading my working day over a large proportion of the available 24 hours and I tend to find early mornings and late evenings both very constructive times to work, but my family don't like it much and I have to take that into consideration.

Secondly, I find I'm reading much more based on links that turn up in my Twitter feed than I do based on explicitly seeking stuff out using Bloglines (my preferred RSS reader).  This isn't necessarily a good thing, in fact I'm pretty sure it isn't a good thing - I'm just reporting what I find myself to be doing on the ground, so to speak.  It isn't a good thing because although I like my Twitter environment, I don't think it is particularly representative of the whole working social environment in which I want to be positioned.

Anyway... via @DIHarrison I discovered Study: Fastest Growing US Companies Rapidly Adopting Social Media on ReadWriteWeb which gives yet more evidence of our changing attitudes and habits around social media and the Web.

What does this mean? It means that when you tell people you write, read or listen to blogs, wikis, podcasts, social networks and online video - if they give you a funny look, it is now officially them that's a freak, not you. Are these tools really as useful as so many people appear to believe they are? That's another question, but at least we're getting a healthy number of people and businesses trying them out.

It ends with Gartner's hype curve for emerging technology (July 2008), on which I was surprised to see that they'd positioned 'Public Virtual Worlds' and 'Web 2.0' at more or less the same point on the curve, whereas I would have expected to see the former well behind the latter.  They also position 'SOA' as climbing out of the trough of disillusionment, which is not a view that I happen to share.

While I'm (just about) on the subject of virtual worlds, there seems to have been a recent surge in the breadth and depth of available virtual worlds - or, more likely, that breadth and depth seems to have been made much more visible of late - particularly as evidenced by this diagram and this video (both via Stephen Downes).  In my spare time I've vaguely started work on a project called MUVEable.com, which is intended to bring together material from various virtual world offerings, but I strongly suspect that I don't have the energy to do it justice, particularly in light of the breadth noted here and the first point above.


