Many of you will know that Brian Kelly, a colleague of mine for many years at UKOLN, was recently named IWR Information Professional of the Year (or IPOTY as one of his current colleagues called it). Many congratulations to Brian - his ability to forge and sustain communities around Web policy issues is second to none and the award is well deserved.
Here's what I wrote about Brian, way back in Feb '95, after seeing him present for the first time:
An intro to the WWW - Brian Kelly, Leeds
A pretty standard intro to the Web, probably wasted on most of those there. Not too surprisingly, the only live demo attempted on the day, showing the 'Frog Dissection Kit', failed!
By the time Brian joined UKOLN in '96 I'd pretty much forgotten about sending that particular message. Unfortunately, he found it in the Web archives of the mailing list (still preserved today thanks to the Wayback Machine) and has never let me forget it since! :-)
Final thought... in his predictions for the future, presented at that meeting, I note that Brian suggested that "MOOs and MUDS-For support of distributed teaching" were "likely to become important". Hey, not a bad prediction for a future IPOTY, given current levels of interest in the use of virtual worlds in education.
Lorcan Dempsey mentioned to me in passing that the Urban Dictionary word of the day on December 16th was Facebook limbo (OK, I know that's technically two words!). Facebook limbo is:
the electronic space between accepting and rejecting a facebook friendship.
'Friending limbo' (I've just made that up) is a condition affecting any social network of course, not just Facebook, and I guess we all have our own rules of thumb about when to accept and when to reject. Mine are roughly as follows:
if I know the person asking to be my friend then I accept - hey, I'm easy!
if I don't know the person and we have no friends in common, then I reject - I'm easy but I do have some standards!
if I don't know the person and we have at least one friend in common, then I put them in limbo for up to one month then think about it again.
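Tongue firmly in cheek, the rules of thumb above amount to a tiny decision procedure. A sketch in Python (the function name and the one-month limbo window are just my own framing of the rules):

```python
def triage_friend_request(known, mutual_friends, days_in_limbo=0):
    """Toy decision procedure for a friend request.

    known          -- do I know the requester at all?
    mutual_friends -- number of friends we have in common
    days_in_limbo  -- how long the request has been pending
    """
    if known:
        return "accept"   # I'm easy!
    if mutual_friends == 0:
        return "reject"   # ...but I do have some standards
    if days_in_limbo < 30:
        return "limbo"    # think about it again later
    # In practice, limbo mostly decays into the reject pile
    return "reject"
```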
Now, if I'm honest, people that go into limbo rarely go anywhere else except into the reject pile (a bit like those emails that you don't deal with straight away and that subsequently disappear into your inbox, never to be seen again). As time passes I just feel more and more guilty about leaving them in limbo. Furthermore, the notion of 'knowing' someone is less than biblical - having exchanged one email, Facebook message or line of chat in Second Life is sufficient - and the rule about 'at least one friend in common' is treated somewhat fuzzily at times (like it may depend on who the friend in common is - some friends carry more weight than others). But other than that the rules above more or less capture my typical behaviour.
To a certain extent, I'm aware that other people will have a similar set of rules when they consider friend requests from me. I certainly don't always expect to get accepted.
Seeing as it's nearly Christmas and this is a somewhat lighthearted post, I'll confess my worst evar reason for asking someone to be a Facebook friend, "You have amazing eyes and I want them to appear on my profile". Gad, I'm such a smooth talker (though I did mean it). It worked as well! :-)
In his most recent Alertbox, Jakob Nielsen suggests that mainstream Web sites need to be cautious about adopting Web 2.0 features and that they are, by and large, better off focusing their attention on getting the Web 1.0 basics right.
I don't disagree, though the overall emphasis of the article seems to be on those sites that are doing the mashing, whereas I would have liked to have seen some acknowledgment of the benefits of making sure that your own content can be mashed by others.
Nielsen suggests four defining elements of Web 2.0:
"Rich" Internet Applications (RIA)
Community features, social networks, and user-generated content
Mashups (using other sites' services as a development platform)
Advertising as the main or only business model
In recent talks I've tended to use Sarah Robbins' characterisation of Web 2.0, with four slightly different bullets:
Prosumer (i.e. user as both consumer and producer)
Remote applications (i.e. accessed primarily thru the browser)
Again, the use of 'mashups' in the first list appears to highlight the mashing of other people's content and services whereas the use of 'APIs' in the second seems to focus more on being mashed by others. I'm not suggesting that one is right and the other is wrong - just noting the difference in emphasis. Both are important.
What both lists tend to obscure is that the most important and simplest thing we can all do to make our content more Web 2.0-friendly is to expose cool URIs and appropriate RSS feeds.
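To make that concrete, here's a minimal sketch of what "exposing an RSS feed" amounts to, built with Python's standard XML library (all the URLs are invented for illustration):

```python
import xml.etree.ElementTree as ET

# Build a minimal RSS 2.0 feed; the example.org URLs are made up.
rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example blog"
ET.SubElement(channel, "link").text = "http://example.org/blog/"
ET.SubElement(channel, "description").text = "Latest posts"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "A first post"
# A 'cool' URI: stable, readable, free of session IDs and implementation cruft
ET.SubElement(item, "link").text = "http://example.org/blog/2007/12/a-first-post"

feed_xml = ET.tostring(rss, encoding="unicode")
```

Anything this simple can be consumed by every feed reader and aggregator out there, which is rather the point.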
On Tuesday, Herbert Van de Sompel and Carl Lagoze announced the availability of "alpha" versions of the specifications developed by the OAI Object Reuse & Exchange (ORE) project Technical Committee. The set of documents is:
They describe a set of conventions for representing the relationships between what ORE calls an "Aggregation" (the "thing which has parts", for which I think we've gone through several names over the course of the year!) and its constituent resources, in the form of a "Resource Map" for the Aggregation, using the graph model of the Resource Description Framework (RDF). The documents include a specification for a serialisation of the ORE Resource Map using the Atom Syndication Format.
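Simplifying considerably, the heart of the model is a handful of RDF statements connecting a Resource Map to the Aggregation it describes, and the Aggregation to its parts. A sketch using plain Python tuples as stand-ins for triples (the example.org URIs are invented; ore:describes and ore:aggregates are terms from the ORE vocabulary, though a conformant Resource Map requires more than is shown here):

```python
ORE = "http://www.openarchives.org/ore/terms/"

rem = "http://example.org/rem/atom"        # the Resource Map itself
aggr = "http://example.org/aggregation/1"  # the Aggregation it describes

# (subject, predicate, object) triples a Resource Map might assert
triples = [
    (rem,  ORE + "describes",  aggr),
    (aggr, ORE + "aggregates", "http://example.org/article.pdf"),
    (aggr, ORE + "aggregates", "http://example.org/dataset.csv"),
]

# Everything the Aggregation aggregates - its constituent resources
parts = [o for s, p, o in triples if s == aggr and p == ORE + "aggregates"]
```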
In the middle of November, I joined the small "editorial group" that was working on the content of the documents, and I contributed mainly to the Abstract Data Model document. I admit that there are some parts of the documents that I'm still not 100% happy with (yeah, yeah, I know, when am I ever 100% happy? ;-) ) and that I think need some more work. But I also think the TC has made considerable progress in clarifying (and simplifying) many aspects that had been the topic of much debate, Herbert and Carl have done a sterling job in co-ordinating and channeling that effort, and I am very pleased that we've reached the stage of having a set of documents we feel more or less content to put out there for comments. (Of course, they come with the usual caveats that, at this stage, the content is still liable to change significantly.)
I blogged over on ArtsPlace SL that having an OPML file of all the nominees in this year's Edublog Awards would have made it easier to get an overall picture of what the different blogs were about. My original thinking was that if the OPML file had been available before the awards (I've since created one here) it would have made it more or less a single click to subscribe to all the blogs using your favorite RSS feed reader (Bloglines in my case).
Having announced the availability of my OPML file on Twitter, Tony Hirst over at OUseful got in touch to query why downloading it was so difficult. (My hosting service, Dreamhost, was going thru a bit of a bad patch as it happens :-( but I think it is on the mend now). Digging around a bit I discovered that his natty OPMLDashboard could be used to parse my OPML file, displaying the first 5 entries in each of the blogs up for an award. How cool is that?
This is a perfect example of why exposing stuff in standard formats is so powerful. You do something and someone else can build on top of it using their existing tools and services with little or no effort. Nice.
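The essence of what a tool like a feed dashboard does with an OPML file really is this small. A stdlib sketch (the OPML fragment and feed URLs are invented):

```python
import xml.etree.ElementTree as ET

# A two-entry OPML fragment of the kind a feed reader exports
opml = """<opml version="1.1">
  <body>
    <outline text="Blog A" type="rss" xmlUrl="http://example.org/a/feed.xml"/>
    <outline text="Blog B" type="rss" xmlUrl="http://example.org/b/feed.xml"/>
  </body>
</opml>"""

root = ET.fromstring(opml)
# Walk the outline and collect the xmlUrl of each feed entry; from here a
# tool can subscribe to, or fetch and display, every blog in the list.
feeds = [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]
```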
I attended the first day of a two-day CRIG Unconference yesterday. What's CRIG? What's an unconference? Well, CRIG is the Common Repository Interfaces Group, a JISC initiative to develop the community's thinking around repository technology and, in particular, repository APIs. And an unconference is essentially a conference without a pre-determined agenda - delegates develop one dynamically as the conference proceeds.
I came away from the day with both positive and negative thoughts - largely positive about the day itself, largely negative about the wider context within which the day took place. (Remember that I only attended the first day, so my views may be premature).
Overall, I felt that the unconference aspects of the day worked pretty well, and it'll be interesting to see how things progress today. Certainly, as a community-building and brainstorming forum I think the approach was very successful and well run.
That said, I have two minor comments... firstly, the day started with a presentation about SWORD - an attempt to use the Atom Publishing Protocol to define a 'deposit' API for repositories. Not that there was anything wrong with the presentation itself (thanks Julie) but it just seemed out of place to me to start an unconference with a scheduled presentation about one particular bit of technology. Isn't the whole point that the delegates themselves should have driven the day towards that presentation - rather than having it as a kick-off? Similarly, CRIG had developed a series of podcasts and associated mindmaps prior to the event and these were used as the initial focus of discussion. Again, this material was a useful resource, but by the end of the day I wondered if its use to frame our brainstorming had steered us in particular directions from the outset?
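For those unfamiliar with SWORD, a deposit is, at heart, an Atom Publishing Protocol POST of an Atom entry (or a packaged file) to a collection URI. A rough sketch of the shape of the request, using Python's standard library (the repository URI and metadata are invented, and real SWORD servers expect additional protocol-specific headers):

```python
import urllib.request

# A minimal Atom entry to deposit (metadata invented for illustration)
entry = """<?xml version="1.0"?>
<entry xmlns="http://www.w3.org/2005/Atom">
  <title>My eprint</title>
</entry>"""

# Build the AtomPub-style POST to a hypothetical collection URI
req = urllib.request.Request(
    "http://repository.example.org/sword/collection",
    data=entry.encode("utf-8"),
    headers={"Content-Type": "application/atom+xml;type=entry"},
    method="POST",
)
# urllib.request.urlopen(req) would perform the deposit; the server
# responds with the created entry and its location.
```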
Regular readers will know that I have a personal problem with the community's overarching emphasis on the 'R' word (as opposed to simply thinking about surfacing content on the Web) and even more so with the 'IR' words. Look back at our previous blog entries on this subject if you want to know more. This particular meeting sat firmly within that context. About half-way thru the initial brainstorming Paul Walk of UKOLN remarked:
Wouldn't it be great if the outcome of this unconference was that repositories were just wrong?
Well, yes. I think he was joking, btw? Whatever... that sentence more or less captures my thinking on the subject. But there's no way that particular meeting could have come to that conclusion because the 'R' word solution is now so firmly engrained in national policy and direction.
It's a bit like the difference between the NHS funding research into "the treatment and prevention of the common cold" and them funding research into "the treatment and prevention of the common cold using chicken soup". Both will result in some interesting research, discussion, papers, etc. But one focuses on the problem, the other on a particular solution to the problem. What's the essential problem in our case - i.e. what should we be focusing on? Surfacing academic content on the Web in ways that maximise the benefits of open access.
Apologies... I've calmed down now.
Despite my negativity I did bring away some positives:
I noted both "We don't need any more standards" and "Death to packages" written on flip charts, which I'll conveniently interpret in my own terms to mean "We don't need any more community-specific standards" and "Death to content-packaging standards" - though I suspect, in the case of the first, that this isn't what was intended.
The glaring omission of Google from the podcast and mind map about 'search' was noted by almost everyone.
A more general willingness to see the Flickrs of this world as good exemplars of what repository-type services should be like.
A more widespread recognition that we tend to over-complicate our solutions in technical interoperability terms.
Some recognition that tagging and full-text indexing are at least as important as metadata (as we tend to use that term in the context of repositories).
OK, I'll admit it, I am tending to selectively see my own view of the world in other people's comments here!
Anyway... overall, the first day of the unconference was pretty good and I look forward to seeing the results of the second day. However you choose to interpret this particular report, CRIG is definitely worth keeping an eye on as this whole area evolves.
Through 2010, organizations implementing both customer data integration
and product information management will link
these master data management initiatives as part of an overall
enterprise information management (EIM) strategy. Metadata management
is a critical part of a company’s information infrastructure. It
enables optimization, abstraction and semantic reconciliation of
metadata to support reuse, consistency, integrity and shareability.
Metadata management also extends into SOA projects with service
registries and application development repositories. Metadata also
plays a role in operations management with CMDB initiatives.
OK, so now I remember why I don't read Gartner very often. I mean, it's hard to find fault with it as a statement - other than that it'd make a great paragraph for buzzword bingo. But I have no idea what practical steps organisations are going to take over the next few years as a result. One of the problems faced by the Dublin Core Metadata Initiative in moving into this kind of more corporate space - something I suspect they'd like to do - is that this is not, by and large, the kind of language the DC community talks.
Some of the sessions at the executive briefing day (see above) look quite interesting - particularly the work being done by the BBC.
Over here on eFoundations we like to think about the important issues in life such as how Web 2.0 impacts elearning in institutions, what the Web architecture has to say about repository design, whether the Semantic Web does anything for the future of library and museum systems, the trust issues around OpenID, ... that kind of thing.
Meanwhile, in the real-world [tm], I've just been trying to help a friend of mine via MSN (during my lunch hour I hasten to add) who is having difficulties printing out one of her distance learning course documents from within Moodle. Her considered opinion after a couple of hours of failure?
Alan Levine recently blogged his love for Flickr and Flickr's love for him! Ahhh, how touching to see romance blossoming in the Web 2.0 space :-). I have to confess that I share his views - though of course, as a somewhat reserved Brit, I could never admit to it so publicly.
But he is right... Flickr is great. At eFoundations we often find ourselves using it as a kind of reference model against which other Web-based repository / content management systems can be assessed. Cool URIs. Good user experience. Clever integration of complex functionality in a Web-based user interface. API. It's all there.
Particularly nice is the relatively seamless way that external applications can be integrated into the Flickr experience. Alan points out that Picnik can now be used as its built-in photo editor. I have no idea how the technology behind this works... as an end-user all I get is a single prompt to confirm that I'm happy with Picnik accessing my Flickr account and I'm away. Pretty much seamless (or "well seamed" as Peter Burnhill likes to say).
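For the curious, the Flickr API itself is refreshingly plain: most calls are just a method name and a few parameters appended to a REST endpoint. A sketch of building (not sending) one such call - the api_key is a placeholder you'd get by registering an application with Flickr, and the user_id is invented:

```python
from urllib.parse import urlencode

# Assemble a Flickr REST call; flickr.photos.search is a real API method,
# but the key and account below are placeholders.
params = {
    "method": "flickr.photos.search",
    "api_key": "YOUR_API_KEY",
    "user_id": "12345678@N00",
    "per_page": 5,
}
url = "https://api.flickr.com/services/rest/?" + urlencode(params)
# Fetching `url` would return an XML list of matching photos.
```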
I'm convinced that when we talk about 'repositories' in the academic sector we could do a lot worse than to look at Flickr and say, "how do we make our repositories more like that?".
Funny... I've been on the email@example.com mailing list for a long, long time but there's been very little traffic for the last while (like 5 years or so) to the point that I'd kinda forgotten I was on it. Just recently it has popped back into life with the announcement of the Automated Content Access Protocol (or ACAP):
Following a successful year-long pilot project, ACAP (Automated Content
Access Protocol) has been devised by publishers in collaboration with
search engines to revolutionise the creation, dissemination, use, and
protection of copyright-protected content on the worldwide web.
Danny Sullivan, over at Search Engine Land, explains some of the background to this development. It is clear that this initiative was born out of a certain amount of publisher mistrust about what search engines are doing with their content - something that makes the strap line, "unlocking content for all" a bit of a misnomer. There's an emphasis on explicitly granting permission and an attempt to move away from the current default of assuming that everything is open to indexing.
Given that none of the big search engines currently support it, one is tempted to react with a big "huh!?". I guess it's a case of wait and see. Maybe this will turn into robots.txt 2.0, maybe it won't... but I think that decision lies with the search engines rather than with the publishers who initiated the exercise. As Danny puts it:
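For context, the current robots.txt regime works on an opt-out basis: anything not explicitly disallowed is assumed to be fair game for indexing, which is precisely the default that ACAP's explicit-permission model tries to invert. A quick stdlib illustration (the paths and URLs are invented):

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt that only fences off one area of the site
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /subscribers/",
])

# Everything else is crawlable by default - the opt-out model
open_to_all = rp.can_fetch("*", "http://example.org/news/story.html")
paywalled = rp.can_fetch("*", "http://example.org/subscribers/story.html")
```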
No, I'd say not. I think it's been very useful that some group has diligently
and carefully tried to explore the issues, and having ACAP lurking at the very
least gives the search engines themselves a kick in the butt to work on better
standards. Plus, ACAP provides some groundwork they may want to use. Personally,
I doubt ACAP will become Robots.txt 2.0 -- but I suspect elements of ACAP will
flow into that new version or a successor.
In addition to the travels that Andy mentioned, we've also been grappling with the disruption caused by a relocation to a different office, so I seem to have accumulated a number of half-written posts which I'll try to find the time to get out this week.
For now, a brief pointer to a nice post by Roo Reynolds in which he compares the character and functionality of the UK government's Hansard Web site (which provides access to the official "edited verbatim report of proceedings" in the two houses of the UK Parliament) and two independent sites, TheyWorkForYou.com and The Public Whip, which take advantage of the availability of that data to provide more "social" functionality around the same information:
While the text is the same, the simple addition of some additional markup, links and photos brings it to life. The addition of user comments turns the whole thing into a social application, allowing us to discuss what our MPs and Lords are shouting across their respective aisles at each other every day.
In addition, Roo highlights the importance of underpinning such applications with an entity-/object-based approach - what I would probably call a resource-oriented approach:
Social software designers talk about the 'atoms', (or objects, or entities) of an application. For example, YouTube's atoms include videos (of course) but also comments, playlists and users. Flickr's atoms include photos, comments, users, groups and notes. TheyWorkForYou's atoms are speeches and comments. Don't get the impression that 'speech' necessarily means a long speech. It could be a question, an interruption, an answer or a statement. Sometimes even standing up to speak is enough to get an entry in Hansard.
In his discussion of The Public Whip, Roo emphasises that such entities include people and also 'abstract resources' such as 'divisions' and 'policies'. I guess I might add that such entities aren't necessarily 'atomic' in the traditional sense of that word, indicating something 'indivisible': a collection or list of other entities/resources can also be an entity/resource in its own right, and indeed such entities are visible in those services.
But it's a good post, highlighting very simply and clearly the value of open data and what the "social" dimension can bring to an application.
A blog about the Web, cloud infrastructure, linked data, big data, open access, digital libraries, metadata, learning, research, government, online identity, access management and anything else that takes our fancy by Pete Johnston and Andy Powell.