
September 28, 2010

An App Store for the Government?

I listened in to a G-Cloud web-cast organised by Intellect earlier this month, the primary intention of which was to provide an update on where things have got to. I use the term 'update' loosely because, with the election and change of government and what-not, there doesn't seem to have been a great deal of externally visible progress since the last time I heard someone speak about the G-Cloud. This is not surprising I guess.

The G-Cloud, you may recall, is an initiative of the UK government to build a cloud infrastructure for use across the UK public sector. It has three main strands of activity:

  • the consolidation of government data centres;
  • the development of a shared cloud computing infrastructure across government; and
  • an Applications Store for Government (the ASG).

The last of these strikes me as the hardest to get right. As far as I can tell, it's an idea that stems (at least superficially) from the success of the Apple App Store, though it's not yet clear whether an approach that works well for low-cost, personal apps running on mobile handsets is also going to work for the kinds of software applications found running across government. My worry is that, because of the difficulty, the ASG will distract from progress on the other two fronts, both of which strike me as very sensible and potentially able to save some of the tax-payer's hard-earned dosh.

App stores (the real ones I mean) work primarily because of their scale (global), the fact that people can use them to showcase their work and/or make money, their use of micro-payments, and their socialness. I'm not convinced that any of these factors will have a role to play in a government app store, so the nature of the beast is quite different. During the Q&A session at the end of the web-cast someone asked if government departments and/or local councils would be able to 'sell' their apps to other departments/councils via the ASG. The answer seemed to be that it was unlikely. If we aren't careful we'll end up with a simple registry of government software applications, possibly augmented by up-front negotiated special deals on pricing or whatever, and a nod towards some level of social engagement (rating, for example), but where the incentives for taking part will be non-obvious to the very people we need to take part - those people who procure government software. It's the kind of thing that Becta used to do for the schools sector... oh, wait! :-(

For the ASG to work, we need to identify those factors that might motivate people to use it (other than an outright mandate) - as individuals, as departments and as government as a whole. I think this will be quite a tricky thing to get right. That's not to say that it isn't worth trying - it may well be. But I wonder if it would be better unbundled from the other strands of the G-Cloud concept, which strike me as being quite different.

Addendum: A G-Cloud Overview [PDF, dated August 2010] is available from the G-Digital Programme website:

G-Digital will establish a series of digital services that will cover a wide range of government’s expected digital needs and be available across the public sector. G-Digital will look to take advantage of new and emerging service and commercial models to deliver benefits to government.

September 17, 2010

On the length and winding nature of roads

I attended, and spoke at, the ISKO Linked Data - the future of knowledge organization on the Web event in London earlier this week. My talk was intended to have a kind of "what 10 years of working with the Dublin Core community has taught me about the challenges facing Linked Data" theme but probably came across more like "all librarians are stupid and stuck in the past". Oh well... apologies if I offended anyone in the audience :-).

Here are my slides:

They will hopefully have the audio added to them in due course - in the meantime, a modicum of explanation is probably helpful.

My fundamental point was that if we see Linked Data as the future of knowledge organization on the Web (the theme of the day), then we have to see Linked Data as the future of the Web, and (at the risk of kicking off a heated debate) that means that we have to see RDF as the future of the Web. RDF has been on the go for a long time (more than 10 years), a fact that requires some analysis and explanation - it certainly doesn't strike me as having been successful over that period in the way that other things have been. I think that Linked Data proponents have to be able to explain why that is the case rather better than simply saying that there was too much emphasis on AI in the early days, which seemed to be the main reason provided during this particular event.

My other contention was that the experiences of the Dublin Core community might provide some hints at where some of the challenges lie. DC, historically, has had a rather librarian-centric make-up. It arose from a view that the Internet could be manually catalogued, for example, in a similar way to that taken to catalogue books, and that those catalogue records would be shipped between software applications for the purposes of providing discovery services. The notion of the 'record' has thus been quite central to the DC community.

The metadata 'elements' (what we now call properties) used to make up those records were semantically quite broad - the DC community used to talk about '15 fuzzy buckets', for example. As an aside, in researching the slides for my talk I discovered that the term fuzzy bucket now refers to an item of headgear, meaning that the DC community could quite literally stick its collective head in a fuzzy bucket and forget about the rest of the world :-). But I digress... these broad semantics (take a look at the definition of dcterms:coverage if you don't believe me) were seen as a feature, particularly in the early days of DC... but they become something of a problem when you try to transition those elements into well-crafted semantic web vocabularies, with domains, ranges and the rest of it.

Couple that with an inherent preference for "strings" vs. "things", i.e. a reluctance to use URIs to identify the entities at the value end of a property relationship - indeed, couple it with a distinct scepticism about the use of 'http' URIs for anything other than locating Web pages - and a large dose of relatively 'flat' and/or fuzzy modelling and you have an environment which isn't exactly a breeding ground for semantic web fundamentalism.
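
To make the "strings" vs. "things" distinction concrete, here is a minimal sketch in Python using rdflib (my choice of library; the book and person URIs are invented for illustration, not taken from any DCMI guidance). The first triple records the creator as a bare string; the second identifies the creator with an 'http' URI about which further statements can be made and linked.

```python
# A minimal sketch of "strings" vs. "things" using rdflib (pip install rdflib).
# The example.org URIs below are made up purely for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")
FOAF = Namespace("http://xmlns.com/foaf/0.1/")

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("foaf", FOAF)

book = URIRef("http://example.org/books/1")

# "Strings": the creator is just a literal - easy to produce, hard to link to.
g.add((book, DCTERMS.creator, Literal("Powell, Andy")))

# "Things": the creator is a resource in its own right, identified by an
# 'http' URI, so other data sets can say more about the same person.
author = URIRef("http://example.org/people/andy-powell")
g.add((book, DCTERMS.creator, author))
g.add((author, FOAF.name, Literal("Andy Powell")))

print(g.serialize(format="turtle"))
```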

When we worked on the original DCMI Abstract Model, part of the intention was to come up with something that made sense to the DC community in their terms, whilst still being basically the RDF model and, thus, compatible with the Semantic Web. In the end, we alienated both sides - with librarians (and others) saying it was still too complex and the RDF crowd bemused as to why we needed anything other than the RDF model.

Oh well :-(.

I should note that a couple of valuable things have emerged from that work, I think. Firstly, the notion of the 'record', and its importance as a mechanism for understanding provenance - or, in RDF terms, the notion of bounded graphs. And, secondly, the notion of applying constraints to such bounded graphs - something that the DC community refers to as Application Profiles.
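
For what it's worth, the RDF machinery for this kind of bounded graph now exists in the form of named graphs. The sketch below (Python/rdflib again; the record and document URIs are invented) shows a 'record' modelled as a named graph, with provenance statements made about the graph itself rather than about the thing it describes.

```python
# A rough sketch of a 'record' as a bounded (named) graph, using rdflib.
# All URIs are invented for illustration.
from rdflib import ConjunctiveGraph, Literal, Namespace, URIRef

DCTERMS = Namespace("http://purl.org/dc/terms/")

store = ConjunctiveGraph()
record_uri = URIRef("http://example.org/records/1")

# The bounded graph: every statement contributed by this 'record'.
record = store.get_context(record_uri)
doc = URIRef("http://example.org/docs/1")
record.add((doc, DCTERMS.title, Literal("Lessons of Intute")))
record.add((doc, DCTERMS.creator, Literal("Andy Powell")))

# Provenance about the record itself, not about the document it describes.
store.add((record_uri, DCTERMS.created, Literal("2010-09-17")))
store.add((record_uri, DCTERMS.publisher, Literal("eFoundations")))

# Each named graph can now be shipped around, attributed to its source, or
# checked against the constraints of an application profile as a unit.
print(store.serialize(format="trig"))
```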

On the basis of the above background, I argued that some of the challenges for Linked Data lie in convincing people:

  • about the value of an open world model - open not just in the sense that data may be found anywhere on the Web, but also in the sense that the Web democratises expertise in a 'here comes everybody' kind of way.
  • that 'http' URIs can serve as true identifiers of anything (web resources, real-world objects and conceptual stuff) - see the sketch after this list.
  • and that modelling is both hard and important. Martin Hepp, who spoke about GoodRelations just before me (his was my favourite talk of the day), indicated that the model that underpins his work has taken 10 years or so to emerge. That doesn't surprise me. (One of the things I've been thinking about since giving the talk is the extent to which 'models build communities', rather than the other way round - but perhaps I should save that as the topic of a future post).
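
On the second of those points, the sketch below shows (in Python, with the requests library) the convention that lets an 'http' URI identify a real-world thing rather than a Web page: dereferencing the URI of the thing returns a 303 redirect to a document about it. The DBpedia URI is used purely as a well-known example and its behaviour may have changed since this was written.

```python
# A sketch of dereferencing an 'http' URI that identifies a real-world thing.
# By convention the server answers 303 See Other, redirecting to a document
# *about* the thing, so the thing and its description keep distinct URIs.
# DBpedia is used as a convenient example; its behaviour may differ today.
import requests

thing_uri = "http://dbpedia.org/resource/London"  # identifies the city itself

response = requests.get(
    thing_uri,
    headers={"Accept": "text/turtle"},  # ask for data rather than HTML
    allow_redirects=False,
)

print(response.status_code)              # expected: 303
print(response.headers.get("Location"))  # a document describing London
```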

There are other challenges as well - overcoming the general scepticism around RDF for example - but these things are what specifically struck me from working with the DC community.

I ended my talk by reading a couple of paragraphs from Chris Gutteridge's excellent blog post from earlier this month, The Modeller, which seemed to go down very well.

As to the rest of the day... it was pretty good overall. Perhaps a tad too long - the panel session at the end (which took us up to about 7pm as far as I recall) could easily have been dropped. Ian Mulvany of Mendeley has a nice write-up of all the presentations, so I won't say much more here. My main concern with events like this is that they struggle to draw a proper distinction between the value of stuff being 'open', the value of stuff being 'linked', and the value of stuff being exposed using RDF. The first two are obvious - the last less so. Linked Data (for me) implies all three... yet the examples of applications that are typically shown during these kinds of events don't really show the value of the RDFness of the data. Don't get me wrong - they are usually very compelling examples in their own right, but it's typically a case of 'this was built on Linked Data, therefore Linked Data is wonderful' without really making a proper case as to why.

Key trends in education? - a crowdsource request

I've been asked to give a talk at FAM10 (an event "to discuss federated identity and access management within the UK") replacing someone who has had to drop out, hence the rather late notice. I therefore wasn't first choice, nor would I expect to be, but having been asked I feel reluctant to simply say no, and my previous posts here tend to indicate that I do have views on the subject of federated access management, particularly as it is being implemented in the UK. On the down side, there's a strong possibility that what I have to say will ruffle feathers with some combination of people in my own company (Eduserv), people at the JISC and people in the audience (probably all of them), so I need to be a bit careful. Still, that's never stopped me before :-)

I can't really talk about the technology - at least, not at a level that would be interesting for what is likely to be a highly technical FAM10 crowd. What I want to try and do instead is to take a look at current and emerging trends (technical, political and social), both in education in the UK and more broadly, and try to think about what those trends tell us about the future for federated access management.

To that end, I need your help!

Clearly, I have my own views on what the important trends might be but I don't work in academia and therefore I'm not confident that my views are sufficiently based in reality. I'd therefore like to try and crowdsource some suggestions for what you (I'm speaking primarily to people who work inside the education sector here - though I'm happy to hear from others as well) think are the key trends. I'm interested in both teaching/learning and research/scholarly communication and trends can be as broad or as narrow, as technological or as non-technological, as specific to education or as general as you like.

To keep things focused, how about I ask people to list their top 5 trends (though fewer is fine if you are struggling)? I probably need more than one-word answers (sorry) so, for example, rather than just saying 'mobile', 'student expectations', 'open data' or 'funding pressure', indicate what you think those things might mean for education (particularly higher education) in the UK. I'd love to hear from people outside the UK as well as those who work here. Don't worry about the impact on 'access management' - that's my job... just think about what you think the current trends affecting higher and further education are.

Any takers? Email me at andy.powell@eduserv.org.uk or comment below.

And finally... to anyone who just thinks that I'm asking them to do my work for me - well, yes, I am :-) On the plus side, I'll collate the answers (in some form) into the resulting presentation (on Slideshare) so you will get something back.

Thanks.

September 10, 2010

Comparing Twitter usage at ALT-C - 2009 vs. 2010

I've just been taking a very quick and dirty (and totally non-scientific) look at the Summarizr Twitter summaries for #altc2009 and #altc2010. Here are some observations (which may or may not be either significant or correct):

Firstly, twittering was up roughly 35% this year on last - not surprising I guess (but definitely worth noting). Oddly, this was despite a fall in the number of twitterers, though I think that might be explained by the fact that the older hashtag has been in use for much longer? There is also a slight, but probably not significant, shift towards a smaller number of heavy Twitter users.

Secondly, #digilit, #falt10 and #elearning appear in the top ten list of tweeted hashtags this year, none of which appeared last year. I suggest that #digilit and #falt10 reflect, respectively, a growing interest in digital literacy more generally and a greater awareness of the fringe event by the main conference. Conversely, #elearning seems a bit odd (to me) since I would have expected that term to be disappearing? The #fail hashtag also appears this year but not last, which might be taken to indicate some problems with technology (or something else)? On the other hand, #awesome appears this year but not last, which either indicates some success somewhere or the fact that we are all getting more American in our use of language? If so, shame on us!

Thirdly, the top 10 tweeted URLs this year include a majority of blog posts (there were none last year), which either indicates that people are tending to blog earlier (and more) or that there is more of a culture of re-tweeting links to blog posts than there used to be (or both)?

Finally, the list of most commonly appearing words in the Twitter stream for both years is largely useless as an indication of what was being talked about. However, I note that 'mobile', 'learner' and 'lecture' all make an appearance this year but not last ('lecture' for obvious reasons, given the first keynote) whereas discussion about the 'vle' seems to be shrinking.

As I say, all of this is anecdotal and unscientific, so treat accordingly.

September 09, 2010

Making sense of the ALT-C change

What I learned as a remote attendee at this year's ALT-C conference, "Into something rich and strange" - making sense of the sea-change - a brief, partial and highly irreverent view.

Firstly... on the technology front it was good to see that audio, slides and the occasional Twitpic from those delegates who made the effort to turn up were pretty much as good as a real [tm] video stream. Also that Java is alive and well on the desktop - Elluminate being one of the few remaining applications that still require me to have Java on my local machine. Still, it's good to remind ourselves every so often what we thought the future of distributed computing was going to be like about 10 years ago.

Secondly... I learned that lecturers are called lecturers because they lecture - a medieval practice of imparting knowledge by reading an old-fashioned auto-cue rather badly. Being called a lecturer is an indication of status, apparently. They're not called teachers because they don't teach, despite the fact that we really need teachers in HE - so somewhere along the line we got our wires crossed and ended up in the bargain basement. Oddly, everyone knows that lectures are sub-optimal, but if you stand up and say that in a room full of people, half of whom are lecturers, half of whom used to be lecturers and all of whom think they know better, you don't make many friends, especially if you don't say what you think needs to replace them. Even more oddly, lectures are so bad that the only way of making things better is to video them and put them on YouTube so that people can watch them over and over again. I think it's called aversion therapy. Of course, the ALT-C Programme Committee feel duty-bound to appoint a 'the lecture is dead' keynote speaker every year and even go so far as to send them the same images to use in their slides - you know, the one of the monk asleep at the back and the other one with the kids dressed up as Victorians and sitting in rows.

I also found out that a university education is a public good - as opposed to a private good or a public bad - and that market forces are OK in science and engineering but not in the "useless" subjects. Amazingly, most of the politicians around at the moment did those very same useless subjects - many of them reading something called PPE at Oxbridge, which I can only assume is Physical Education with an emphasis on the physical and which clearly had a very popular module called "101 How to fiddle your expenses". On that basis, the definition of "useless" being used here includes the notion of getting a fast-track to lots of political power and money.  Margaret Thatcher was the only one to buck the trend, reading Chemistry apparently, but the less said about that the better - I don't want to give all chemists a bad name.

One other thing... if you're a builder that doesn't mind using computers when you don't have enough bricks to fill all the holes in your walls, especially if you don't mind kids playing on your building site, you can probably do a very popular keynote slot at future ALT-Cs. Children apparently learn best when you take away all the teachers, give them a computer (best if it's firmly attached to a wall to stop them nicking it) and let them get on with it. Many of them turn into rocket scientists within 2 or 3 years. Nobody is quite sure if this also applies to teenagers and adults but it's worth a shot I reckon. I suggest using the University of Hull as an experiment - sacking all the lecturers, moving the computers outside, and letting the students get on with it. We could call it the University of Hull in the Wall and see if we get away with it?

Unfortunately, I missed out on F-ALT, the alternative ALT-C, this year cos I forgot to set my Twitter radar to the correct hashtag, so I can't report on that particular experience. It's been so long since I took part in F-ALT that they've probably withdrawn my membership.

Oh well... here's to next year's conference. I don't know what it'll be called yet but it'll probably be some 'clever' reference to the massive changes happening in the wider world whilst ignoring the complete lack of change inside the sector.

Change? What change?

In the words of the late, great Kenny Everett... all meant in the best possible taste!

September 06, 2010

When technology disappears

A colleague at Eduserv asked me the other day why there isn't as much noise as there used to be about OpenID and whether it was indicative of a loss of interest or something else.

It's inevitable I guess. New developments, particularly those that look as transformative as OpenID looked at the time, tend to generate a lot of noise and activity, often at a level that isn't sustainable for very long. Something else of interest comes along (there are various contenders in this case) and the world shifts its interest - or, at least, the audible noise that results from such interest.

In the discussion that followed the initial question it turned out that we both thought that some combination of OpenID and OAuth was somehow being used behind the scenes of things like Google Friend Connect and Facebook Connect but we weren't quite sure how much and how often.

I decided to look around and find out.

Unfortunately, I was somewhat disappointed with what I could find - at least without spending more time on it than I could afford. The OpenID.net website carries an impressive list of adopters across the bottom of the page but doesn't indicate whether they are Identity Providers or Relying Parties (or both), nor what the status of their adoption is. So I asked on the openid-board@lists.openid.net mailing list:

Also, when I chose to login via Google, Facebook, whatever... from a typical pull-down list (e.g. that offered by something like Janrain Engage)... is it ever using OpenID behind the scenes? If so, what proportion of the time?

and got the following helpful response from Brian Kissel at Janrain:

Speaking for Janrain Engage, yes, it’s OpenID behind the scenes for Google, Yahoo, AOL, MySpace, LiveJournal, Blogger, PayPal, etc. Facebook, Twitter, LinkedIn are based on OAuth, and some use a hybrid of OpenID and OAuth.

So... OpenID is alive and well (I'm sure you knew that) but looks like it is probably disappearing into the infrastructure to a certain extent - which is exactly where it belongs.
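
For the curious, here is a rough sketch of what 'behind the scenes' looks like from a relying party's point of view, assuming plain OpenID 2.0 HTML-based discovery (the identifier URL is hypothetical, and real libraries also handle Yadis/XRDS discovery and the protocol steps that follow): the site fetches the page behind the identifier the user supplies and looks for the provider advertised in its <link> elements, without the user ever seeing the word OpenID.

```python
# A rough, partial sketch of OpenID 2.0 HTML-based discovery: find the
# provider endpoint advertised by the page behind a user's identifier.
# The identifier below is hypothetical; real relying-party libraries also
# support Yadis/XRDS discovery and carry out the rest of the protocol.
import re
import requests

claimed_id = "https://example-user.example.org/"  # hypothetical identifier

html = requests.get(claimed_id, timeout=10).text

match = re.search(
    r'<link[^>]*rel=["\']openid2\.provider["\'][^>]*href=["\']([^"\']+)["\']',
    html,
    re.IGNORECASE,
)

if match:
    print("OpenID provider endpoint:", match.group(1))
else:
    print("No OpenID 2.0 provider advertised at", claimed_id)
```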

In case you were worried...

If I was a Batman villain I'd probably be...

The Modeller.

OK... not a real Batman villain (I didn't realise there were so many to choose from) but one made up by Chris Gutteridge in a recent blog post of the same name. It's very funny:

I’ve invented a new Batman villain. His name is “The Modeller” and his scheme is to model Gotham city entirely accurately in a way that is of no practical value to anybody. He has an OWL which sits on his shoulder which has the power to absorb huge amounts of time and energy.

...

Over the 3 issues there’s a running subplot about The Modeller's master weapon, the FRBR, which everyone knows is very very powerful but when the citizens of Gotham talk about it none of them can quite agree on exactly what it does.

...

While unpopular with the fans, issue two, “Batman vs the Protégé“, will later be hailed as a Kafkaesque masterpiece. Batman descends further into madness as he realises that every moment he’s the Batman of that second in time, and each requires a URI, and every time he considers a plan of action, the theoretical Batmen in his imagination also require unique distinct identifiers which he must assign before continuing.

I suspect there's a little bit of The Modeller in most of us - certainly those of us who have a predisposition towards Linked Data/the Semantic Web/RDF - and as I said before, I tend to be a bit of a purist, which probably makes me worse than most. I've certainly done my time with the FRBR. The trick is to keep The Modeller's influence under control as far as possible.

On funding and sustainable services

I write this post with some trepidation, since I know that it will raise issues that are close to the hearts of many in the community, but discussion on the jisc-repositories list following Steve Hitchcock's post a few days ago (which I posted in full here recently) has turned to the lessons that the withdrawal of JISC funding for the Intute service might teach us in terms of transitioning JISC- (or other centrally-) funded activities into self-sustaining services.

I'm reminded of a recent episode of Dragons' Den on BBC TV where it emerged that the business idea being proposed for investment had survived thus far on European project funding. The dragons took a dim view of this, on the basis, I think, that such funding would only rarely result in a viable business because of a lack of exposure to 'real' market forces, and the proposer was dispatched forthwith (the dragons clearly never having heard of Google! :-) ).

On the mailing list, views have been expressed that projects find it hard to turn into services because they attract the wrong kind of staff, or that the IPR situation is wrong, or that they don't get good external business advice. All valid points I'm sure. But I wonder if one could make the argument that it is the whole model of centralised project funding for activities that are intended to transition into viable, long-term, self-sustaining businesses that is part of the problem. (Note: I don't think this applies to projects that are funded purely in the pursuit of knowledge). By that I mean that such funding tends to skew the market in rather unhelpful ways, not just for the projects in question but for everyone else - ultimately in ways that make it hard for viable business models to emerge at all.

There are a number of reasons for this - reasons that really did not become apparent to me until I started working for an organisation that can only survive by spending all its time worrying about whether its business models are viable.

Firstly, centralised funding tends to mean that ideas are not subject to market forces early enough - not just not subjected, but market forces are not even considered by those proposing/evaluating the projects. Often we can barely get people to use the results of project funding when we give them away for free - imagine if we actually tried to charge people for them!? The primary question is not, 'can I get user X or institution Y to pay for this?' but 'can I get the JISC to pay for this?' which is a very different proposition.

Secondly, centralised funding tends to support people (often very clever people) who can then cherry-pick good ideas and develop them without any concern for sustainable business-models, and who subsequently may or may not be in a position to support them long term, but who thus prevent others, who might develop something more sustainable, from even getting started.

Thirdly, the centrally-funded model contributes to a wider 'free-at-the-point-of-use' mindset where people simply are not used to thinking in terms of 'how much is it really costing to do this?' and 'what would somebody actually be prepared to pay for this?' and where there is little incentive to undertake a cost/benefit analysis or prepare a proper business case. As I've mentioned here before, I've been on the receiving end of many proposals under the UCISA Award for Excellence programme whose authors were explicitly asked to assess costs and benefits but chose to treat staff time as costing nothing, simply because those staff were in the employ of the institutions anyway.

Now... before you all shout at me, I don't think market forces are the be-all and end-all of this and I think there are plenty of situations where services, particularly infrastructural services, are better procured centrally than by going out to the market. This post is absolutely not a rant that everything funded by the JISC is necessarily pants - far from it.

That said, my personal view is that Intute did not fall into that class of infrastructural service and that it was rightly subjected to an analysis of whether its costs outweighed its benefits. I wasn't involved in that analysis, so I can't really comment on it - I'm sure there is a debate to be had about how the 'benefits' were assessed and measured. But my suspicion is that if one had asked every UK HE institution to pay a subscription to Intute, not many would have been willing to do so - if they were willing, I presume that Intute would be exploring that model right now? That, it seems to me, is the ultimate test of viability - or at least one of them. As I mentioned before, one of the lessons here is the speed with which we, as a community, can react to the environmental changes around us and how we deal with the fall-out - which is as much about how the viability of business models changes over time as it is about technology.

I certainly don't think there are any easy answers.

Comparing Yahoo Directory and the eLib Subject Gateways (the forerunners of Intute), which emerged at around the same time and which attempted to meet a similar need (see Lorcan Dempsey's recent post, Curating the web ...), it's interesting that the Yahoo offering has proved to be longer lasting than the subject gateways, albeit in a form that is largely hidden from view, supported (I guess) by an advertising- and paid-for-listings-based model, a route that presumably wasn't/isn't considered appropriate or sufficient for an academic service?

Addendum (8 September 2010): Related to this post, and well worth reading, see Lorcan Dempsey's post from last year, Entrepreneurial skills are not given out with grant letters.

September 03, 2010

Call for 'ideas' on UK government identity directions

The Register reports that the UK government is calling for ideas on future 'identity' directions, UK.gov fishes for ID ideas:

Directgov has asked IT suppliers to come up with new thinking on identity verification.

The team, which is now within the Cabinet Office, has issued a pre-tender notice published in the Official Journal of the European Union, saying that it wants feedback on potential requirements for the public sector on all aspects of identity verification and authentication. This is particularly relevant to online and telephone channels, and the notice says the services include the provision of related software and computer services.

The notice itself is somewhat hard to find online - I have no idea why that should be! - but a copy is available from the Sell2Wales website.

Oddly, to me at least - perhaps I'm just naive? - the notice doesn't use the word 'open' once, which is a little strange since one might assume that this would be treated as part of the wider 'open government' agenda, as it is in the US, where a similar call resulted in the OpenID Foundation putting together a nice set of resources on OpenID and Open Government. In particular, their Open Trust Frameworks for Open Government whitepaper is worth a look:

Open government is more than just publishing government proceedings and holding public meetings. The real goal is increased citizen participation, involvement, and direction of the governing process itself. This mirrors the evolution of “Web 2.0” on the Internet—the dramatic increase in user-generated content and interaction on websites. These same social networking, blogging, and messaging technologies have the potential to increase the flow of information between governments and citizenry—in both directions. However, this cannot come at the sacrifice of either security or privacy. Ensuring that citizen/government interactions are both easy and safe is the goal of a new branch of Internet technology that has grown very rapidly over the past few years.

September 01, 2010

Lessons of Intute

Many years ago now, back when I worked for UKOLN, I spent part of my time working on the JISC-funded Intute service (and the Resource Discovery Network (RDN) that went before it), a manually created catalogue of high-quality Internet resources. It was therefore with some interest that I read a retrospective about the service in the July issue of Ariadne. My involvement was largely with the technology used to bring together a pre-existing and disparate set of eLib 'subject gateways' into a coherent whole. I was, I suppose, Intute's original technical architect, though I doubt if I was ever formally given that title. Almost inevitably, it was a role that led to my involvement in discussions both within the service and with our funders (and reviewers) at the time about the value (i.e. the benefits vs the costs) of such a service - conversations that were, from my point of view, always quite difficult because they involved challenging ourselves about the impact of our 'home grown' resource discovery services against those being built outside the education sector - notably, but not exclusively, by Google :-). 

Today, Steve Hitchcock of Southampton posted his thoughts on the lessons we should draw from the history of Intute. They were posted originally to the jisc-repositories mailing list. I repeat the message, with permission and in its entirety, here:

I just read the obituary of Intute, and its predecessor JISC services, in Ariadne with interest and some sadness, as will others who have been involved with JISC projects over this extended period. It rightly celebrates the achievements of the service, but it is also balanced in seeking to learn the lessons for where it is now.

We must be careful to avoid partial lessons, however. The USP of Intute was 'quality' in its selection of online content across the academic disciplines, but ultimately the quest for quality was also its downfall:

"Our unique selling point of human selection and generation of descriptions of Web sites was a costly model, and seemed somewhat at odds with the current trend for Web 2.0 technologies and free contribution on the Internet. The way forward was not clear, but developing a community-generated model seemed like the only way to go."

http://www.ariadne.ac.uk/issue64/joyce-et-al/

Unfortunately it can be hard for those responsible for defining and implementing quality to trust others to adhere to the same standards: "But where does the librarian and the expert fit in all of this? Are we grappling with new perceptions of trust and quality?" It seems that Intute could not unravel this issue of quality and trust of the wider contributor community. "The market research findings did, however, suggest that a quality-assurance process would be essential in order to maintain trust in the service". It is not alone, but it is not hard to spot examples of massively popular Web services that found ways to trust and exploit community.

The key to digital information services is volume and speed. If you have these then you have limitless opportunities to filter 'quality'. This is not to undermine quality, but to recognise that first we have to reengineer the information chain. Paul Ginsparg reengineered this chain in physics, but he saw early on that it would be necessary to rebuild the ivory towers:

"It is clear, however, that the architecture of the information data highways of the future will somehow have to reimplement the protective physical and social isolation currently enjoyed by ivory towers and research laboratories."

http://arxiv.org/macros/blurb.tex

It was common at that time in 1994 to think that the content on the emerging Web was mostly rubbish and should be swept away to make space for quality assured content. A senior computer science professor said as much in IEEE Computer magazine, and as a naive new researcher I replied to say he was wrong and that speed changes everything.

Clearly we have volume of content across the Web; only now are we beginning to see the effect of speed with realtime information services.

If we are to salvage something from Intute, as seems to be the aim of the article, it must be to recognise the relations on the digital information axis between volume, speed and quality, not just the latter, even in the context of academic information services.

Steve Hitchcock

Steve's comments were made in the context of repositories but his final paragraph struck a chord with me more generally, in ways that I'm struggling to put into words.

My involvement with Intute ended some years ago and I can't comment on its recent history but, for me, there are also lessons in how we recognise, acknowledge and respond to changes in the digital environment beyond academia - changes that often have a much larger impact on our scholarly practices than those we initiate ourselves. And this is not a problem just for those of us working on developing the component services within our environment but for the funders of such activities.
