
February 27, 2008

David Orrell joins the Foundation

My second "I'm very pleased to..." announcement for the day!

I'm very pleased to announce that David Orrell has joined the Eduserv Foundation as Identity Systems Architect.  He will be working initially on various aspects of OpenID as well as developing prototype diagnostic tools for Shibboleth/SAML-based systems.

David joins the Foundation from the Eduserv R&D team where he worked primarily as a developer for Athens and OpenAthens.   

Inside Out - Eduserv Foundation Symposium 2008

I'm very pleased to announce that this year's Eduserv Foundation Symposium, Inside Out: What do current Web trends tell us about the future of ICT provision for learners and researchers?, will be held on Thursday May 8th at the British Library in London.

Intended as an opportunity to think about how the Web (and in particular Web 2.0) is disrupting the delivery and use of ICT for both learning and research within the education sector, the day will bring together seven different viewpoints covering educational institutions, the mainstream media and academic publishing.

The list of speakers is as follows:

  • Larry Johnson, New Media Consortium (US)
  • Bobbie Johnson, Guardian (UK)
  • Jem Stone, BBC (UK)
  • Chris Adie, University of Edinburgh (UK)
  • David Harrison, UCISA / University of Cardiff (UK)
  • Gráinne Conole, Open University (UK)
  • Geoffrey Bilder, CrossRef (UK)

The breadth and depth here is intentional - we didn't want a day that was just the education community talking to itself.  Rather we wanted to bring together a mix of viewpoints, trying to understand how recent Web trends impact on service delivery both inside and outside the sector.  I'm really pleased with the line up and am looking forward to an informative and enlightening day.

Interested?  Go to the main symposium page or the registration page.  Please note that registration for the event is free and includes a drinks reception after the presentations.  However, a small charge will be made if you register but are unable to attend on the day.

UK Access Management Federation progress

A news item on the UK Access Management Federation Web site suggests that 210 institutions have now joined the federation and that (quoting Nicole Harris):

to date, 95 per cent of UK Federation members have chosen Shibbolethas [sic] their preferred platform. Shibboleth has proved popular because it is immediately available, easy to customise and well supported internationally. It builds on existing institutes’ structures, can be operated in-house and is subscription-free as it is based on open source software.

That'll be the perfect choice for everyone then!

95% of 210 is about 200... which seems like a very large number to me, though I must confess I don't have any detailed knowledge on institutional intentions in this space.

The list maintained by the federation suggests that there are currently 223 members of the federation (including both institutions and service providers), of which about 41 have implemented one or more Identity Providers (IdP), though the list doesn't make it clear whether each IdP has been implemented in-house using the Shibboleth platform, outsourced to an external service provider or something else.

It's not overly clear what the phrase "have chosen Shibboleth as their preferred platform" actually means in practice.  I appreciate that "chosen" does not mean "implemented", but even so, the numbers are, err..., impressively high.  Reporting numbers is fine, though given the potential confusion about what they mean it might be clearer for all concerned to stick to what has actually been implemented rather than talking about institutional intentions at this stage?  Part of the problem here is that "shibboleth" can be interpreted both as "shorthand for a general technological approach" and "a particular software platform".

As you might be able to tell from my tone here, I do find the messaging and discussions around the Athens to Shibboleth transition somewhat frustrating since they are very often tinged with ideology not just around standards (hey, I'm as ideological as the next person around standards) but also around implementation approaches.  As many readers will know, I've worked with JISC for a long time now and I don't recall any other scenario where a single implementation option has been pushed so heavily.  Having got the standards bit right (i.e. SAML and Shibboleth) it seems to me the time is right to step back and let the playing field level a bit, allowing institutions to make their own business choices between open source, non-profit and for-profit options as they see fit, based on a free flow of information from suppliers.  Why should the JISC care whether institutions join the federation using open source software or something else?  The important thing is to adopt the right standards is it not?

Now, you are no doubt thinking, "well, you would say that wouldn't you?", and perhaps you are right?  The world probably does look rather different from inside Eduserv than from outside... if nothing else, it is much more obvious how badly others mis-represent our offerings (either by accident or design).  As I say, I feel frustrated by the world as I see it currently.  You can try and cheer me up by telling me the JISC are adopting a completely neutral position in this area if you like, but the cynic in me may take some convincing.  Sorry.

Twitter stats

Twitter have published some stats on the kinds of usage they are seeing, including a breakdown of traffic to the Twitter Web site by country and a chart showing the breakdown of twits (twitterers?) by how many followers they have.  As they note, these kinds of breakdown may be heavily skewed by the fact that they are only looking at Web traffic, not tweets via SMS or via the Twitter API.

Speaking personally, I'm seeing a significant growth in the usage of Twitter in my social network, both in terms of number of twits and number of tweets - a trend that I expect to continue for the next while.

February 26, 2008

Preserving the ABC of scholarly communication

Somewhat belatedly, I've been re-reading Lorcan Dempsey's post from October last year, Quotes of the day (and other days?): persistent academic discourse, in which he ponders the role of academic blogs in scholarly discourse and the apparent lack of engagement by institutions in thinking about their preservation.

I like Gráinne Conole's characterisation of the place of blogging in scholarly communication:

  • Academic paper: reporting of findings against a particular narrative, grounded in the literature and related work; style – formal, academic-speak
  • Conference presentation: awareness raising of the work, posing questions and issues about the work, style – entertaining, visual, informal
  • Blogging – snippets of the work, reflecting on particular issues, style – short, informal, reflective

(even though it would have been better in alphabetical order! :-) ) and I'm tempted to wonder whether and how this characterisation will change over the next few years, as blogging continues to grow in importance as a communication medium.

Lorcan ends with:

Universities and university libraries are recognizing that they have some responsibility to the curation of the intellectual outputs of their academics and students. So far, this has not generally extended to thinking about blogs. What, if anything, should the Open University or Harvard be doing to make sure that this valuable discourse is available to future readers as part of the scholarly record?

As I argued in my most recent post about repositories, I suspect that most academics would currently expect to host their blogs outside their institution.  (Note that I'm hypothesising here, since I haven't asked any real academics this question - however, the breadth and depth of external blog services seems so overwhelming that it would be hard for institutions to try to compel their academics to use an institutional blogging service IMHO). This leaves institutions (or anyone else for that matter) that want to curate the blogging component of their intellectual output with a problem.  Somehow, they have to aggregate their part of the externally held scholarly record into an internal form, such that they can curate it.

I don't see this as an impossible task - though clearly, there is a challenge here in terms of both technology and policy.

In the context of the debate about institutional repositories, my personal opinion is that this situation waters down the argument that repositories have to be institutional because that is the only way in which the scholarly record can be preserved.  Sorry, I just don't buy it.

February 25, 2008

OpenID Foundation board growth

The news that Google, IBM, Microsoft, VeriSign, and Yahoo! have joined the OpenID Foundation board is significant.  We are continuing to see steady growth in the potential importance of OpenID to today's Web.  I say 'potential' only because I still don't make daily (or even occasional) use of OpenID for the kinds of things I do on the Internet - Typepad, Blogger, Wordpress, Flickr, Slideshare, Facebook, ... - a situation that I'd like to see change sometime reasonably soon.

Scott Kveton's blog has a nice set of pointers to the blog commentary.

February 21, 2008

Linked Data (and repositories, again)

This is another one of those posts that started life in the form of various drafts which I didn't publish because I thought they weren't quite "finished", but then seemed to become slightly redundant because anything of interest had already been said by lots of other people who were rather more on the ball than I was. But as there seems to be a rapid growth of interest in this area at the moment, and as it ties in with some of the themes Andy highlights in his recent posts about his presentation at VALA 2008, I thought I'd try to pull some of these fragments together.

If I'd got round to compiling my year-end Top 5 Technical Documents list for 2007 (whaddya mean, you don't have a year-end Top 5 Technical Documents list?), my number one would have been How to Publish Linked Data on the Web by Chris Bizer, Richard Cyganiak and Tom Heath.

In short, the document fleshes out the principles Tim Berners-Lee sketches in his Linked Data note - essentially the foundational principles for the Semantic Web. As Berners-Lee notes:

The Semantic Web isn't just about putting data on the web. It is about making links, so that a person or machine can explore the web of data.  With linked data, when you have some of it, you can find other, related, data. (emphasis added)

And the key to realising this, argues Berners-Lee, lies in following four base rules:

  1. Use URIs as names for things.
  2. Use HTTP URIs so that people can look up those names.
  3. When someone looks up a URI, provide useful information.
  4. Include links to other URIs, so that they can discover more things.
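The four rules can be illustrated with a toy sketch (all URIs and data here are invented, and a plain dict stands in for HTTP lookups): things are named with HTTP URIs, looking a URI up yields useful information, and that information links onward to more URIs.

```python
# A toy "web of data": HTTP URIs name things (rules 1 and 2),
# looking a URI up yields useful information (rule 3), and that
# information links onward to other URIs (rule 4).
WEB_OF_DATA = {
    "http://example.org/people/alice": {
        "name": "Alice",
        "knows": ["http://example.org/people/bob"],
    },
    "http://example.org/people/bob": {
        "name": "Bob",
        "knows": [],
    },
}

def look_up(uri):
    """Rule 3: dereferencing a URI returns useful information."""
    return WEB_OF_DATA.get(uri, {})

def explore(start_uri):
    """Rule 4: follow links from one description to discover more things."""
    seen, queue = set(), [start_uri]
    while queue:
        uri = queue.pop()
        if uri in seen:
            continue
        seen.add(uri)
        queue.extend(look_up(uri).get("knows", []))
    return seen

# Starting with one URI, a client discovers other, related, data.
names = sorted(look_up(u).get("name", "?")
               for u in explore("http://example.org/people/alice"))
print(names)  # ['Alice', 'Bob']
```

The point of the sketch is only the shape of the interaction: start with one URI, dereference, follow links, repeat - which is exactly the "when you have some of it, you can find other, related, data" behaviour described above.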

Bizer, Cyganiak & Heath present linked data as a combination of key concepts from the Web Architecture on the one hand (including the TAG's resolution to the httpRange-14 issue) and the RDF data model on the other, and distill them into a form which is on the one hand clear and concise, and on the other backed up by effective, practical guidelines for their application. While many of those guidelines are available in some form elsewhere (e.g. in TAG findings or in notes such as Cool URIs...), it's extremely helpful to have these ideas collated and presented in a very practically focused style.
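One concrete consequence of the httpRange-14 resolution can be sketched as follows - this simulates the HTTP exchange rather than running a real server, and all URIs are made up: a URI naming a *thing* (a person) answers with a 303 redirect to a separate *document* URI, so the two are never conflated.

```python
# Illustrative httpRange-14 convention: URIs for non-information
# resources (a person) redirect with 303 to a document describing
# them; URIs for information resources serve a representation
# directly with 200. All URIs here are invented.
THINGS = {"http://example.org/id/alice": "http://example.org/doc/alice"}
DOCUMENTS = {"http://example.org/doc/alice": "<RDF description of Alice>"}

def dereference(uri):
    """Simulate an HTTP GET under the TAG's httpRange-14 resolution."""
    if uri in THINGS:
        # Non-information resource: 303 See Other, pointing at a
        # document about the thing rather than the thing itself.
        return 303, THINGS[uri]
    if uri in DOCUMENTS:
        # Information resource: serve a representation directly.
        return 200, DOCUMENTS[uri]
    return 404, None

status, location = dereference("http://example.org/id/alice")
print(status, location)  # the person's URI redirects...
status, body = dereference(location)
print(status, body)      # ...and the document URI serves the description
```

The two-step dance is what keeps "Alice" and "the document about Alice" distinct, which matters as soon as you want to make statements about one and not the other.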

As an aside, in the course of assembling those guidelines, they suggest that some of those principles might benefit from some qualification, in particular the use of URI aliases, which the Web Architecture document suggests are best avoided. For the authors,

URI aliases are common on the Web of Data, as it can not realistically be expected that all information providers agree on the same URIs to identify a non-information resource. URI aliases provide an important social function to the Web of Data as they are dereferenced to different descriptions of the same non-information resource and thus allow different views and opinions to be expressed. (emphasis added)

I'm prompted to mention Linked Data now in part by Andy's emphasis on Web Architecture and Semantic Web technologies, but also by a post by Mike Bergman a couple of weeks ago, reflecting on the growth in the quantity of data now available following the principles and conventions recommended by the Bizer, Cyganiak & Heath paper. In his post, Bergman includes a copy of a graphic from Richard Cyganiak providing a "birds-eye view" of the Linked Data landscape, and highlighting the principal sources by domain or provider.

"What's wrong with that picture?", as they say. I was struck (but not really surprised) by the absence - with the exception of the University of Southampton's Department of Electronics & Computer Science - of any of the data about researchers and their outputs that is being captured and exposed on the Web by the many "repository" systems of various hues within the UK education sector. While in at least some cases institutions (or trans-institutional communities) are having a modicum of success in capturing that data, it seems to me that the ways in which it is typically made available to other applications mean that it is less visible and less usable than it might be.

Or, to borrow an expression used by Paul Miller of Talis in a post  on Nodalities, we need to think about how to make sure our repository systems are not simply "on the Web" but firmly "of the Web" - and the practices of the growing Linked Data community, it seems to me, provide a firm foundation for doing that.

Educause SW

I just asked a question of one of the speakers in the Second Life session at the start of day 2 of the Educause SW Regional Conference 2008 in Houston (US) without ever leaving my office in Bath (UK).

Now, you are probably thinking, "So what?  There's nothing particularly special about that in this day and age!".  And you'd be right.  At least in part.  But the somewhat serendipitous way in which I got to virtually attend the conference and interact with the speakers is more unusual, and indicative of how things are changing in the loosely-coupled world we now inhabit.

I use Twitter a lot.  Its "do one simple thing and do it well" approach hits all the right buttons for me and I find myself using it more and more.

One of the people I follow on Twitter is called @cmduke.  As far as I know, I have never met @cmduke and I don't know much about him (I'm assuming he is a him - as you know, on the Internet no-one knows you are a dog) other than that his first name is Chris and he maintains a Second Life blog, written under his SL avatar name of Topher Zwiers, called Muve Forward.  I'm guessing that the reason I follow him on Twitter is because of the blog - though I could be wrong. I don't actually remember how or why I started following him on Twitter to be honest - which is part of the fun!

Earlier on today Chris tweeted (i.e. sent a micro-blog via Twitter) to say that he was about to start live-blogging the Second Life session at Educause SW.  Intrigued, I followed the TinyURL Chris had embedded into his tweet in order to take a look.  The link took me to a page on Chris' site containing an embedded CoveritLive live-blogging session.  Chris was already in full swing... there's an art to live-blogging and as far as I can tell Chris has a pretty good handle on it.

CoveritLive looks like an interesting tool - one that I'll investigate further in due course (though I'm sure there are alternatives).  It basically provides the reader with a Web-based interface to a stream of live-blogging entries written by the author (in this case, Chris), updated in real-time.  As a reader, you get the opportunity to make comments and ask questions in real time (moderated by the live-blog author).  In this way, Chris was kind enough to relay one of my questions on to the presenters of the session, with their answer coming back to me thru the live-blog.  Neat.

Once the live-blog session is over, the same URL takes you to an archive of the session.

Now, as I said above, in many respects there is nothing particularly unusual or special about this scenario and readers of this blog will probably be very familiar with similar scenarios in their day-to-day work.  I repeat it here only because I think it represents an interesting shift in how we do things - the serendipity of social networks like Twitter and Facebook and the easy availability of high-quality Web 2.0 tools is fundamentally changing the way we do things.  Or so it seems to me anyway.

For info... Chris is covering many of the remaining sessions at Educause SW.  See here for details.

February 20, 2008

Repositories follow-up - global vs. institutional

There have been a number of responses to my VALA 2008 keynote on the future of repositories, which Brian Kelly has helpfully summarised to a large extent in a post on his blog.  There are several themes here, which probably need to be separated out for further discussion.  One such is my emphasis on building 'global' (as opposed to 'institutional') repository services.

Before I do that however, I just want to clarify one thing.  Mike Ellis suggests that he is "bemused as to *why* repositories (at all)".  I'll leave others to answer that.  Suffice to say that I was not intending to argue that the management of scholarly stuff (and the workflows around that stuff) is unimportant.  Of course it is.  Just that our emphasis should not be on the particular kinds of systems that we choose to use to undertake that management, but on the bigger objective of open access and how whatever systems we put in place surface content on the Web and support the construction of compelling scholarly social networks.  I am perfectly happy that some people will build systems that they choose to call repositories.  Others will build content management systems.  Still others something else.  The labeling is almost irrelevant (except insofar that it doesn't get in the way of communicating the broader 'open access' message).

OK, back to the issue of global vs. institutional services.  Rachel Heery says:

I don’t really see that there is conflict between encouraging more content going into institutional repositories and ambitions to provide more Web 2.0 type services on top of aggregated IR content. Surely these things go together?

Paul Walk makes a similar point in his blogged response:

The half sentence I don’t quite buy is the “global repository services”. Why can’t we “focus on building and/or using global scholarly social networks” (which I support) based on institutional repository services? We don’t have a problem with institutional web sites do we? Or institutional library OPACs? We have certainly managed to network the latter on a global scale, and built interesting services around this...

Yes, point(s) taken... though I think that the institutional Web site and the OPAC are not primarily 'social networks' (and even if they are, the network they are serving is largely institutionally focussed) so there is a difference.  As I argued in the original blog entry, scholarly social networks are global in nature (or at least extra-institutional).

Of course, the blogosphere is a good example of a global social network being layered on top of a distributed base of content.  On the face of it this seems to argue against my 'global repository' view.  So what is different?  Well, to be honest I'm not sure.  Clearly, the blogosphere is not built out of 'institutional' blog services and my strong suspicion is that if we approached academic blogging in the same way we approach academic repositories we would rapidly kill off its future as a means of scholarly communication :-) .  Long live an open, free market approach to the provision of blogs!  God help us if institutions start trying to lay down the law about when and where its members can blog.  There is a role for institutional blogging services but only as part of a wider landscape of options where individuals can pick and choose a solution that is most appropriate to them.

And that is one of my fundamental points about repositories I guess...  when institutional repositories stop being an option that individuals can choose to make use of and instead become the only option on the table because that is what mandates and policies say must be used, we have a problem.  Instead we need to focus on making scholarly content available on the Web in whatever form makes sense to individual scholars.  My strong suspicion is that if someone came along and built a global research repository, let's call it ResearchShare for the sake of argument (though I'm aware that name is taken), and styled its features after the likes of Slideshare, we would end up with something far more compelling to individual scholars than current institutional offerings.

Note that I'm not being overly dogmatic here.  In my view there are as many routes to open access as there are ways of surfacing content on the Web.  If individual scholars want to do their own thing that's fine by me, provided they do it in a way that ensures their content is at a reasonably persistent URI and is indexed by Google and the like.

This leaves institutions with the problem of picking up the pieces of the multiple ways in which individual scholars choose to surface their scholarly content on the Web.  Well sorry guys... get used to it!

Overall, I don't disagree much with Stu Weibel's take on this.  It's a complex area with lots of competing interests, some rather entrenched.  As Stu notes:

It is still possible that another entirely different model will emerge... more in-the-cloud. A distributed model does seem to complicate curation, (and that institutional reputation thing), but I wouldn't count it out just yet. Still, some institution has to take care of this stuff... responsibility involves the attachement [sic] to artifacts, even if they are bitstreams.

February 13, 2008

Repositories thru the looking glass

I spent last week in Melbourne, Australia at the VALA 2008 Conference - my first trip over to Australia and one that I thoroughly enjoyed.  Many thanks to all those locals and non-locals that made me feel so welcome.

I was there, first and foremost, to deliver the opening keynote, using it as a useful opportunity to think and speak about repositories (useful to me at least - you'll have to ask others that were present as to whether it was useful for anyone else).

It strikes me that repositories are of interest not just to those librarians in the academic sector who have direct responsibility for the development and delivery of repository services.  Rather they represent a microcosm of the wider library landscape - a useful case study in the way the Web is evolving, particularly as manifest through Web 2.0 and social networking, and what impact those changes have on the future of libraries, their spaces and their services.

My keynote attempted to touch on many of the issues in this area - issues around the future of metadata standards and library cataloguing practice, issues around ownership, authority and responsibility, issues around the impact of user-generated content, issues around Web 2.0, the Web architecture and the Semantic Web, issues around individual vs. institutional vs. national, vs. international approaches to service provision.

In speaking first I allowed myself the luxury of being a little provocative and, as far as I can tell from subsequent discussion, that approach was well received.  Almost inevitably, I was probably a little too technical for some of the audience.  I'm a techie at heart and a firm believer that it is not possible to form a coherent strategic view in this area without having a good understanding of the underlying technology.  But perhaps I am also a little too keen to inflict my world-view on others. My apologies to anyone who felt lost or confused.

I won't repeat my whole presentation here.  My slides are available from Slideshare and a written paper will become available on the VALA Web site as soon as I get round to sending it to the conference organisers!

I can sum up my talk in three fairly simple bullet points:

  • Firstly, that our current preoccupation with the building and filling of 'repositories' (particularly 'institutional repositories') rather than the act of surfacing scholarly material on the Web means that we are focusing on the means rather than the end (open access).  Worse, we are doing so using language that is not intuitive to the very scholars whose practice we want to influence.
  • Secondly, that our focus on the 'institution' as the home of repository services is not aligned with the social networks used by scholars, meaning that we will find it very difficult to build tools that are compelling to those people we want to use them.  As a result, we resort to mandates and other forms of coercion in recognition that we have not, so far, built services that people actually want to use.  We have promoted the needs of institutions over the needs of individuals.  Instead, we need to focus on building and/or using global scholarly social networks based on global repository services.  Somewhat oddly, ArXiv (a social repository that predates the Web let alone Web 2.0) provides us with a good model, especially when combined with features from more recent Web 2.0 services such as Slideshare.
  • Finally, that the 'service oriented' approaches that we have tended to adopt in standards like the OAI-PMH, SRW/SRU and OpenURL sit uncomfortably with the 'resource oriented' approach of the Web architecture and the Semantic Web.  We need to recognise the importance of REST as an architectural style and adopt a 'resource oriented' approach at the technical level when building services.

I'm pretty sure that this last point caused some confusion and is something that Pete or I need to return to in future blog entries.  Suffice to say at this point that adopting a 'resource oriented' approach at the technical level does not mean that one is not interested in 'services' at the business or function level.

[Image: artwork outside the State Library of Victoria]

February 06, 2008

Towards Low Carbon ICT

From the JISC Development mailing list, an announcement by Howard Noble of a one-day conference in Oxford on Wednesday 19 March on measures to improve energy efficiency and reduce resource consumption in the provision of ICT services.

The event is being organised by the Low Carbon ICT project, which is funded under JISC's Institutional Exemplars Programme to "demonstrate how energy and cost savings are achievable through developing technologies that reduce carbon emissions".

Google, Social Graphs, privacy & the Web

This has already received a fair amount of coverage elsewhere (Techcrunch, Danny Ayers, Read-Write Web (1), Joshua Porter (1), Read-Write Web (2), Joshua Porter (2), to pick just a few) but I thought it was worth providing a quick pointer. Last week Google announced the availability of what they are calling their Social Graph API.

The YouTube video by Brad Fitzpatrick provides a good overview.

This is a Google-provided service which offers a (service-specific) query interface to a dataset that is generated by crawling data publicly available on the Web in the form of XFN (XHTML Friends Network) links embedded in Web pages and FOAF (Friend of a Friend) documents.

Result sets are returned in the form of JSON documents.
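Which means a consumer needs nothing more than a JSON parser to walk the graph. A minimal sketch - note that the response shape below ("nodes", "nodes_referenced", "types") is an illustrative assumption modelled loosely on the API's published examples, not a faithful copy of Google's format:

```python
import json

# Hypothetical Social Graph API-style result. The field names are
# assumptions for illustration, not Google's actual response format.
response_text = """
{
  "nodes": {
    "http://example.org/alice": {
      "nodes_referenced": {
        "http://example.org/alice-blog": {"types": ["me"]},
        "http://example.org/bob": {"types": ["friend"]}
      }
    }
  }
}
"""

def outgoing_edges(text):
    """Flatten the JSON result into (source, target, relationship) triples."""
    data = json.loads(text)
    edges = []
    for source, node in data.get("nodes", {}).items():
        for target, info in node.get("nodes_referenced", {}).items():
            for rel in info.get("types", []):
                edges.append((source, target, rel))
    return sorted(edges)

for edge in outgoing_edges(response_text):
    print(edge)
```

The "me" versus "friend" relationship types are where the reciprocation question bites: an edge only records what the *source* page claims, so "alice says bob is a friend" tells you nothing about what bob claims in return.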

On the technical side, I have seen a few critical comments (see discussion on Semantic Web Interest Group IRC channel) around some points of respecting Web architecture principles (e.g. the conflation of (URIs for) people and (URIs for) documents (see the draft W3C TAG finding Dereferencing HTTP URIs) and what looks like the introduction of an unnecessary new URI scheme (see the draft W3C TAG finding URNs, Namespaces and Registries)). And some concerns are also voiced about introducing dependency on a centralised Google-provided service - though of course the data is created and held independently and other providers could aggregate that data and offer similar services, even using the same interface (though whether they will be able to do so as effectively as Google can, given their experience in this area, and/or attract the user base which a Google service inevitably will, remains to be seen). And of course there are the usual issues of spamming and trust and the significance of reciprocation: who says "PeteJ is friends with XYZ" and what does XYZ have to say about that?

Overall, however, I think the approach of such a high-profile provider exposing data gathered from distributed, devolved, openly available sources on the Web, rather than from the database of a single social networking service, is being seen as a significant development.

There are some thoughtful voices of caution, however. In a comment to Joshua Porter's first post listed above, Thomas Vander Wal notes:

I am quite excited about this in a positive manner. I do have great trepidation as this is exactly the tool social engineering hackers have been hoping for and working toward.


The Google SocialGraph API is exposing everybody who has not thought through their privacy or exposing of their connections.

And in particular, a post by Danah Boyd encourages us to reflect on the social, political and ethical implications of aggregating this data and facilitating access to that aggregation in this way, and reminds us that as individuals we live within a set of power relationships which mean that some are more vulnerable than others to the use of such technologies:

Being socially exposed is AOK when you hold a lot of privilege, when people cannot hold meaningful power over you, or when you can route around such efforts. Such is the life of most of the tech geeks living in Silicon Valley. But I spend all of my time with teenagers, one of the most vulnerable populations because of their lack of agency (let alone rights). Teens are notorious for self-exposure, but they want to do so in a controlled fashion. Self-exposure is critical for the coming of age process - it's how we get a sense of who we are, how others perceive us, and how we fit into the world. We exposure during that time period in order to understand where the edges are. But we don't expose to be put at true risk. Forced exposure puts this population at a much greater risk, if only because their content is always taken out of context. Failure to expose them is not a matter of security through obscurity... it's about only being visible in context.

Even if - as Google take pains to emphasise is the case - the individual data sources are already "public", the merging of data sources, and the change of the context in which information is presented can be significant.

The opposing view is perhaps most vividly expressed in Tim O'Reilly's comment:

The counter-argument is that all this data is available anyway, and that by making it more visible, we raise people's awareness and ultimately their behavior. I'm in the latter camp. It's a lot like the evolutionary value of pain. Search creates feedback loops that allow us to learn from and modify our behavior. A false sense of security helps bad actors more than tools that make information more visible.

One of my tests for whether a Web 2.0 innovation is "good", despite the potential for abuse, is whether it makes us smarter.

I left this post half-finished at this point last night feeling very uneasy with what I perceived as an undertone of almost Darwinian "ruthlessness" in the O'Reilly position, but at the same time struggling to articulate an alternative that I was really convinced of.

So I was delighted this morning when, on opening up my Bloglines feeds, I found an excellent post by Dan Brickley which I think reflects some of the ambivalence I was feeling ("The end of privacy by obscurity should not mean the death of privacy. Privacy is not dead, and we will not get over it. But it does need to be understood in the context of the public record"), and, really, I can only recommend that you read the post in full because I think it's a very sensitive, measured contribution to the debate, based on Dan's direct experience of the issues arising from the deployment of these technologies over several years working on FOAF.

And, far from sitting on the fence, Dan concludes with very practical recommendations for action:

  • Best practice codes for those who expose, and those who aggregate, social Web data
  • Improved media literacy education for those who are unwittingly exposing too much of themselves online
  • Technology development around decentralised, non-public record communication and community tools (eg. via Jabber/XMPP)

Google's announcement of this API has certainly brought both the technical and the social issues to the attention of a wider audience, and sparked some important debate, and perhaps that in itself is a significant contribution in an area where the landscape suddenly seems to be shifting very quickly indeed.

And if I can unashamedly take the opportunity to make another plug for the activities of the Foundation, I'm sure there's plenty of food for thought here for anyone considering a proposal to the current Eduserv Research Grants call :-)


