May 18, 2012

Big Data - size doesn't matter, it's the way you use it that counts

...at least, that's what they tell me!

Here's my brief take on this year's Eduserv Symposium, Big Data, big deal?, which took place in London last Thursday and which was, by all the accounts I've seen and heard, a pretty good event.

The day included a mix of talks, from an expansive opening keynote by Rob Anderson to a great closing keynote by Anthony Joseph. Watching either, or both, of these talks will give you a very good introduction to big data. Between the two we had some specifics: Guy Coates and Simon Metson talking about their experiences of big data in genomics and physics respectively (though the latter also included some experiences of moving big data techniques between different academic disciplines); a view of the role of knowledge engineering and big data in bridging the medical research/healthcare provision divide by Anthony Brookes; a view of the potential role of big data in improving public services by Max Wind-Cowie; and three shorter talks immediately after lunch - Graham Prior talking about big data and curation, Devin Gafney talking about his 140Kit twitter-analytics project (which, coincidentally, is hosted on our infrastructure) and Simon Hodson talking about the JISC's big data activities.

All of the videos and slides from the day are available at the links above. Enjoy!

For my part, there were several take-home messages:

  • Firstly, that we shouldn’t get too hung up on the word ‘big’. Size is clearly one dimension of the big data challenge but of the three words most commonly associated with big data - volume, velocity and variety - it strikes me that volume is the least interesting and I think this was echoed by several of the talks on the day.
  • In particular, it strikes me there is some confusion between ‘big data’ and ‘data that happens to be big’ - again, I think we saw some of this in some of the talks. Whilst the big data label has helped to generate interest in this area, it seems to me that its use of the word 'big' is rather unhelpful in this respect. It also strikes me that the JISC community, in particular, has a history of being more interested in curating and managing data than in making use of it, whereas big data is more about the latter than the former.
  • As with most new innovations (though 'evolution' is probably a better word here) there is a temptation to focus on the technology and infrastructure that makes it work, particularly amongst a relatively technical audience. I am certainly guilty of this. In practice, it is the associated cultural change that is probably more important. Max Wind-Cowie’s talk, in particular, referred to the kinds of cultural inertia that need to be overcome in the public sector, on both the service provider side and the consumer side, before big data can really have an impact in terms of improving public services. Attitudes like, "how can a technology like big data possibly help me build a *closer* and more *personal* relationship with my clients?" or "why should I trust a provider of public services to know this much about me?" seem likely to be widespread. Though we didn't hear about it on the day, my gut feeling is that a similar set of issues would probably apply in education were we, for example, to move towards a situation where we make significant use of big data techniques to tailor learning experiences at an individual level. My only real regret about the event was that I didn't find someone to talk on this theme from an education perspective.
  • Several talks referred to the improvements in 'evidence-based' decision-making that big data can enable. For example, Rob Anderson talked about poor business decisions currently being based on poor data, and Anthony Brookes discussed the role of knowledge engineering in improving the ability of those involved in front-line healthcare provision to take advantage of the most recent medical research. As Adam Cooper of CETIS argues in Analytics and Big Data - Reflections from the Teradata Universe Conference 2012, we need to find ways to ask questions that have efficiency or effectiveness implications, and we need to look for opportunities to exploit near-real-time data if we are to see benefits in these areas.
  • I have previously raised the issue of possible confusion, especially in the government sector, between 'open data' and 'big data'. There was some discussion of this on the day. Max Wind-Cowie, in particular, argued that 'open data' is a helpful - indeed, a necessary - step in encouraging the public sector to move toward a more transparent use of public data. The focus is currently on the open data agenda but this will encourage an environment in which big data tools and techniques can flourish.
  • Finally, the issue that almost all speakers touched on to some extent was that of the need to grow the pool of people who can undertake data analytics. Whether we choose to refer to such people as data scientists, knowledge engineers or something else there is a need for us to grow the breadth and depth of the skills-base in this area and, clearly, universities have a critical role to play in this.

As I mentioned in my opening to the day, Eduserv's primary interest in Big Data is somewhat mundane (though not unimportant) and lies in the enabling resources that we can bring to the communities we serve (education, government, health and other charities), either in the form of cloud infrastructure on which big data tools can be run or in the form of data centre space within which physical kit dedicated to Big Data processing can be housed. We have plenty of both and plenty of bandwidth to JANET so if you are interested in working with us, please get in touch.

Overall, I found the day enlightening and challenging and I should end with a note of thanks to all our speakers who took the time to come along and share their thoughts and experiences.

[Photo: Eliot Hall, Eduserv]

April 02, 2012

Big data, big deal?

Some of you may have noticed that Eduserv's annual symposium is happening on May 10. Once again, we're at the Royal College of Physicians in London and this year we are looking at big data, appropriate really... since 2012 has been widely touted as being the year of big data.

Here's the blurb for our event:

Data volumes have been growing exponentially for a long while – so what’s new now? Is Big Data [1] just the latest hype from vendors chasing big contracts? Or does it indeed present wholly new challenges and critical new opportunities, and if so what are they?

The 2012 Symposium will investigate Big Data, uncovering what makes it different from what has gone before and considering the strategic issues it brings with it: both how to use it effectively and how to manage it.  It will look at what Big Data will mean across research, learning, and operations in HE, and at its implications in government, health, and the commercial sector, where large-scale data is driving the development of a whole new set of tools and techniques.

Through presentations and debate delegates will develop their understanding of both the likely demands and the potential benefits of data volumes that are growing disruptively fast in their organisation.

[1] Big Data is "data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn't fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it."  What is big data?  Edd Dumbill, O'Reilly Radar, Jan 2012

As usual, the event is free to attend and will be followed by a drinks reception.

You'll note that we refer to Edd Dumbill's What is big data? article in order to define what we mean by big data and I recommend reading this by way of an introduction for the day. The Wikipedia page for Big data provides a good level of background and some links for further reading. Finally, O'Reilly's follow-up publication, Planning for Big Data - A CIO's Handbook to the Changing Data Landscape is also worth a look (and is free to download as an e-book).

You'll also note that the defining characteristics of big data include not just 'size' (though that is certainly an important dimension) but also 'rate of creation and/or change', and 'structural coherence'. These are typically known as the three Vs - "volume (amount of data), velocity (speed of data in/out), and variety (range of data types, sources)". In looking around for speakers, my impression is that there is a strong emphasis on the first of these in people's general understanding about what big data means (which is not surprising given the name) and that in the government sector in particular there is potential confusion between 'big data' and 'open data' and/or 'linked data' which I think it would be helpful to unpick a little - big data might be both 'open' and 'linked' but isn't necessarily so.

So, what do we hope to get out of the day? As usual, it's primarily a 'bringing people up to speed' type of event. The focus will be on our charitable beneficiaries, i.e. organisations working in the area of 'public good' - education, government, health and the charity sector - though I suspect that the audience will be mainly from the first of these. The intention is for people to leave with a better understanding of why big data might be important to them and what impact it might have in both strategic and practical terms on the kinds of activities they undertake.

We have a range of speakers, providing perspectives from inside and outside of those sectors, both hands-on and more theoretical - this is one of the things we always try to do at our symposia. Our sessions include keynotes by Anthony D. Joseph (Chancellor's Associate Professor in Computer Science at University of California, Berkeley) and Rob Anderson (CTO EMEA, Isilon Storage Division at EMC) as well as talks by Professor Anthony J Brookes (Department of Genetics at the University of Leicester), Dr. Guy Coates (Informatics Systems Group at The Wellcome Trust Sanger Institute) and Max Wind-Cowie (Head of the Progressive Conservatism Project, Demos - author of The Data Dividend).

By the way... we still have a couple of speaking slots available and are particularly interested in getting a couple of short talks from people with practical experience of working with big data, either using Hadoop or something else. If you are interested in speaking for 15 minutes or so (or if you know of someone who might be) please get in touch. Thanks. Another area that I was hoping to find a speaker to talk about, but haven't been able to so far, is someone who is looking at the potential impact of big data on learning analytics, either at the level of a single institution or, more likely, at a national level. Again, if this is something you are aware of, please get in touch. Crowd-sourced speakers FTW! :-)
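For anyone who hasn't met Hadoop, the programming model it popularised is easy to sketch. The toy example below is pure Python with an invented two-line 'corpus' (no Hadoop required) and shows the map and reduce halves of the classic word count; on a real cluster Hadoop would run many mappers in parallel and shuffle their output to the reducers:

```python
from collections import defaultdict

def mapper(lines):
    # Emit a (word, 1) pair for every whitespace-separated token.
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def reducer(pairs):
    # Sum the counts per word; the order of the incoming pairs doesn't matter.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Invented input standing in for a large distributed file.
counts = reducer(mapper(["big data is big", "data moves fast"]))
print(counts)  # {'big': 2, 'data': 2, 'is': 1, 'moves': 1, 'fast': 1}
```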

All in all, I'm confident that this will be an interesting and informative day and a good follow-up to last year's symposium on the cloud - I look forward to seeing you there.

October 03, 2011

Virtual World Watch taking submissions for new Snapshot Report

John Kirriemuir has put out a new call for contributions to a tenth Virtual World Watch "snapshot report" on the use of virtual worlds in education in the UK and, this time, in Ireland too. His deadline for submissions is November 14 2011.

The activity is no longer funded under the Eduserv Research Programme, but John has obtained "a small amount of independent funding to carry out another snapshot over the remainder of the year", and Andy and I continue to be members of an informal "advisory board" for the activity (which means, err, we get the occasional email from John which prods us into writing blog posts like this one!)

Part of John's plan is to try to draw attention to the resulting report (and to contributors' work covered in it) by "pushing" it to various agencies, including:

  • UK funding bodies who fund virtual world in education activities
  • Journalists who specialise in technology in education news
  • Relevant government and civil service departments
  • The owners/developers of key virtual worlds
  • Major research groups (worldwide) involved in virtual world in education research

Previous reports are available here.

November 09, 2010

Student perspectives on technology in UK universities

Lawrie Phipps of the JISC has written a nice response to the recommendations of the report to HEFCE by the NUS (the UK National Union of Students), Student perspectives on technology – demand, perceptions and training needs [PDF], which makes a number of recommendations around ICT strategy, staff training and so on. Lawrie's contention is that the:

challenge arising from this report is not how to use more technology, nor how to integrate it into practice. The challenge is articulating our existing practice in ways that act as both an exemplar to students (and Support their own digital literacy), and enhance our practice by sharing the exemplary work that is already there.

From my perspective, the difference between "you're not using ICT effectively" and "we are using ICT effectively but nobody recognises that we're using ICT effectively" is somewhat moot. I prefer to see the report in terms of its findings not in terms of its recommendations (which, it seems to me, are really for universities to make anyway).

The point is that where the report indicates fairly fundamental issues, such as student "dissatisfaction that the type of technology used in HE is increasingly outdated" and that a "lack of staff engagement with the Virtual Learning Environment (VLE)" is frustrating for students, we either have to show those things not to be the case (I don't know, maybe they aren't) or acknowledge that whatever it is we are currently doing isn't working well enough? It seems hard to do the former in light of this report?

As a result, I'd tend to read the combination of the report and Lawrie's response as saying, "there are problems with the way ICT is being used to support teaching and learning in universities but we're already doing most of what the report recommends and therefore we need to do something else". Would that be unfair?

As an aside, I was struck by one of the themes highlighted by the report:

Participants expressed concerns over “surface learning” whereby a student only learns the bare minimum to meet module requirements – this behaviour was thought to be encouraged by ICT: students can easily skim-read material online, focusing on key terms rather than a broader base of understanding.

It seems harsh, to me, to lay the blame for this at the door of ICT. If there's a problem with "surface learning" (again, I can only go with what the report says here) then it presumably might have other causes... the pedagogic approaches and/or assessment strategies in use for example?

Me? I love skim-reading! I thought it was a key skill? I got about 10 paragraphs past that point in the report and stopped reading! Surface learning FTW :-)

October 25, 2010

A few brief thoughts on iTunesU

The use of iTunesU by UK universities has come up in discussions a couple of times recently, on Brian Kelly's UK Web Focus blog (What Are UK Universities Doing With iTunesU? and iTunes U: an Institutional Perspective) and on the closed ALT-C discussion list. In both cases, as has been the case in previous discussions, my response has been somewhat cautious, an attitude that always seems to be interpreted as outright hostility for some reason.

So, just for the record, I'm not particularly negative about iTunesU and in some respects I am quite positive - if nothing else, I recognise that the adoption of iTunesU is a very powerful motivator for the generation of openly available content and that has got to be a good thing - but a modicum of scepticism is always healthy in my view (particularly where commercial companies are involved) and I do have a couple of specific concerns about the practicalities of how it is used:

  • Firstly that students who do not own Apple hardware and/or who choose not to use iTunes on the desktop are not disenfranchised in any way (e.g. by having to use a less functional Web interface). In general, the response to this is that they are not and, in the absence of any specific personal experience either way, I have to concede that to be the case.
  • Secondly (and related to the first point), that in an environment where most of the emphasis seems to be on the channel (iTunesU) rather than on the content (the podcasts), that confusion isn't introduced as to how material is cited and referred to – i.e. do some lecturers only ever refer to 'finding stuff on iTunesU', while others offer a non-iTunesU Web URL, and others still remember to cite both? I'm interested in whether universities who have adopted iTunesU but who also make the material available in other ways have managed to adopt a single way of citing the material that is on offer?

Both these concerns relate primarily to the use of iTunesU as a distribution channel for teaching and learning content within the institution. They apply much less to its use as an external 'marketing' channel. iTunesU seems to me (based on a gut feel more than on any actual numbers) to be a pretty effective way of delivering OER outside the institution and to have a solid 'marketing' win on the back of that. That said, it would be good to have some real numbers as confirmation (note that I don't just mean numbers of downloads here - I mean conversions into 'actions': new students, new research opps, etc.). Note that I also don't consider 'marketing' to be a dirty word (in this context) - actually, I guess this kind of marketing is going to become increasingly important to everyone in the HE sector.

There is a wider, largely religious, argument about whether "if you are not paying for it, you aren't the customer, you are part of the product" but HE has been part of the MS product for a long while now and, worse, we have paid for the privilege – so there is nothing particularly new there. It's not an argument that particularly bothers me one way or the other, provided that universities have their eyes open and understand the risks as well as the benefits. In general, I'm sure that they do.

On the other hand, while somebody always owns the channel, some channels seem to me to be more 'open' (I don't really want to use the word 'open' here because it is so emotive but I can't think of a better one) than others. So, for example, I think there are differences in an institution adopting YouTube as a channel as compared with adopting iTunesU as a channel and those differences are largely to do with the fit that YouTube has with the way the majority of the Web works.

October 13, 2010

What current trends tell us about the future of federated access management in education

As mentioned previously, I spoke at the FAM10 conference in Cardiff last week, standing in for another speaker who couldn't make it and using material crowdsourced from my previous post, Key trends in education - a crowdsource request, to inform some of what I was talking about. The slides and video from my talk follow:

As it turns out, describing the key trends is much easier than thinking about their impact on federated access management - I suppose I should have spotted this in advance - so the tail end of the talk gets rather weak and wishy-washy. And you may disagree with my interpretation of the key trends anyway. But in case it is useful, here's a summary of what I talked about. Thanks to those of you who contributed comments on my previous post.

By way of preface, it seems to me that the core working assumptions of the UK Federation have been with us for a long time - like, at least 10 years or so - essentially going back to the days of the centrally-funded Athens service. Yet over those 10 years the Internet has changed in almost every respect. Ignoring the question of whether those working assumptions still make sense today, I think it certainly makes sense to ask ourselves about what is coming down the line and whether our assumptions are likely to still make sense over the next 5 years or so. Furthermore, I would argue that federated access management as we see it today in education, i.e. as manifested through our use of SAML, shows a rather uncomfortable fit with the wider (social) web that we see growing up around us.

And so... to the trends...

The most obvious trend is the current financial climate, which won't be with us for ever of course, but which is likely to cause various changes while it lasts and where the consequences of those changes, university funding for example, may well be with us much longer than the current crisis. In terms of access management, one impact of the current belt-tightening is that making a proper 'business case' for various kinds of activities, both within institutions and nationally, will likely become much more important. In my talk, I noted that submissions to the UCISA Award for Excellence (which we sponsor) often carry no information about staff costs, despite an explicit request in the instructions to entrants to indicate both costs and benefits. My point is not that institutions are necessarily making the wrong decisions currently but that the basis for those decisions, in terms of cost/benefit analysis, will probably have to become somewhat more rigorous than has been the case to date. Ditto for the provision of national solutions like the UK Federation.

More generally, one might argue that growing financial pressure will encourage HE institutions into behaving more and more like 'enterprises'. My personal view is that this will be pretty strongly resisted, by academics at least, but it may have some impact on how institutions think about themselves.

Secondly, there is the related trend towards outsourcing and shared services, with the outsourcing of email and other apps to Google being the most obvious example. Currently that is happening most commonly with student email but I see no reason why it won't spread to staff email as well in due course. At the point that an institution has outsourced all its email to Google, can one assume that it has also outsourced at least part of its 'identity' infrastructure as well? So, for example, at the moment we typically see SAML call-backs being used to integrate Google mail back into institutional 'identity' and 'access management' systems (you sign into Google using your institutional account) but one could imagine this flipping around such that access to internal systems is controlled via Google - a 'log in with Google' button on the VLE for example. Eric Sachs, of Google, has recently written about OpenID in the Enterprise SaaS market, endorsing this view of Google as an outsourced identity provider.
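The 'flip' described above is easy to picture in protocol terms. Here's a minimal sketch of the redirect behind a hypothetical 'log in with Google' button on a VLE, using the OAuth 2.0 authorisation-code pattern; the client id, redirect URI and scope are invented placeholders, and a real deployment involves a further step in which the VLE exchanges the returned code for the user's identity:

```python
from urllib.parse import urlencode

# Google's OAuth 2.0 authorisation endpoint; everything passed to it
# below is an invented placeholder, not a working configuration.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/auth"

def login_redirect_url(client_id, redirect_uri, scope="openid email"):
    # Build the URL the 'log in with Google' button sends the browser to.
    # Google authenticates the user, then redirects back to redirect_uri
    # with a short-lived code the VLE can exchange for the user's identity.
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

print(login_redirect_url("demo-vle", "https://vle.example.ac.uk/callback"))
```

The point of the sketch is where the authority sits: the institution's system never sees a password, it simply trusts the identity assertion that comes back from Google.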

Thirdly, there is the whole issue of student expectations. I didn't want to talk to this in detail but it seems obvious that an increasingly 'open' mashed and mashable experience is now the norm for all of us - and that will apply as much to the educational content we use and make available as it does to everything else. Further, the mashable experience is at least as much about being able to carry our identities relatively seamlessly across services as it is about the content. Again, it seems unclear to me that SAML fits well into this kind of world.

There are two other areas where our expectations and reality show something of a mis-match. Firstly, our tightly controlled, somewhat rigid approach to access management and security is at odds with the rather fuzzy (or at least fuzzily interpreted) licences negotiated by Eduserv and JISC Collections for the external content to which we have access. And secondly, our over-arching sense of the need for user privacy (the need to prevent publishers from cross-referencing accesses to different resources by the same user, for example) is holding back the development of personalised services and runs somewhat counter to the kinds of things we see happening in mainstream services.

Fourthly, there's the whole growth of mobile - the use of smart-phones, mobile handsets, iPhones, iPads and the rest of it - and the extent to which our access management infrastructure works (or not) in that kind of 'app'-based environment.

Then there is the 'open' agenda, which carries various aspects to it - open source, open access, open science, and open educational resources. It seems to me that the open access movement cuts right to the heart of the primary use-case for federated access management, i.e. controlling access to published scholarly literature. But, less directly, the open science movement, in part, pushes researchers towards the use of more open 'social' web services for their scholarly communication where SAML is not typically the primary mechanism used to control access.

Similarly, the emerging personal learning environment (PLE) meme (a favourite of educational conferences currently), where lecturers and students work around their institutional VLE by choosing to use a mix of external social web services (Flickr, Blogger, Twitter, etc.), again encourages the use of external services that are not impacted by our choices around the identity and access management infrastructure and over which we have little or no control. I was somewhat sceptical about the reality of the PLE idea until recently. My son started at the City of Bath College - his letter of introduction suggested that he create a Google Docs account so that he could do his work there and submit it using email or Facebook. I doubt this is college policy but it was a genuine example of the PLE in practice, so perhaps my scepticism is misplaced.

We also have the changing nature of the relationship between students and institutions - an increasingly mobile and transitory student body, growing disaggregation between the delivery of learning and accreditation, a push towards overseas students (largely for financial reasons), and increasing collaboration between institutions (both for teaching and research) - all of which have an impact on how students see their relationship with the institution (or institutions) with whom they have to deal. Will the notion of a mandated 3 or 4 year institutional email account still make sense for all (or even most) students in 5 or 10 years time?

In a similar way, there's the changing customer base for publishers of academic content to deal with. At the Eduserv Symposium last year, for example, David Smith of CABI described how they now find that having exposed much of their content for discovery via Google they have to deal with accesses from individuals who are not affiliated with any institution but who are willing to pay for access to specific papers. Their access management infrastructure has to cope with a growing range of access methods that sit outside the 'educational' space. What impact does this have on their incentives for conforming to education-only norms?

And finally there's the issue of usability, and particularly the 'where are you from' discovery problem. Our traditional approach to this kind of problem is to build a portal and try to control how the user gets to stuff, such that we can generate 'special' URLs that get them to their chosen content in such a way that they can be directed back to us seamlessly in order to log in. I hate portals, at least insofar as they have become an architectural solution, so the less said the better. As I said in my talk, WAYFless URLs are an abomination in architectural terms, saved only by the fact that they work currently. In my presentation I played up the alternative usability work that the Kantara ULX group have been doing in this area, which it seems to me is significantly better than what has gone before. But I learned at the conference that Shibboleth and the UK WAYF service have both also been doing work in this area - so that is good. My worry though is that this will remain an unsolvable problem, given the architecture we are presented with. (I hope I'm wrong but that is my worry.) As a counterpoint, in the more... err... mainstream world we are seeing a move towards what I call the 'First Bus' solution (on the basis that in many UK cities you only see buses run by the First Group, despite the fact that bus companies are supposed to operate in a free market) where you only see buttons to log in using Google, Facebook and one or two others.

I'm not suggesting that this is the right solution - just noting that it is one strategy for dealing with an otherwise difficult usability problem.
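For what it's worth, the WAYFless URL pattern mentioned above is simple to illustrate: the link pre-answers the 'where are you from' question by baking the user's home IdP into the URL itself. A minimal sketch of the common Shibboleth-style SP-side form follows (all hostnames are invented for illustration):

```python
from urllib.parse import urlencode

def wayfless_url(sp_login_handler, idp_entity_id, target):
    # Name the user's home IdP (entityID) directly in the link, so the
    # service provider skips its discovery page; 'target' is the resource
    # the user should land on after signing in at their own IdP.
    return sp_login_handler + "?" + urlencode({
        "entityID": idp_entity_id,
        "target": target,
    })

url = wayfless_url(
    "https://journals.example.com/Shibboleth.sso/Login",
    "https://idp.example.ac.uk/idp/shibboleth",
    "https://journals.example.com/article/123",
)
print(url)
```

The architectural objection is visible in the sketch: every such link hard-codes one institution's IdP, so a library ends up minting and maintaining a different URL for the same resource per federation partner.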

Note that we are also seeing some consolidation around technology - notably OpenID and OAuth - though often in ways that hide it from public view (e.g. hidden behind a 'login with Google' or 'login with Facebook' button).

Which essentially brings me to my concluding screen - you know, the one where I talk about all the implications of the trends above - which is where I have less to say than I should! Here's the text more-or-less copy-and-pasted from my final slide:

  • ‘education’ is a relatively small fish in a big pond (and therefore can't expect to drive the agenda)
  • mainstream approaches will win (in the end) - ignoring the difficult question of defining what is mainstream
  • for the Eduserv OpenAthens product, Google is as big a threat as Shibboleth (and the same is true for Shibboleth)
  • the current financial climate will have an effect somewhere
  • HE institutions are probably becoming more enterprise-like but they are still not totally like commercial organisations and they tend to occupy an uncomfortable space between the ‘enterprise’ and the ‘social web’ driven by different business needs (c.f. the finance system vs PLEs and open science)
  • the relationships between students (and staff) and institutions are changing

In his opening talk at FAM10 the day before, David Harrison had urged the audience to become leaders in the area of federated access management. In a sense I want the same. But I also want us, as a community, to become followers - to accept that things happen outside our control and to stop fighting against them the whole time.

Unfortunately, that's a harder rallying call to make!

Your comments on any/all of the above are very much welcomed.

September 17, 2010

Key trends in education? - a crowdsource request

I've been asked to give a talk at FAM10 (an event "to discuss federated identity and access management within the UK") replacing someone who has had to drop out, hence the rather late notice. I therefore wasn't first choice, nor would I expect to be, but having been asked I feel reluctant to simply say no and my previous posts here tend to indicate that I do have views on the subject of federated access management, particularly as it is being implemented in the UK. On the down side, there's a strong possibility that what I have to say will ruffle feathers with some combination of people in my own company (Eduserv), people at the JISC and people in the audience (probably all of them) so I need to be a bit careful. Still, that's never stopped me before :-)

I can't really talk about the technology - at least, not at a level that would be interesting for what is likely to be a highly technical FAM10 crowd. What I want to try and do instead is to take a look at current and emerging trends (technical, political and social), both in education in the UK and more broadly, and try to think about what those trends tell us about the future for federated access management.

To that end, I need your help!

Clearly, I have my own views on what the important trends might be but I don't work in academia and therefore I'm not confident that my views are sufficiently based in reality. I'd therefore like to try and crowdsource some suggestions for what you (I'm speaking primarily to people who work inside the education sector here - though I'm happy to hear from others as well) think are the key trends. I'm interested in both teaching/learning and research/scholarly communication and trends can be as broad or as narrow, as technological or as non-technological, as specific to education or as general as you like.

To keep things focused, how about I ask people to list their top 5 trends (though fewer is fine if you are struggling). I probably need more than one-word answers (sorry) so, for example, rather than just saying 'mobile', 'student expectations', 'open data' or 'funding pressure', indicate what you think those things might mean for education (particularly higher education) in the UK. I'd love to hear from people outside the UK as well as those who work here. Don't worry about the impact on 'access management' - that's my job... just think about what you think the current trends affecting higher and further education are.

Any takers? Email me at andy.powell@eduserv.org.uk or comment below.

And finally... to anyone who just thinks that I'm asking them to do my work for me - well, yes, I am :-) On the plus side, I'll collate the answers (in some form) into the resulting presentation (on Slideshare) so you will get something back.

Thanks.

September 10, 2010

Comparing Twitter usage at ALT-C - 2009 vs. 2010

I've just been taking a very quick and dirty (and totally non-scientific) look at the Summarizr Twitter summaries for #altc2009 and #altc2010. Here are some observations (which may or may not be either significant or correct):

Firstly, twittering was up roughly 35% this year on last - not surprising I guess (but definitely worth noting). Oddly, this was despite a fall in the number of twitterers, though that might be explained by the fact that the older hashtag has been in use much longer? There is also a slight, but probably not significant, shift towards a smaller number of heavier Twitter users.

Secondly, #digilit, #falt10 and #elearning appear in the top ten list of tweeted hashtags this year, none of which appeared last year. I suggest that #digilit and #falt10 reflect a growing interest in digital literacy more generally and a greater awareness of the fringe event by the main conference respectively. Conversely, #elearning seems a bit odd (to me) since I would have expected that term to be disappearing? The #fail hashtag also appears this year but not last which might be taken to indicate some problems with technology (or something else)? On the other hand, #awesome appears this year but not last, which either indicates some success somewhere or the fact that we are all getting more American in our use of language? If so, shame on us!

Thirdly, the top 10 tweeted URLs this year include a majority of blog posts (there were none last year), which either indicates that people are tending to blog earlier (and more) or that there is more of a culture of re-tweeting links to blog posts than there used to be (or both)?

Finally, the list of most commonly appearing words in the Twitter stream for both years is largely useless as an indication of what was being talked about. However, I note that 'mobile', 'learner' and 'lecture' all make an appearance this year but not last ('lecture' for obvious reasons, given the first keynote) whereas discussion about the 'vle' seems to be shrinking.

As I say, all of this is anecdotal and unscientific, so treat accordingly.
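For what it's worth, this sort of year-on-year comparison boils down to a couple of set operations. Here's a toy sketch - the hashtag lists below are invented for illustration, not the real Summarizr output:

```python
# Toy year-on-year hashtag comparison. These tag lists are made up
# for illustration - they are not the actual Summarizr figures.
top_2009 = {"#altc2009", "#moodle", "#vle", "#edtech", "#jisc"}
top_2010 = {"#altc2010", "#digilit", "#falt10", "#elearning",
            "#fail", "#awesome", "#vle"}

# Tags in this year's top list but not last year's, and vice versa.
new_this_year = sorted(top_2010 - top_2009)
dropped = sorted(top_2009 - top_2010)

print("New this year:", new_this_year)
print("Dropped:", dropped)
```

The interesting editorial work, of course, is in interpreting why a tag appears or disappears - the set difference just tells you where to look.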

September 09, 2010

Making sense of the ALT-C change

What I learned as a remote attendee at this year's ALT-C conference, "Into something rich and strange" - making sense of the sea-change - a brief, partial and highly irreverent view.

Firstly... on the technology front it was good to see that audio, slides and the occasional Twitpic from those delegates who made the effort to turn up was pretty much as good as a real [tm] video stream. Also that Java is alive and well on the desktop - Elluminate being one of the few remaining applications that still require me to have Java on my local machine. Still, it's good to remind ourselves every so often what we thought the future of distributed computing was going to be like about 10 years ago.

Secondly... I learned that lecturers are called lecturers because they lecture - a medieval practice of imparting knowledge by reading an old-fashioned auto-cue rather badly. Being called a lecturer is an indication of status apparently. They're not called teachers because they don't teach, despite the fact that we really need teachers in HE - so somewhere along the line we got our wires crossed and ended up in the bargain basement. Oddly, everyone knows that lectures are sub-optimal but if you stand up and say that in a room full of people, half of whom are lecturers, half of whom used to be lecturers and all of whom think they know better, you don't make many friends, especially if you don't say what you think needs to replace them. Even more oddly, lectures are so bad that the only way of making things better is to video them and put them on YouTube so that people can watch them over and over again. I think it's called aversion therapy. Of course, the ALT-C Programme Committee feel duty bound to appoint a 'the lecture is dead' keynote speaker every year and even go so far as to send them the same images to use in their slides - you know, the one of the monk asleep at the back and the other one with the kids dressed up as Victorians and sitting in rows.

I also found out that a university education is a public good - as opposed to a private good or a public bad - and that market forces are OK in science and engineering but not in the "useless" subjects. Amazingly, most of the politicians around at the moment did those very same useless subjects - many of them reading something called PPE at Oxbridge, which I can only assume is Physical Education with an emphasis on the physical and which clearly had a very popular module called "101 How to fiddle your expenses". On that basis, the definition of "useless" being used here includes the notion of getting a fast-track to lots of political power and money.  Margaret Thatcher was the only one to buck the trend, reading Chemistry apparently, but the less said about that the better - I don't want to give all chemists a bad name.

One other thing... if you're a builder who doesn't mind using computers when you don't have enough bricks to fill all the holes in your walls, especially if you don't mind kids playing on your building site, you can probably do a very popular keynote slot at future ALT-Cs. Children apparently learn best when you take away all the teachers, give them a computer (best if it's firmly attached to a wall to stop them nicking it) and let them get on with it. Many of them turn into rocket scientists within 2 or 3 years. Nobody is quite sure if this also applies to teenagers and adults but it's worth a shot I reckon. I suggest using the University of Hull as an experiment - sacking all the lecturers, moving the computers outside, and letting the students get on with it. We could call it the University of Hull in the Wall and see if we get away with it?

Unfortunately, I missed out on F-ALT, the alternative ALT-C, this year cos I forgot to set my Twitter radar to the correct hashtag, so I can't report on that particular experience. It's been so long since I took part in F-ALT that they've probably withdrawn my membership.

Oh well... here's to next year's conference. I don't know what it'll be called yet but it'll probably be some 'clever' reference to the massive changes happening in the wider world whilst ignoring the complete lack of change inside the sector.

Change? What change?

In the words of the late, great Kenny Everett... all meant in the best possible taste!

June 10, 2010

Is the e-book glass half full or half empty in UK academia?

There was an article about e-book uptake in the (US) university sector in the THE the other day, re-printed from Inside Higher Ed: The E-Book Sector.

The piece suggests that uptake might be less than the general hype around e-books indicates, except in the world of for-profit online education (I'm not sure how that applies in the UK?):

Among the respondents to a 2009 Campus Computing Project survey of 182 online programmes at non-profit universities, 9 per cent said e-textbooks were “widely used” at their institutions, while nearly half said electronic versions were “rarely used”. Even fewer brick-and-mortar institutions are deploying e-books in lieu of hard copies, with fewer than 5 per cent citing e-book deployment as a key IT priority in the short term, according to another Campus Computing Project survey. And according to data from market research firm Student Monitor, e-textbooks accounted for only 2 per cent of all textbook sales last autumn.

In the UK, the final report from the JISC-funded National e-Books Observatory Project apparently paints a rather different picture:

E-books are now part of the academic mainstream: nearly 65% of teaching staff and students have used an e-book to support their work or study or for leisure purposes.

My initial reaction was that these two statements seem at odds with each other but on reflection I think not - "nearly half said electronic versions were 'rarely used'" isn't that different from "nearly 65% of teaching staff and students have used an e-book", it's just got a different emphasis.

As with our own snapshots of 3-D virtual world usage in UK education, carried out on our behalf by John Kirriemuir (a project whose funding from us has coincidentally just come to an end, though John plans to continue the work in other ways), stats are easy to play with. Whilst it may be technically correct to say "all UK universities are active in virtual worlds", doing so isn't particularly helpful since the uptake may be extremely patchy across each institution.

Nonetheless, the 65% figure quoted by the JISC-funded study seems very high to me (based on my very limited experience of the uptake of these things). Are e-books really gaining ground in UK academia that fast?

(I note that the JISC study doesn't actually define what it means by e-book, other than to say "it refers to generic e-books available via the library, retail channels or on the web". I'm assuming that the study uses that term in line with the Wikipedia definition:

An e-book (short for electronic book and also known as a digital book, ebook, and eBook) is an e-text that forms the digital media equivalent of a conventional printed book, sometimes restricted with a digital rights management system.

but I'm not sure.)

March 30, 2010

Mobile use at Edinburgh

The IS team at the University of Edinburgh have released the results of their survey into Mobile Services 2010. The online survey was undertaken during a 16 day period in March this year and received 1989 responses - pretty impressive I think.

The headline results are as follows:

  • 49% of students surveyed have smartphones.
  • Apple accounted for 35% of smart handsets, followed closely by Nokia at 25% and Blackberry at 17%.
  • 68% of students have pay monthly contracts.
  • 39% have a contract that gives unlimited access to the internet.
  • An average of 50% of students access Email and Facebook through their mobiles several times a day.
  • 25% claim to have no internet access from their handsets.
  • The top 3 potential University services which students would most like to see available from their mobiles are:
    • Course Information
    • Exam and course timetables
    • PC availability in Open Access Labs.

The balance of handset manufacturers in the second bullet point (I assume the switch in language from 'smartphone' to 'smart handset' isn't significant?) doesn't seem too out of kilter with the figures reported by StatCounter (e.g. see their Top 9 mobile browsers in UK from Mar 09 to Mar 10) though I guess the lower figure for BlackBerry in the Edinburgh survey is indicative of the particular audience (and, in any case, StatCounter is measuring usage rather than ownership so I'm not sure it is meaningful to compare the figures anyway).

Not all that surprisingly, access to course information and timetabling is a winner in terms of desired mobile functionality for students.

It would be interesting to see similar data for staff.

And my favourite quote... "Can the wireless service be made to NOT log me out after like, 5 minutes of inactivity?" :-)

February 09, 2010

Virtual World Watch survey call for information

John Kirriemuir has issued a request for updated information for his eighth Virtual World Watch "snapshot" survey of the use of virtual worlds in UK Higher and Further Education.

Previous survey reports can be found on the VWW site.

For further information about the sort of information John is after, see his post. He would like responses by the end of February 2010.

Our period of funding for this work is approaching its end, so this will be the last survey funded under the Eduserv Research Programme. John is planning to continue some Virtual World Watch activity, at least through 2010, as he indicates in this presentation which he gave to the recent "Where next for Virtual Worlds?" (wn4vw) meeting in London:

The slides from the other presentations from the wn4vw meeting (including a video of the opening presentation by Ralph Schroeder) are also available here, and you can find an archive of tagged Twitter posts from the day here.

I enjoyed the meeting (even if I'm not sure we really arrived at many concrete answers to the question of "where next?"), but it also felt quite sad. It marked the end of the projects Eduserv funded in 2007 on the use of virtual worlds in education. That grants call was the first one I was involved with after joining Eduserv in 2006, and although it was an area that was completely new to me, the response we got, both in terms of the number of proposals and their quality, seemed very exciting. And I still look back on the 2007 Symposium as one of the most successful (if rather nerve-wracking at the time!) events I've been involved in. As things worked out, I wasn't able to follow the progress of the projects as closely as I'd have liked, but the recent meeting reminded me again of the strong sense of community that seems to have built up amongst researchers, learning technologists and educators working in this area, which seems to have outlived particular projects and programmes. Of course we only funded a handful of projects, and other funding agencies helped develop that community too (I'm thinking particularly of JISC with its Open Habitat project, and the EU MUVEnation project), but it's something I'm pleased we were able to contribute to in a small way.

January 22, 2010

On the use of Microsoft SharePoint in UK universities

A while back we decided to fund a study looking at the uptake of SharePoint within UK higher education institutions, an activity undertaken on our behalf by a team from the University of Northumbria led by Julie McLeod.  At the time of the announcement of this work we took some stick about the focus on a single, commercially licensed, piece of software - something I attempted to explain in a blog post back in May last year.  On balance, I still feel we made the right decision to go with such a focused study, and I think the popularity of the event that we ran towards the end of last year confirms that to a certain extent.

I'm very pleased to say that the final report from the study is now available.  As with all the work we fund, the report has been released under a Creative Commons licence so feel free to go ahead and make use of it in whatever way you find helpful.  I think it's a good study that summarises the current state of play very nicely.  The key findings are listed on the project home page so I won't repeat them here.  Instead, I'd like to highlight what the report says about the future:

This research was conducted in the summer and autumn of 2009. Looking ahead to 2010 and beyond the following trends can be anticipated:

  • Beginnings of the adoption of SharePoint 2010
    SharePoint 2010 will become available in the first half of 2010. Most HEIs will wait until a service pack has been issued before they think about upgrading to it, so it will be 2011 before SharePoint 2010 starts to have an impact. SharePoint 2010 will bring improvements to the social computing functionality of My Sites, with Facebook/Twitter style status updates, and with tagging and bookmarking. My Sites are significant in an HE context because they are the part of SharePoint that HEIs consider providing to students as well as staff. We have hitherto seen lacklustre take up of My Sites in HE. Some HEIs implementing SharePoint 2007 have decided not to roll out My Sites at all, others have only provided them to staff, others have made them available to staff and students but decided not to actively promote them. We are likely to see increasing provision and take up of My Sites from those HEIs that move to SharePoint 2010.
  • Fuzzy boundary between SharePoint implementations and Virtual Learning Environments
    There is no prospect, in the near future, of SharePoint challenging Blackboard’s leadership in the market for institutional VLEs for teaching and learning. Most HEIs now have both an institutional VLE, and a SharePoint implementation. Institutional VLEs are accustomed to battling against web hosted applications such as Facebook for the attention of staff and students. They now also face competition internally from SharePoint. Currently SharePoint seems to be being used at the margins of teaching and learning, filling in for areas where VLEs are weaker. HEIs have reported SharePoint’s use for one-off courses and small scale courses; for pieces of work requiring students to collaborate in groups, and for work that cannot fit within the confines of one course. Schools or faculties that do not like their institution’s proprietary VLE have long been able to use an open source VLE (such as Moodle) and build their own VLE in that. Now some schools are using SharePoint and building a school specific VLE in SharePoint. However, SharePoint has a long way to go before it is anything more than marginal to teaching and learning.
  • Increase in average size of SharePoint implementations
    At the point of time in which the research was conducted (summer and autumn of 2009) many of the implementations examined were at an early stage. The boom in SharePoint came in 2008 and 2009, as HEIs started to pick up on SharePoint 2007. We will see the maturation of many implementations which are currently less than a year old. This is likely to bring with it some governance challenges (for example ‘SharePoint sprawl’) which are not apparent when implementations are smaller. It will also increase the percentage of staff and students in HE familiar with SharePoint as a working environment. One HEI reported that some of their academics, unaware that the University was about to deploy SharePoint, have been asking for SharePoint because they have been working with colleagues at other institutions who are using it.
  • Competition from Google Apps for the collaboration space
    SharePoint seems to have competed successfully against other proprietary ECM vendors in the collaboration space (though it faces strong competition from both proprietary and open source systems in the web content management space and the portal space). It seems that the most likely form of new competition in the collaboration space will come in the shape of Google Apps which offers significantly less functionality, but operates on a web hosted subscription model which may appeal to HEIs that want to avoid the complexities of the configuration and management of SharePoint.
  • Formation of at least one Higher Education SharePoint User Group
    It is surprising that there is a lack of Higher Education SharePoint user groups. There are two JISCmail groups (SharePoint-Scotland and YH-SharePoint) but traffic on these two lists is low. The formation of one or more active SharePoint user groups would seem to be essential given the high level of take up in the sector, the complexity of the product, the customisation and configuration challenges it poses, and the range of uses to which it can be put. Such a user group or groups could support the sharing of knowledge across the sector, provide the sector with a voice in relation to both Microsoft and to vendors within the ecosystem around SharePoint, and enable the sector to explore the implications of Microsoft’s increasing dominance within higher education, as domination of the collaboration space is added to its domination of operating systems, e-mail servers, and office productivity software.

On the last point, I am minded to wonder what a user group actually looks like in these days of blogs, Twitter and other social networks? Superficially, it feels to me like a concept rooted firmly in the last century. That's not to say that there isn't value in collectively being able to share our experiences with a particular product, both electronically and face-to-face, nor in being able to represent a collective view to a particular vendor - so there's nothing wrong with the underlying premise. Perhaps it is just the label that feels outdated?

December 22, 2009

Online learning in virtual environments with SLOODLE - final report

The final report from the Online Learning In Virtual Environments with SLOODLE project, led by Dan Livingston of the University of the West of Scotland, is now available.  SLOODLE was one of the Second Life projects that we funded back in 2007, following a call for proposals in November 2006.  Seems like a long time ago now!

Reading the report, it is clear that the project became as much about building a community of SLOODLE users as it was about developing some open source software - which, of course, is how all good open source projects should be, but it doesn't always work out like that.  In this case however, I think the project has been very successful and the numbers on page 4 of the report give some evidence of that.

I must admit that I have always had a nagging doubt about the sense of bringing together the kind of semi-formalised learning environment that is typical of VLEs such as Moodle with the, shall we say anarchic(?), less structured learning opportunities presented by virtual worlds in general and Second Life in particular.  To a certain extent I think the project mitigated this by developing a wide-ranging set of tools, some of which are tightly integrated with Moodle and some of which are stand-alone.  Whatever... one of the things that I really like about the report is the use of User Stories towards the end.  It's clear that this stuff works for people.

And so to the future.  As Dan says in the Foreword to the report:

Although the Eduserv project has now come to an end, SLOODLE continues to keep me busy – with regular conference and workshop presentations in both physical and virtual form. Community support and development remains as important today as it was, and can now be even more challenging – with SLOODLE tools now available on multiple virtual world platforms, and with the approach of large scale installations on university faculty and central Virtual Learning Environments.

Dan, along with representatives of all the other Second Life/Virtual World projects we funded 2 years ago, will be speaking at our Where next for virtual worlds in UK higher and further education? event at the London Knowledge Lab next year (now sold out I'm afraid).

December 21, 2009

Scanning horizons for the Semantic Web in higher education

The week before last I attended a couple of meetings looking at different aspects of the use of Semantic Web technologies in the education sector.

On the Wednesday, I was invited to a workshop of the JISC-funded ResearchRevealed project at ILRT in Bristol. From the project weblog:

ResearchRevealed [...] has the core aim of demonstrating a fine-grained, access controlled, view layer application for research, built over a content integration repository layer. This will be tested at the University of Bristol and we aim to disseminate open source software and findings of generic applicability to other institutions.

ResearchRevealed will enhance ways in which a range of user stakeholder groups can gain up-to-date, accurate integrated views of research information and thus use existing institutional, UK and potentially global research information to better effect.

I'm not formally part of the project, but Nikki Rogers of ILRT mentioned it to me at the recent VoCamp Bristol meeting, and I expressed a general interest in what they were doing; they were also looking for some concrete input on the use of Dublin Core vocabularies in some of their candidate approaches.

This was the third in a series of small workshops, attended by representatives of the project from Bristol, Oxford and Southampton, and the aim was to make progress on defining a "core Research ontology". The morning session circled mainly around usage scenarios (support for the REF (and other "impact" assessment exercises), building and sustaining cross-institutional collaboration etc), and the (somewhat blurred) boundaries between cross-institutional requirements and institution-specific ones; what data might be aggregated, what might be best "linked to"; and the costs/benefits of rich query interfaces (e.g. SPARQL endpoints) v simpler literal- or URI-based lookups. In the afternoon, Nick Gibbins from the University of Southampton walked through a draft mapping of the CERIF standard to RDF developed by the dotAC project. This focused attention somewhat and led to some - to me - interesting technical discussions about variant ways of expressing information with differing degrees of precision/flexibility. I had to leave before the end of the meeting, but I hope to be able to continue to follow the project's progress, and contribute where I can.

A long train journey later, the following day I was at a meeting in Glasgow organised by the CETIS Semantic Technologies Working Group to discuss the report produced by the recent JISC-funded Semtech project, and to try to identify potential areas for further work in that area by CETIS and/or JISC. Sheila MacNeill from CETIS liveblogged proceedings here. Thanassis Tiropanis from the University of Southampton presented the project report, with a focus on its "roadmap for semantic technology adoption". The report argues that, in the past, the adoption of semantic technologies may have been hindered by a tendency towards a "top-down" approach requiring the widespread agreement on ontologies; in contrast the "linked data" approach encourages more of a "bottom-up" style in which data is first made available as RDF, and then later application-specific or community-wide ontologies are developed to enable more complex reasoning across the base data (which may involve mapping that initial data to those ontologies as they emerge). While I think there's a slight risk of overstating the distinction - in my experience many "linked data" initiatives do seem to demonstrate a good deal of thinking about the choice of RDF vocabularies and compatibility with other datasets - and I guess I see rather more of a continuum, it's probably a useful basis for planning. The report recommends a graduated approach which focusses initially on the development of this "linked data field" - in particular where there are some "low-hanging fruit" cases of data already made available in human-readable form which could relatively easily be made available in RDF, especially using RDFa.
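To make the "bottom-up" point a little more concrete, here's a minimal sketch (in Python, with URIs and predicate names invented for illustration) of the idea that data exposed as plain subject-predicate-object triples can usefully support simple URI-based lookups well before any community-wide ontology, or a full SPARQL endpoint, is in place:

```python
# A toy "bottom-up" linked data store: data published as bare
# subject-predicate-object triples, queried by simple URI lookup.
# All URIs and predicate names here are invented for illustration.
triples = [
    ("http://example.org/id/project42", "dc:title", "An invented project"),
    ("http://example.org/id/project42", "dc:creator",
     "http://example.org/id/person7"),
    ("http://example.org/id/person7", "foaf:name", "A. Researcher"),
]

def lookup(subject):
    """Return all (predicate, object) pairs recorded for a subject URI."""
    return [(p, o) for s, p, o in triples if s == subject]

for predicate, obj in lookup("http://example.org/id/project42"):
    print(predicate, obj)
```

Richer, ontology-driven reasoning (or SPARQL-style querying) can then be layered over exactly this kind of base data later, which is the graduated approach the report's roadmap describes.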

One of the issues I was slightly uneasy with in the Glasgow meeting was that occasionally there were mentions of delivering "interoperability" (or "data interoperability") without really saying what was meant by that - and I say this as someone who used to have the I-word in my job title ;-) I feel we probably need to be clearer, and more precise, about what different "semantic technologies" (for want of a better expression) enable. What does the use of RDF provide that, say, XML typically doesn't? What does, e.g., RDF Schema add to that picture? What about convergence on shared vocabularies? And so on. Of course, the learners, teachers, researchers and administrators using the systems don't need to grapple with this, but it seems to me such aspects do need to be conveyed to the designers and developers, and perhaps more importantly - as Andy highlighted in his report of related discussions at the CETIS conference - to those who plan and prioritise and fund such development activity. (As an aside, I think this is also something of an omission in the current version of the DCMI document on "Interoperability Levels": it tells me what characterises each level, and how I can test for whether an application meets the requirements of the level, but it doesn't really tell me what functionality each level provides/enables, or why I should consider level n+1 rather than level n.)

Rather by chance, I came across a recent presentation by Richard Cyganiak to the Vienna Linked Data Camp, which I think addresses some similar questions, albeit from a slightly different starting point: Richard asks the questions, "So, if we have linked data sources, what's stopping the development of great apps? What else do we need?", and highlights various dimensions of "heterogeneity" which may exist across linked data sources (use of identifiers, differences in modelling, differences in RDF vocabularies used, differences in data quality, differences in licensing, and so on).

Finally, I noticed that last Friday, Paul Miller (who was also at the CETIS meeting) announced the availability of a draft of a "Horizon Scan" report on "Linked Data" which he has been working on for JISC, as part of the background for a JISC call for projects in this area some time early in 2010. It's a relatively short document (hurrah for short reports!) but I've only had time for a quick skim through. It aims for some practical recommendations, ranging from general guidance on URI creation and the use of RDFa to more specific actions on particular resources/datasets. And here I must reiterate what Paul says in his post - it's a draft on which he is seeking comments, not the final report, and none of those recommendations have yet been endorsed by JISC. (If you have comments on the document, I suggest that you submit them to Paul (contact details here or comment on his post) rather than commenting on this post.)

In short, it's encouraging to see the active interest in this area growing within the HE sector. On reading Paul's draft document, I was struck by the difference between the atmosphere now (both at the Semtech meeting, and more widely) and what Paul describes as the "muted" conclusions of Brian Matthews' 2005 survey report on Semantic Web Technologies for JISC Techwatch. Of course, many of the challenges that Andy mentioned in his report of the CETIS conference session remain to be addressed, but I do sense that there is a momentum here - an excitement, even - which I'm not sure existed even eighteen months ago. It remains to be seen whether and how that enthusiasm translates into applications of benefit to the educational community, but I look forward to seeing how the upcoming JISC call, and the projects it funds, contribute to these developments.

October 27, 2009

Virtual World Watch Request for Information

Over at Virtual World Watch, John Kirriemuir is embarking on collecting data for his seventh "snapshot" survey of the use of virtual worlds in UK Higher and Further Education, and has issued a request for updated information:

The question

How are you using virtual worlds (e.g. Second Life, OpenSim, Metaplace, OLIVE, Active Worlds, Playstation Home, Blue Mars, Twinity, Wonderland) in teaching, learning or research?

Things you may want to include:

  • Why you are using a virtual world.
  • If teaching using a virtual world, how it fits into your curriculum.
  • Any evaluation of the experience of using the virtual world.
  • Will you do it again next year? Why (or why not)?

Please send any response to John, by Tuesday 10 November 2009. For further information, see the post on the Virtual World Watch weblog.

October 19, 2009

The ubiquitous university

(I have no idea what that title means by the way!)

We're thinking about topics for next year's Eduserv Symposium and the front runner right now (though, of course, things may well change) is to focus on some aspect related to ubiquitous computing, mobile devices, augmented reality, 'everyware', and the Internet of things.

Just a long list of buzzwords then I hear you cry?  Well, yes, maybe!

That said, it does seem to me that the impact of this particular set of buzzwords on universities (and other educational institutions) will be quite far-reaching... and therefore worth spending some time thinking about at a reasonably strategic level.  The immediate issue, for us, (and indeed, the reason for this blog post) is in choosing some kind of useful permutation of these topics to make for a reasonably focused, interesting, useful and, ultimately, well-attended symposium during May next year!

It strikes me that two aspects of these things are particularly interesting.

The first lies in pedagogy and, in particular on whether this growth in mobility (for want of a better phrase) lends itself to changes (for the better) in the way that learning and teaching happen in universities.

The second has to do with ownership and control.  As we move from a world in which universities provisioned ICT for their staff and students (services, software, hardware, and the network) to a world in which nearly all of that provisioning is, or at least can be, owned and controlled by the end-user, where does that leave the university as a provider of services?  In particular, where does it leave the role of IT Services departments?  Do they simply become a broker/facilitator between the end-user and a bunch of external providers?

Both areas look like potentially interesting topics and I'm minded to try and cover both on the day.  But I'd be very interested in hearing your views.  Does this look like a useful and interesting topic for our next symposium?  Have these issues been done to death elsewhere?  Would you attend (it is a free event followed by a very nice drinks reception after all!)?  Let me know what you think.  Thanks!

October 09, 2009

Theatron 3 - final report

The final report from the Theatron 3 project is now available.

Theatron 3 was one of the projects that we funded under our 'virtual world' grants call in 2007 - seems like a long time ago now!

The project's objectives were twofold: firstly, to construct replicas of 20 historic theatres in the virtual world of Second Life (led by the Kings Visualisation Lab, King’s College London) and, secondly, to use those theatres as the basis for various sub-projects investigating the pedagogical value of 3D virtual worlds (led by the HEA English Subject Centre and HEA Subject Centre for Dance, Drama and Music).

The project has, I think, been very successful in the first aim, somewhat less-so with the second - but one of the things I really like about the final report is the honesty with which this is reported. We always said to the project that we wanted them to share what went wrong as well as what went right because it is only by doing so that we can move forward. On that basis, I repeat the summary of the final report here and I would urge those with an interest in virtual worlds to read the report fully:

  1. Second Life is a suitable environment for creating accurate and complex structures and embedding related pedagogical content. Build times can be greatly reduced through effective workflow plans.
  2. During the lifetime of the project, Second Life was too unreliable and presented too many barriers to institutions for full testing pedagogically. It is an appropriate medium for educational innovators, but early adopters will find that there are still too many issues for incorporating it into their practice.
  3. Immersive virtual worlds as a medium present many challenges to students, particularly due to cultural attitudes and the absence of embodiment experienced by some students. The time required to invest in learning to use the environments also is a barrier to adoption. For these reasons, it may always be problematic to make the use of immersive virtual worlds mandatory for students.
  4. As a medium for studying and communicating, Second Life presents many opportunities. As a performance medium it is limited when attempting to place existing, real life performance in a different medium, but has much potential when used to explore new forms of expression.
  5. The introduction of Second Life at an institution often reveals many weaknesses in that institution’s technical and service infrastructure. These inadequacies need to be resolved before widespread adoption of these technologies can occur.
  6. Immersive virtual worlds are a relatively new technology in education, and there was little understanding of the barriers to implementation within an institution and their most appropriate application to learning when the project started. Second Life itself needed much development in terms of reliability. In the intervening two years, there have been many steps forward in understanding its application to education. The technological goals of the project were well timed in this development cycle, but in retrospect the pedagogical aims were set too early, before the capabilities and limitations of the medium were sufficiently understood. However, the lessons learned pedagogically from Theatron will be invaluable in informing future practice.

I'll end with a quote from Professor Richard Beacham of the Kings Visualisation Lab, one of the project directors:

We think virtual worlds are here to stay and are getting ready to set up residence within them. We have a number of projects in progress and in prospect, primarily in Roman buildings and housing. We are adding Noh theatre and have Noh performers in collaboration with Japanese colleagues. We are excited and also grateful that the project gave us the chance to hit the ground running and to very quickly take a lot of materials which had the potential to be incorporated into a project like this and it's given us a real head start. It's put us somewhere towards the front of the pack and that’s a very good place to be.

This is very gratifying. We always took the view that Second Life was not necessarily an end in itself. Rather that its use in highly innovative and experimental ways could provide a stepping stone to greater understanding and, potentially, to other things.

[Image: Theatre at Epidaurus, Greece - borrowed (without permission) from the Kings Visualisation Lab gallery.]

October 06, 2009

FOTE09

FOTE (the Future of Technology in Education conference organised by ULCC), which I attended on Friday, is a funny beast.  For two years running it has been a rather mixed conference overall but one that has been rescued by one or two outstanding talks that have made turning up well worthwhile and left delegates going into the post-conference drinks reception with something of a buzz.

Last year it was Miles Metcalfe of Ravensbourne College who provided the highlight.  This year it was down to Will McInnes (of Nixon/McInnes) to do the same, kicking off the afternoon with a great talk, making up for a rather ordinary morning, followed closely by James Clay (of Gloucestershire College).  If this seems a little harsh... don't get me wrong.  I thought that much of the afternoon session was worth listening to and, overall, I think that any conference that can get even one outstanding talk from a speaker is doing pretty well - this year we had at least two.  So I remain a happy punter and would definitely consider going back to FOTE in future years.

My live-blogged notes are now available in a mildly tidied up form.  This year's FOTE was heavily tweeted (the wifi network provided by the conference venue was very good) and about half-way thru the day I began to wonder if my live-blogging was adding anything to the overall stream?  On balance, and looking back at it now, I think the consistency added by my single-person viewpoint is helpful.  As I've noted before, I live-blog primarily as a way of taking notes.  The fact that I choose to take my notes in public is an added bonus (hopefully!) for anyone that wants to watch my inadequate fumblings.

The conference was split into two halves - the morning session looking at Cloud Computing and the afternoon looking at Social Media.  The day was kicked off by Paul Miller (of Cloud of Data) who gave a pretty reasonable summary of the generic issues but who fell foul, not just of trying to engage in a bit of audience participation very early in the day, but of trying to characterise issues that everyone already understood to be fuzzy and grey into shows of hands that required black and white, yes/no answers.  Nobody fell for it I'm afraid.

And that set the scene for much of the morning session.  Not enough focus on what cloud computing means for education specifically (though to his credit Ray Flamming (of Microsoft) did at least try to think some of that through and the report by Robert Moores (of Leeds Met) about their experiences with Google Apps was pretty interesting) and not enough acknowledgment of the middle ground.  Even the final panel session (for which there was nowhere near enough time by the way) tried to position panelists as either for or against but it rapidly became clear there was no such divide.  The biggest point of contention seemed to be between those who wanted to "just do it" and those who wanted to do it with greater reference to legal and/or infrastructural considerations - a question largely of pace rather than substance.

If the day had ended at lunchtime I would have gone home feeling rather let down.  But the afternoon recovered well.  My personal highlights were Will McInnes, James Clay and Dougald Hine (of School of Everything), all of whom challenged us to think about where education is going.  Having said that, I think that all of the afternoon speakers were pretty good and would likely have appealed to different sections of the audience, but those are the three that I'd probably go back and re-watch first. All the video streams are available from the conference website but here is Will's talk:

One point of criticism was that the conference time-keeping wasn't very good, leaving the final two speakers, Shirley Williams (of the University of Reading, talking about the This is Me project that we funded) and Lindsay Jordan (of the University of Bath/University of the Arts) with what felt like less than their allotted time.

For similar reasons, the final panel session on virtual worlds also felt very rushed.  I'd previously been rather negative about this panel (what, me?), suggesting that it might descend into pantomime.  Well, actually I was wrong.  I don't think it did (though I still feel a little bemused as to why it was on the agenda at all).  Its major problem was that there was only time to talk about one topic - simulation in virtual worlds - which left a whole range of other issues largely untouched.  Shame.

Overall then, a pretty good day I think.  Well done to the organisers... I know from my own experience with our symposium that getting this kind of day right isn't an easy thing to do.  I'll leave you with a quote (well, as best as I can remember it) from Lindsay Jordan who closed her talk with a slightly sideways take on Darwinism:

in the social media world the ones who survive - the fittest - are the ones who give the most

September 29, 2009

The Google Book Settlement

The JISC have made a summary of the proposed Google Book Settlement available for comment on Writetoreply (a service that I really like by the way), along with a series of questions that might usefully be considered by interested parties. Thanks to Naomi Korn and Rachel Bruce for their work on this.

Not knowing a great deal about the proposed settlement I didn't really feel able to comment but in an effort to get up to speed I decided to put together a short set of Powerpoint slides, summarising my take on the issues, based largely on the JISC text.

Here's what I came up with:

Of course, my timing isn't ideal because the proposed review meeting on the 7th October has now been replaced with a 'status update' meeting [PDF] that will "decide how to proceed with the case as expeditiously as possible". Ongoing discussion between Google and the US Department of Justice looks likely to result in changes to the proposed settlement before it gets to the review stage.

Nonetheless, I think it's useful to understand the issues that have led up to any revised settlement and in any case, it was a nice excuse to put together a set of slides using CC images of books from Flickr!

Mobile learning

At the beginning of last week I attended the CILIP MmIT (Multimedia Information & Technology Group) Annual Conference for 2009 on the topic of "Mobile Learning: What Exactly is it?" (I can't give you a link to the event because as far as I can tell there isn't one :-( ).

It wasn't a bad event actually and there were some pretty good speakers. My live-blogged notes are now available, though you should note that the wireless network was pretty flakey (somewhat ironic for a mobile learning event huh?) which means that there are some big(ish) gaps in the coverage.

There were places where I wanted more depth from the speakers but given the introductory nature of the event I think it was probably pitched about right overall.

Two thoughts came to me as the day progressed...

Firstly, it was clear that most of the projects being shown on the day were based either on hardware handed out to people specifically for the particular project or on lowest common denominator standards (i.e. SMS) that work on everybody's existing personal mobile devices. The former is clearly problematic, both in terms of sustainability and because people have to deal with an additional device. The latter typically results in less functionality being offered. At one point I asked if there was any evidence that projects were moving towards developing for specific devices, in particular for the iPhone, on the basis that doing so would allow for significantly more functionality to be delivered to the end-user.

I don't think I got a clear answer on this, though I suspect that the speaker made the assumption that I thought developing for the iPhone was a good thing (on the basis I was holding one at the time). In fact, I'm not sure I have a good feel for what is good and bad in this area - I can see advantages in keeping things simple and inclusive but I can also see that experimenting with the newest technologies allows us to try things that wouldn't be possible otherwise.

Coincidentally, a similar debate surfaced on the website-info-mgt@jiscmail.ac.uk mailing list a few days later, flowing from the announcement of the University of Central Lancashire freshers' iPhone application. In the discussion, I asked if we knew enough about the mobile devices that new freshers are bringing with them to university in order that we can make sensible decisions about which mobile device capabilities to target. In a world of limited development resources, there's not much point in developing an iPhone app if only a handful of your intended audience can afford to own one (unless you're explicitly doing it to experiment with what is possible). Brian Kelly has since picked up this theme, We Need Evidence – But What If We Don’t Like The Findings?, though focusing more on operating systems generally rather than mobile devices specifically.

Quite a few sites came back to me with stats (Brian shows some of them). I particularly like the Student IT Services Survey 2009 (PDF) undertaken by Information Services at the University of Bristol which isn't limited to freshers but which asks a whole range of useful questions. Overall, and based on the limited evidence available to date, I suggest that the iPhone and iPod Touch have fairly low penetration in the student market thus far.

It strikes me that, given a generally rising interest in mobile technology, 'everyware', ubiquitous computing, and so on for learning and research, some sort of longitudinal study of what students are bringing with them to university might not be a bad thing?

Secondly, my other thought... was that Dave White's visitors vs. residents stuff is highly pertinent to this space. Actually, for what it's worth, I don't go to any conference these days without realising that Dave's thinking in this area is highly relevant! It seems to me that many of our uses of mobile technologies are aimed at visitors - they are aimed at people who have a job to get done. Yet the really interesting thing about mobile technology is not how 'we' (the university) can use it to reach 'them' (the learner or researcher) but how they are using it to reach each other (as part of their everyday use of technology). The interesting thing is how residents are using it to live their lives online.

We need to see ourselves primarily as enablers in this space - not as direct providers of services.

June 23, 2009

Virtual World Watch publishes new Snapshot report

Yesterday, John Kirriemuir announced the publication by the Virtual World Watch project of a new issue of the "snapshot" survey reports he has been collating covering the use of virtual worlds in UK higher and further educational institutions.

In his introductory section, John highlights a couple of points:

  • In terms of subject areas, the health and medical science sector appears to be developing a high profile in terms of its use of virtual worlds. I've noticed this from my own fairly cursory tracking of activity via mailing lists and weblogs. I was slightly surprised that some of this functionality (simulations etc) isn't covered by existing software applications, but there seems to be a gap which - in some cases at least - is being addressed through the use of virtual worlds.
  • Although some technical challenges remain, in comparison with previous surveys, reports of technical obstacles to the use of virtual worlds software are diminishing. John attributes this to the dual influence of growing institutional support in some cases and unsupported individuals abandoning their efforts in others. My own occasional experience of using Second Life (which John notes remains "the virtual world of choice" in UK universities and colleges) has been that the platform seems vastly more stable than it was a couple of years ago when John embarked on these surveys - though ironically last weekend saw one of the most widespread and prolonged disruptions that I can recall in a long time.

As a footnote, I'd highlight John's point that for the next survey he is placing more emphasis on gathering information in-world, both in Second Life and in other virtual worlds. It'll be interesting to see how well this works out, as I have to admit I find the in-world discovery and communication tools somewhat limited, and I find myself relying heavily on Web-based sources (weblogs, microblogging services, Flickr, YouTube etc) to find resources of interest (and get rather frustrated when I come across interesting in-world resources that aren't promoted well on the Web!).

Anyway, as with previous installments, the report provides a large amount of detail and insights into what UK educators are doing in virtual worlds and what they are saying about their experiences.

March 20, 2009

Unlocking Audio

I spent the first couple of days this week at the British Library in London, attending the Unlocking Audio 2 conference.  I was there primarily to give an invited talk on the second day.

You might notice that I didn't have a great deal to say about audio, other than to note that what strikes me as interesting about the newer ways in which I listen to music online (specifically Blip.fm and Spotify) is that they are both highly social (almost playful) in their approach and that they are very much of the Web (as opposed to just being 'on' the Web).

What do I mean by that last phrase?  Essentially, it's about an attitude.  It's about seeing being mashed as a virtue.  It's about an expectation that your content, URLs and APIs will be picked up by other people and re-used in ways you could never have foreseen.  Or, as Charles Leadbeater put it on the first day of the conference, it's about "being an ingredient".

I went on to talk about the JISC Information Environment (which is surprisingly(?) not that far off its 10th birthday if you count from the initiation of the DNER), using it as an example of digital library thinking more generally and suggesting where I think we have parted company with the mainstream Web (in a generally "not good" way).  I noted that while digital library folks can discuss identifiers forever (if you let them!) we generally don't think a great deal about identity.  And even where we do think about it, the approach is primarily one of, "who are you and what are you allowed to access?", whereas on the social Web identity is at least as much about, "this is me, this is who I know, and this is what I have contributed". 

I think that is a very significant difference - it's a fundamentally different world-view - and it underpins one critical aspect of the difference between, say, Shibboleth and OpenID.  In digital libraries we haven't tended to focus on the social activity that needs to grow around our content and (as I've said in the past) our institutional approach to repositories is a classic example of how this causes 'social networking' issues with our solutions.

I stole a lot of the ideas for this talk, not least Lorcan Dempsey's use of concentration and diffusion.  As an aside... on the first day of the conference, Charles Leadbeater introduced a beach analogy for the 'media' industries, suggesting that in the past the beach was full of a small number of large boulders and that everything had to happen through those.  What the social Web has done is to make the beach into a place where we can all throw our pebbles.  I quite like this analogy.  My one concern is that many of us do our pebble throwing in the context of large, highly concentrated services like Flickr, YouTube, Google and so on.  There are still boulders - just different ones?  Anyway... I ended with Dave White's notions of visitors vs. residents, suggesting that in the cultural heritage sector we have traditionally focused on building services for visitors but that we need to focus more on residents from now on.  I admit that I don't quite know what this means in practice... but it certainly feels to me like the right direction of travel.

I concluded by offering my thoughts on how I would approach something like the JISC IE if I was asked to do so again now.  My gut feeling is that I would try to stay much more mainstream and focus firmly on the basics, by which I mean adopting the principles of linked data (about which there is now a TED talk by Tim Berners-Lee), cool URIs and REST and focusing much more firmly on the social aspects of the environment (OpenID, OAuth, and so on).

Prior to giving my talk I attended a session about iTunesU and how it is being implemented at the University of Oxford.  I confess a strong dislike of iTunes (and iTunesU by implication) and it worries me that so many UK universities are seeing it as an appropriate way forward.  Yes, it has a lot of concentration (and the benefits that come from that) but its diffusion capabilities are very limited (i.e. it's a very closed system), resulting in the need to build parallel Web interfaces to the same content.  That feels very messy to me.  That said, it was an interesting session with more potential for debate than time allowed.  If nothing else, the adoption of systems about which people can get religious serves to get people talking/arguing.

Overall then, I thought it was an interesting conference.  I suspect that my contribution wasn't liked by everyone there - but I hope it added usefully to the debate.  My live-blogging notes from the two days are here and here.

January 30, 2009

What would Google do?

Whilst reading Paul Miller's new(ish) blog this morning I browsed my way over to his profile page on Business Week and thence to an article entitled Detroit Should Get Cracking on its Googlemobile by Jeff Jarvis which contains a short video (8 minutes or so) interview with the author.  I haven't read the article or the book(!) but I quite liked the video despite the fact that it is not much more than an excuse to plug Jarvis' book, What would Google do?

It's a good question, and one that I tend to ask regularly in the context of things like institutional repositories.  Come to think of it, it's probably not a bad question to ask about universities more generally (as I think Jarvis does in the book).  I don't know if I would agree with Jarvis' answers but I think it is an interesting place to start a discussion.

In the video Jarvis characterises the Google approach as having four aspects:

  • Give up control to the people/your users.
  • Think like a platform and/or a network - let people build on top of what you do.
  • Scale has changed - "small is the new big".
  • Make mistakes well.

How would these characteristics apply when thinking about the way that universities operate?

Maximising the effectiveness of virtual worlds in teaching and learning

A quick note to say that the materials, audio and presentation slides, from our virtual worlds meeting that took place at the University of Strathclyde exactly 2 weeks ago, organised jointly with CETIS, are available from the meeting Wiki.

I have to confess to having missed much of the content on the day being rather unsuccessfully tied up with technology, trying to stream audio and slides from the event to a virtual audience in Second Life. I can sum my part in the day up by saying that I learned three things:

  • Firstly, having access thru a firewall to run Second Life is not the same thing as having access thru a firewall to run Second Life voice-chat.
  • Secondly, having a 3G dongle is very handy in an emergency (thanks to Sheila MacNeill of CETIS for use of hers on the day).
  • Thirdly, taking two laptops to a meeting sometimes isn't enough (but I couldn't carry any more anyway).

From my point of view the day was very frustrating, with the combination of a broken laptop and network restrictions at Strathclyde meaning that the afternoon session couldn't be streamed. But, from what I heard on the day and have seen since, we had a great selection of talks and there's material on the Wiki that is well worth viewing if you haven't done so yet.

Final thought... I note a tweet from Ren Reynolds (one of the speakers on the day) saying that delegate badges needed to list Twitter accounts and Second Life names alongside people's real names. Yes, absolutely... this is something we, and others, need to get into the habit of doing.

January 19, 2009

The strategic impact of the PLE in HE?

I was chatting to a colleague earlier on today about the state of learning management systems in UK higher education.  My sense of the current situation goes something like this:

  1. The traditional virtual learning environment (VLE) market is now quite mature and largely sewn up by Moodle and Blackboard.
  2. Neither of the systems in 1 is viewed particularly positively, either by learners (because of poor usability) or teaching staff (because of limited pedagogic possibilities/flexibility).
  3. As a consequence of 2, some thought leaders (i.e. those people who write about such things in blogs, etc.) are suggesting a move towards unbundling current VLE functionality across multiple services (some of which are inside the institution and some outside) in the form of the personal learning environment (PLE).
  4. Conversely, institutional investment in one or other of the systems in 1 is pretty high, so there is a significant level of policy/strategic inertia to overcome if institutions really are going to change as per 3.
  5. There is a growing lack of clarity in the marketplace as we see cross-over between VLE-functionality and repositories (e.g. IntraLibrary), e-portfolio systems (e.g. PebblePad), collaborative tools (e.g. Huddle) and blogging tools (e.g. Wordpress).

Is that a reasonable summary?

As a result of the conversation, I asked on Twitter, "is the PLE approach (unbundling monolithic vle functionality) having any significant impact on real institutional strategic thinking yet?", to which I got 5 or 6 responses, one of which suggested that it is (Ravensbourne) while the others were somewhat more hesitant.  Clearly, this provides nothing other than a snapshot that is both random and partial!  Heather Williamson of the JISC also suggested that their User-owned Technology Demonstrator projects might be able to help with the answer in the longer term.

It's an interesting question to ask because it seems to me that there is a high potential for disconnect between those on the ground (so to speak), who are dissatisfied with current provision and feel able to articulate a better solution, and those who hold the purse strings, who may feel that they are too far down a particular strategic road to turn back?

January 14, 2009

Resource List Management on the Semantic Web

Via a post by Ivan Herman of the W3C, I came across a W3C case study titled A Linked Open Data Resource List Management Tool for Undergraduate Students, based on work done between Talis and the University of Plymouth.

Andy and I visited Talis, well, I was going to say a few months ago, but it was probably the middle of last year, and Rob Styles, Chris Clarke & other Talisians talked to us a little bit then about this work, but at that point I don't think they had a live system to show.

This looks pretty neat stuff. It's an RDF application, based on the Talis Platform. They make use of a number of existing ontologies (SIOC, BIBO) and have designed a simple ontology for the Reading Lists themselves and also one for the organisational structure of an academic institution, the AIISO ontology - which I imagine may be of interest to other projects working in this area.

Intelligent "bookmarking" tools for adding items to lists use a variety of techniques to extract metadata from Web pages (in a similar way to the Zotero citation manager tool); the metadata is exposed as RDFa in XHTML representations of the lists, which makes it available to systems like Yahoo's SearchMonkey; other RDF formats are available via content negotiation (following the Linked Data/Cool URIs for the Semantic Web principles); and a SPARQL endpoint for the dataset is available (though I'm not sure whether this is public). The system also allows students to provide annotations, which are also stored as RDF data, but in a separate data store from the "primary" reading list data, allowing different access controls.

December 17, 2008

Virtual World Watch requests information

Over at the Foundation-funded Virtual World Watch project, John Kirriemuir has issued a request for updated information on UK university and college activity in virtual worlds, to provide the basis of a fifth "snapshot" report, which he anticipates making available in late January 2009.

This time the questionnaire is explicitly extended to look beyond the use of the Second Life virtual world and to cover other virtual worlds too. It has also been "slimmed down" to a relatively small number of "open-ended" questions. John is running to quite a tight deadline and would like responses by Tuesday 6 January 2009.

The previous snapshot reports have been well received as a current source of information, so if you have activity to report on which you'd like to see included, please take a break from the "Only Fools & Horses" repeats on Boxing Day, and have a look at John's questionnaire.

Further details available from Virtual World Watch.

December 04, 2008

Brief thoughts on the CETIS Conference 2008

I spent part of last week in Birmingham at the JISC CETIS Conference 2008. See my live-blogging for day 1 and day 2 for details, covering the introductions by CETIS staff, the opening keynote by Andrew Feenberg, the Learning Content Management Repository Virtual Environment system 2.0 and its future session on the first afternoon, the OER Programme Scoping session on the second morning and the closing keynote by Stuart Lee.

All good stuff.

I enjoyed both keynotes though I confess to finding parts of the first one difficult to keep up with while live-blogging. I don't know if anyone else felt the same way but I found something of a discord between Andrew Feenberg's promotion of the value of face-to-face lecturer/student contact in the form of the traditional lecture (as opposed to textual renditions of the same, e.g. a streamed video) and his own delivery style - which was basically to read out a written paper, a style that I find quite difficult to engage with properly. I also felt he underplayed the kind of collaborative learning that can take place, facilitated by social networks and/or virtual worlds, around streamed media. That said, his introduction of the "city vs. factory" metaphor for learning was genuinely valuable and the fact that his talk was constantly referenced during the two days undoubtedly shows the mark of a good keynote.

Stuart Lee was also thoroughly entertaining and thought provoking at the conference close.

I will also briefly mention the OER Programme Scoping session on the second morning which, for me, was probably the most interesting and useful part of the conference. The OER (Open Educational Resources) Programme is a joint UK initiative being run by the JISC and the HEA, and is described by John Selby of HEFCE as follows:

Significant investment has already been made in making educational resources widely available by digitising collections of materials and enabling people to reuse and adapt existing content to support teaching and learning.

This new initiative will test whether this can be done much more generally across higher education. If the pilots are successful, we will have demonstrated that we could significantly expand the open availability and use of free, high quality online educational content in the UK and around the world. This will give further evidence of the high quality of UK education and make it more widely accessible.

This, it seems to me, is a programme with huge potential to really change our cultural attitudes to the sharing of educational resources. However, doing so will not be easy.  We've seen significant activities like this in the past, the NOF-digi programme for example, that did not really succeed in bringing about such changes.  What's different now? Well, we have a more mature attitude to open content and the licences that go with it - Creative Commons in particular - so, undoubtedly, we are better placed now than we were then. On the other hand, we've been talking about the sharing of learning objects for some time with precious little success at anything other than the very granular level of individual images, videos and so on. So I think we've got to be realistic about what kind of content people want to share and re-use - we certainly don't want to be thinking about things like content packages for example - and you'll see from my live-blogged notes that I think such realism is having an impact.

Anyway, suffice to say that the OER Programme Scoping session was very informative and interesting with a good level of debate that could have gone on significantly longer than the time allowed. It seemed to me to have a buzz of excitement around it that I've not seen for a while.

Overall then... a very useful conference and I'm looking forward to next year's.

December 02, 2008

Digital students

Today's UK Guardian newspaper carries a special JISC supplement looking at the digital student and the way that:

technology has transformed education over the last decade. Sponsored by JISC to launch its 'Student experiences of technology' campaign, the supplement - 'Digital Student' - explores the achievements of institutions in this area and some of the future challenges as universities and colleges look to exploit technology and place the student experience at the heart of learning and teaching.

The online version of the supplement carries stories about Second Life, podcasting, iTunes U, SMS, accessibility, copyright, e-portfolios and more. Which reminds me... why doesn't our growing use of Apple's iTunes U attract more negative comment in the way that, say, Linden Lab's Second Life does? It seems to me that using iTunes U to host podcasts is significantly more closed than we'd really like it to be.

September 18, 2008

ARGs are the new black - discuss

Writing at Museum 2.0, Nina Simon describes the use of an alternate reality game at the Smithsonian American Art Museum (SAAM), An ARG at the Smithsonian: Games, Collections, and Ghosts.  I can't comment further in any detail since I'm not really into this kind of stuff but one of the things I sensed at ALT-C 2008 was a distinct and growing interest in ARGs as a learning tool - are we in the early part of another hype curve here?

Interesting to see museums playing in this space - are any UK museums doing this?

September 17, 2008

Thoughts on ALT-C 2008

A few brief reflections on ALT-C 2008, which took place last week.

Overall, I thought it was a good event.  Hot water in my halls of residence rooms would have been an added bonus but that's a whole other story that I won't bother you with here.

I particularly enjoyed the various F-ALT sessions (the unofficial ALT-C Fringe), which were much better than I expected.  Actually, I don't know why I say that, since I didn't really know what to expect, but whatever... it seemed to me that those sessions were the main place in the conference where there was any real debate (at least from what I saw).  Good stuff and well done to the F-ALT organisers.  I hope we see better engagement between the fringe and the main conference next year because this is something that has the potential to bring real value to all conference delegates.

I also enjoyed the conference keynotes, though I think all three were somewhat guilty of not sufficiently tailoring their material to the target audience and conference themes.  I also suspect that my willingness to just sit back and accept the keynotes at face value, particularly the one by Itiel Dror, shows what little depth of knowledge I have in the 'learning' space - I know there were people in the audience who wanted to challenge his 'cognitive psychologist' take on learning as we understand it.

I live-blogged all three, as well as some of the other sessions I attended:

I should say that I live-blog primarily as a way of keeping my own notes of the sessions I attend - it's largely a personal thing.  But it's nice when I get a few followers watching my live note taking, especially when they chip in with useful comments and questions that I can pass on to the speakers, as happened particularly well with the "identity theft in VLEs" session.

I should also mention the ALT-C 2008 social network which was delivered using Crowdvine and which was, by all accounts, very successful.  Having been involved with a few different approaches to this kind of thing, I think Crowdvine offers a range of functionality that is hard to beat.  At the time of writing, over 440 of the conference's 500+ delegates had signed up to Crowdvine!  This is a very big proportion, certainly in my experience.  But it's not just about the number of sign-ups... it's the fact that Crowdvine was actively used to manage people's schedules, engage in debates (before, during and after the conference) and make contacts that is important.  I think it would be really interesting to do some post-conference analysis (both quantitative and qualitative) about how Crowdvine was really used - not that I'm offering to do it you understand.  The findings would be interesting when thinking about future events.

The conference dinner was also a triumph... it was an inspired choice to ask local FE students to both cater for us and serve the meal, and in my opinion it resulted in by far the best conference meal I've had for a long time.  Not that the conference meal makes or breaks a conference - but it's a nice bonus when things work out well :-).  Thinking about it now, it seems to me that more academic/education conferences should take this kind of approach - certainly if this particular meal was anything to go by - not just in terms of the meal, but also for other aspects of the event.  How about asking media students to use a variety of new media to make their own record of a conference, for example?  These are win-win situations it seems to me.

Finally, the slides from my sponsor's session are now available on Slideshare:

As I mentioned previously, the point of the talk was to think out loud about the way in which the availability of notionally low-cost or free Web 2.0 services (services in the cloud) impacts on our thinking about service delivery, both within institutions and in community-based service providers such as Eduserv.  What is it that we (institutions and service providers 'within' the community) can offer that external providers can't (sustainability, commitment to preservation of resources, adherence to UK law, and so on)?  What do they offer that we don't, or that we find it difficult to offer?  I'm thinking particularly of the user-experience here! :-) How do we make our service offerings compelling in an environment where 'free' is also 'easy'?

In the event, I spent most time talking about Eduserv - which is not necessarily a bad thing since I don't think we are a well understood organisation - and there was some discussion at the end which was helpful (to me at least).  But I'm not sure that I really got to the nub of the issue.

This is a theme that I would certainly like to return to.  The Future of Technology in Education (FOTE2008) event being held in London on October 3rd will be one opportunity.  It's now sold out but I'll live-blog if at all possible (i.e. wireless network permitting) - see you there.

August 26, 2008

ARG (as opposed to Arghhh)

I'm not a big gamer and never have been (brief flirtations with Space Invaders and Pac-Man way back when, Tony Hawk's Pro Skater on the PS2, Guitar Hero III on the Xbox 360 and a couple of other things aside).  But I do quite like the idea of alternate reality games (ARGs), if only because of the explicit merger of real-world and online activities (note that I said idea - I am very unlikely to ever actually play one of these things!):

An ARG is an interactive narrative in which players work together to solve puzzles and co-ordinate activities in the real world and online, using websites, GPS tracking devices, telephone lines, newspaper adverts and more.

On that basis, the JISC-funded ARGOSI project looks interesting, a collaboration between Manchester Metropolitan University and the University of Bolton that will use an ARG to support the student induction process. [Via play think learn]

August 01, 2008

Unleashing the Tribe

A final quickie before I go on leave for a week...

I just wanted to highlight Ewan Macintosh's keynote, Unleashing the Tribe, on the final day of the Institutional Web Managers Workshop (for which we were a sponsor), which is now available online.  This is a great talk and well worth watching.  (Note that the talk starts almost exactly 5 minutes into the video so you can skip the first bit).  The emphasis is very much on learning, which is fine, though we must not forget that most HE institutions also have a mission to carry out research.

Three comments, on mechanics rather than content, ...

Firstly, as a remote attendee on the day, the importance of having someone in the venue dedicated to supporting remote participants (or rather, a lack thereof) was highlighted very clearly.  Ewan chose to use Twitter as the back-channel for his presentation, ignoring the existing ScribbleLive channel.  That was his prerogative of course, though I happen to think that Twitter isn't particularly appropriate for this kind of thing because it is too noisy for Twitter followers who aren't interested in a particular event.  Whatever... the point is that having announced the change to Twitter verbally at the start of the session, those of us who missed the announcement needed to be informed of the change more permanently through the ScribbleLive forum so that we could move as well.

Secondly, I note that the streamed video from the various sessions hasn't been made available through blip.tv (or something like it).  Instead, it is being served directly by Aberdeen, the workshop hosts.  As a result, the streamed video can't be embedded here (or anywhere else for that matter) - at least, not as far as I can tell.  This seems slightly odd to me, since the whole theme of the event was around sharing and mashing content.

That said, apart from a minor gripe about the volume being too low, the quality of the camera work on the video stream was very good.

Thirdly, it'd be interesting to do a proper comparison between Coveritlive, which we used as part of our symposium this year, and ScribbleLive.  My feeling is that ScribbleLive makes better use of screen real-estate.  On the other hand, Coveritlive has better bells and whistles and more facilities around moderation (which can be good or bad depending on what you want to do).  In particular (and somewhat surprisingly), Coveritlive handles embedded URLs much better than ScribbleLive.  Overall, my preference is slightly towards Coveritlive - though I could be swayed either way.

July 23, 2008

PsychoPod: conversations in cognitive psychology

At the beginning of 2007 we funded a small podcasting project called PsychoPod, undertaken jointly by Nigel Holt and Jim Crawley (Bath Spa University) and Ian Walker (University of Bath).  The intention was to develop a series of podcasts aimed at undergraduates on similar course modules in cognitive psychology at the two institutions and to undertake some survey work looking at how successful they were at augmenting a more traditional approach to course delivery.

The final report and copies of the resulting podcasts were delivered to us some time ago but I have just got round to doing something with them :-).  Four podcasts were produced, as follows:

I'm not sure how these were originally distributed to the students on the psychology courses but they were delivered to us as MP3 files on a CD-ROM.  So, what to do to make them available?  I asked around (using Twitter) for suggestions of a podcasting equivalent to Slideshare - i.e. a social network through which I could upload, host and share the podcasts.  Several people suggested Odeo, a service which turns out to be more like Technorati than Slideshare in the sense that it aggregates podcasting feeds from other sources rather than hosting the content directly itself.

So, I uploaded the MP3 files to the Eduserv Web server, created a simple RSS feed for the four podcasts and submitted it to Odeo.  I then waited a few days while the content got aggregated into an Odeo channel.  It was easy enough to do and seems to have worked fine.  I'm not sure how much 'educational' (by which I mean 'academic'... by which I mean 'university level') content there is on Odeo and it is possible that I could have made a better choice of service but the point really was to see how easy it was to make the stuff available so it doesn't matter too much.
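For anyone wondering what "a simple RSS feed" amounts to in practice, here is a minimal sketch of the kind of RSS 2.0 podcast feed involved: one item per MP3, each with an enclosure pointing at the hosted file. The titles, URLs and file names below are made up for illustration - they are not the actual PsychoPod files - and a production feed would also want the enclosure's length attribute and per-item descriptions.

```python
from xml.etree import ElementTree as ET

def podcast_feed(title, base_url, episodes):
    """Build a bare-bones RSS 2.0 feed with one enclosure per episode."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = base_url
    for ep_title, filename in episodes:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = ep_title
        # The enclosure is what makes an RSS item a "podcast" episode.
        ET.SubElement(item, "enclosure",
                      url=f"{base_url}/{filename}",
                      type="audio/mpeg")
    return ET.tostring(rss, encoding="unicode")

xml = podcast_feed("PsychoPod", "https://example.org/psychopod",
                   [("Episode 1 (illustrative title)", "ep1.mp3"),
                    ("Episode 2 (illustrative title)", "ep2.mp3")])
print(xml)
```

Hand a feed like this to an aggregator (Odeo, in this case) and it does the rest - which is really all "submitting a podcast" involves.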

As an alternative approach, I could also have added the content to iTunes or iTunes U I guess?  I didn't do so largely because I felt it was more appropriate for the universities concerned to do that directly themselves, rather than me doing it as a funder on their behalf (though one might make the same argument about my use of Odeo).

Suggestions for alternative (perhaps more overtly academic) podcast hosting and/or aggregating services are very welcome.

June 12, 2008

Great expectations?

Another interesting looking report from the JISC, this time focusing on how well UK universities are meeting the ICT expectations of new undergraduates, Great Expectations of ICT: how higher education institutions are measuring up.

There's a lot of material here, which I haven't looked at in detail yet, but the key findings are reported as follows:

  • General use of social networking sites is still high (91% use them regularly or sometimes). Frequency of use has increased now that they are at university with a higher proportion claiming to be regular users (80%) - up from 65% when they were at school/college.
  • 73% use social networking sites to discuss coursework with others; with 27% on at least a weekly basis.
  • Of these, 75% think such sites are useful in enhancing their learning.
  • Attitudes towards whether lecturers or tutors should use social networking sites for teaching purposes are mixed, with 38% thinking it a good idea and 28% not. Evidence shows that using these sites in education is more effective when the students set them up themselves; lecturer-led ones can feel overly formal.
  • Despite students being able to recognise the value of using these sites in learning, only 25% feel they are encouraged to use Web 2.0 features by tutors or lecturers.
  • 87% feel university life in general is as, or better than, expected especially in terms of their use of technology, with 34% coming from the Russell Group of universities saying their expectations were exceeded.
  • 75% are able to use their own computer on all of their university's systems with 64% of students from lower income households assuming that they are able to take their own equipment, perhaps due to lack of affordability and ownership.

The following comments, from towards the end of the JISC Web page for the report, also feel significant and present a somewhat less than positive view about staff attitudes to the innovative use of ICT for learning within the university sector:

Students do not perceive HEIs to be leading the way in developing new methods of learning. Their perception is that current technology training for students tends to focus on how to use different systems. There is little sense that the HEI has a remit to encourage these students to think differently about information, research and presentation. There is also emerging evidence that student-driven ICT, including the use of Web 2.0 features, is very beneficial in their learning despite relatively few feeling they are encouraged to use Web 2.0 features in this way. Attitudes as to whether social networking sites could be used in teaching are mixed, however, where social networking emerges organically among the students, it is shown to be more successful than networks put in place by the teacher...

May 16, 2008

Teach online to compete...

An article in Tuesday's Education Guardian, Teach online to compete, British universities told, caught my eye - not least because it appears to say very little about teaching online.  Rather, it talks about making course materials available online, which is, after all, very different.  To be fair, Carol Comer, academic development advisor (eLearning) at the University of Chester, does make this point towards the end of the article.

The report on which the story is based is "a paper for the latest edition of ppr, the publication of influential thinktank the Institute for Public Policy Research".  I'm not sure if the paper is currently finished - it doesn't really look finished to be honest - the fonts seem to be all over the shop but perhaps I'm being too picky.  Or perhaps the Guardian have got sight of it a little early?

The report suggests that the UK should:

  • establish a centralised online hub of diverse British open courseware offerings at www.ocw.ac.uk, presented in easily-readable formats and accessible to teachers, students and citizens alike
  • establish the right and subsequent capacity for non-students and non-graduates to take the same exam as do face-to-face students, through the provision of open access exam sessions
  • pass an Open Access Act through Parliament, establishing a new class of Open degree, achieved solely using open courseware
  • conduct a high-profile public information campaign, promoting the opportunities afforded open courseware and open access examinations and degrees, targeted at adult learners, excluded minorities and students at pre-university age

OK, I confess that I found the report quite long and I didn't quite get to the end (err, make that beyond halfway).  I'm as big a fan of open access as the next person, probably more so, so I don't have a problem with the suggestion that we should be making more courseware openly available.  I'm just not convinced that anyone could get themselves up to degree level simply by downloading / reading / watching / listening to a load of open access courseware - no matter how good it is.  The report makes reference to MIT's OpenCourseware and the OU's OpenLearn initiatives.  Call me a cynic, but I've always suspected that MIT makes its courseware available online not for the greater good of humanity but so that more students will enroll at MIT.  OK, I'm adopting an intentionally extreme position here and I'm sure people at MIT do have the best of intentions - but I think it is also the case that they don't see the giving away of courseware in any way harmful to their current business models.  The OU's OpenLearn initiative (treated somewhat unfairly by the parts of the report I read) is slightly different in any case since the OU is by definition a distance-based institution - or so it seems to me.

So, I should probably stop at this point - having not read the report fully.  If you think I've been unfair when you read the report yourself, let me know by way of a comment.

May 15, 2008

Podcasting in teaching and learning

Andy Ramsden and Lindsay Jordan up at the University of Bath have made a nice little presentation available on Slideshare providing an introduction to the use of podcasting in education - it's short and sweet.  Like most presentations on Slideshare, it would benefit from the addition of an audio track - hint, hint - mind you, I never get round to adding audio to my own slide shows, so I don't really see why I should expect others to do it.

I did wonder if 'reflection' should have been added to the list of 'student created podcasts' on slide 19?

Note that this is part of a series of enhancing teaching through technology events (a.k.a. @eatbath on Twitter).

March 19, 2008

The 5 Ps of e-portfolios

I'm not sure whether this is helpful, and I'm possibly guilty of simply making up words for the sake of it, but having listened to Graham Attwell's excellent podcast on e-portfolio development yesterday I woke up this morning with 5 P-words in my head that try to capture what learners can do with their e-portfolio.  In no particular order:

plan
Graham refers to "personal development planning portfolios" in his podcast and it seems to me that this is one of the most important aspects of what an e-portfolio can enable.  Being able to assess where one is in a learning journey and, more importantly, being able to plan for what needs to come next is a critical learning skill and an e-portfolio is one of the tools that supports that process.
ponder
Such planning comes in part from being able to reflect on the learning that has already taken place.  I must admit that this P-word is probably the most contrived out of the five but it is no less important for that.  This reflective activity appears to fall within what Graham refers to as a "personal learning portfolio".
promote
There is a sense in which an e-portfolio becomes a self-promotion tool, functioning more or less like a curriculum vitae would do, either as part of getting a job, or during transition between different phases of education.  (Note: the P-word present, as in Graham's "presentation portfolio" would be an alternative here but for some reason I think that promote works better).
prove
Being able to prove that learning has taken place is an important function of the e-portfolio, either as evidence to support the assessment process (c.f. Graham's "assessment portfolio") or as part of the promote function (c.f. Graham's "presentation portfolio").
preserve
Finally, there is a life-long aspect to e-portfolios which, while it may not fall under a traditional interpretation of "digital preservation", seems to me to cover a long enough period to give us significant headaches about how we manage digital material for that length of time, especially given that we are talking about personally managed information by and large.  An e-portfolio, and the systems around it, should help us to maintain a life-long record of our learning and, as I say, that is a non-trivial functional requirement to meet currently.

March 18, 2008

e-Portfolio development and implementation

This is quite old I think (middle of 2007?), but none the worse for that and well worth sharing here...

On the face of it, this video by Graham Attwell of Pontydysgu (created as part of the European Mosep project I think) allows him to share his thoughts on the fairly narrow topic of the development and implementation of e-portfolios.  The reality though is much broader - and the result is a very nice, and quite general, overview of how the learning agenda is evolving.

Despite being a firm believer that a picture is worth a thousand words, my only minor quibble with the video lies with the diagrams that Graham uses towards the end, neither of which I found overly compelling (particularly not the first which appears, at least at first glance, to position ELGG as a fairly central component of the learning landscape - not that I have anything against ELGG you understand... I just don't get a sense that anything needs to be positioned so centrally - perhaps it is just a layout thing?).

Anyway, putting that to one side, the video is well worth watching if you are interested in such things and have 30 minutes or so to spare.

March 04, 2008

P vs. P in a user-centric world

I'm currently doing some thinking around the 3 or 4 themes that I want to pull together for a talk at the UCISA 2008 Conference in Glasgow next week.  (Brian Kelly recently blogged about the same talk - it is a joint effort - under the title "IT Services Are Dead – Long Live IT Services 2.0!").

One of the themes I want to touch on is our general move towards user-centricity (is that a word?) and in particular the use of the word 'personal' in both Personal Learning Environment (PLE) and Personal Research Environment (PRE).  I've been labouring under what turns out to be a misapprehension that the P in PLE is used differently from the P in PRE.  Why did I think this?  Well, when I first read the PLE article by Scott Wilson et al, Personal Learning Environments: Challenging the dominant design of educational systems I must have particularly picked up on this paragraph:

While we have discussed the PLE design as if it were a category of technology in the same sense as the VLE design, in fact we envisage situations where the PLE is not a single piece of software, but instead the collection of tools used by a user to meet their needs as part of their personal working and learning routine. So, the characteristics of the PLE design may be achieved using a combination of existing devices (laptops, mobile phones, portable media devices), applications (newsreaders, instant messaging clients, browsers, calendars) and services (social bookmark services, weblogs, wikis) within what may be thought of as the practice of personal learning using technology.

At the same time I conveniently ignored the following paragraph:

However, for the design to reach equivalent or superior levels of efficiency to the VLE, as well as broader applicability, requires the further development of technologies and techniques to support improved coordination. Some initial investigations include the work of projects such as TenCompetence and the Personal Learning Environments work at the University of Bolton cited previously.

I really like the first of these two paragraphs, it sums up my view of the PLE as a way in which the learner can pick and mix from the wide range of [Web 2.0] services out there on the Web in order to get whatever task is at hand done most efficiently.

I tend to dislike the second, only because it puts one in mind of a portal-like approach, i.e. where the learner uses some kind of institutional or desktop tool as an access point to the range of external services in which they are interested.  I'm afraid that I have a somewhat unjustified hatred of the 'portal' word/concept ever since I used it in the early days of the JISC Information Environment work and then had to spend 4 or 5 years explaining that I didn't really mean what people thought I meant!

Anyway... it seems to me that the P in PRE does tend to be used very much in the sense of 'research portal' - a single point of activity that brings together whatever combination of things it is that a researcher needs to do in order to undertake their research.

A couple of days ago, I asked my Twitter followers a question: is a PLE an approach or a bit of software?

To his credit, Scott replied, summing up the PLE concept rather nicely in 140 characters or less as follows:

@andypowe11: environment (web,society,family)+tools(sw, hw, process, technique)+disposition = PLE

I used to have a (regularly broken) rule of thumb that if you can't write something in one side of A4 or less then you haven't thought about it hard enough.  Seeing Scott's reply made me wonder whether that should be downsized to 140 characters - i.e. if you can't tweet it, don't bother!

I remain slightly disappointed that the notion of a PLE has to include some aspect of a tool to aggregate things together (and typically an institutional tool at that) though I suppose I have to grudgingly concede that such a thing is necessary, at least in as much as one needs to tie together assessment-related information based on the learning being undertaken in the PLE.

In terms of the talk, the theme remains pertinent I think.  We are now quite used to using the term 'user-centric' in the context of identity management (particularly OpenID).  But, of course, this trend is more pervasive, covering all kinds of activities and including both learning and research.  Whether there is an in-house aggregation layer (a portal, or PLE, or PRE, or whatever one chooses to call it) to bring the outputs of distributed learning and research activities back together is largely a moot point.  The point is that those activities are increasingly likely to be carried out using services outside the institution and where the institution has varying degrees of control over service level agreements, data protection, and the like.

And despite my negativity, one of the advantages of having that in-house aggregation layer is that it gives the institution some way of pulling external content created by its members back inside the institution where it can be retained as part of the scholarly record or for QAA type purposes, or whatever.

JISC ITTs: lifelong identity management and the role of e-portfolios in assessment

The JISC have a couple of calls for funding out at the moment, which I mention here only because they are very relevant to our own areas of interest within the Eduserv Foundation:

January 30, 2008

Learning Materials & FRBR

JISC is currently funding a study, conducted by Phil Barker of JISC CETIS, to survey the requirements for a metadata application profile for learning materials held by digital repositories. Yesterday Phil posted an update on work to date, including a pointer to a (draft) document titled Learning Materials Application Profile Pre-draft Domain Model which 'suggests a "straw man" domain model for use during the project which, hopefully, will prove useful in the analysis of the metadata requirements'.

The document outlines two models: the first is of the operations applied to a learning object (based on the OAIS model) and the second is a (very outline) entity-relationship model for a learning resource - which is based on a subset of the Functional Requirements for Bibliographic Records (FRBR) model. As far as I can recall, this is the first time I've seen the FRBR model applied to the learning object space - though of course at least some of the resources which are considered "learning resources" are also described as bibliographic resources, and I think at least some, if not many, of the functions to be supported by "learning object metadata" are analogous to those supported by bibliographic metadata.

I do have some quibbles with the model in the current draft. Without a fuller description of the functions to be supported, it's difficult to assess whether it meets those requirements - though I recognise that, as I think the opening comment I cited above indicates, there's an element of "chicken and egg" involved in this process: you need to have at least an outline set of entity types before you can start talking about operations on instances of those types. Clearly a FRBR-based approach should facilitate interoperability between learning object repositories and systems based on FRBR or on FRBR-derivatives like the Eprints/Scholarly Works Application Profile (SWAP).

I have to admit the way "Context" is modelled at present doesn't look quite right to me, and I'm not sure about the approach of collapsing the concepts of an individual agent and a class of agents into a single "Agent" entity type in the model. (For me the distinguishing characteristic of what the SWAP calls an "Agent" is that, while it encompasses both individuals and groups, an "Agent" is something which acts as a unit, and I'm not sure that applies in the same way to the intended audience for a resource.)

The other aspect I was wondering about is the potential requirement to model whole-part relationships, which, AFAICT, are excluded from the current draft version. FRBR supports a range of whole-part relations between instances of the principal FRBR entity types, although in the case of the SWAP, I don't think any of them were used.
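To make the FRBR angle concrete, the four Group 1 entities (Work, Expression, Manifestation, Item) can be sketched as a toy data model. This is a purely illustrative Python sketch - the field names and example values are my own invention, not anything taken from the draft profile:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the FRBR Group 1 entities as they might apply to a
# learning resource. Fields and values are hypothetical, not from the draft.

@dataclass
class Item:
    location: str  # a concrete copy, e.g. a URL in a repository

@dataclass
class Manifestation:
    format: str                              # e.g. "application/pdf"
    items: list = field(default_factory=list)

@dataclass
class Expression:
    language: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:
    title: str
    creator: str
    expressions: list = field(default_factory=list)

# A single "learning resource" modelled across the four levels:
work = Work(title="Introductory Statistics", creator="A. Lecturer")
expr = Expression(language="en")
manif = Manifestation(format="application/pdf")
manif.items.append(Item(location="http://repository.example.org/stats.pdf"))
expr.manifestations.append(manif)
work.expressions.append(expr)
```

The value of the layering is that one intellectual Work can carry several Expressions (say, translations), each in several Manifestations (formats), each with many Items (copies) - which is exactly the kind of distinction flat learning-object metadata tends to blur.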

But I'm getting ahead of myself here really - and probably ending up sounding more negative than I intend! I think it's a positive development to see members of the "learning metadata community" exploring - critically - the usefulness of a model emerging from the library community. I need to read the draft more carefully and formulate my thoughts more coherently, but I'll be trying to send some comments to Phil.

January 15, 2008

White bread?

Via Emma Place and The Times Online I note that:

Google is "white bread for the mind", and the internet is producing a generation of students who survive on a diet of unreliable information, a professor of media studies will claim this week.

Good grief.  Emma is right to say that this is an important issue and I completely agree that "Internet research skills should be actively taught as a formal part of the university curriculum. Students may well be savvy when it comes to using new Internet technologies, but they need help and guidance on finding and using Web resources that are appropriate for academic work" but the debate isn't helped much by sound bites.

Blaming the Internet for "a generation of students who survive on a diet of unreliable information" is a bit like blaming paper for the Daily Star.  How about blaming an education system that hasn't kept up with the times?

The Internet, Google and Wikipedia are tools - no more, no less.  Let's help people understand how to use them effectively.

January 02, 2008

Rethinking the Digital Divide

The 2008 Association for Learning Technology Conference, Rethinking the Digital Divide, will be in Leeds between 9 and 11 September 2008.  Keynote speakers will include: David Cavallo, Chief Learning Architect for One Laptop per Child, and Head of the Future of Learning Research Group at MIT Media Lab; Dr Itiel Dror, Senior Lecturer in Cognitive Neuroscience at the University of Southampton; and Hans Rosling, Professor of International Health, Karolinska Institute, Sweden, and Director of the Gapminder Foundation.

The closing date for submissions of full research papers for publication in the peer-reviewed Proceedings of ALT-C 2008, and abstracts for demonstrations, posters, short papers, symposia and workshops is 29 February 2008.

The conference will focus on the following dimensions of learning:

Global or local - for example: What are the dichotomies between global and local interests in, applications of and resources for learning technology? How can experience in the developing world inform the developed world, and vice-versa? Will content and services be provided by country-based organisations or by global players?

Institutional or individual - for example: How can the tensions between personal and institutional networks, and between formal and informal content, be resolved?

Pedagogy or technology - for example: How do we prevent technology and the enthusiasms of developers from skewing things away from the needs of learners? Are pedagogic problems prompting new ways of using technology? Are learners’ holistic experiences of learning technologies shifting the emphasis away from ‘pedagogy’ and into learner-centred technology?

Access or exclusion - for example: How can learning technology enable access rather than cause exclusion? If digital access is improving quickly for those with least, do widening gaps between rich and poor matter, and if yes, what needs to be done?

Open or proprietary - for example: Can a balance be struck, or will the future be open source (and/or open access)?

Private or public - for example: What are the respective roles of the private and public sectors in the provision of content and services for learning? Is the privacy of electronic data still under threat? Are there ongoing problems with identity, surveillance and etiquette regarding private/public personae in social software?

For the learner or by the learner - for example: How can technology empower learners and help them take ownership of their learning? How can it help to negotiate between conflicting demands and respond to multiple voices?

December 05, 2007

Meanwhile in the real-world...

Over here on eFoundations we like to think about the important issues in life such as how Web 2.0 impacts elearning in institutions, what the Web architecture has to say about repository design, whether the Semantic Web does anything for the future of library and museum systems, the trust issues around OpenID, ... that kind of thing.

Meanwhile, in the real-world [tm], I've just been trying to help a friend of mine via MSN (during my lunch hour I hasten to add) who is having difficulties printing out one of her distance learning course documents from within Moodle.  Her considered opinion after a couple of hours of failure?

i think elearning is rubbish and designed by men

'Nuff said :-)

November 27, 2007

On the road again

Both Pete and I have been on the road a lot over the last few days, hence the lower than usual number of blog entries... for which, apologies.

My travels started last week with the JISC CETIS conference in Birmingham and my somewhat abortive attempt at a video blog entry (see previous blog entry).  My original plan was to video blog both days, but the blunt realisation that some people would rather not have their photos made available online (even without any association with their name), together with the ensuing gap since the conference finished, means I won't bother.  I don't think you are missing much to be honest (and even I have to confess that I'm already bored by the photo transitions available on Animoto!).

The conference was very enjoyable and it was particularly good to meet Sarah Robbins and Mark Bell who had come over from the US to speak at the event, both of whom I had only previously met in Second Life.  It was very nice to be able to meet with virtual friends in a real-life pub and warm beer kind of way.  Both gave very interesting presentations in the virtual worlds session at the event (as did Dan Livingstone, who spoke in the same session), my only major comment being that it was a shame that the audience for both was relatively small.  It is also worth noting that, as far as I could tell, the network at the conference venue did not support Second Life connections, so no live demoing was possible.

My other lasting thought (I confess that I only brought away a few scrappy notes, so any kind of detailed blog is out of the question) was the apparent gulf between the somewhat conservative computing services view of the world, as presented by Iain Stinson (University of Liverpool), and what I perceived to be the rather more cutting edge view of the conference more generally.  I don't mean that in a derogatory way to either viewpoint, since we probably need some of both... but the gap between the two struck me as pretty startling and I think that ultimately we have to find ways of bringing them together to take any kind of sensible path forward.

The following day I travelled to London to speak at the UKSG event, Caught up in Web 2.0?  I had been asked to speak about Second Life, something I'm always happy to do, though in this particular case I spent some time explaining what I saw as the similarities and differences between SL and Web 2.0.  It is also worth noting that I'd arrived armed only with a very thin presentation, expecting to be able to demo Second Life live to the assembled masses.  Unfortunately, the venue's firewall prevented this from happening, meaning that I had to spend the first two talks re-purposing a previous set of slides :-(.  Despite that distraction, I found the other presentations on the day very interesting.

There's a small theme emerging here... Second Life is technically demanding enough that being able to use it in any given venue is not guaranteed.  It was therefore with some trepidation that I went back to Birmingham yesterday for UKOLN's workshop on blogs and social networks which I had, somewhat madly, agreed to try streaming into Second Life with no real knowledge of what kind of network was going to be available.

I'll blog the event separately on the grounds that there are some useful lessons to be learned, but suffice to say that things went less smoothly than they might have, though not necessarily for the reasons I was concerned about before I went!

November 21, 2007

JISC CETIS conference - day 1

A short 'video' blog of day one of the JISC CETIS conference, using the photos I took during the opening plenaries in the morning and the MUVE session after lunch, peppered with words and phrases that I noted popping up...

2007-11-21: Video link removed temporarily.  A delegate asked me not to make their photo available on the Web and I have no sure way of knowing yet whether their image was in one or more of the audience shots that I used in the video.  I've therefore taken it down again.  Apologies to all concerned.

2007-11-27: OK, I've re-instated the video, having spent some time checking thru the images it contains...

November 20, 2007

Semantic structures for teaching and learning

I'm attending the JISC CETIS conference at Aston University in Birmingham over the next couple of days.  One of the sessions that I've chosen to attend is on the use of the semantic Web in elearning, Semantic Structures for Teaching and Learning.  A couple of days ago a copy of all the position papers by the various session speakers came thru for people to read - hey, I didn't realise I was actually going to have to do some work for this conference! :-)

The papers made interesting reading, all essentially addressing the question of why the semantic Web hasn't had as much impact on elearning as we might have hoped it would a few years back, all taking a variety of viewpoints and perspectives.

Reading them got me thinking...

Some readers will know that I have given a fair few years of my recent career to metadata and the semantic Web, and to Dublin Core in particular.  I've now stepped back from that a little, partly to allow me to focus on other stuff... but partly out of frustration with the lack of impact that these kinds of developments seem to be having.

Let's consider the area of resource discovery for a moment, since that is probably what comes to mind first and foremost when people talk about semantic Web technologies.  Further, let's break the world into three classes of people - those who have content to make available, those who want to discover and use the content provided by others, and those who are building tools to put the first two groups in touch with each other.  Clearly there are significant overlaps between these groups and I realise that I'm simplifying things significantly but bear with me for a second.

The first group is primarily interested in the effective disclosure and use of their content.  They will do whatever they need to do to ensure that their content gets discovered by people in the second group, choosing tools supplied by the third group that they deem to be most effective and balancing the costs of their exposure-related efforts against the benefits of what they are likely to enable in terms of resource discovery.  Clearly, one of the significant criteria in determining which tools are 'effective' has to do with critical mass (how many people in the second group are using the tool being evaluated).

It's perhaps worth noting that sometimes things go a bit haywire.  People in the first group put large amounts of effort into activities related to resource discovery where there is little or no evidence of tools being provided by the third group to take advantage of it.  Embedding Dublin Core metadata into HTML Web pages strikes me as an example of this - at least in some cases.  I'm not quite clear why this happens, but suspect that it has something to do with policy drivers taking precedence over the natural selection of what works or doesn't.
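(For anyone unfamiliar with the practice: embedding Dublin Core in HTML typically means meta elements with names like "DC.title".  A consumer would need something along the lines of the following - a hypothetical sketch using only Python's standard library, with invented page content - to harvest it, and very few discovery tools ever bothered.)

```python
from html.parser import HTMLParser

# Sketch of a harvester for Dublin Core metadata embedded in HTML <meta>
# elements (the DC-in-HTML convention uses names like "DC.title").
class DCMetaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.dc = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        name = a.get("name", "")
        if name.lower().startswith("dc."):
            # strip the "DC." prefix and record the element value
            self.dc[name[3:].lower()] = a.get("content", "")

# Invented example page:
page = """<html><head>
<meta name="DC.title" content="Learning Materials &amp; FRBR">
<meta name="DC.creator" content="Pete Johnston">
</head><body>...</body></html>"""

parser = DCMetaParser()
parser.feed(page)
print(parser.dc)  # {'title': 'Learning Materials & FRBR', 'creator': 'Pete Johnston'}
```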

People in the second group want to discover stuff and are therefore primarily interested in the use of tools developed by the third group that they feel are most useful.  Their choices will be based on what they perceive to work best for resource discovery, balanced against other factors such as usability.  Again, critical mass is important - tools need to be comprehensive (within a particular area) to be deemed effective.

The third group need users from the other groups to use their tools - they want to build up a user-base.  The business drivers for why they want to do this might vary (ad revenue, subscription income, preparing for the sale of the business as a whole, kudos, etc.), but, often, that is the bottom line.  They will therefore work with the first group to ensure that users in the second group get what they want.

Now, when I use the phrase 'work with' I don't mean in a formal business arrangement kind of way - as a member of the first group, I don't 'work with', say, Google in that sense.  But I do work within the framework given to me by Google (or whoever) to ensure that my content gets discovered.  I'll optimise my content according to agreed best-practices for search-engine optimisation.  I'll add my content to del.icio.us and similar tools in order to improve its Google-juice.  I'll add a Google site map to my site.  And so on and so forth...
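The sitemap step is typical of how small these costs are.  A minimal generator - a sketch using only the standard library, with placeholder URLs - is just a few lines:

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a sitemaps.org-style sitemap generator.
# The URLs passed in below are placeholders, not real pages.
def build_sitemap(urls):
    urlset = ET.Element("urlset",
                        xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
    for u in urls:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = u
    return ET.tostring(urlset, encoding="unicode")

sitemap = build_sitemap(["http://example.org/", "http://example.org/about"])
print(sitemap)
```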

I'll do this because I know that Google has the attention of people in the second group.  The benefits in terms of resource discovery of working within the Google framework outweigh the costs of what I have to do to take part.  In truth, the costs are relatively small and the benefits relatively large.

Overall, one ends up with a loosely coupled cooperative system where the rules of engagement between the different parties are fairly informal, are of mutual benefit, evolve according to natural selection, and are endorsed by agreed conventions (sometimes turning into standards) around best-practice.

I've made this argument largely in terms of resource discovery tools and services but I suspect that the same can be said of technologies and other service areas.  The reasons people adopt, say, RSS have to do with low cost of implementation, high benefit, critical mass and so on.  Again, there is a natural selection aspect at play here.
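RSS illustrates the "low cost of implementation" point nicely: consuming a feed takes a handful of lines.  A sketch with the standard library (the feed content here is invented):

```python
import xml.etree.ElementTree as ET

# An invented minimal RSS 2.0 feed.
feed = """<rss version="2.0"><channel>
<title>eFoundations</title>
<item><title>Learning Materials &amp; FRBR</title>
<link>http://efoundations.typepad.com/frbr</link></item>
</channel></rss>"""

root = ET.fromstring(feed)
for item in root.iter("item"):
    # print each item's title and link
    print(item.findtext("title"), "-", item.findtext("link"))
```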

So, what about the Semantic Web?  Well, it suffers from a classic chicken and egg problem.  Not enough content is exposed by members of the first group in a form suitable for members of the third group to develop effective tools for members of the second group.  Because the tools don't exist, the potential benefits of 'semantic' approaches aren't fully realised.  Members of the second group don't use the tools because they aren't felt to be good or comprehensive enough.  As a result, members of the first group perceive the costs of exposing richer Semantic Web data to outweigh any possible benefits because of lack of critical mass.

Can we break out of this cycle?  I don't know.  I would hope so... and Eduserv continue to put work into Semantic Web technologies such as the Dublin Core on the basis that we will.  On the other hand, I've felt that way for a number of years and it hasn't happened yet!  In rounding up the position papers in her blog, Lorna Campbell quotes David Millard, University of Southampton:

the Semantic Web hasn’t failed, it just hasn’t succeeded enough.

That's one way of looking at it I suppose and it's probably a reasonable view for now.  That said, I'm not convinced that it is a position that can reasonably be adopted forever and, with reference to my earlier use of the phrase "natural selection" it hardly makes one think of the survival of the fittest!?

What do I conclude from this?  Nothing earth shattering I'm afraid.  Simply that for semantic approaches to succeed they will need to be low cost to implement, of high value, and adopted by a critical mass of parties in all parts of the system.  I suspect that means we need to focus our semantic attention on things that aren't already well catered for by the very clever but essentially brute-force approaches across large amounts of low-semantic Web data that work well for us now... i.e. there's no point in simply inventing a semantic Web version of what Google can already do for us.  One of the potential problems with activities based on the Dublin Core is that one gets the impression that is what people are trying to do.

Again, I'm not trying to argue against the semantic Web, metadata, Dublin Core or other semantic approaches here... just suggesting that we need to be clearer about where their strengths lie and how they most effectively fit into the overall picture of services on the Web.

November 14, 2007

JISC CETIS gets a new look Web site

JISC CETIS (I think that's what I'm now supposed to call them - though I have to say that I much prefer plain ol' CETIS) have announced a new look Web site:

The new site http://jisc.cetis.ac.uk/ gives us the flexibility to select and publish news and features for a variety of audiences through the front page and Domain pages. This is enabled by a system of rss aggregation accompanied by an administrative tool, built in-house, which provides lots of editorial controls.

More on the detail and background to this change are available from Sarah Holyfield and Sam Easterby-Smith.

eFoundations is powered by TypePad