
February 02, 2010

Second Life, scalability and data centres

An interesting article about the scalability issues around Second Life: What Second Life can teach your datacenter about scaling Web apps. (Note: this is not about the 3-D virtual world aspects of Second Life but about how the infrastructure to support it is delivered.)

Plenty of pixels have been spilled on the subject of where you should be headed: to single out one resource at random, Microsoft presented a good paper ("On Designing and Deploying Internet-Scale Services" [PDF]) with no fewer than 71 distinct recommendations. Most of them are good ("Use production data to find problems"); few are cheap ("Document all conceivable component failure modes and combinations thereof"). Some of the paper's key overarching principles: make sure all your code assumes that any component can be in any failure state at any time, version all interfaces so that they can safely communicate with newer and older modules, practise a high degree of automated fault recovery, and auto-provision all resources.

This is wonderful advice for very large projects, but herein lies a trap for smaller ones: the belief that you can "do it right the first time." (Or, in the young-but-growing scenario, "do it right the second time.") This is unlikely to be true in the real world, so successful scaling depends on adapting your technology as the system grows.
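The first of those principles — assume any component can fail at any time — can be illustrated with a minimal sketch in Python. Everything here (the function names, the retry count, the backoff parameters) is my own illustrative assumption, not something taken from the paper; it just shows the shape of treating every call as potentially failing and recovering automatically:

```python
import random
import time

def call_with_retries(operation, attempts=3, base_delay=0.1):
    """Invoke an operation that may fail at any time, retrying with
    exponential backoff plus jitter. Illustrative sketch only."""
    for attempt in range(attempts):
        try:
            return operation()
        except Exception:
            if attempt == attempts - 1:
                raise  # automated recovery exhausted; surface the fault
            # Back off before retrying; jitter spreads out retry storms.
            time.sleep(base_delay * (2 ** attempt) * (1 + random.random()))

# A hypothetical flaky component that fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

print(call_with_retries(flaky))  # prints "ok" after two retries
```

The point is not the retry loop itself but the stance it encodes: the caller never assumes the component is healthy, and recovery is automatic rather than manual.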




