How do you design an art exhibition around sound? Sennheiser and curators of “David Bowie Is” have found a way.
The Metropolitan Museum of Art has launched a year-long web series, 82nd & Fifth. In 100 two-minute videos, which will be posted two at a time every Wednesday through December 25, curators talk about “art that changed the way they see the world.”
Moves to allow the digitization of ‘orphan works’ and free up the metadata around 20 million cultural objects will benefit the public and could inspire a new wave of apps and web services. But the underlying motivation is fundamentally political.
DuraSpace, a not-for-profit consortium of universities, libraries and museums, just launched a SaaS solution to simplify the preservation of digitized cultural objects in the cloud. Using public cloud resources from both Amazon and Rackspace, DuraCloud hides the complexity of ensuring that valuable resources are preserved for the future. This launch of a subscription service takes the DuraSpace organization in new directions — and possibly to the private cloud.
University and national libraries, as well as museums and archives, have been digitizing their collections since the earliest days of the web. This work both increases access to rare and delicate material and preserves something for future generations should disaster befall the original work. Digital copies of cultural artifacts — and the metadata used to describe them — have typically been stored in digital repositories such as DuraSpace’s DSpace and Fedora, or the UK’s EPrints. For richly funded institutions such as MIT, Columbia or Cambridge, these systems have worked well. But projects at smaller institutions have been more likely to use software like Microsoft Access. Although less suitable for the task, these simple databases have been easier to use than the free but complex repository systems offered by DuraSpace and others. DuraCloud promises to take capabilities previously reserved for rich, well-staffed institutions and make them available in a web browser to anyone.
Packaged as a hosted service that removes the need to configure hardware or patch software, DuraCloud initially appears expensive, costing $375 per month. This includes an Amazon or Rackspace virtual machine (worth about $70) and 500 GB of storage (worth $60–$70), as well as support and updates. Additional storage is billed at your chosen cloud provider’s list price and is added to the DuraCloud invoice. DuraSpace CEO Michele Kimpton sees this as one way that DuraCloud delivers real value to subscribers. Purchasing rules in many libraries, for example, prevent the use of credit cards. Invoices from DuraCloud are far easier for libraries to deal with, as they fit entrenched processes based on purchase orders, approvals and invoices in ways that a traditional SaaS application’s use of credit cards or PayPal does not.
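To make that pricing concrete, here is a minimal sketch of how a monthly DuraCloud-style invoice might be computed. The $375 base fee and 500 GB allowance come from the figures above; the per-GB overage rate is a hypothetical placeholder standing in for whatever the chosen cloud provider’s list price happens to be.

```python
def monthly_invoice(total_gb, base_fee=375.0, included_gb=500,
                    overage_rate_per_gb=0.14):
    """Estimate a monthly bill: flat subscription plus provider-priced overage.

    overage_rate_per_gb is an assumed placeholder, not a published rate.
    """
    extra_gb = max(0, total_gb - included_gb)
    return base_fee + extra_gb * overage_rate_per_gb

print(monthly_invoice(500))   # entirely within the bundled allowance
print(monthly_invoice(1000))  # 500 GB of overage at the assumed rate
```

The point of the single consolidated figure is the one Kimpton makes: everything, including provider-priced overage, lands on one invoice that fits purchase-order workflows.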
And let’s not forget redundancy, a key principle of digital archiving: The more copies of a document, the less chance there is of losing something forever. However, many institutions struggle to achieve this in a cost-effective manner. DuraCloud’s management interface offers a solution to the problem by letting institutions redundantly store data in multiple Amazon regions or replicate across both Amazon and Rackspace. A sync service ensures that copies remain identical and notifies administrators if data loss occurs. Copies held in other regions could replace lost data.
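The article doesn’t describe how DuraCloud’s sync service actually detects loss or corruption, but a common approach is to compare checksums of each replica against a primary copy. A minimal sketch of that idea, assuming local file paths stand in for objects held in different cloud regions or providers:

```python
import hashlib

def md5_of(path):
    """Stream a file through MD5 so large objects never need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def divergent_replicas(primary, replicas):
    """Return the replica paths whose content no longer matches the primary copy."""
    expected = md5_of(primary)
    return [path for path in replicas if md5_of(path) != expected]
```

In a real service, any path this returned would trigger the administrator notification, and the damaged copy could be re-replicated from an intact region.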
As well as supporting Amazon and Rackspace, DuraCloud will soon add Microsoft’s Windows Azure. There is also an adapter for Eucalyptus, and Kimpton says she is “looking for a partner” interested in running a Eucalyptus-powered option. She is also interested in OpenStack and is tracking Rackspace’s transition to OpenStack code. A UK project exploring the feasibility of running centralized cloud infrastructure for universities might be one place in which an OpenStack-based DuraCloud installation could be tested. By hosting its own version of the (open-source) DuraCloud software on a dedicated academic private cloud, such a project could rent virtual machines and storage to partners at lower rates than commercial cloud services can match, driving costs below what the DuraCloud service itself can offer.
As private academic clouds like the OpenStack-powered one at the San Diego Supercomputer Center (SDSC) begin to appear, DuraCloud would be wise to evaluate the cost of basing future services on a network of similar installations at big cultural and academic institutions, rather than depending on the more commercial public cloud services. Shared — but private — cloud infrastructure running in a small number of larger cultural institutions might be capable of reaching sufficient scale to cost-effectively compete with the public infrastructure upon which DuraCloud relies today. Given the scale of the cultural sector and its long-term perspective on preservation, could this be a case in which the private cloud proves better than the public?
Question of the week
The proponents of G.hn should be excited. After all, the new triple-wire home networking standard is inching ever closer to market, what with ITU approval of the specification last year, the availability of the world’s first G.hn chipset this year and a fairly successful plugfest in Geneva this week.
But even after all that, the outlook for G.hn is still cloudy. Why? Because while the technology makes a lot of sense in theory — one chipset, three wires — so far only two service providers (BT and AT&T) have publicly committed to using it. And while G.hn may eventually make it to the retail sales channel, the wide adoption of Wi-Fi and HomePlug in home networks today makes for a pretty formidable one-two punch that will be hard to overcome.
So does G.hn stand a chance? While (naturally) some of the backers of MoCA (the standard for home-coax networking) and HomePlug (the standard for powerline home networking) would say no, there are still viable advantages that continue to make G.hn attractive:
- One technology across three wires should (eventually) mean lower costs. A single MAC/PHY working over all three wires means a single gateway SKU, which should lower service providers’ costs in both hardware and training.
- The standard has been approved by the International Telecommunication Union (ITU), which means it should be widely available and free of the IP licensing snags that crop up with proprietary technologies.
- It is a next-generation standard — meaning it has a data throughput of up to 1 Gbps — making it competitive with the next-generation MoCA (MoCA 2.0) and HomePlug (AV2) standards, both of which promise gigabit-per-second speeds.
With these advantages, why does G.hn still appear to struggle? One reason is that the technology is late to market. In North America, MoCA has become the de facto multimedia networking technology for service providers, while HomePlug has gained significant traction at retail as an alternative to Wi-Fi and has seen some success overseas.
Perhaps most importantly, one of the biggest critiques of G.hn is that it’s not backwards-compatible with HomePlug or MoCA. It makes sense for a new standard to essentially work from a “clean slate” technologically, but for those service providers with millions of MoCA boxes in the field or for those with a HomePlug network, moving to G.hn would require a “forklift upgrade” of the home network.
Are things hopeless for G.hn? No, but the technology needs to find momentum quickly, both through additional service-provider commitments and through shipping hardware. Silicon has already shipped from the likes of Sigma and Lantiq to hardware manufacturers, but there are no production G.hn units in the field. For a standard that was completed a year ago, to be just starting plugfests with no production hardware is a sign that things are not moving quickly.
Once hardware is available, it is likely that some service providers will field trial G.hn. Field trials take time, and all the while, MoCA and HomePlug keep marching on, with devices being deployed and next-generation technologies on the cusp of delivery.
Bottom line: If G.hn hopes to stand a chance, it needs to start showing results in the marketplace, and fast.
Question of the week
Brewster Kahle spoke in the same session as I did at an event near Amsterdam today. Among a number of interesting points, Kahle mentioned in passing that the Internet Archive offers cloud storage services to public institutions, with contributions of around $2,000 ‘endowing’ a terabyte of storage in perpetuity. This is an intriguing model, and cautious archives, libraries and museums may be happier to trust Kahle than strictly commercial providers. Does this relatively safe and straightforward first step make it easier or harder for those institutions to subsequently adopt mainstream services, and might they bring pressure to bear on the Archive to offer a fuller range of cloud services that compete more directly with Amazon et al.?
Wi-Fi home networks are no longer the sole domain of the tech-savvy, while more and more non-PC devices — be they game consoles or iPod touches — are connecting to the network. But while the home network has, in fact, evolved, we’re not anywhere near that utopian vision of the digital home. As any of us who have a home network can attest, half the time it feels like it’s hanging together with Band-Aids and silly putty, a temperamental creation in which devices can’t connect, the router needs rebooting, and if we’re lucky enough to make video streaming from the PC to the TV work, chances are it won’t tomorrow.

In short, for all the advances of the home network, the transition to the full-fledged, seamlessly connected media network remains a distant vision. So what’s the deal? Why is the reality of the digital home so hard to achieve?