The New York Times today finally got around to noticing that when web sites go down, people are increasingly likely to get mad and generally react the way I might if I drove to my favorite bar and found it closed for a private party. I might be miffed and share a few choice words with members of my party before deciding on a new locale. However, when we write blogs or tweets (if Twitter is up), the inconvenience and our subsequent vitriol are archived forever and transmitted around the world rather than just to our friends. And because millions of other people want to go to that same bar, the chorus of curses grows quickly.
We’ve written about how hard it is to achieve the 99.999 percent uptime championed by the telecommunications industry, but suffice it to say there are a ton of moving parts involved in keeping a site visible to end users; the list begins with the network architecture and ends with the internet connection of a consumer in Austin. Along the way there are software upgrades, server shortages, DNS issues, cut cables, corporate firewalls, carriers throttling traffic and infected machines.
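To put "five nines" in perspective, a quick back-of-the-envelope calculation shows how little downtime that target actually allows (the function name here is ours, purely for illustration):

```python
# Back-of-the-envelope downtime budget for a given availability target.

def downtime_budget_seconds(availability: float, period_seconds: int) -> float:
    """Seconds of allowed downtime over `period_seconds` at `availability`."""
    return (1.0 - availability) * period_seconds

SECONDS_PER_YEAR = 365 * 24 * 3600  # ignoring leap years

for label, target in [
    ("three nines", 0.999),
    ("four nines", 0.9999),
    ("five nines", 0.99999),
]:
    budget = downtime_budget_seconds(target, SECONDS_PER_YEAR)
    print(f"{label} ({target}): {budget / 60:.1f} minutes of downtime per year")
# → three nines (0.999): 525.6 minutes of downtime per year
# → four nines (0.9999): 52.6 minutes of downtime per year
# → five nines (0.99999): 5.3 minutes of downtime per year
```

In other words, a five-nines site gets barely five minutes of outage per year — total, across every upgrade, cut cable and DNS hiccup in that list.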
The Times notes that downtime is more than just inconvenient: As more data is stored online and cloud computing becomes more prevalent for businesses, it’s less like a bar closing for a night than a bank closing for a day. But it will never be possible to keep all sites across the entire web up 99.999 percent of the time. Knowing that, architecting for failure is the sensible response, and more services such as downforeveryoneorjustme.com (I would really love a more memorable name for that site) and helpful 404 pages would be appreciated.