Download All The Things, Round II

Those of you who have been reading this blog for a while may recall Download All The Things!, where I investigated the feasibility of downloading the entire Internet (lolcats included, of course).  I’ve decided to revisit this, but with one small (or not so small) difference: a revised estimate of the size of the Internet.

For Round II, I’m going with one yottabyte (or “yobibyte”, to keep the binary-prefix pedants happy).  This is a massive amount of data: 1024 to the power 8 (or 2 to the power 80) bytes (and no, I’m not typing the full figure out on account of word-wrapping weirdness); it’s just short of 70,000 times my original 15 EB estimate.  To give a more layman-friendly example: you know those 1 terabyte external hard drives that you can pick up at reasonable prices from just about any computer store these days?  Well, one yottabyte is equivalent to roughly one trillion of said drives.  A yottabyte is so large that no-one has yet coined a term for the next step up.  (Suggestion for those wanting to do so: please go all Calvin and Hobbes on us and call 1024 yottabytes a “gazillabyte”!)
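
If you’d like to check those numbers yourself, here’s the back-of-the-envelope sum in a few lines of Python (a sketch, assuming binary prefixes for the yottabyte and marketing-decimal terabytes for the external drives):

```python
# Rough sizing of a yottabyte (well, yobibyte) -- a sketch, not gospel
YOBIBYTE = 1024 ** 8        # 2**80 bytes
EXBIBYTE = 1024 ** 6        # 2**60 bytes
TERABYTE = 10 ** 12         # the "1 TB" printed on the external drive's box

print(f"1 YiB = {YOBIBYTE:,} bytes")
print(f"1 TB external drives needed: {YOBIBYTE / TERABYTE:,.0f}")                     # ~1.2 trillion
print(f"Multiple of the original 15 EB estimate: {YOBIBYTE / (15 * EXBIBYTE):,.0f}")  # ~69,905
```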

There are two reasons why I wanted to do this:

  • Since writing the original post, I’ve long suspected that my initial estimate of 15 EB, later revised to 50 EB, may have been way, way too small.
  • In March 2012, it was reported that the NSA was planning on constructing a facility in Utah capable of storing/processing data in the yottabyte range.  Since Edward Snowden’s revelations regarding NSA shenanigans, it’s a good figure to investigate for purposes of tin foil hat purchases.

Needless to say, changing the estimated size of the internet has a massive effect on the results.

You’re not going to download 1 YB via conventional means.  Not via ADSL, not via WACS, not via the combined capacity of every undersea cable.  (It would take you around 60,000 years to download 1 YB via the full 5.12 Tbps design capacity of WACS.)  This means that, this time around, we’re going to have to go with something far more exotic.
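
The sum behind that 60,000-year figure, assuming the cable’s full 5.12 Tbps design capacity is available around the clock with zero protocol overhead:

```python
# Time to pull 1 YiB through WACS at full design capacity -- idealised, of course
YOBIBYTE_BITS = (1024 ** 8) * 8       # 1 YiB expressed in bits
WACS_BPS = 5.12e12                    # WACS design capacity: 5.12 Tbps
SECONDS_PER_YEAR = 365.25 * 24 * 3600

print(f"{YOBIBYTE_BITS / WACS_BPS / SECONDS_PER_YEAR:,.0f} years")  # just under 60,000 years
```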

What would work is Internet Protocol over Avian Carriers (and yes, this is exactly what you think it is).  However, the avian carriers described in RFC 1149 won’t quite cut it, so we’ll need to submit a new RFC which includes a physical server in the definition of “data packet” and a Boeing 747 freighter in the definition of “avian carrier”.  While that’s being debated and approved by the IETF, and while we sort out the logistical requirements around said freighter fleet, we can get going on constructing a data centre for the entire Internet.

As for the data centre requirements, we can use the NSA’s Utah DC as a baseline once more.  The blueprints for the data centre indicate that around 100,000 square feet of the facility will be used for housing the data, with the remainder given over to cooling, power, and making sure that we mere mortals can’t get our prying eyes on the prying eyes.  Problem is, once the blueprints were revealed/leaked/whatever, we realised that such a data centre would likely only be able to hold a volume of data in the exabyte range.

TechCrunch told us just how far off the yottabyte estimate was:

How far off were the estimates that we were fed before? Taking an unkind view of the yottabyte idea, let’s presume that it was the implication that the center could hold the lowest number of yottabytes possible to be plural: 2. The smaller, and likely most reasonable, claim of 3 exabytes of storage at the center is directly comparable.

Now, let’s dig into the math a bit and see just how far off early estimates were. Stacked side by side, it would take 666,666 3-exabyte units of storage to equal 2 yottabytes. That’s because a yottabyte is 1,000 zettabytes, each of which contain 1,000 exabytes. So, a yottabyte is 1 million exabytes. The ratio of 2:3 in our example of yottabytes and exabytes is applied, and we wrap with a 666,666:1 ratio.

I highlight that fact, as the idea that the Utah data center might hold yottabytes has been bandied about as if it was logical. It’s not, given the space available for servers and the like.

Yup, we’re going to need to build a whole lot of data centres.  I vote for building them up in Upington, because (1) there’s practically nothing there, and (2) the place conveniently has a 747-capable runway.  Power is going to be an issue, though: each data centre is estimated to use 65 MW of power.  TechCrunch’s 666,666 figure was for two yottabytes; our single yottabyte still needs around 333,333 of these facilities, and multiplying that by 65 MW gives us… yeah, this is going to be a bit of a problem.  Just short of 22 terawatts are required here, and when one considers that xkcd’s indestructible hair dryer was “impossibly” consuming more power than every other electrical device on the planet combined when it hit 18.7 TW, we’re going to have to think outside of the box.  (Pun intended for those who have read the indestructible hair dryer article.)
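
Here’s the data-centre-and-power sum, taking TechCrunch’s 3 exabytes per facility and the 65 MW per data centre figure at face value (decimal exabytes, to match their maths):

```python
# How many Utah-sized data centres for one yottabyte, and how much power?
YOTTABYTE = 10 ** 24                  # decimal, matching the TechCrunch arithmetic
EXABYTE = 10 ** 18
DC_CAPACITY = 3 * EXABYTE             # claimed storage per facility
DC_POWER_W = 65e6                     # 65 MW per facility

data_centres = YOTTABYTE / DC_CAPACITY
total_tw = data_centres * DC_POWER_W / 1e12

print(f"Data centres: {data_centres:,.0f}")   # ~333,333
print(f"Total power: {total_tw:,.1f} TW")     # ~21.7 TW
```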

Or not… because our one-yottabyte figure is almost certainly well ahead of the Internet’s actual size today.  So, we can do this in phases: build 10,000-50,000 or so data centres, fill them up, power them up, then rinse and repeat until we’ve got the entire Internet.  You’ll need every construction crew in the world working around the clock to build the data centres and power stations, and every electrical engineer in the world working on re-routing power from elsewhere, especially when one considers that, thanks to advances in technology (sometimes Moore’s Law is not in our favour), the Internet will keep growing while we’re doing all of this.  But it might just about be possible.

That said: even given the practical impossibility of the task, don’t underestimate the NSA.

Or, for that matter, Eskom.

Cloud computing, a tad too literally

I couldn’t help but chuckle when this Slashdot post popped up:

The Register carries the funniest, most topical IT story of the year: ‘Facebook’s first data center ran into problems of a distinctly ironic nature when a literal cloud formed in the IT room and started to rain on servers. Though Facebook has previously hinted at this via references to a ‘humidity event’ within its first data center in Prineville, Oregon, the social network’s infrastructure king Jay Parikh told The Reg on Thursday that, for a few minutes in Summer, 2011, Facebook’s data center contained two clouds: one powered the social network, the other poured water on it.

Someone had better explain to Facebook that having an actual cloud in the data centre isn’t really what cloud computing is all about!

That said, this has happened before.  Boeing’s Everett factory (initially constructed to assemble the 747, and now used to assemble all of their widebody jets) used to have clouds forming near the ceiling, before the installation of a specialised air circulation system sorted that little problem out.  Evidently, some are destined to repeat history’s mistakes…

Download all the things!

One of my friends over on my crappy little forum recently received the following support ticket (and, quite understandably, facepalmed):

Can you please download internet on my system?

Rather than partake in some sympathetic facepalming of my own, I thought I’d come up with a quite literal answer, in xkcd’s “What If?” style.*

The first question we have to answer is: what is the size of the Internet?  Any answer will be an estimate at best (and wild speculation at worst), because the cold, hard truth is that no-one knows.  That’s down to the distributed nature of the Internet (and of the underlying TCP/IP protocol suite that it’s built on): with quite possibly millions of servers spread across the world, it’s hard to measure for sure.  The other problem: what would count towards the size?  Content served over HTTP/HTTPS certainly would, but what about FTP?  SMTP?  NNTP?  Peer-to-peer filesharing?  And would content that’s only accessible indirectly (such as data stored in a backend database) count?

The only thing that we have to go on is an estimate that Eric Schmidt, Google’s executive chairman, made back in 2005; at the time, he put the figure at around five million terabytes (which I’ll round up to 5 exabytes).  Back then, Google had only indexed 200 terabytes of data, so Schmidt’s estimate probably took e-mail, newsgroups, etc. into consideration.  With our world becoming ever more connected in the intervening 8 years, that figure has likely shot up, particularly with sites such as YouTube, Facebook, Netflix, The Pirate Bay et al coming into the equation.  I’m going to throw a rough guesstimate together and put the figure at 15 EB today, based on my gut feeling alone.  (Yes, I know it’s not terribly scientific, and I’ve probably shot way too low here, but let’s face it: what else is there to go on?)

Currently, down here on the southern tip of Africa, our fastest broadband connection is 10 Mbps ADSL.  In reality, our ISPs would throttle the connection into oblivion if one were to continually hammer their networks trying to download the Internet like that (contention ratios causing quality of service for everyone else to suffer, and all of that), but let’s assume that, for the purposes of this exercise, we can sweet-talk them into giving us a guaranteed 10 Mbps of throughput.  15 EB works out to a staggering 138,350,580,552,821,637,120 bits of data, and given that we can download 10,000,000 of those bits every second (in reality it would be lower due to network overhead, but let’s leave that out of the equation), it would take almost 440,000 years to download the Internet over that connection.
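
For anyone wanting to check my arithmetic, the sum looks something like this (a sketch that assumes a perfectly sustained 10 Mbps and ignores all overhead):

```python
# 15 EiB over an idealised, never-throttled 10 Mbps ADSL line
INTERNET_BITS = 15 * (1024 ** 6) * 8   # 15 EiB in bits
ADSL_BPS = 10_000_000
SECONDS_PER_YEAR = 365.25 * 24 * 3600

print(f"{INTERNET_BITS:,} bits")                                     # 138,350,580,552,821,637,120
print(f"{INTERNET_BITS / ADSL_BPS / SECONDS_PER_YEAR:,.0f} years")   # ~438,000 years
```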

In fact, over that length of time, you’d never be able to download the Internet at all.  Considering that the Internet only went into widespread public use in the early 1990s (never mind the decades before that, when it was pretty much a research plaything), it is growing faster than one can download it over a 10 Mbps connection.  Plus, given the timeframe involved, the constant status update requests on the support ticket would drive all involved to suicide, even if (actually, particularly if) we discover a way of making human immortality a possibility in the interim.  Clearly, we need something a lot faster.

Enter the WACS cable system.  It’s a submarine cable that links us up to Europe via the west coast of Africa, cost US$650 million to construct, and has a design capacity of 5.12 Tbps.  If we could secure the entire bandwidth of this cable to download the Internet, we could do it in a little over 10 months.  While we may still have the aforementioned suicide problem, this is far more like it.
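
Same sum as before, just swapping in the cable’s design capacity:

```python
# 15 EiB over WACS at its full 5.12 Tbps design capacity
INTERNET_BITS = 15 * (1024 ** 6) * 8
WACS_BPS = 5.12e12

days = INTERNET_BITS / WACS_BPS / 86400
print(f"{days:,.0f} days (roughly {days / 30.44:.1f} months)")   # ~313 days, a little over 10 months
```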

But of course, what’s the point of downloading the Internet if we can’t store the data we’ve just downloaded?

Currently, the highest-capacity hard drives weigh in at 4 TB (here’s an enterprise-level example from Western Digital).  We’d need a minimum of 3,932,160 such drives to store the Internet (in the real world, we’d need more for redundancy, but once again, let’s not worry about that here).  Each of these enterprise-level drives draws 11.5 watts, so we’d need ~45 MW just for the hard drives alone; we’d need plenty more (I’m thinking around 10 to 15 times more!) to power the hardware connecting all of this up, the building where this giant supercomputer will be housed, and the cooling equipment to keep everything running at an acceptable temperature.  We’d need to build a small power plant to keep everything running.
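
And the storage-and-power sums (a sketch that treats the “4 TB” drives as 4 TiB, consistent with the binary rounding I’ve used throughout):

```python
# Drive count and power draw for 15 EiB of storage
INTERNET_BYTES = 15 * (1024 ** 6)    # 15 EiB
DRIVE_BYTES = 4 * (1024 ** 4)        # 4 TiB per drive
DRIVE_WATTS = 11.5                   # quoted per-drive draw for the enterprise drive

drives = INTERNET_BYTES / DRIVE_BYTES
drive_mw = drives * DRIVE_WATTS / 1e6

print(f"Drives: {drives:,.0f}")                                     # 3,932,160
print(f"Drive power alone: {drive_mw:,.1f} MW")                     # ~45 MW
print(f"With 10-15x overhead: {10 * drive_mw:,.0f} to {15 * drive_mw:,.0f} MW")
```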

So yes, you can download the Internet.  You just need a major submarine communications cable, tens of millions of hard drives, and a small power plant to provide enough electricity to run it all.  If you get started now, you can give someone the present of One Internet** when next Christmas rolls around.  The question of dealing with bandwidth and electricity bills is one that I will leave to the reader.

Now get going, dammit!

* Randall, if you’ve somehow stumbled upon this and you think you could do a better job than myself, go for it!

** Though, depending on who the recipient is, you may or may not want to include 4chan’s /b/ board.

UPDATE #1: I was asked to up it to 50 EB, which in retrospect may be a more realistic size for the Internet than the 15 EB I put forward earlier.  That would take almost 3 years to download over WACS and would require 13,107,200 hard drives, with a significantly increased power requirement.  The Koeberg Nuclear Power Station (not too far away from the WACS landing site at Yzerfontein) has two reactors, each capable of producing 900 MW, so if we take Koeberg off the national grid (which will cause the rest of the country to experience rolling blackouts, but hey, it’s in the name of progress!) and use the entire nuke plant’s capacity to power our supercomputer and related infrastructure, that should just about do it.
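
The revised sums for 50 EB, under the same assumptions as before:

```python
# The 50 EiB revision: download time, drive count, and power vs. Koeberg
INTERNET_BYTES = 50 * (1024 ** 6)
WACS_BPS = 5.12e12
DRIVE_BYTES = 4 * (1024 ** 4)
DRIVE_WATTS = 11.5
KOEBERG_MW = 2 * 900                 # two reactors at roughly 900 MW each

years = INTERNET_BYTES * 8 / WACS_BPS / (365.25 * 24 * 3600)
drives = INTERNET_BYTES / DRIVE_BYTES
low_mw = drives * DRIVE_WATTS * 10 / 1e6    # 10x overhead, as per the earlier guess
high_mw = drives * DRIVE_WATTS * 15 / 1e6   # 15x overhead

print(f"Download time over WACS: {years:.1f} years")               # ~2.9 years
print(f"Drives: {drives:,.0f}")                                    # 13,107,200
print(f"Estimated draw: {low_mw:,.0f}-{high_mw:,.0f} MW vs Koeberg's {KOEBERG_MW} MW")
```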