SSL fun and games

Due mainly to the Heartbleed bug (and for the non-technical readers, here’s a newbie-friendly explanation, courtesy of xkcd), I’ve been tweaking some of the SSL settings on here.  A quick list of changes:

  • Naturally, OpenSSL has been patched against Heartbleed.  I’m in the process of getting the site’s SSL certificate revoked and reissued.
  • SSLv3 is now disabled, as it is considered insecure.  (SSLv2 was disabled already.)
  • The cipher suites have been altered to prefer forward secrecy (in most browsers; Internet Explorer on Windows XP is the exception, though it should still be able to fall back to a cipher suite without forward secrecy).  For the technically minded, here’s how to deploy it.
  • The server now sends an HTTP Strict Transport Security header with a long max-age.  (A quick way to verify all of the above from the client side is sketched below.)
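
For the curious, here’s a quick way to sanity-check settings like these from the client side.  This is a minimal sketch using a recent Python 3 (example.com is a stand-in for whatever host you’re testing); Qualys does all of this far more thoroughly, of course:

    import http.client
    import socket
    import ssl

    HOST = "example.com"  # stand-in: substitute the host being tested

    # See what a modern client negotiates.  A current Python's default
    # context already refuses SSLv2/SSLv3, so a successful handshake here
    # also confirms that TLS is on offer.
    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("protocol:", tls.version())  # e.g. TLSv1.2
            print("cipher:", tls.cipher())     # ECDHE/DHE suites give forward secrecy

    # Check for the Strict-Transport-Security header and its max-age.
    conn = http.client.HTTPSConnection(HOST)
    conn.request("HEAD", "/")
    print("HSTS:", conn.getresponse().getheader("Strict-Transport-Security"))
    conn.close()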

Happily, Qualys SSL Labs now gives this site an A+ rating.  Probably overkill for a small, personal site such as this one, but it’s still nice to know.

The changes made shouldn’t have broken anything (touch wood), but if they have, please get hold of me so that I can fix things up.

(Incidentally, MyBroadband has compiled a list of various South African sites where one would expect good security, rating each by its Qualys SSL Labs grade.  Of particular concern are Standard Bank’s Internet banking servers, which score an F for supporting insecure renegotiation; Standard Bank has yet to comment on the issue.  Notably absent is SANRAL, whose site is inaccessible from outside South Africa and thus couldn’t be tested.)

Food for thought (or WhatsApp messages)

After the news this morning that Facebook will be acquiring WhatsApp for US$ 16 billion, I got to thinking.

  • The price of a McDonald’s Happy Meal in these parts is ~R25.  At the current exchange rate, that’s around $2.25 (not sure of actual US pricing, though).
  • The current world population is estimated to be ~7.2 billion.
  • Given all of the above, Facebook could buy everyone on the planet one McDonald’s Happy Meal and — for one day, at least — solve world hunger.  (A quick check of the sums follows below.)
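
And here is said check: a trivial Python sketch using the figures above.  It comes out at roughly 0.99 Happy Meals per person, which I’m calling close enough.

    # Back-of-the-envelope check of the Happy Meal claim.
    deal_usd = 16e9        # WhatsApp price tag
    meal_usd = 2.25        # ~R25 Happy Meal at the quoted exchange rate
    population = 7.2e9     # estimated world population

    print(deal_usd / (meal_usd * population))  # ~0.99 Happy Meals per person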

Some food for thought.

Delphi, Java and Visual Basic! Oh my!

Yesterday, MyBroadband published an editorial piece regarding the Department of Basic Education’s choice of programming languages for the high school curriculum (and for the coders who come here, it’s well worth a read).  Specifically, the direction seems to be Delphi — this resulted in a storm of comments describing Delphi as “outdated”, “obsolete”, “antiquated” and similar.  While those are perfectly valid points, there is something important that those commentators have overlooked.

Delphi is an object-oriented derivative of Pascal, a language created in the late 1960s, and what a lot of people don’t realise is that Pascal (and hence, by extension, Delphi) was created primarily as a language for teaching students structured programming.  The language lacks features that would make it useful in a commercial/production environment (I certainly wouldn’t use it to pay the bills!), but it’s just fine for teaching basic concepts — perhaps not the absolute best choice (as I’ll go into a little later), but a solid choice nonetheless.  True, it’s not what’s being used out there in the Real World, but as the DBE (correctly, in my opinion) puts it, their aim is not vocational training, but “to lay a solid foundation to enable a learner to pursue further education at [a higher education institution] in the IT field”.  Delphi, Pascal, and several other languages do just that.

This is something that I can most definitely attest to from personal experience.  I took my first foray into programming with Turbo Pascal, when my age was still in the single figures, and it was the language that I used when I completed high school in 2002.  (We were the last class to use Turbo Pascal, though; the 2003 class was on Delphi.)  I have neither seen nor written a line of Pascal code since, but the concepts taught served me well when I moved on to “Real World” languages (C, C++, C#, Java, PHP and plenty of others).  A few years later, when working as an instructor at a private college, I noticed a distinct pattern: the people who had those concepts instilled into them in high school (it was mainly Pascal and Delphi folks filtering through) generally handled the subject matter satisfactorily, whereas the people who hadn’t were jumping straight into C#, Java and Visual Basic, and finding themselves well out of their depth.

The last sentence above is worthy of further elaboration and dissection, as a lot of people over on the MyBroadband thread believe Java to be a worthy first language.  I strongly disagree, and I’m not the only one.  In January 2008, Dr. Robert B.K. Dewar and Dr. Edmond Schonberg published in the Journal of Defense Software Engineering a piece entitled “Computer Science Education: Where Are the Software Engineers of Tomorrow?” (freely downloadable as a PDF here), in which Java comes in for some particularly savage mauling (search the paper for “The Pitfalls of Java as a First Programming Language”).  As they brutally put it, Java “encourages the [first time] programmer to approach problem-solving like a plumber in a hardware store: by rummaging through a multitude of drawers (i.e. packages) we will end up finding some gadget (i.e. class) that does roughly what we want”.  There’s a lot of boilerplate code that one has to write in Java around a simple “Hello World!” program: a few folks over on the MyBroadband thread lamented having to parrot-learn “public static void main()” without understanding what “public”, “static” and “void” did and, more importantly, why they were needed.  That’s perfectly fine if you already have the concepts and are working in a production environment.  Not so fine when you’re learning how to program for the first time.

Eric S. Raymond, in his “How To Become A Hacker” essay, makes a point that I find very hard to disagree with:

There is perhaps a more general point here. If a language does too much for you, it may be simultaneously a good tool for production and a bad one for learning. It’s not only languages that have this problem; web application frameworks like RubyOnRails, CakePHP, Django may make it too easy to reach a superficial sort of understanding that will leave you without resources when you have to tackle a hard problem, or even just debug the solution to an easy one.

Having said that, however, I have some concerns about the Department of Basic Education’s approach. From the MyBroadband article, it looks like the curriculum will be primarily based on using wizards; I may be a bit old-school, but this approach makes me uncomfortable. To me, it’s just a different type of boilerplate (a different iteration of “public static void main()”, in a way) — great for production, where time is a factor, but for learning and educational purposes, you want people to know (0) what the wizard is doing, and (1) why it’s doing what it’s doing. Nothing in the original article gives me any confidence that pupils will be taught this.

Finally, while I consider Pascal/Delphi good teaching languages, I don’t consider them to be the best.  That accolade, to me, goes to Python.  From a beginner’s point of view, it’s cleanly designed, well documented and, compared to a lot of other languages out there, relatively kind to newcomers — and yet, the language itself is powerful, flexible and scalable to far larger projects.  Moreover, the language is free (both free as in freedom and free as in beer), which was one of the original requirements of the Department of Basic Education but seems to have been kicked to the sidewalk at some point.  For those interested, ESR has written a detailed critique of Python, and the Python website itself has some very good tutorials.
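
To illustrate what I mean by “kind to newcomers”, here’s the sort of complete, runnable first program a pupil could write in Python.  The concepts being taught (a function and a loop) are the only things on the screen, with no boilerplate to parrot-learn.  (The example is mine, not from any curriculum.)

    # A first lesson in structured programming: define a function, call it in a loop.
    def greet(name):
        return "Hello, " + name + "!"

    for pupil in ["Thandi", "Pieter", "Aisha"]:
        print(greet(pupil))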

Download All The Things, Round II

Those of you who have been reading this blog for a while may recall Download All The Things!, where I investigated the feasibility of downloading the entire Internet (lolcats included, of course).  I’ve decided to revisit this, but with one small (or not so small) difference: a revised estimate of the size of the Internet.

For Round II, I’m going with one yottabyte (or “yobibyte”, to keep the binary-prefix pedants happy).  This is a massive amount of data: 1024 to the power 8 (or 2 to the power 80) bytes (and no, I’m not typing the full figure out on account of word-wrapping weirdness); it’s just short of 70,000 times the size of our original estimate.  To give a more layman-friendly example: you know those 1 terabyte external hard drives that you can pick up at reasonable prices from just about any computer store these days?  Well, one yottabyte is equivalent to around 1.2 trillion said drives.  A yottabyte is so large that no-one has yet coined a term for the next order of magnitude.  (Suggestion for those wanting to do so: please go all Calvin and Hobbes on us and call 1024 yottabytes a “gazillabyte”!)
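
For those wanting to check my arithmetic, here it is in Python form:

    yib = 1024 ** 8               # one (binary) yottabyte, in bytes
    print(yib == 2 ** 80)         # True
    print(yib)                    # 1208925819614629174706176, hence the word-wrapping fears
    print(yib // 10 ** 12)        # ~1.2 trillion 1 TB drives
    print(yib // (15 * 2 ** 60))  # 69905: just short of 70,000x the original 15 EB estimate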

There are two reasons why I wanted to do this:

  • Since writing the original post, I’ve long suspected that my initial estimate of 15 EB, later revised to 50 EB, may have been way, way too small.
  • In March 2012, it was reported that the NSA was planning on constructing a facility in Utah capable of storing/processing data in the yottabyte range.  Since Edward Snowden’s revelations regarding NSA shenanigans, it’s a good figure to investigate for purposes of tin foil hat purchases.

Needless to say, changing the estimated size of the internet has a massive effect on the results.

You’re not going to download 1 YB via conventional means.  Not via ADSL, not via WACS, not via the combined capacity of every undersea cable.  (It would take around 60,000 years to download 1 YB via the full 5.12 Tbps design capacity of WACS; the sum is below.)  This means that, this time around, we’re going to have to go with something far more exotic.
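
Said sum, for the sceptical:

    bits = (1024 ** 8) * 8           # one yottabyte, in bits
    wacs_bps = 5.12e12               # WACS design capacity: 5.12 Tbps
    years = bits / wacs_bps / (365.25 * 24 * 3600)
    print(round(years))              # 59858: call it 60,000 years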

What would work is Internet Protocol over Avian Carriers — and yes, this is exactly what you think it is.  However, the avian carriers described in RFC 1149 won’t quite cut it, so we’ll need to submit a new RFC which includes a physical server in the definition of “data packet” and a Boeing 747 freighter in the definition of “avian carrier”.  While this is being debated and approved by the IETF, and we sort out the logistical requirements around said freighter fleet, we can get going on constructing a data centre for the entire Internet.

As for the data centre requirements, we can use the NSA’s Utah DC as a baseline once more.  The blueprints for the data centre indicate that around 100,000 square feet of the facility will be for housing the data, with the remainder being used for cooling, power, and making sure that we mere mortals can’t get our prying eyes on the prying eyes.  Problem is, once the blueprints were revealed/leaked/whatever, we realised that such a data centre would likely only be able to hold a volume of data in the exabyte range.

TechCrunch told us just how far out the yottabyte estimate was:

How far off were the estimates that we were fed before? Taking an unkind view of the yottabyte idea, let’s presume that it was the implication that the center could hold the lowest number of yottabytes possible to be plural: 2. The smaller, and likely most reasonable, claim of 3 exabytes of storage at the center is directly comparable.

Now, let’s dig into the math a bit and see just how far off early estimates were. Stacked side by side, it would take 666,666 3-exabyte units of storage to equal 2 yottabytes. That’s because a yottabyte is 1,000 zettabytes, each of which contain 1,000 exabytes. So, a yottabyte is 1 million exabytes. The ratio of 2:3 in our example of yottabytes and exabytes is applied, and we wrap with a 666,666:1 ratio.

I highlight that fact, as the idea that the Utah data center might hold yottabytes has been bandied about as if it was logical. It’s not, given the space available for servers and the like.

Yup, we’re going to need to build a whole lot of data centres.  I vote for building them up in Upington, because (1) there’s practically nothing there, and (2) the place conveniently has a 747-capable runway.  Power is going to be an issue though: each data centre is estimated to use 65 MW of power.  Multiply this by 666,666, and… yeah, this is going to be a bit of a problem.  Just short of 44 terawatts are required here, and when one considers that xkcd’s indestructible hair dryer was “impossibly” consuming more power than every other electrical device on the planet combined when it hit 18.7 TW, we’re going to have to think outside of the box.  (Pun intended for those who have read the indestructible hair dryer article.)
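
Checking that terawatt figure, same as before:

    dc_mw = 65                    # estimated draw per Utah-sized data centre, MW
    num_dcs = 666666              # from the TechCrunch ratio above
    print(dc_mw * num_dcs / 1e6)  # ~43.3 TW: just short of 44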

Or not… because, in practice, one yottabyte is almost certainly a massive overestimate of the size of the Internet as it stands today.  So, we can do this in phases: build 10,000-50,000 or so data centres, fill them up, power them up, then rinse and repeat until we’ve got the entire Internet.  You’ll need every construction crew in the world working around the clock to build the data centres and power stations, and every electrical engineer in the world working on re-routing power — especially when one considers that, thanks to advancements in technology (sometimes, Moore’s Law is not in our favour), the size of the Internet will be increasing all the time while we’re doing this.  But it might just about be possible.

That said: even given the practical impossibility of the task, don’t underestimate the NSA.

Or, for that matter, Eskom:

Have a break, have… an Android?

Fresh from this morning’s Slashdot news feed is this:

Today Google revealed that the next major version of the Android mobile operating system will be called ‘KitKat.’ The naming convention has always used sugary snacks in alphabetical order — Jelly Bean (4.1 – 4.3) followed Ice Cream Sandwich (4.0), which followed Honeycomb (3.1 – 3.2), which followed Gingerbread (2.3), and so on. Unlike the previous releases, KitKat is named after an actual product, rather than a generic treat. Thus, Google contacted Nestle, who was happy to jump on board and take advantage of the cross-marketing opportunities. According to an article at the BBC, the Android team was originally going to use ‘Key Lime Pie,’ but they decided it wasn’t familiar enough to most people. After finding some KitKat bars in the company fridge, they made the choice to switch. Nestle was on board ‘within an hour’ of hearing the idea.

Naturally, I couldn’t resist finding and posting this:

I’m not sure whom to credit for this image, since it has propagated through the intarwebs at a rapid rate of knots.  If you do know, or if it’s you, please get in touch.

OpenMediaVault 101

For those of you who don’t know what OpenMediaVault is, it’s a free (as in both speech and beer) operating system for media storage (which a lot of us geeky types seem to be building nowadays, including one of my slacker housemates), based on Debian and with a storage system not unlike FreeNAS.  If you want to learn more, I suggest running off to their website.

However, if you want to learn how to set it up, Monty over at my forum has documented his own experiences with getting it done (complete with an updated kernel, UPS drivers, and more).  If this sort of thing interests you, I really, really recommend that you go check it out.

Moving to a new, improved home

After a run-in with my shared host’s mod_security settings (it throws a hissy fit whenever someone over on my forum submits a large image post, which happens fairly frequently), I’ve finally decided to take the plunge and do what I’ve been considering for a while now: procure a virtual private server and move this site over to it.  (This is only going to cost me an extra R20 per month, so I say bring it on!)

Why?  Because, unlike a shared hosting environment, I’ll end up with pretty much full control over the underlying website infrastructure.  If/when something breaks, I’ll be in a position to fix it myself rather than rely on it being fixed for me, plus I’ll be able to leverage more bleeding-edge stuff (in particular, shunning MySQL).

Stuff that I’d like to do and would now be able to:

  • Purchase an SSL certificate and make the site SSL-only.  In the wake of recent revelations regarding the United States National Security Agency, this is more of a user privacy measure than a security measure, but it’s one that I feel I owe to this site’s users.  Granted, I could have done this already; what’s stopped me thus far is the inability to do much about mixed content warnings (not so much a problem here, but it would be problematic for my forum users).  With root control, I can set up a Camo server (if I can figure out their rather cryptic documentation!) and solve that little issue.  (A rough sketch of how Camo-style proxy URLs work appears after this list.)
  • IPv6 support.  My tingling geek sense demands that this be done!
  • Drop MySQL for PostgreSQL and/or MariaDB, as I share the open source community’s concerns about MySQL’s future and what Oracle is currently doing with it.
  • Potentially some other cool stuff as the need/desire/lust for cool stuff arises.
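
On the Camo point: as far as I can make out from the aforementioned cryptic documentation, the proxy authenticates image URLs with an HMAC digest over a shared key, so it will only fetch URLs that your own site generated.  Here’s a minimal Python sketch of the idea; the key, host name and hex-encoded URL format below are my assumptions rather than gospel, so consult the real documentation before deploying anything.

    import hashlib
    import hmac

    CAMO_KEY = b"some-shared-secret"        # hypothetical shared key
    CAMO_HOST = "https://camo.example.com"  # hypothetical proxy host

    def camo_url(image_url):
        # Sign the image URL; the proxy verifies this digest before fetching.
        digest = hmac.new(CAMO_KEY, image_url.encode("utf-8"), hashlib.sha1).hexdigest()
        # Hex-encode the URL itself rather than passing it as a query string.
        return "%s/%s/%s" % (CAMO_HOST, digest, image_url.encode("utf-8").hex())

    print(camo_url("http://example.com/some-image.png"))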

I’m still setting up, installing and configuring everything, but I’ll put up a follow-up post once I’m ready to move things over.  Stay tuned for further details…

UPDATE: I’m posting updates in this forum thread to avoid polluting the blog’s RSS feed too much.

In search of paradigm shifts

The ironic thing about me putting up a “fighting against the tide” post yesterday is that the same thing is happening in the protection paladin community at the moment.  However, there are a lot more of us available for said fighting.

I’ll simply let my post on the official World of Warcraft forums explain it all.  Note that it is very protection paladin specific: if you don’t play World of Warcraft, much less play a protection paladin within the game, you’ll likely be either uninterested or confused — so, I’ve hidden it behind the break.


Luser attitudes

My intense dislike of lusers was rekindled this morning.

Recently, a thread was started on the PCFormat/G3AR forums requesting some extra user profile fields — a request which I considered entirely reasonable.  So, I got into my usual routine of gathering information, hearing user suggestions and so forth, when our luser decided to ask a question that I had, in fact, just answered.

This isn’t too bad in itself, but redundant questions like that don’t sit at all well with me.  I had already answered the question, and therefore viewed this as a time sink — and I have very little patience for time sinks.  Time sinks take without giving back; they waste time that could have been spent on more interesting questions and more worthy querents.  This point of view may not be apparent to some, so let me explain a bit: to understand the world that experts in a particular field live in, think of their expertise as an abundant resource, and their time to respond as a scarce one.  The less of a time commitment you implicitly ask for, the more likely you are to get the answer you wanted.

So, I replied with a terse answer while thinking “stupid question…”, hoping that the experience of getting what one deserved rather than what one needed would teach our luser a lesson.  This set off a tirade of whining, posted in one of the general chat threads and in Afrikaans; presumably, the luser thought that I wouldn’t notice it that way.  I did.  Hence this blog post in response.

Eric S. Raymond devotes a section of his essay “How To Ask Questions The Smart Way” to how not to act like a luser:

Odds are you’ll screw up a few times on hacker community forums — in ways detailed in this article, or similar. And you’ll be told exactly how you screwed up, possibly with colourful asides. In public.

When this happens, the worst thing you can do is whine about the experience, claim to have been verbally assaulted, demand apologies, scream, hold your breath, threaten lawsuits, complain to people’s employers, leave the toilet seat up, etc. Instead, here’s what you do:

Get over it. It’s normal. In fact, it’s healthy and appropriate.

Community standards do not maintain themselves: They’re maintained by people actively applying them, visibly, in public. Don’t whine that all criticism should have been conveyed via private e-mail: That’s not how it works. Nor is it useful to insist you’ve been personally insulted when someone comments that one of your claims was wrong, or that his views differ. Those are loser attitudes.

Remember: When that hacker tells you that you’ve screwed up, and (no matter how gruffly) tells you not to do it again, he’s acting out of concern for (1) you and (2) his community. It would be much easier for him to ignore you and filter you out of his life. If you can’t manage to be grateful, at least have a little dignity, don’t whine, and don’t expect to be treated like a fragile doll just because you’re a newcomer with a theatrically hypersensitive soul and delusions of entitlement.

One can immediately see why the aforementioned (and aforelinked) whining tirade does not help the luser at all.  The attitude demonstrated in this case, namely (1) wasting other people’s time and (2) whining when others express dissatisfaction with it, simply results in the luser losing all respect within the community.  (We have long memories; it can take a while — years, even — for such blunders to be lived down.)  I’ve seen it all before on the various project mailing lists that I’ve sat on over the years.

You don’t want to be a luser, nor do you want to seem like one.  You want a winning attitude in order to be treated as an equal and welcomed into our culture — and we genuinely want to welcome you (so if you find our attitude obnoxious, condescending or arrogant, please revisit your faulty assumptions).  The underlying issue is that it’s extremely inefficient to try to help people who aren’t willing to help themselves.

Unfortunately, I can’t do much about lusers pestering me.  Hopefully, some will read this and realise what they need to change/fix to stop being one.