The prisoner’s dilemma and Newcomb’s problem

The Prisoner’s dilemma describes any situation where two players have to independently choose one of two actions—usually referred to as “cooperate” and “defect”—such that each player’s rewards for the possible outcomes are ordered like this:

(me defect, you cooperate) > (both cooperate) > (both defect) > (me cooperate, you defect)

Think of two prisoners accused of killing someone; they are kept in separate cells, and both are given the chance to betray the other by testifying against her. If just one of them betrays the other, the betrayer walks free while the betrayed is sentenced to 10 years; if neither betrays the other, they’re both sentenced to 2 years; and if both betray the other, they each get 8 years (10 years for murder minus 2 years for cooperating with the investigation).

The interesting thing about this situation is that while it would at first glance seem like they should cooperate, it’s easy to see that since they can’t influence each other (so that the other’s decision behaves like a constant), each prisoner is in fact faced with either of these two situations—they just don’t know which one:

if the other cooperates: (me defect, you cooperate) > (both cooperate)

if the other defects: (both defect) > (me cooperate, you defect)

And crucially, in both cases the prisoner is clearly better off if she defects. So the rational choice for both of them is to defect, netting them each 8 years in jail, compared to 2 years if both had cooperated.

Now consider what would happen if the two players are very similar—for example, if they’re two instances of the same computer program—and they know this fact. Then the cases of them not making the same decision go away, leaving us with just

(both cooperate) > (both defect)

So in this case, the rational choice is to cooperate! It can work even if the other player has a less than 100% chance of mirroring your decision—it depends on what the probability is, and the relative rewards for the four cases.
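To make that break-even point concrete, here’s a small Python sketch. The payoffs are the illustrative prison sentences from the example above, and p is the probability that the other prisoner mirrors your choice:

```python
# Expected sentence (in years, lower is better) as a function of how
# likely the other prisoner is to mirror my choice.
YEARS = {
    ("defect", "cooperate"): 0,    # I betray, walk free
    ("cooperate", "cooperate"): 2,
    ("defect", "defect"): 8,
    ("cooperate", "defect"): 10,   # I'm betrayed
}

def expected_years(my_choice, p_mirror):
    other_mirrors = YEARS[(my_choice, my_choice)]
    opposite = "cooperate" if my_choice == "defect" else "defect"
    other_differs = YEARS[(my_choice, opposite)]
    return p_mirror * other_mirrors + (1 - p_mirror) * other_differs

# With these payoffs, cooperating has the lower expected sentence
# exactly when p > 10/16 = 0.625.
assert expected_years("cooperate", 1.0) < expected_years("defect", 1.0)
assert expected_years("cooperate", 0.5) > expected_years("defect", 0.5)
```

With a perfect mirror, cooperating costs 2 years against defecting’s 8; with an uncorrelated opponent (p = 0.5), defecting is the better choice again.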

Compare this to Newcomb’s problem, where an adversary has placed a small reward in one box (“A”) and has placed a large reward in another box (“B”) if and only if it has predicted that you will not open the first box; you are then given the choice of opening either or both boxes and taking what’s inside. This looks like a prisoner’s dilemma for you:

(open both, B full) > (open just B, B full) > (open both, B empty) > (open just B, B empty)

No matter what the adversary has put in box B, you get a bigger reward if you open both boxes (known as “two-boxing”) than if you open just one. However, the problem states that the adversary is very good at predicting what you will do, which eliminates the (open both, B full) and (open just B, B empty) cases, leaving us with just

(open just B, B full) > (open both, B empty)

So if the adversary can predict you perfectly, you should one-box instead of two-box! And just like in the prisoner’s dilemma, if the adversary predicts your action correctly less than 100% of the time, the right choice depends on that probability and the relative rewards in box A and B.
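The same calculation can be sketched for Newcomb’s problem. The reward sizes below are an assumption for illustration (the classic statement of the problem uses $1,000 and $1,000,000; the text above doesn’t fix them):

```python
# Expected reward for one-boxing vs two-boxing, given a predictor that
# is right with probability p_correct. A and B are illustrative values.
A, B = 1_000, 1_000_000

def expected_reward(two_box, p_correct):
    if two_box:
        # If the predictor was right, it foresaw two-boxing, so B is empty.
        return p_correct * A + (1 - p_correct) * (A + B)
    # If the predictor was right, it foresaw one-boxing, so B is full.
    return p_correct * B

# With these rewards, one-boxing wins whenever the predictor is right
# with probability p > (A + B) / (2 * B), i.e. just over 50%.
assert expected_reward(False, 0.51) > expected_reward(True, 0.51)
```

Note how lopsided the rewards are: with a million dollars in box B and only a thousand in box A, even a barely-better-than-chance predictor makes one-boxing the better bet.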

How to buy audio books from Audible

I won’t try to sell you the concept of audio books. I’ll just note that I like them enough to read (or listen to—the nomenclature being one more thing this post isn’t about) a few dozen of them per year, and that it’s really inconvenient to read a non-audio book while you’re doing the dishes.

I get my audio books from Audible, because they have a large enough catalog. I don’t have the actual statistics, but I’d say about half of the books I want to read are available. There’s a convenient Android app that lets you download any book you’ve ever bought, and allows you to adjust the playback speed. There’s also convenient DRM that makes sure that you don’t accidentally do anything else with the books except listen to them using approved hardware and/or software.

But that’s not what this post is about either. It’s about how to avoid paying three times as much as you have to for your Audible books.

Say you’ve heard good things about Seth Dickinson’s The Traitor Baru Cormorant, and would like to listen to it. Its Audible page lists it at $28.34, about twice what Amazon charges for the paperback and Kindle versions of the same book. If you don’t have a lot of money, or need to buy new books often, this sort of pricing may seem very off-putting.

But wait! Audible also has membership plans that let you pick one or two books each month, priced at $14.95 (“Gold”) and $22.95 (“Platinum”), respectively. The latter especially, at $11.48 per book, is much better. The downside is that you commit to buying the specified number of books each month—you may save up to six months’ worth of credits, but after that they start to expire. And if you stop being a member, any remaining credits expire immediately. (Books that you have bought never expire, though.)

However, if you can afford to pay for (and aren’t afraid to commit to) a year all at once in advance, you get 12 months for the price of 10, which works out to a per-book price of $12.46 (Gold) or $9.56 (Platinum). And crucially, you also get all the credits (12 or 24 of them) at once, and may accumulate 18 months’ worth of unused credits before they start to expire.
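For the skeptical, here’s a quick sketch checking the per-book arithmetic (prices as quoted above; Gold is 1 credit per month, Platinum is 2):

```python
# Per-book price on the monthly and annual Audible plans.
GOLD, PLATINUM = 14.95, 22.95  # monthly prices in dollars

def per_book_monthly(monthly_price, credits_per_month):
    return monthly_price / credits_per_month

def per_book_annual(monthly_price, credits_per_month):
    # Annual plans: 12 months (and all their credits) for the price of 10.
    return monthly_price * 10 / (credits_per_month * 12)

print(per_book_monthly(PLATINUM, 2))           # 11.475 -> the $11.48 above
print(round(per_book_annual(GOLD, 1), 2))      # 12.46
print(round(per_book_annual(PLATINUM, 2), 2))  # 9.56
```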

I’m on the Platinum annual plan, so every time it renews, I pay $229.50 and receive 24 credits, which I then use to buy 24 books at an uneven rate over the next 6-9 months. And then... I don’t get to read any more audio books for the rest of the year? No, because Audible lets you renew the annual plans early. Since I always use up 24 credits in less than a year, the net effect is simply that I pay in advance every time I want a new batch of 24 credits at $9.56 apiece.

There are ways to go even lower. Audible will periodically have sales where they offer a small subset of their catalog at e.g. $4.95 per book or 2 books for 1 credit, but I seldom find books I really want to read there. It’s a good idea to keep your wish list up to date, though, because Audible will tell you when a book on your wish list is on sale.

I wish it weren’t this complicated. Buying books for e.g. the Kindle is much more straightforward: find the book, look at the price tag, and click once to buy. Why isn’t that model good enough for Audible?

Star Wars: The Force Awakens

(Spoiler warning.)

I finally saw the new Star Wars movie last week. Somehow, I’d managed to not read, hear, or see any spoilers except the official trailer, which offhandedly hinted that Finn (the stormtrooper) would be learning to use the Force, since it showed him fighting with a lightsaber. Which would have continued the Star Wars tradition of having few female roles and pretty much no female Jedi. (This film has more women than its predecessors, but still not nearly enough; for example, of the 13 major characters listed on its Wikipedia page, only three are women.)

So it came as a pleasant surprise that Rey (the scavenger) was the one with the Force, and seems to be the most central of the main characters.

Though I must say that compared to how hard mastering the Force was for Anakin and Luke in the previous movies, Rey had it ridiculously easy. That did give her the means to escape captivity on her own rather than having to wait for the (all-male) rescue party, but still... I’m hoping this will turn out to be because she’s actually had some instruction as a child before being left on Jakku by her family or something, but considering other similar lapses, I’m guessing no. There are a bunch of them; for example:

  • The Resistance learn of the First Order’s Starkiller Base, which is like the Death Star except bigger and badder, and immediately embark on a whimsical plan to destroy it, rather than do the sensible thing and try to escape. The last time they attacked a planet-sized battle station with a handful of X-wings, they at least had a plan for how to destroy the thing... Unsurprisingly, though, a sequence of miracles lets them succeed after all.
  • In the run-up to said fight against the Starkiller Base, Finn claims he knows how to disable its shields. Once inside, he confesses to Han Solo that he doesn’t; he just wanted to go there to rescue Rey. But rather than being upset that Finn’s lie will very likely lead to the death of all the rebels, including Leia, Han cheers him on. This is in character for him in this film, though—he generally behaves as though he knows he’s a main character and cannot die. Possibly this is intentional, to make his death come as a surprise, but it’s still very annoying until then—and his death doesn’t come until near the end.
  • Finn and Rey are both able to fight Kylo Ren with a lightsaber and not lose immediately. This is particularly improbable in Finn’s case, since Ren earlier in the film demonstrated the ability to use the Force to immobilize an opponent who can’t use the Force in defense. But Rey is completely untrained in the Force and in lightsaber combat, so it makes no sense that she should be able to hold her own either.

So, didn’t I like the film? Yes, I did. It was good enough to survive all of the above and still be enjoyable. And compared to the previous films, it’s miles ahead in gender equality. It’s just quite visibly still not all the way there yet.


Once upon a time, I had no children.

Then I had one. I’d known all along that children take more time and are more work than you’d expect, and just so: It was more work and took more time than I had expected. Think of it as constantly holding a ball: it’s surprisingly difficult to get anything done with just one hand free.

Then I had two. This time, I wasn’t expecting to be surprised; I was, after all, already a parent, and already knew how much time and work goes into taking care of a child. The lesson I should have learned, however, was that it’s more work and takes more time than you expect. With two balls to hold, just arranging for one hand to be free is a major undertaking.

And then I had three. I can spot a trend when smacked in the head with one, so I fully expected to be surprised by how much time it would take, and how much work it’d be, to have three children. And I was. One ball, you can hold and have one hand free. Two balls, you can hold safely, one in each hand. Three balls, you have to juggle.

But they’re such lovely little balls. Round and brightly colored, flying and spinning.

And best of all, we’re two hands who juggle them together.

Goodreads: Useful after all

I recently realized that all I’d posted about Goodreads was that it wasn’t useful to me due to the lack of automatic recommendations and poor to-read list tools, so I thought it’d be fair to mention that I’ve been a heavy user of the site for the past several years. They have automatic recommendations now, and they’re good enough to be really useful. Not all of the features I wanted are there, but I’ve learned to work around some of the gaps and live with the rest.

It may not be a delight to use, but it saves me a ton of work finding and keeping track of what to read next.

Gnome thumbnailer for Canon .cr2 files

The default Ubuntu package for creating thumbnails for Canon .cr2 files (a.k.a. raw files) is gnome-raw-thumbnailer. It works well, but it’s slow unless you have a fast computer with a fast disk. I found this page, which describes how to create a thumbnailer based on ufraw-batch --embedded-image, which executes really quickly because it just reads the thumbnail image that’s already embedded in the .cr2 file. The recipe there didn’t work for me out of the box, though; when I replaced the mime type list with the one that gnome-raw-thumbnailer uses, it worked like a charm and was super fast!

For the record, the complete procedure is

  1. Install ufraw-batch.
  2. Create a text file called /usr/share/thumbnailers/raw.thumbnailer (I guess the “raw” part can be replaced with any name you like) with the following contents:
    [Thumbnailer Entry]
    Exec=ufraw-batch --silent --size %s --out-type=png --noexif --output=%o --overwrite --embedded-image %i
    # image/x-canon-cr2 is the usual MIME type for Canon raw files; if your
    # system uses a different one, copy the full MimeType list from the
    # gnome-raw-thumbnailer .thumbnailer file instead.
    MimeType=image/x-canon-cr2;

I Am Not A Serial Killer

I just finished the John Cleaver series by Dan Wells: I Am Not A Serial Killer, Mr. Monster, and I Don’t Want To Kill You. They’re supposedly horror, but they lack the thing that normally makes me dislike horror books: point-of-view characters that just react to events, and spend half the book being afraid. All the trappings of horror are present—a contemporary, run-down environment, dead bodies, blood and gore, and a scary (possibly supernatural) killer closing in on you—but it feels like fantasy. The protagonist is actively trying to achieve things, and you soon realize that this isn’t the kind of story where he’ll ultimately fail. But it is the kind of story where “protagonist” is at times a worryingly inaccurate synonym for “hero”. See the book titles.

Highly recommended.

How much entropy in that password?

On his Security Now podcast, Steve Gibson recently disagreed (transcript; search for “xkcd”) with the way Randall Munroe calculated the number of bits of entropy of passwords in this excellent XKCD comic:

XKCD: Password Strength

The disagreement boils down to Randall giving a lower bound on the entropy, and Steve giving an upper bound (and not realizing—or neglecting to mention—the difference). Randall is right, and Steve is wrong. Let me explain:

Lower bound

In both passwords featuring in the comic, Randall assumes a particular password template—in effect, an automatic procedure where you input some number of random bits and get a password. An example, similar to (but simpler than) the first of the comic’s password templates: Pick a six-letter dictionary word (in lowercase), capitalize one of the letters, and stick a number at the end. The number of possible passwords generated by this template is the number of words in the dictionary times six times ten; the number of bits of entropy is the base-2 logarithm of this number.
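In code, the count-then-take-the-logarithm step looks like this (the dictionary size is an assumption for illustration; the 9300-word figure reappears below):

```python
import math

# Entropy lower bound for the example template: a six-letter dictionary
# word, one of its six letters capitalized, and one of ten digits appended.
def template_entropy_bits(n_words, n_positions=6, n_digits=10):
    return math.log2(n_words * n_positions * n_digits)

print(template_entropy_bits(9300))  # about 19 bits for a 9300-word dictionary
```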

This is a lower bound of the entropy in the passwords generated by this template, because we assume that the adversary knows everything—in particular, knows which template the password was made from—except the random numbers we used to pick the particular dictionary word, the letter to capitalize, and the last character. Any particular adversary may know less, but no attacker will know more. The lower bound on the entropy describes how many guesses even a maximally well-informed adversary must make.

Of course, there are caveats: In particular, where the template asks for randomness (such as in “pick a dictionary word”), you are expected to use high-quality randomness such as produced by coin flips, dice, or good computer programs. Just trying to think of a random word won’t cut it; you’ll be more likely to think of some words than others, and thus get less entropy than you thought.

Upper bound

An upper bound on the entropy of a given password can be obtained by finding a simple template compatible with the password (i.e., such that the password could have been generated from that template) and then counting how many bits of randomness the template uses to produce the password. For example, given the password “zoMbie8”, one candidate template is “pick seven characters that are either lower- or uppercase letters or numbers”; this gives an upper bound of (26+26+10)^7 = 3,521,614,606,208 possible passwords, or about 42 bits.

Another candidate template is “pick a six-letter dictionary word (in lowercase), capitalize any one letter, and add a digit at the end”. The dictionary on my computer has 9300 six-letter words, times six possible positions for the capitalized letter, times ten possible digits at the end, gives 558,000 possible passwords, or about 19 bits.
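The two candidate upper bounds are easy to verify:

```python
import math

# The two candidate templates for "zoMbie8" discussed above.
alnum_template = (26 + 26 + 10) ** 7   # seven random [a-zA-Z0-9] characters
word_template = 9300 * 6 * 10          # word x capitalized position x digit

print(alnum_template, math.log2(alnum_template))  # about 42 bits
print(word_template, math.log2(word_template))    # about 19 bits
```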

Now, we can all agree that 19 bits is quite a bit less than 42—meaning that the former is a tighter bound. But how low can we go? Given a specific enough template, the entropy becomes arbitrarily small; consider e.g. the template “take the word ‘zombie8’ and choose at random whether to capitalize the third letter”, which has just two possible passwords, for a single bit of entropy. How can we meaningfully speak of an “upper bound”, if we can make it as small as we want? (Specifically, consider the case where the password was in fact generated by the second template, and the lower bound is 19 bits. If our upper bound can be lower than the lower bound, we have a problem.)

The catch is the provision that the template be “simple”, as I sneakily specified in the first sentence describing the upper bound. The precise meaning of “simple” is “describable with few bits”, because strictly speaking, the upper bound is the sum of the number of bits needed to describe the template and the number of random bits used by the template. (This is what the parenthetical in the first panel of the comic is talking about.) As long as the latter is much larger than the former we don’t make much of an error by omitting it, but as we make the template more and more specific, the error grows.

A useful rule of thumb for evaluating the simplicity of a template is to consider how many other templates just like it there are. The first template, with seven random characters, has very few variants (you could choose to include other characters than letters and digits or change the length, but that’s about it). The second template has more, but still not many (you could e.g. change the number of capitalized characters, and change the position of the digit). The third template obviously has a great many very similar variants—just choose another word or another digit!

So, which one of them is the password strength?

Which number do you go by when choosing a password? A lower bound, which describes how strong your password is guaranteed to be, or an upper bound, which describes how strong it isn’t?

Which number is more useful printed on elevators? The number of people they’re guaranteed to hold, or the number of people guaranteed to be enough to make the cable snap?

Password strength meters

So, the lower bound on the entropy is the useful number when creating a password. But which number do you get when testing your password in a password strength meter (such as this one)? An upper bound! There’s a simple reason for this: when calculating the lower bound, you need to know what template was used; just seeing one particular password produced by the template isn’t enough.

The password strength meter tries a number of templates. If one of them is close enough to what was actually used, its entropy estimate can be quite good; but if not, it can be way, way off (and it’s always an overestimate, never an underestimate).

Password strength meters are useful for identifying weak passwords, but they can’t guarantee that a given password isn’t weak. That can only be done by generating the password from a template that uses enough random bits.

And that’s why Randall is right and Steve is wrong—Randall gets his numbers from the amount of random bits poured into the password template to generate the password (lower bounds), while Steve gets his numbers by feeding Randall’s passwords to a password strength meter (upper bounds). It’s not that Steve isn’t doing his math right, it’s that he isn’t doing the right math.

How hard is it to crack?

The whole point of a password is that it shouldn’t be guessable, so in a very real sense, the strength of a password is defined by how hard it is to crack.

Password cracking programs work essentially the same way as password strength meters, but backwards: using a succession of templates, they try every combination of random bits that can be fed to that template and see if the generated password is the right one. They start by using slightly complex templates (such as ones based on a random dictionary word), and if those fail, fall back to simpler templates (based on trying every possible password of some bounded length). If the template actually used to generate the password is close to one of the templates the cracker tries, the number of tries needed won’t be all that much more than what’s guaranteed by the lower bound on the password’s entropy. But if the actual template isn’t close to any of the ones the cracker tries, it needs many more tries.

Consider for example the password “zoMbie8”; if the cracker tries templates based on a single dictionary word with a few simple tweaks, it should be able to find it in only millions of tries, but if it has to resort to trying raw combinations of letters and digits, the tries will number in the millions of millions.
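The gap between those two attack strategies is easy to quantify, reusing the 9300-word dictionary from the upper-bound example:

```python
# Rough try counts for cracking "zoMbie8" under the two strategies above.
dictionary_tries = 9300 * 6 * 10   # tweaked-dictionary-word template
brute_force_tries = 62 ** 7        # every 7-char alphanumeric string

# The brute-force attack needs millions of times more tries.
print(brute_force_tries // dictionary_tries)
```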

This means that there are two ways to make a secure password: use a template the password crackers don’t know about (or don’t bother to try, because so few people use it for their passwords), or use any old template and feed it with enough random bits. The former strategy relies on outwitting smart people who spend much of their time coming up with better ways to crack passwords; the latter just takes more coin flips. It’s security by obscurity vs. real security.

(Exercise for the reader: What’s the problem with Steve’s Password Haystack scheme?)

Postscript: The actual entropy

I’ve talked a lot about lower and upper bounds on a password’s entropy, so you might be wondering the obvious: What’s the actual entropy, and why aren’t I talking about it?

The entropy of a string of characters (which is just what a password is) is closely related to its Kolmogorov complexity—in a sentence, the entropy of your password is equal to the length of the shortest computer program that generates it. However, this definition is useless to us, since it can be shown that Kolmogorov complexity isn’t computable.

A password template, being essentially just a string, has its entropy computed the same way, which is why I was just waving my hands when discussing it earlier. Approximating it by zero as we do gives lower bounds that are lower than strictly necessary, and upper bounds that are lower than they should be and thus run the risk of not being correct (but nothing much in the analysis depends on them being correct).

Bitcoin: Trust can shorten the transaction delay

I’ve touched on this before, but it just occurred to me again: Normally, the receiver of a Bitcoin payment can’t trust that everything’s OK just because she sees the transaction (which is signed with the sender’s private key); until she sees the transaction becoming part of the main block chain, there is always the possibility that the sender is cheating by double-spending her coins. However, since only the account owner (who has the private key) can create transactions that withdraw coins from an account, it’s possible to build up trust over time: if Alice has made several payments to Bob in the past, and they all worked out, Bob might start to acknowledge Alice’s transactions before the block chain proves that she didn’t cheat, cutting the delay to almost zero. This would make Bitcoin practical for a number of applications where a half-hour transaction verification time is unacceptable, such as micropayments in peer-to-peer networks.

Using a web of trust or similar, Bob could even start trusting Alice without having to have a personal history with her—she just needs to have a history of non-cheating with someone he already trusts.

Interestingly, if Alice ever does try to double-spend, the conflicting transactions she produces will (when seen together) provide absolute proof of this, so a trust protocol would probably be complemented by a protocol for gathering these proofs of double-spending. Since these proofs don’t rely on trust, they will be effective enough that no account can be used to attempt double-spending more than once.
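A minimal sketch of what Bob’s side of such a trust protocol might look like (this is entirely hypothetical; no real Bitcoin client works this way, and the threshold is an arbitrary assumption):

```python
# Hypothetical trust ledger: accept an unconfirmed transaction from a
# sender once enough of her past payments have settled without cheating.
class TrustLedger:
    def __init__(self, required_history=5):
        self.history = {}  # sender -> number of cleanly settled payments
        self.required_history = required_history

    def record_settled_payment(self, sender):
        # Called once the block chain confirms a payment wasn't a double-spend.
        self.history[sender] = self.history.get(sender, 0) + 1

    def accept_unconfirmed(self, sender):
        # Trusted senders skip the wait for block-chain confirmation.
        return self.history.get(sender, 0) >= self.required_history
```

A web-of-trust variant would additionally consult the histories kept by peers Bob already trusts, and any proof of double-spending would presumably zero out the cheater’s history everywhere.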

Drink without a name

I invented a new drink today, and the result was good enough that I thought I’d share the recipe:

For six drinks:

  • 4 dl coconut milk
  • 1 lime (juice and zest)
  • 3 tbsp sugar
  • 6 dl grapefruit soda
  • ice

  1. Pour the coconut milk into a small bowl. Add lime zest, lime juice, and sugar. (I used brown sugar, but I doubt it matters much.)
  2. Stir vigorously.
  3. Split the mix between six glasses.
  4. Add the ice.
  5. Add the grapefruit soda.

I haven’t tried it, but I guess light rum or vodka might be the sort of thing you could add to this drink if you wanted an alcoholic variant.

Since this drink only has like three ingredients, I figure there’s virtually no chance I’m the first to invent it. If anyone knows one or more names for it, I’d be interested to hear them.

Making your program testable

Ian Lance Taylor has a nice post about DejaGNU, the test harness used for GCC and GDB. One of his main points—the one I found the most interesting—is that it was a mistake to try and test the actual user interaction with GDB, with a test script that essentially pretends to be a human typing commands and reading GDB’s responses. What should have been done instead is to create a programmatic interface. This would have been much easier to write test scripts for; in particular, parsing the replies would have been much easier and much less fragile. (Naturally, it’d still be prudent to have some tests for the user interaction, but for testing the bulk of GDB’s functionality a programmatic interface would be superior.) This perfectly matches my experience writing tests for Simics, where we have both programmatic and interactive interfaces; testing the former is such a pleasure compared to testing the latter.

I guess the lesson to learn here is that testing gets so much easier if you have a programmatic interface to write your tests against that it’s even worth it to create such an interface if it didn’t already exist, just to be able to use it in tests. That might involve serious redesigning, but testing is that important!
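The principle can be illustrated with a toy example (the function names and output format are made up, not GDB’s actual interface):

```python
# A programmatic interface returns structured data a test can check
# directly; an interactive interface forces the test to parse text
# that was formatted for humans.
def list_breakpoints():            # hypothetical programmatic interface
    return [{"id": 1, "file": "main.c", "line": 42}]

def list_breakpoints_cli():        # hypothetical interactive interface
    return ("Num  Type        Address    What\n"
            "1    breakpoint  0x0040052d main.c:42")

# Programmatic test: trivial and robust against formatting changes.
assert list_breakpoints()[0]["line"] == 42

# Interactive test: fragile string matching that breaks whenever the
# human-oriented output is reworded or realigned.
assert "main.c:42" in list_breakpoints_cli()
```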

Bitcoin and peer-to-peer networks

Recently, it occurred to me that Bitcoin could be used for nearly overhead-free, arbitrarily small payments between clients in a peer-to-peer network. The thing that made BitTorrent succeed where other peer-to-peer filesharing protocols failed was the fact that receivers could pay senders for data chunks they wanted by offering to send them other data chunks in return (“I’ve got A and C, and want B and D; you’ve got B and D, and want A and C; so let’s swap”). This pretty much solved the problem of freeloading that other protocols suffered from.

The way BitTorrent does this is clever: when n receivers want to download a file, the node that seeds the file gives each of them one nth of the file, and tells them to swap chunks among each other until they all have all n chunks. If you want all the chunks of the file, you have to give chunks to the other receivers.
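The initial hand-out can be sketched in a few lines (a simplification of the real protocol, which negotiates chunks dynamically):

```python
# The seeder gives each of n receivers a different 1/n of the chunks,
# so each receiver must trade with the others to complete the file.
def initial_distribution(chunks, n_receivers):
    return {r: chunks[r::n_receivers] for r in range(n_receivers)}

chunks = list("ABCD")
dist = initial_distribution(chunks, 2)
# Receiver 0 starts with ['A', 'C'] and receiver 1 with ['B', 'D'];
# each holds exactly what the other still needs.
```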

However, the node that seeds the file does so without getting anything in return. And in most peer-to-peer networks (not just filesharing networks), that’s the way it is for all nodes: it works only so long as enough nodes aren’t selfish. But Bitcoin could change that, by enabling nodes to simply charge for services: sending data chunks, storing data chunks (such as for peer-to-peer backup), internet access (your neighbor’s wireless access point), and so on.

Many of these uses would of course be prohibited: your ISP typically doesn’t allow you to resell your bandwidth, and pirating movies is even more illegal if you make money off of it. But that hasn’t stopped peer-to-peer network innovation in the past, and a lot of use cases would be perfectly legal.

One technical challenge is that Bitcoin payments take many minutes to be sufficiently verified, so protocols can’t rely on immediate payments. But that’s just the same situation we have in the real world, with invoices and credit cards (and I hear that they still use cheques in some countries); it involves keeping track of who’s been paying their bills on time in the past, requiring advance payments from people you don’t know, and so on.


I just finished reading Frankenstein by Mary Shelley—the classic tale of a man who gives life to a monster. I guess it’s the prototypical mad scientist story. Those two sentences pretty much sum up my total knowledge of the book prior to reading it, so I had a few surprises lined up: (Warning: mild spoilers)

  • Victor Frankenstein, the creator of the monster, isn’t your typical mad scientist. He’s very social, rather young, mentally balanced, and generally happy.
  • The monster is initially a monster only by way of his hideous appearance; he is described as being very intelligent, kind, and patient, and only becomes a true monster because he can’t stand that everyone he meets is frightened and disgusted by the way he looks. Victor Frankenstein himself starts off by running away from the newly created monster, and doesn’t even consider the question of his mental qualities until years later, when it’s far too late.
  • The book intentionally glosses over the entire process of the monster’s creation. It mentions body parts from dead people—I think—but even electricity isn’t mentioned. This is because Frankenstein is telling his story to a man who rescued him from a sheet of ice, and he doesn’t want to risk anyone following in his scientific footsteps.

I get the sense that the book’s argument is that Frankenstein was foolish to try to play God, and was made to suffer for it—at least that’s what he himself seems to believe. But it seems to me that the only real problem was that he completely abandoned a newborn being instead of taking care of him; as I said, the monster was initially very benign. In other words, the problem isn’t that Frankenstein tries to play God and fails, it’s that he fails to even try being a parent.

As a science fiction-minded reader, it also bothered me that the scientist Frankenstein never even considered a technological solution to his problems. He developed the process of creating a mentally perfect but physically flawed sentient being in just a few years—who’s to say what a few years further research could’ve accomplished?

Nevertheless, I enjoyed reading the book, but I did feel much more sorry for the monster than for Victor Frankenstein.


I just tried out Goodreads, a site that lets you rate the books you’ve read and list the books you plan to read. It’s got functions for users to recommend books to each other, but, crucially, not the automatic recommendations I was expecting: the standard “people who like the same books you do also liked...” thing as seen on Amazon and other sites. I’m not sure how useful the site is going to be without that; it’s the only reason I can think of that would make it worth the trouble of actually telling a site about the books I’ve already read and don’t plan to read again.

My second complaint is that Goodreads doesn’t seem all that good at handling the list of books I plan to read. For starters, there’s no way to tell it that I’m planning to re-read books that I’ve already read without removing the ratings I’ve assigned to them (I’ve read the first ten Wheel of Time books, and I plan to re-read them immediately before reading the remaining four, once the series is finished). Second, there is a system for assigning books to one or more “shelves” (equivalent to what most other systems call “tags”), but the search capabilities are very weak: you can’t list all books belonging to shelf X but not shelf Y, all the books on shelf Z by author A, etc.

My reading list on Goodreads is currently about a hundred books long, and most of it is series—or even several series that have to be read in order—and I need those search functions in order to make sense of the list. Otherwise, it’s simpler to just keep using a text document where I can group and order books under different headings for different authors and series.

I could probably have lived with either of these shortcomings (using the sites just as a recommendation tool, or just as a to-read list), but as it is, Goodreads doesn’t seem to actually solve any problem for me. Are there better book sites out there?

I can’t remember my Gmail password!

I started using LastPass a few weeks ago; it’s a browser plugin that remembers your passwords for you, but unlike the password managers built into the various web browsers it syncs the password database with the cloud, so that the same set of passwords is available on all the computers I use. In order for this to be safe, the passwords are only ever decrypted locally on your machine; when stored in the cloud, they’re encrypted with a master password. The nice thing about this arrangement is that you only have to remember that one master password; all the other passwords can be long, hard-to-remember but secure random sequences of letters and digits such as “hT69xFuEADwdE9be”, because you don’t have to remember or even look at them. This is a big step up from using the same easy-to-remember password almost everywhere like I used to, for two reasons:

  1. Easy to remember means easy to guess. My password wasn’t quite as bad as a dictionary word, but not far from it. Any competent password cracking program could have guessed it.
  2. If any one of the dozens of sites I used the same password on had been compromised to the point that the attacker got hold of my password there, they could’ve used it to log in as me almost everywhere.
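Generating a password like the one above is trivial; here’s a sketch using Python’s `secrets` module (the sixteen-character alphanumeric format is just my example, not anything LastPass-specific):

```python
import secrets
import string

# Draw from a cryptographically secure random source; the `random`
# module is not suitable for passwords.
ALPHABET = string.ascii_letters + string.digits

def make_password(length=16):
    """Return a random alphanumeric password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

With 62 possible characters per position, a sixteen-character password of this form has about 62^16 ≈ 5 × 10^28 possibilities, far beyond what any guessing attack can cover.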

And now, I’ve replaced my last web password with a secure one that I don’t remember: my Gmail login. I waited quite a while before doing this, partly because it’s the most important one, partly because this is the one password I can’t just reset by asking them to e-mail me a new one. I even tested the function where Google sends a password reset code to your phone via SMS (and was almost convinced it didn’t work, because the message took more than an hour to arrive).

What are we doing wrong?

Looking back, it’s not difficult to spot something that looks like moral progress: abolition of slavery, increasing gender equality, democracy, and so on. (Though as the linked article points out, it’s not obvious that we’re not just changing at random, and what looks like progress is just motion toward our current position.) One thing we might ask ourselves is, “What will future societies think we were doing wrong?” Assuming the changes to commonly accepted morality that take place between now and then constitute “real” progress, these are things that really are wrong, and we’re just doing them because we don’t yet know better.

Here are three guesses about things we’re currently doing wrong:

My first guess is that killing and/or imprisoning animals in order to obtain meat, eggs, fur, etc. will be considered unethical. After all, in the future these things can probably be grown in a vat, and it’ll be hard not to conclude that making animals suffer is simple cruelty when there’s a substitute that doesn’t involve any suffering. But we don’t have to wait—that argument can be made today, albeit with substitutes that aren’t indistinguishable from the original. Future generations will look back and ask why we weren’t all vegetarians, just as we look back at our ancestors and wonder why they didn’t stop keeping slaves.

My second guess, maybe for the more distant future, is that leaving nature alone and not interfering will be as unethical as finding a dying person in the street and not doing anything about it. The reason we currently think the right thing to do is to limit our influence on the environment as much as possible is that, historically, we’ve been shortsighted and stupid. For example, we’ve allowed people to pollute the atmosphere for free, leading to a tragedy of the commons. And so we’ve concluded that nature is best left untouched by humans, or as close to it as possible.

I’m proposing that this will cease to be a good idea once we’re capable enough. Judicious interventions could make ecosystems healthier than they’d be if unattended. “Natural” animal suffering could be reduced (or, with sufficiently large changes, eliminated). And there is a huge universe out there which is currently not filled with life, a vast but seemingly finite supply of negentropy being wasted when it could be put to good use (fueling civilizations of happy conscious beings, for example). The only reason not touching anything seems like good ethics at the moment is that like a first-semester medical student, we’re not yet skilled enough to do more good than harm except by accident. But someday we will be, and then it’ll become apparent that “don’t touch anything” was only a means to an end, not an end in itself.

My third guess is that in the future, ethics is going to become a real science. After all, if we as a civilization grow ever more capable of shaping the world around us, and if change can be preferable to leaving nature alone, then we’ll have a huge burden to actually do the right thing. Simply going by what feels right at the time has been decidedly suboptimal in the past, so I’m not hopeful it’s going to magically get better in the future. Today, doing what feels right is a virtue; in the future, it won’t be. (Well, hopefully our gut feeling is a sufficiently good approximation to the hypothetical “true” ethics that they won’t clash all the time, but sometimes they will, and then the gut feeling will have to yield. Compare the situation in present-day medicine, where we have odd and counterintuitive treatments involving cutting people open in order to make them healthier. You’d naively think that avoiding being sliced open would be good for your health all the time, but that turns out not to be the case.)


I just found out about the decentralized digital cash system Bitcoin, by way of an LWN article. It looks very interesting, but as I understand it, it can’t scale very far in its current form due to the requirement that a log of all transactions be sent to all clients. One interesting point is that publishing the complete list of transactions doesn’t imply complete transparency, since accounts aren’t linked to users. An account is just a public/private key pair, and any user may create as many accounts as she likes. The existence of an account isn’t announced to the world, except implicitly when the account is on the sending or receiving end of a transaction. (Since all transactions are published, however, it’s possible to data mine the flow of money, so the actual anonymity is likely to be limited.)
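That last point is easy to demonstrate: with the whole log public, anyone can aggregate the net flow of money per account. A minimal Python sketch, with short made-up account names standing in for public keys:

```python
from collections import defaultdict

# A public transaction log as (sender, receiver, amount) triples. In
# Bitcoin the identifiers would be public keys; the principle is the same.
log = [
    ("A", "B", 5.0),
    ("B", "C", 2.0),
    ("A", "C", 1.0),
]

def net_flow(log):
    """Net amount received (positive) or paid out (negative) per account."""
    balance = defaultdict(float)
    for sender, receiver, amount in log:
        balance[sender] -= amount
        balance[receiver] += amount
    return dict(balance)
```

Once an analysis like this links even one account to a real person, the payment graph starts leaking information about everyone that person transacted with.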