Thursday, September 20, 2007

computer security and end users, part 4

This is part four in a conversation with Meteorplum. I will post additional exchanges if and when they occur.


From False Data:

I think I need to make an implicit assumption explicit. I'm assuming that, if you don't take steps to educate users, they'll tend to do whatever is most convenient. Since I've also assumed that education is expensive, and that we probably want to minimize the amount required to operate a computer, I'm trying to make security convenient for the end user. That's the real reason behind much of the design I sketched out for the password wallet.

For example, the wallet's designed to allow the security department of the service to which the user is connecting to choose the "password" the user uses when connecting to that service. That way, we don't have to worry about users choosing secure passwords, because they're not choosing passwords at all (except the master password, as I said--that's going to be one weak point in the system. But I think we have a better shot at helping someone pick and remember one strong password than at getting them to pick and remember many.) In fact, if I were the security department, I wouldn't use a password at all. Instead, I'd probably do something like issue a private key to the user and keep a public key for myself. Then the authentication process would involve having the user digitally sign a randomly-generated message. (I'd also include a second set of keys so the user could verify that it's actually my service.) Now, getting someone who knows nothing about public key cryptography to use a system like that means that, ideally, the person shouldn't see the key exchange at all. Instead, he or she unlocks the wallet using the master password and it does the rest. (You could also use a one-time pad, along the lines of the bank system you described, if you wanted even stronger security.)
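
To make that challenge-response idea concrete, here's a minimal sketch in Python. It assumes the third-party "cryptography" library; Ed25519 signatures and all the names in it are just illustrative choices, not part of the wallet design itself:

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# Enrollment: the service issues a private key to the user's wallet
# and keeps only the matching public key for itself.
user_private = ed25519.Ed25519PrivateKey.generate()     # lives in the wallet
user_public = user_private.public_key()                 # lives at the service

# The "second set of keys," so the wallet can verify the service.
service_private = ed25519.Ed25519PrivateKey.generate()  # lives at the service
service_public = service_private.public_key()           # lives in the wallet

# Login: the service sends a randomly-generated message, the wallet
# signs it, and the service checks the signature. The user never
# sees any of this--the master password just unlocks the wallet.
challenge = os.urandom(32)
signature = user_private.sign(challenge)
try:
    user_public.verify(signature, challenge)
    print("service: user authenticated")
except InvalidSignature:
    print("service: authentication failed")

# Mutual authentication: the wallet challenges the service the same
# way, so an impostor service can't fool the user.
wallet_challenge = os.urandom(32)
service_signature = service_private.sign(wallet_challenge)
try:
    service_public.verify(service_signature, wallet_challenge)
    print("wallet: service verified")
except InvalidSignature:
    print("wallet: impostor service")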

The wallet's a hardware implementation because it needs to be portable, and laptops are just too bulky. In fact, it should probably go on the key ring to remind users to treat it like any other set of keys. Additional factors, like biometrics, are optional.

You could implement a clunky version of the wallet using Password Safe on a USB key, but few people would use it because it's confusing and adds extra login steps, with all that copying and pasting.

SecurID is worse, because you have to have a separate fob for each service you want to log into (one for the company VPN, another for a different company, and so on), and those things are just too bulky to put more than one on a key ring. And the timing issues--having to wait for the number to change, and dealing with clock drift between the fob and the server--just add to the inconvenience. (You should hear Coppertop's commentary when she has to use one.)
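
To show where the timing and drift problems come from, here's a rough sketch of a time-based token scheme. SecurID's actual algorithm is proprietary, so this uses the same general idea in open form--an HMAC of a shared secret and the current time window--and every name in it is hypothetical:

import hmac, hashlib, struct, time

def time_code(secret: bytes, t: float, step: int = 60) -> str:
    """Derive a 6-digit code from a shared secret and the time window."""
    counter = int(t // step)                # which 60-second window we're in
    msg = struct.pack(">Q", counter)        # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation of the HMAC
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def server_accepts(secret: bytes, code: str, drift_steps: int = 1) -> bool:
    """Accept codes from adjacent windows to tolerate fob clock drift."""
    now = time.time()
    return any(
        hmac.compare_digest(code, time_code(secret, now + i * 60))
        for i in range(-drift_steps, drift_steps + 1)
    )

secret = b"shared-secret-provisioned-in-the-fob"
print(time_code(secret, time.time()))       # what the fob displays right now

The fob and server each compute the code independently, so if the fob's clock has drifted, the server has to accept a window or two on either side--and the user has to wait for the number to roll over if they catch it at the wrong moment.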

I'm pretty much convinced that convenience is key.

You also mentioned that "the other externality, which is actually not external at all, would be lost productivity on the users' end." One thing to keep in mind is that the losses may not fall uniformly on the users. For example, suppose my home system gets infected and starts pumping out pump-and-dump stock spam. The loss to me, personally, is probably fairly small--a sharp reduction in the apparent speed of my home computer, but that's not such a big deal if all I'm using it for is web, e-mail, and word processing. Other people, though, might suffer significant financial losses because of the stock scheme (even if they don't participate in it--for example, the company that's the subject of the scheme is going to have its stock price bounce around unpredictably). Or an ISP might suffer higher capital costs because it has to buy more filtering equipment to deal with the extra spam. It will try to pass that cost along to its subscribers--who might not include me--in the form of higher fees. Now, that ISP's users might also get infected and send spam my ISP's way, but there's no guarantee the losses will be symmetric.

We both seem to agree, though, that in general users have poor information about the security of the systems they're using, and lack information linking the consequences of their choices to the software they chose. The "security tax" idea I considered but am not crazy about is one way to communicate this additional cost to the user. Another would be an independent rating lab, like Underwriters Laboratories or Consumer Reports, but for software security. A third mechanism might be to address the problem the way we do in construction, by establishing "building codes" for software.
