From False Data:
1. Did you deliberately misspell your handle on your comment? The link is correct, but you're listed as "Fales Data".
No, but I'm sorry I didn't think of it--it would've been clever. It was
just a simple typo, though.
2. How do *you* think the burden of securing PCs could be shifted?
Too much regulation too early can kill an industry, so I'd be very
careful about creating a licensing scheme, at least for users, unless
it's absolutely necessary. That means we'll be most successful if we
consider the root causes of the problem before designing a solution,
just as we'd have to look at the threat model before proposing a
security regime.
One of the main reasons PCs are insecure is inadequate software
engineering, partly because we don't know how to do it well (compared to
our ability to engineer large structures like bridges or cruise ships),
partly because too many engineers have excessive egos (example:
resistance to code reviews and religious objections to languages that
limit buffer overruns), and partly because it's cheaper to skimp on QA.
Whether it really is cheaper needs more investigation. Real,
honest-to-goodness quantitative research.
For example, it might actually be cheaper for society as a whole to have
bad QA--maybe the cost of dealing with security issues is less than what
it would cost us to prevent them in the first place or to educate users
(along with the lost opportunity costs that come with a licensing
scheme) to avoid them. If so, then we shouldn't even try to change the
status quo until our engineering know-how improves enough to shift the
cost structures.
Or it might be the case that it's overall cheaper to fix the security
flaws than to live with them, but that end users are not the ones
bearing most of the cost of security issues. That would be a classic
externality: the end user makes the decision of which software to buy,
enjoying the lower up-front cost that comes with insecure code, but
someone else has to bear the higher cost of the break-ins.
(Technically, the higher cost will eventually trickle down to the end
users one way or another--maybe an ISP has to charge higher rates to pay
for the extra routers it needs to deal with virus scanning--but that
trickle-down may be so far removed from the original purchase decision
that it can't influence the end user's choice of software.) If this is
what's going on, then the market-based solutions
Schneier's libertarian readership often advocates aren't likely to work
very well, if they even work at all.
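To make the externality concrete, here's a toy back-of-the-envelope comparison written as a little C++ program. Every number in it is invented purely for illustration; the point is the shape of the incentive, not the magnitudes.

    #include <iostream>

    int main() {
        // Hypothetical per-copy figures, chosen only to illustrate the
        // incentive, not to describe any real product.
        const double secure_price   = 120.0;  // better QA baked into the price
        const double insecure_price = 100.0;  // cheaper up front
        const double breach_cost    = 50.0;   // expected break-in cost per copy
        const double buyer_share    = 0.1;    // fraction of that cost the buyer ever sees

        // What the buyer compares at purchase time:
        const double buyer_insecure = insecure_price + buyer_share * breach_cost;   // 105
        // What society pays in total:
        const double social_insecure = insecure_price + breach_cost;                // 150

        std::cout << std::boolalpha
                  << "buyer prefers insecure code:   " << (buyer_insecure < secure_price) << "\n"    // true
                  << "society prefers insecure code: " << (social_insecure < secure_price) << "\n";  // false
        return 0;
    }

With those (again, invented) numbers the buyer rationally picks the insecure product even though society as a whole would be better off paying for the secure one--which is exactly the externality.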
Education is expensive. You have the up-front cost of teaching stuff to
people, and a lost opportunity cost because they're learning whatever it
is you're teaching them instead of things that might be more relevant to
their lives--little Janie's learning how to choose a good password or
how to patch her system instead of studying biology or civics.
So you don't reach for education unless it's less expensive than fixing
the problem you're educating people to work around. And I'm not at all
convinced that's the case.
My gut says at most we have an externality going on, but more likely
people are just poorly informed about the actual cost of software.
Maybe we should start charging a "poor security" tax on the software's
purchase price so buyers' decisions better reflect the quality of
the code. (The fly in the ointment here is trying to fix the problem
without killing the open source movement. Without some very careful
legal footwork, a "poor security" tax might prevent anyone from
distributing free software.)
Assuming it's not overall economically cheaper to have shoddy code, I
put a lot of the blame in the software engineering camp. For example, a
lot of problems happen through buffer overflow attacks. In this day and
age, I can't think of any good excuse for having a buffer overflow bug.
We have programming languages that make buffer overflows extremely
unlikely. I don't care if your program will be 10-20% slower because you
wrote it in Java or used std::string and std::cin instead of
scanf--Moore's law will deal with the slowness eventually, or you can
find a place to cut algorithmic complexity. The bottom line is that you
shouldn't be using tools that will create a security flaw if you screw
up. Of course, if you're writing control code for an aircraft system or
nuclear plant you might not have the option of using one of the more
complex languages, but in that case you adjust your design, coding, and
QA style appropriately because bug reduction and security are even more
important.
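To show what I mean about the tools, here's a minimal C++ sketch of the scanf contrast (the function names are mine, just for illustration): the first version can write past the end of its buffer, the second has no fixed-size buffer to overrun in the first place.

    #include <cstdio>
    #include <iostream>
    #include <string>

    // Unsafe: "%s" writes however many characters arrive, so any input
    // longer than 15 characters overruns buf and tramples whatever sits
    // next to it in memory.
    void read_name_unsafe() {
        char buf[16];
        std::scanf("%s", buf);
        std::printf("hello, %s\n", buf);
    }

    // Safe: std::string grows to fit the input, so there's nothing to
    // overrun.
    void read_name_safe() {
        std::string name;
        std::cin >> name;
        std::cout << "hello, " << name << "\n";
    }

    int main() {
        read_name_safe();  // the unsafe version is here only as a contrast
        return 0;
    }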
Next problem: patching systems. If you solve the engineering problem, a
lot of this issue goes away (as it should because patches inevitably
create a window of opportunity for attackers), but let's assume there's
a transition period where manufacturers are still pushing out patches
every other week, or every week, or (shudder) every day.[1] I think
Microsoft has a good idea here: users shouldn't have to know anything
about how to patch their system beyond allowing it to install the
patch. If it's an operating system you're patching, the version of the
OS on the shrinkwrap DVD should be incapable of accepting a network
connection, or of making one to anywhere except the update distribution
site, until it has checked at least once for updates. And for goodness'
sake, it shouldn't have to reboot every time it installs a patch,
because that annoys users and keeps them from patching. It's true that
automatic patching creates a risk of its own--someone could commandeer
the auto-patch system to push out a trojan--but the proper way to deal
with that risk is to fix the software engineering process so there's no
need to patch in the first place, not to fall back on manual patching,
because manual patching requires user training, which, as I said, is
expensive.
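Here's a rough sketch of that "no general network access until the first update check" rule; the class and the single update host are my own invention, just to pin down what the policy has to decide.

    #include <string>

    // Hypothetical first-boot network policy: until at least one update
    // check has completed, the only connection allowed is an outbound one
    // to the update distribution site.
    class FirstBootNetworkPolicy {
    public:
        explicit FirstBootNetworkPolicy(std::string update_host)
            : update_host_(std::move(update_host)) {}

        // The network stack asks this before letting any connection through.
        bool connection_allowed(const std::string& remote_host, bool outbound) const {
            if (update_checked_) return true;                // normal operation
            return outbound && remote_host == update_host_;  // update site only
        }

        // The updater calls this once it has checked for, and applied, any
        // pending patches.
        void mark_update_checked() { update_checked_ = true; }

    private:
        std::string update_host_;
        bool update_checked_ = false;
    };

In a real OS this would live in the firewall or network stack rather than in a tidy little class, but the rule itself is just that one predicate.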
Next: configuring systems. I like the OLPC folks' approach here, that
it's a fundamental design principle that an operating system should, by
default, have a secure configuration. In other words, a user should not
have to lock down the operating system. If anything, the user might
have to unlock parts of the operating system. I'm not yet sure about
this, but I'd also consider limiting the ability of software
manufacturers to disclaim consequential damages from any configuration
changes their software makes that reduce security.
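As a tiny sketch of what "secure by default, unlock if needed" means in practice (the settings and names here are invented, just to show the shape of the idea):

    // Hypothetical OS settings object: every risky capability starts off,
    // and the only operations offered are explicit "unlock" steps, so a
    // user who does nothing gets the locked-down configuration for free.
    struct SystemSettings {
        bool remote_login_enabled = false;  // off unless the user turns it on
        bool file_sharing_enabled = false;
        bool firewall_enabled     = true;   // on unless the user turns it off
        bool automatic_updates    = true;

        void unlock_remote_login() { remote_login_enabled = true; }
        void unlock_file_sharing() { file_sharing_enabled = true; }
        // Deliberately no bulk "disable all security" switch.
    };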
Next: bad choice of passwords. You could solve this problem by not
letting users choose their passwords at all. Of course, with today's
infrastructure, if you give them a password they're just going to write
it on a sticky note next to their monitor, but we can solve that issue.
Give each person one hardware password wallet. They unlock it using a
master password (and you drill into them that they must never, ever
give it away, no exceptions--that's the one concession I'll make to
education). Then individual services can issue and change passwords,
which are probably actually public/private key pairs, to their hearts'
content and the user never sees them because they go in the wallet. The
wallet should have the ability to back up its contents, and it probably
should let the user share passwords for individual services even if the
individual service wants to prohibit password sharing, because otherwise
the user will have an incentive to take the greater risk of sharing the
master password.
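For concreteness, here's a sketch of what the wallet's interface might look like. Everything about it--the names, the per-service key pairs, the unencrypted backup--is hypothetical and only meant to show where the master password does and doesn't appear.

    #include <map>
    #include <optional>
    #include <stdexcept>
    #include <string>

    // Hypothetical hardware wallet interface, not any real device's API.
    struct KeyPair {
        std::string public_key;
        std::string private_key;  // the user never sees this
    };

    class PasswordWallet {
    public:
        explicit PasswordWallet(std::string master_password)
            : master_password_(std::move(master_password)) {}

        // The only credential the user ever types. Real hardware would
        // rate-limit attempts and store only a hash.
        bool unlock(const std::string& attempt) {
            unlocked_ = (attempt == master_password_);
            return unlocked_;
        }

        // A service issues or rotates its own credential without the user
        // ever seeing it.
        void store(const std::string& service, KeyPair creds) {
            require_unlocked();
            credentials_[service] = std::move(creds);
        }

        std::optional<KeyPair> credential_for(const std::string& service) const {
            require_unlocked();
            auto it = credentials_.find(service);
            if (it == credentials_.end()) return std::nullopt;
            return it->second;
        }

        // Backup and per-service sharing, so a lost device or a shared login
        // never requires giving away the master password.
        std::map<std::string, KeyPair> export_backup() const {
            require_unlocked();
            return credentials_;  // a real device would encrypt this blob
        }
        std::optional<KeyPair> share_service(const std::string& service) const {
            return credential_for(service);
        }

    private:
        void require_unlocked() const {
            if (!unlocked_) throw std::runtime_error("wallet locked");
        }

        std::string master_password_;
        bool unlocked_ = false;
        std::map<std::string, KeyPair> credentials_;
    };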
Next: the Lost Laptop problem. Not an education problem for end users,
but one for IT departments and manufacturers. Laptops should ship with
encrypted filesystems.
Last but not least, social engineering attacks. It might be necessary
to use an educational campaign to address these attacks. We can also
cut off some of these problems through infrastructure changes. An
attack like "I'm the CEO and I've lost my password wallet, could you
please unlock X for me?" won't work if it's trivial to get a new wallet
and restore it. "I've lost my master password" won't work if everyone
has to use their master password several times a day, because then
everyone knows that everyone else inevitably remembers theirs.
[1] Remember that the number of bugs, and therefore downtime, is
roughly proportional to the number of changes. Bug fixes can introduce
bugs. The more the code changes, the greater the chance of introducing a
bug.
The more simultaneous patching there is, the greater the chance of
introducing an interaction bug.
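A footnote to the footnote: the reason simultaneous patching is worse than the sum of its parts is that interaction bugs scale with the number of *pairs* of patches. Here's a toy calculation with made-up probabilities, just to show the combinatorics:

    #include <initializer_list>
    #include <iostream>

    int main() {
        // Invented odds: each patch has a 2% chance of introducing a bug on
        // its own, and each *pair* of patches applied together has a 0.5%
        // chance of an interaction bug.
        const double p_single = 0.02;
        const double p_pair   = 0.005;

        for (int n : {1, 5, 10, 20}) {
            const double pairs = n * (n - 1) / 2.0;
            const double expected_bugs = n * p_single + pairs * p_pair;
            std::cout << n << " simultaneous patches -> expected new bugs: "
                      << expected_bugs << "\n";
        }
        return 0;
    }

The first term grows linearly with the number of patches, the second quadratically, which is the sense in which simultaneous patching compounds the risk.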