Thursday, May 03, 2007

unsafe at any (clock) speed?

Bruce Schneier has posted an essay he wrote for Wired, in which he faults the IT industry for insecure products:
If the IT products we purchased were secure out of the box, we wouldn't have to spend billions every year making them secure.

Aftermarket security is actually a very inefficient way to spend our security dollars; it may compensate for insecure IT products, but doesn't help improve their security. Additionally, as long as IT security is a separate industry, there will be companies making money based on insecurity -- companies who will lose money if the internet becomes more secure.

Fold security into the underlying products, and the companies marketing those products will have an incentive to invest in security upfront, to avoid having to spend more cash obviating the problems later.
One of the comment writers points out the parallel to the automotive industry: decades ago, cars were unsafe because manufacturers had no incentive to make them safe. Then came some government regulation and popular attention (including Ralph Nader's 1965 book, Unsafe at Any Speed), and one sea change later the manufacturers are now competing with each other on safety features.

The same seems likely to happen in the IT industry. Either the industry will fix the problem itself, or there's going to be a big enough security breach that there'll be government regulation and a lot of popular attention.

Could the industry fix the problem? Completely fixing it seems out of reach: a 100% provably secure computer would be somewhere up there in price with a 100% provably safe automobile. Improving the security seems doable, though.

It'd probably take a paradigm shift in the way we write software and build networks. Back in the old days, people wrote software as unstructured code (sometimes derisively called "spaghetti code" because if you tried to trace how the code worked, you wound up with paths that looked like a plate full of spaghetti). Unstructured code is too complex for a programmer to grasp--it has too many interacting parts--so programmers would hit a "wall," a point where the program got too large and they couldn't keep it working. It might be at 30,000 lines or, in my case, a much smaller 5,000 lines, but everyone eventually hit it.
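
To make that concrete, here's a tiny toy example of my own (not from Schneier's essay) of what goto-driven, unstructured control flow looks like in C. The task is trivial, but even here you have to chase labels to follow it:

    #include <stdio.h>

    /* A toy example of unstructured, goto-driven control flow:
       it sums the non-negative entries of an array, but the logic
       is spread across labels and jumps instead of a loop. */
    int main(void) {
        int data[] = {4, -2, 7, 0, 9, -5};
        int i = 0, sum = 0, count = 0;
    start:
        if (i >= 6) goto report;       /* done with the array? */
        if (data[i] < 0) goto skip;    /* ignore negative entries */
        sum += data[i];
        count++;
    skip:
        i++;
        goto start;                    /* back to the top */
    report:
        printf("summed %d non-negative values, total %d\n", count, sum);
        return 0;
    }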

Software engineering solved that problem by developing structured programming, a way of programming that breaks the program into fairly neat bits. It's kind of like the difference between writing this blog post as one long run-on sentence, and writing it in individual paragraphs made of separate sentences: breaking it into coherent paragraphs and sentences makes it easier to understand because you don't have to try to absorb the whole thing at once.
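
Here's the same toy task rewritten in a structured style, with the control flow expressed as a plain loop inside a small function you can read on its own (again, the task and names are invented purely for illustration):

    #include <stdio.h>

    /* The same toy task in a structured style: the control flow is a
       plain loop inside a small function that can be read on its own. */
    static int sum_non_negative(const int *data, int n, int *count) {
        int sum = 0;
        *count = 0;
        for (int i = 0; i < n; i++) {
            if (data[i] >= 0) {        /* ignore negative entries */
                sum += data[i];
                (*count)++;
            }
        }
        return sum;
    }

    int main(void) {
        int data[] = {4, -2, 7, 0, 9, -5};
        int count;
        int sum = sum_non_negative(data, 6, &count);
        printf("summed %d non-negative values, total %d\n", count, sum);
        return 0;
    }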

But structured programming brought with it some major changes. People had to write new programming languages, and they had to think about programs in a whole new way in order to use this new technique. Also, programs got a bit bigger and a bit slower, because structured programming eliminates some of the shortcuts you can take with the unstructured stuff.

I think we need that same kind of shift in IT security. We need to go from unstructured security to structured security. There must be a new way, a set of patterns, to think about designing programs, operating systems, networks, and networking protocols that makes them inherently more secure. I don't know what it is, but it will probably require new programming languages, or at least new libraries, new operating systems, and new patterns of building networks.
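
As one small, hypothetical example of what "folding security in" might look like at the library level: the C standard library's strcpy does no length checking at all, while a bounds-checked interface makes the destination size part of the call, so the safe behavior is the default rather than something the programmer has to remember:

    #include <stdio.h>
    #include <string.h>

    /* The classic insecure pattern: strcpy does no length checking,
       so a long src silently overflows dst. */
    void copy_unsafely(char *dst, const char *src) {
        strcpy(dst, src);
    }

    /* A safer-by-default interface: the destination size is part of
       the call, and the result is truncated and NUL-terminated
       instead of overflowing. */
    void copy_safely(char *dst, size_t dst_size, const char *src) {
        snprintf(dst, dst_size, "%s", src);
    }

    int main(void) {
        char buf[8];
        copy_safely(buf, sizeof buf, "a string longer than eight bytes");
        printf("stored: \"%s\"\n", buf);  /* truncated, no overflow */
        return 0;
    }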

Hopefully, someone's working on it right now. And hopefully they'll finish it soon. Otherwise, there's likely to be some political scrutiny and a book on the way that'll lead to a sea change in the IT industry.
