This is the third installment in the conversation with Meteorplum:
My general feeling about externalities is that the industry just doesn't keep good numbers (by active or passive omission) on the cost of security flaws. M$ certainly now has enough data to say what the cost is for every bug they fix and patch (engineering, data transfer, PR). There may even be a way of using historical data on bug rates to estimate the amount of additional original engineering time (which includes QA, dammit!) and money it would've cost, and compare that to the post-ship costs.
The other externality, which is actually not external at all, would be lost productivity on the users' end. However, users rarely associate this loss with the software responsible for the problem. And as Schneier pointed out, they often blame themselves--or worse yet, get blamed by "security experts"--for not having done something that ought to be done automatically (or at least be an opt-out item).
It seems like the only times average users realize that security problems with their hardware/software impose direct costs on them are when they lose money directly through some form of fraud (419 scams and the like) or through identity/data theft (credit card info and worse). And of course, it is way too late by then to secure that information; that horse is out of the barn. Time to get a new horse and think about some sensible locks (or at least start using the crappy ones that the cut-rate contractor installed).
I'll address three of your points briefly, then end with a couple of ideas of my own, though not elaborations on the car/driver model. (I should note that I listen to the Security Now podcast with Steve Gibson and Leo Laporte, and that's influenced me as much as Bruce Schneier, so I'll be referencing stuff that they've talked about, as well as my own spin on their ideas.)
This is incredibly convenient for the software/hardware maker and user, no question. The problem is doing it automatically and online. Patching is essentially a backdoor process, and the chain of trust is pretty fragile, in my opinion. Just look at the recent stealth patch M$ did to Windows Update [see here--ed]. I totally understand why a bunch of engineers would think "of course we need to automatically patch the app that manages the patch process, otherwise how can we make sure it's always working as well as possible?". The problem is that if there is any validity to a user's choice to manually update, then nothing should be automatically updated. And of course, there is no guarantee that a given "critical" patch does not itself cause problems, not to mention any number of corporate IS departments which would not take kindly to stealth software rollouts, no matter how benign the reasons, if they cause configuration problems down the line.
But the most egregious problem with patching is that it is vulnerable to two kinds of attacks:
1. Reverse engineering of the patching mechanism to allow malware to insert itself into the software to be patched.
2. "Man in the middle" attacks where a third party pretends to be the source of the patch and either delivers an unwanted payload per #1 above or uses the "trusted" connection to the user's machine to insert other software.
If the objective is to secure the patching process, then it would actually make sense to start using serial numbers on software. That way, requests for patches and the patches themselves could be encrypted using serial numbers, so that both the user and the software provider can be authenticated. The other way is to forget patching altogether and have providers make full installations of updated versions available by physical or online delivery after an authenticated request (mailing in a reg card, secure login, etc.). I know that sites can be spoofed, but that's a problem now, so this wouldn't change anything there.
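A serial-number-keyed patch channel could be sketched roughly like this. All of the function names here are hypothetical, and a real scheme would need a vendor-side secret and a proper key-derivation function; this just shows the shape of the mutual-authentication idea:

```python
import hashlib
import hmac

def patch_key(serial: str) -> bytes:
    # Hypothetical: derive a per-copy key from the software serial number.
    # A real vendor would mix in a secret and use a proper KDF.
    return hashlib.sha256(("patch-key:" + serial).encode()).digest()

def sign_patch(serial: str, patch: bytes) -> bytes:
    # Vendor side: tag the patch for this specific registered copy.
    return hmac.new(patch_key(serial), patch, hashlib.sha256).digest()

def verify_patch(serial: str, patch: bytes, tag: bytes) -> bool:
    # Client side: a man-in-the-middle who doesn't know the serial
    # (or who alters the payload) fails this check.
    return hmac.compare_digest(sign_patch(serial, patch), tag)

payload = b"updated binary bytes"
tag = sign_patch("ABC-123", payload)
print(verify_patch("ABC-123", payload, tag))         # True
print(verify_patch("ABC-123", payload + b"x", tag))  # False: tampered payload
```

The point is that a tampered payload, or an impostor who doesn't hold the right serial, fails verification on the user's machine rather than being silently installed.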
Assuming that this is linked to anonymous mode while using Google, an additional check for a strong password would be to feed a potential candidate *to* Google and see if it returns any results. I just tried "rumblefingertag", which returned no results, though it contains no digits or non-alphabetic characters. ("rumblefinger" returned six results and a suggestion of "nimble finger".) While it is conceivable that dictionary attacks might go as far as generating every possible two-word combination, three or more would strain hardware and software for at least the next couple of years.
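The arithmetic behind that guess is easy to check. Assuming a 100,000-word attack dictionary and an offline rate of a billion guesses per second (both my assumptions, not measurements):

```python
# Back-of-the-envelope passphrase search space, assuming a 100,000-word
# attack dictionary and an offline rate of 10^9 guesses per second.
DICT_SIZE = 100_000
GUESSES_PER_SEC = 10**9

for words in (1, 2, 3):
    combos = DICT_SIZE ** words
    seconds = combos / GUESSES_PER_SEC
    print(f"{words} word(s): {combos:.0e} candidates, ~{seconds:,.0f} s to exhaust")
```

Two words is about 10^10 candidates (seconds of work offline), while three words is about 10^15 -- roughly 10^6 seconds, or eleven or twelve days, even at that aggressive rate, and far longer against a rate-limited online service.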
The possibility you mentioned about the password wallet thing already exists in the form of RSA SecurID fobs, part of initiatives to shift to two-factor authentication. I had one from AOL; employees got a secondary security screen when logging into their work accounts that required entering the current six-digit number from the fob. This number would change pseudo-randomly every thirty seconds, so even if my password got hacked, an attacker would still need the current fob number. PayPal/eBay has rolled out a similar feature where you can get their SecurID fob for $5 and link it to your account. Thereafter, you have to add the current SecurID number to your password to log in, or else you get a longer set of authentication questions (first pet, mother's maiden name, etc.). They're using VeriSign's Identity Protection (VIP) service as the back end, and VeriSign is selling their own fobs (though at higher prices). There has also been discussion of making the SecurID software available as a cell phone app, turning a truly personal and ubiquitous object into the source of the second factor.
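The open OATH standards produce the same style of rolling six-digit code from a shared secret; SecurID's actual algorithm is proprietary, so this HOTP/TOTP-style sketch (RFC 4226 truncation over an RFC 6238 time counter) is only an analogy:

```python
import hashlib
import hmac
import struct
import time

def rolling_code(secret: bytes, t: float, step: int = 30, digits: int = 6) -> str:
    # HOTP truncation (RFC 4226) applied to a 30-second time counter (RFC 6238).
    counter = int(t // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10**digits
    return str(code).zfill(digits)

secret = b"per-fob shared secret"
print(rolling_code(secret, time.time()))  # six digits, changes every 30 seconds
```

Server and fob share only the secret and a clock; no code ever travels between them ahead of time, which is why a stolen password alone isn't enough.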
This doesn't obviate the need for using good passwords (and keeping them secure), but it goes a long way towards making online transactions more secure without introducing highly complicated bits of tech. On a similar front, Karin gets a page of (probably pseudo-random) numbers from her/our bank for online transactions. Whenever she's doing an online funds transfer, she has to enter one of these numbers, which is only usable once. Each sheet only has something like 25-30 numbers, and I don't know if she gets a new sheet at regular intervals or if she has to request new numbers when the old ones start to run out, but this is a slightly old-fashioned way of providing a second factor for authentication that is even lower tech than the fob.
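That number sheet is essentially a list of single-use codes that get crossed off as they're spent. A toy model (the bank's real scheme isn't public, so the details here are assumptions):

```python
import secrets

class TanSheet:
    """Toy model of a bank's sheet of single-use transaction numbers.
    The code format and count are assumptions, not the bank's actual scheme."""

    def __init__(self, count: int = 25):
        self.codes = set()
        while len(self.codes) < count:
            self.codes.add(f"{secrets.randbelow(10**6):06d}")

    def redeem(self, code: str) -> bool:
        # Each number authorizes exactly one transfer, then is crossed off.
        if code in self.codes:
            self.codes.remove(code)
            return True
        return False

sheet = TanSheet()
code = next(iter(sheet.codes))
print(sheet.redeem(code))  # True: first use succeeds
print(sheet.redeem(code))  # False: reuse is rejected
```

Because each code dies after one use, an attacker who shoulder-surfs a single transfer learns nothing useful for the next one.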
As for having a separate device that manages this, I'm not sure if I see it as a need or as a convenience, but I can imagine something like a USB key/thumb drive that's encrypted and contains hardware/firmware that acts like a SecurID key. The decryption key could be entered using software (assuming some sort of universal support under a TPM-like configuration) or hardware, like these new USB drives that can be unlocked by typing numbers on their built-in keypads (like the combo locks on car doors). The key would contain a list of passwords, and the built-in SecurID-like software/hardware would generate the required numeric authentication credential.
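A variation on such a device could derive a distinct password per site from the unlocked master key instead of storing a list at all. A sketch, where PBKDF2 stands in for whatever the drive's firmware would actually do (all names and parameters here are my assumptions):

```python
import base64
import hashlib

def unlock_master(pin: str, salt: bytes) -> bytes:
    # The keypad PIN unlocks the device's master key; PBKDF2 stands in
    # for whatever the drive's firmware would actually do (an assumption).
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, 200_000)

def site_password(master: bytes, site: str, length: int = 16) -> str:
    # Derive a distinct password per site, so no password list is stored.
    raw = hashlib.pbkdf2_hmac("sha256", master, site.encode(), 1)
    return base64.urlsafe_b64encode(raw).decode()[:length]

master = unlock_master("8675309", b"per-device salt")
print(site_password(master, "example-bank.com"))
print(site_password(master, "example-mail.com"))  # different site, different password
```

Deriving rather than storing means there's nothing to leak if the device's storage is read out, though it shares the weakness noted below: everything hangs off the one master secret.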
Granted, this would make the "master" password a weak point, but this would be true for any system which allowed for a single "master key" of any sort.