I saw this interesting sponsored link come across Gmail this evening:
A sponsored link for "e-mail marketing"? I guess it's a good thing there's a "report spam" button, but isn't that kind of like putting the humidifier and the dehumidifier in the same room and letting them fight it out?
Thursday, September 27, 2007
Saturday, September 22, 2007
Red Bull Air Race
The Red Bull Air Race had its penultimate race today. My video clips seem to be giving Google indigestion, but here are some still images.
Thursday, September 20, 2007
computer security and end users, part 4
This is part four in a conversation with Meteorplum. I will post additional exchanges if and when they occur.
From False Data:
I think I need to make an implicit assumption explicit. I'm assuming that, if you don't take steps to educate users, they'll tend to do that which is most convenient. Since I've also assumed that education is expensive, and that we probably want to minimize the amount required to operate a computer, I'm trying to make security convenient for the end user. That's the real reason behind much of the design I sketched out for the password wallet.
For example, the wallet's designed to allow the security department of the service to which the user is connecting to choose the "password" the user uses when connecting to that service. That way, we don't have to worry about users choosing secure passwords, because they're not choosing passwords at all (except the master password, as I said--that's going to be one weak point in the system. But I think we have a better shot at helping someone pick and remember one strong password than to pick and remember many.) In fact, if I were the security department, I wouldn't use a password at all. Instead, I'd probably do something like issue a private key to the user and keep a public key for myself. Then the authentication process would involve having the user digitally sign a randomly-generated message. (I'd also include a second set of keys so the user could verify that it's actually my service.) Now, getting someone who knows nothing about public key cryptography to use a system like that means that, ideally, the person shouldn't see the key exchange at all. Instead, he or she unlocks the wallet using the master password and it does the rest. (You could also use a one-time pad, along the lines of the bank system you described, if you wanted even stronger security.)
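The challenge-response flow described above can be sketched in a few lines. This sketch stands in a shared-secret HMAC for the public-key signature (the shape of the protocol is the same: the service sends a fresh random message and the wallet proves it holds the secret without ever revealing it); every name here is illustrative, not a real API:

```python
import hashlib
import hmac
import secrets

# Secret provisioned into the user's wallet by the service's security
# department; the user never sees or types it.
wallet_secret = secrets.token_bytes(32)

def service_issue_challenge() -> bytes:
    # The service generates a fresh random message per login attempt,
    # so a captured response can't be replayed later.
    return secrets.token_bytes(16)

def wallet_sign(secret: bytes, challenge: bytes) -> bytes:
    # The wallet "signs" the challenge. With real public-key crypto this
    # would be a digital signature rather than an HMAC, and the service
    # would hold only the public half.
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def service_verify(secret: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = service_issue_challenge()
response = wallet_sign(wallet_secret, challenge)
assert service_verify(wallet_secret, challenge, response)
```

The point of the design survives the simplification: the user's only job is unlocking the wallet; the challenge, the secret, and the response never pass through the user's head or clipboard.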
The wallet's a hardware implementation because laptops are just too bulky. It needs to be portable. In fact, it should probably go on the key ring to remind users to treat it like any other set of keys. Additional factors, like biometrics, are optional.
You could implement a clunky version of the wallet using Password Safe on a USB key, but few people would use it because it's confusing and adds additional login steps, with all that copying and pasting and such.
SecurID is worse, because you have to have a separate fob for each service you want to log into (one for the company VPN, another for a different company, and so on), and those things are just too bulky to put more than one on a key ring. And the whole timing issue of having to wait for the number to change and dealing with clock drift between the fob and the server just add to the inconvenience. (You should hear Coppertop's commentary when she has to use one.)
I'm pretty much convinced that convenience is key.
You also mentioned that "the other externality, which is actually not external at all, would be lost productivity on the users' end." One thing to keep in mind is that the losses may not fall uniformly on the users. For example, suppose my home system gets infected and starts pumping out stock pump-and-dump spam. The loss to me, personally, is probably fairly small--a sharp reduction in the apparent speed of my home computer, but that's not such a big deal if all I'm using it for is web, e-mail, and word processing. Other people, though, might suffer significant financial losses because of the stock scheme (even if they don't participate in it--for example, the company that's the subject of the scheme is going to have its stock price bounce around unpredictably). Or an ISP might suffer higher capital costs because it has to buy more filtering equipment to deal with the extra spam. It will try to pass that cost along to its subscribers--which might not include me--in the form of higher fees. Now, that ISP's users might also get infected and send spam my ISP's way, but there's no guarantee the losses will be symmetric.
We both seem to agree, though, that users generally have poor information about the security of the systems they use and lack information linking the consequences of their choices back to the software they chose. The "security tax" idea I considered but am not crazy about is one way to communicate this additional cost to the user. Another would be an independent rating lab, like Underwriters Laboratories or Consumer Reports, but for software security. A third mechanism might be to address the problem the way we do it in construction, by establishing "building codes" for software.
computer security and end users, part 3
This is the third installment in the conversation with Meteorplum:
From Meteorplum:
My general feeling about externalities is that the industry just doesn't keep good numbers (by active or passive omission) on the cost of security flaws. M$ certainly now has enough data to say what the cost is for every bug they fix and patch (engineering, data transfer, PR). There may even be a way of using historical data on bug rates to estimate the amount of additional original engineering time (which includes QA, dammit!) and money it would've cost, and compare that to the post-ship costs.
The other externality, which is actually not external at all, would be lost productivity on the users' end. However, users rarely associate this loss with the appropriate software responsible for the problem. And as Schneier pointed out, they often blame themselves--or worse yet, get blamed by "security experts"--for not having done something that ought to be done automatically (or at least would have to be an opt-out item).
It seems like the only times average users realize that security problems with their hardware/software have direct costs on them are when they lose money directly through some form of fraud (419 scams and the like) or through identity/data theft (credit card info and worse). And of course, it is way too late by then to secure that information; that horse is out of the barn. Time to get a new horse and think about some sensible locks (or at least start using the crappy ones that the cut-rate contractor installed).
I'll address three of your points briefly, then end with a couple of ideas of my own, though not elaborations on the car/driver model. (I should note that I listen to the Security Now podcast with Steve Gibson and Leo Laporte, and that's influenced me as much as Bruce Schneier, so I'll be referencing stuff that they've talked about, as well as my own spin on their ideas.)
Patching
This is incredibly convenient for the software/hardware maker and user, no question. The problem is doing it automatically and online. Patching is essentially a backdoor process, and the chain of trust is pretty fragile, in my opinion. Just look at the recent stealth patch M$ did to Windows Update [see here--ed]. I totally understand why a bunch of engineers would think "of course we need to automatically patch the app that manages the patch process, otherwise how can we make sure it's always working as well as possible?". The problem is that if there is any validity to a user's choice to manually update, then nothing should be automatically updated. And of course, there is no guarantee that a given "critical" patch does not itself cause problems, not to mention any number of corporate IS departments which would not take kindly to stealth software rollouts, no matter how benign the reasons, if they cause configuration problems down the line.
But the most egregious problem with patching is that it is vulnerable to two kinds of attacks:
1. Reverse engineering of the patching mechanism to allow malware to insert itself into the software to be patched.
2. "Man in the middle" attacks where a third party pretends to be the source of the patch and either delivers an unwanted payload, as in #1 above, or uses the "trusted" connection to the user's machine to insert other software.
If the objective is to secure the patching process, then it would actually make sense to start using serial numbers on software. That way, requests for patches and the patches themselves could be encrypted using serial numbers, so that both the user and the software provider can be authenticated. The other way is to forget patching altogether and have providers make full installations of updated versions available by physical or online delivery after an authenticated request (mailing in a reg card, secure login, etc.). I know that sites can be spoofed, but that's a problem now, so this wouldn't change anything there.
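The serial-number idea might look something like this in outline. This sketch only authenticates a patch with a MAC keyed off the copy's serial number; a real scheme would need stronger key derivation (serials can be guessable), replay protection, and encryption, and every name below is hypothetical:

```python
import hashlib
import hmac

def patch_mac(serial: str, patch: bytes) -> bytes:
    # Key the MAC on this copy's serial number, so only the vendor,
    # who knows which serials it issued, can produce a valid tag for
    # this particular installation.
    key = hashlib.sha256(serial.encode()).digest()
    return hmac.new(key, patch, hashlib.sha256).digest()

def verify_patch(serial: str, patch: bytes, tag: bytes) -> bool:
    # The user's machine recomputes the tag before applying anything,
    # which defeats both tampered patches and impostor patch servers.
    return hmac.compare_digest(patch_mac(serial, patch), tag)

patch = b"...new binaries..."
tag = patch_mac("SN-0001", patch)            # vendor side
assert verify_patch("SN-0001", patch, tag)   # user side
assert not verify_patch("SN-0001", b"tampered", tag)
```

This addresses both attacks above: a man in the middle without the serial can't forge a valid tag, and a reverse-engineered patcher still can't make a tampered payload verify.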
Password/Wallet
Assuming that this is linked to anonymous mode while using Google, an additional check for a strong password would be to feed a potential candidate *to* Google and see if it returns any results. I just tried "rumblefingertag", which returned no results, though it doesn't have any non-alphas or digits. ("rumblefinger" returned six results and a suggestion of "nimble finger"). While it is conceivable that dictionary attacks might go as far as generating every possible two-word combination, three or more would strain hardware and software for at least the next couple of years.
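Back-of-the-envelope arithmetic backs up the multi-word point. Assuming a hypothetical 80,000-word dictionary, each additional word multiplies the attacker's search space by the dictionary size:

```python
import math

dict_size = 80_000  # hypothetical size of an English wordlist

for words in (1, 2, 3):
    combos = dict_size ** words
    # Express the search space both as a raw count and in bits,
    # the usual currency for comparing password strength.
    print(f"{words} word(s): {combos:.1e} combinations, "
          f"~{math.log2(combos):.0f} bits")
```

Two words give roughly 33 bits of search space and three words roughly 49 bits, so exhaustively generating all three-word combinations is a few hundred thousand times more work than all two-word ones.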
The possibility you mentioned about the password wallet thing already exists in the form of SecurID fobs as part of initiatives to shift to two-factor authentication. I had one from AOL, and employees had a secondary security screen when logging into their work accounts that required entering the current six-digit number from the fob. This number would change pseudo-randomly every thirty seconds, so even if my password got hacked, they would still need the current fob number. PayPal/eBay has rolled out a similar feature where you can get their SecurID fob for $5 and link it to your account. Thereafter, you have to add the current SecurID number to your password to log in, or else you get a longer set of authentication questions (first pet, mother's maiden name, etc.). They're using VeriSign's implementation of OpenID as the back end, and VeriSign is selling their own fobs (though at higher prices). There are also discussions of making the SecurID software available as cell phone apps, turning a truly personal and ubiquitous object into the source of the second factor.
This doesn't obviate the need for using good passwords (and keeping them secure), but it goes a long way towards making online transactions more secure without introducing highly complicated bits of tech. On a similar front, Karin gets a page of (probably pseudo-random) numbers from her/our bank for online transactions. Whenever she's doing an online funds transfer, she has to enter one of these numbers, which is only usable once. Each sheet only has something like 25-30 numbers, and I don't know if she gets a new sheet at regular intervals or if she has to request new numbers when the old ones start to run out, but this is a slightly old-fashioned way of providing a second factor for authentication that is even lower tech than the fob.
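A fob of this kind can be sketched with nothing but the standard library. This follows the open HOTP/TOTP construction (RFC 4226/6238) rather than RSA's proprietary SecurID algorithm, but the idea is the same: fob and server share a secret and derive a short code from the current 30-second time window:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, when: float, step: int = 30, digits: int = 6) -> str:
    # Both fob and server compute the same code from the shared secret
    # and the index of the current 30-second window.
    counter = struct.pack(">Q", int(when) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"shared-seed"  # provisioned into both fob and server
# Two timestamps inside the same 30-second window yield the same code.
assert totp(secret, 1234567890) == totp(secret, 1234567899)
```

In practice the server also accepts the codes for a window or two on either side, which is how the clock-drift problem you mentioned gets papered over.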
As for having a separate device that manages this, I'm not sure if I see it as a need or as a convenience, but I can imagine something like a USB key/thumb drive that's encrypted and contains hardware/firmware that acts like a SecurID key. The decryption key can be entered using software (assuming some sort of universal support under a TPM-like configuration) or hardware like these new USB drives that can be unlocked by typing in numbers on their built-in keypads (like those combo locks on car doors). The key would contain a list of passwords, and the built-in SecurID-like software/hardware would generate the required numeric authentication credential.
Granted, this would make the "master" password a weak point, but this would be true for any system which allowed for a single "master key" of any sort.
computer security and end users, part 2
This is the second part of an e-mail conversation with Meteorplum.
From False Data:
1. Did you deliberately misspell your handle on your comment? The link is correct, but you're listed as "Fales Data".
No, but I'm sorry I didn't think of it--it would've been clever. It was just a simple typo, though.
2. How do *you* think the burden of securing PCs could be shifted?
Too much regulation too early can kill an industry, so I'd be very careful about creating a licensing scheme, at least for users, unless it's absolutely necessary. That suggests we'll be most successful if we consider the root causes of the problem before designing a solution, just as we would have to look at the threat model before proposing a security regime.
One of the main reasons PCs are insecure is inadequate software engineering, partly because we don't know how to do it well (compared to our ability to engineer large structures like bridges or cruise ships), partly because too many engineers have excessive egos (example: resistance to code reviews and religious objections to languages that limit buffer overruns), and partly because it's cheaper to skimp on QA. The possibility that it's cheaper needs more investigation. Real, honest-to-goodness quantitative research.
For example, it might actually be cheaper for society as a whole to have bad QA--maybe the cost of dealing with security issues is less than what it would cost us to prevent them in the first place or to educate users (along with the lost opportunity costs that come with a licensing scheme) to avoid them. If so, then we shouldn't even try to change the status quo until our engineering know-how improves enough to shift the cost structures.
Or it might be the case that it's overall cheaper to fix the security flaws than to live with them, but that end users are not the ones bearing most of the cost of security issues. That would be a classic externality: the end user makes the decision of which software to buy, enjoying the lower up-front cost that comes with insecure code, but someone else has to bear the higher cost of the break-ins. (Technically, the higher cost will eventually trickle down to the end users one way or another, maybe because an ISP has to charge higher rates because it has to have more routers to deal with virus scanning, but that trickle-down may be so far removed from the original purchase decision that it can't influence the end user's original choice of software.) If this is what's going on, then the market-based solutions Schneier's libertarian readership often advocates aren't likely to work very well, if they even work at all.
Education is expensive. You have the up-front cost of teaching stuff to people, and a lost opportunity cost because they're learning whatever it is you're teaching them instead of things that might be more relevant to their lives--little Janie's learning how to choose a good password or how to patch her system instead of studying biology or civics.
So you don't reach for education unless it's less expensive than fixing the problem you're trying to educate people around. And I'm not at all convinced that's the case.
My gut says at most we have an externality going on, but more likely people are just poorly informed about the actual cost of software. Maybe we should start charging a "poor security" tax on the software's purchase price so their purchase decisions better reflect the quality of the code. (The fly in the ointment here is trying to fix the problem without killing the open source movement. Without some very careful legal footwork, a "poor security" tax might prevent anyone from distributing free software.)
Assuming it's not overall economically cheaper to have shoddy code, I put a lot of the blame in the software engineering camp. For example, a lot of problems happen through buffer overflow attacks. In this day and age, I can't think of any good excuse for having a buffer overflow bug. We have programming languages that make buffer overflows extremely unlikely. I don't care if your program will be 10-20% slower because you wrote it in Java or used std::string and std::cin instead of scanf--Moore's law will deal with the slowness eventually, or you can find a place to cut algorithmic complexity--bottom line is you shouldn't be using tools that are going to create a security flaw if you screw up. Of course, if you're writing control code for an aircraft system or nuclear plant you might not have the option of using one of the more complex languages, but in that case you adjust your design, coding, and QA style appropriately because bug reduction and security are even more important.
Next problem: patching systems. If you solve the engineering problem, a lot of this issue goes away (as it should, because patches inevitably create a window of opportunity for attackers), but let's assume there's a transition period where manufacturers are still pushing out patches every other week, or every week, or (shudder) every day.[1] I think Microsoft has a good idea here: users shouldn't have to know anything about how to patch their system beyond allowing it to install the patch. If it's an operating system you're patching, the version of the OS on the shrinkwrap DVD should be incapable of accepting a network connection, or of making one to anywhere except the update distribution site, until it has checked at least once for updates. And for goodness' sake, it shouldn't have to reboot every time it installs a patch, because that annoys users and keeps them from patching. While it's true that automatic patching can create a security vulnerability, if someone commandeers the auto-patch system to push out a trojan, the proper way to deal with that issue is to fix the software engineering process so there's no need to patch in the first place. It's not to use manual patching, because manual patching requires user training which, as I said, is expensive.
Next: configuring systems. I like the OLPC folks' approach here, that it's a fundamental design principle that an operating system should, by default, have a secure configuration. In other words, a user should not have to lock down the operating system. If anything, the user might have to unlock parts of the operating system. Also, I'm not yet sure about this, but I'd also consider limiting the ability of software manufacturers to disclaim consequential damages from any configuration changes their software makes that reduce security.
Next: bad choice of passwords. You could solve this problem if users don't get to choose their passwords. Of course, with today's infrastructure, if you give them a password they're just going to write it on a sticky note next to their monitor, but we can solve that issue. Give each person one hardware password wallet. They unlock it using a master password (which you can drill into them that they never, ever give away, with no exceptions--that's a concession I'll make to education.) Then individual services can issue and change passwords, which are probably actually public/private key pairs, to their hearts' content, and the user never sees them because they go in the wallet. The wallet should have the ability to back up its contents, and it probably should let the user share passwords for individual services even if the individual service wants to prohibit password sharing, because otherwise the user will have an incentive to take the greater risk of sharing the master password.
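The wallet flow above can be modeled in miniature. This toy version only gates entries behind a PBKDF2-derived check of the master password; a real wallet would encrypt each entry with an authenticated cipher, and all names here are made up for illustration:

```python
import hashlib
import hmac
import secrets

class PasswordWallet:
    """Toy model of the wallet flow: the user remembers one master
    password; each service issues its own random credential, which
    the user never sees or chooses."""

    def __init__(self, master_password: str):
        self._salt = secrets.token_bytes(16)
        # Slow, salted hash so the one human-chosen secret resists
        # brute force -- the system's acknowledged weak point.
        self._key = hashlib.pbkdf2_hmac(
            "sha256", master_password.encode(), self._salt, 200_000)
        self._entries: dict[str, str] = {}

    def _unlock(self, master_password: str) -> bool:
        key = hashlib.pbkdf2_hmac(
            "sha256", master_password.encode(), self._salt, 200_000)
        return hmac.compare_digest(key, self._key)

    def service_issue(self, service: str) -> None:
        # The *service* picks the credential, so "users choosing weak
        # passwords" stops being a problem at all.
        self._entries[service] = secrets.token_urlsafe(32)

    def credential_for(self, master_password: str, service: str) -> str:
        if not self._unlock(master_password):
            raise PermissionError("wrong master password")
        return self._entries[service]

wallet = PasswordWallet("one strong master password")
wallet.service_issue("example-bank")
cred = wallet.credential_for("one strong master password", "example-bank")
```

Backup and sharing would layer on top of this: exporting the entry table (re-encrypted) covers backup, and handing a single entry to another wallet covers per-service sharing without exposing the master password.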
Next: the Lost Laptop problem. Not an educational problem for end users but for IT departments and manufacturers. Laptops should ship with encrypted filesystems.
Last but not least, social engineering attacks. It might be necessary to use an educational campaign to address these attacks. We can also cut off some of these problems through infrastructure changes. An attack like "I'm the CEO and I've lost my password wallet, could you please unlock X for me?" won't work if it's trivial to get a new wallet and restore it. "I've lost my master password" won't work if everyone has to use their master password several times a day, so everyone knows that everyone else inevitably remembers their passwords.
[1] Remember that the number of bugs, and therefore down-time, is proportional to the number of changes. Bug fixes can introduce bugs. The more the code changes, the greater the chance of introducing a bug. The more simultaneous patching there is, the greater the chance of introducing an interaction bug.
From False Data:
1. Did you deliberately misspell your handle on your comment? The link is correct, but you're listed as "Fales Data".
No, but I'm sorry I didn't think of it--it would've been clever. It was
just a simple typo, though.
2. How do *you* think the burden of securing PCs could be shifted?
Too much regulation too early can kill an industry, so I'd be very
careful about creating a licensing scheme, at least for users, unless
it's absolutely necessary. That means we'll be most successful if we
consider the root causes of the problem before designing a solution,
just as we would have to look at the threat model before proposing a
security regime.
One of the main reasons PCs are insecure is inadequate software
engineering, partly because we don't know how to do it well (compared to
our ability to engineer large structures like bridges or cruise ships),
partly because too many engineers have excessive egos (example:
resistance to code reviews and religious objections to languages that
limit buffer overruns), and partly because it's cheaper to skimp on QA.
Whether it's actually cheaper needs more investigation. Real,
honest-to-goodness quantitative research.
For example, it might actually be cheaper for society as a whole to have
bad QA--maybe the cost of dealing with security issues is less than what
it would cost us to prevent them in the first place or to educate users
(along with the lost opportunity costs that come with a licensing
scheme) to avoid them. If so, then we shouldn't even try to change the
status quo until our engineering know-how improves enough to shift the
cost structures.
Or it might be the case that it's overall cheaper to fix the security
flaws than to live with them, but that end users are not the ones
bearing most of the cost of security issues. That would be a classic
externality: the end user makes the decision of which software to buy,
enjoying the lower up-front cost that comes with insecure code, but
someone else has to bear the higher cost of the break-ins.
(Technically, the higher cost will eventually trickle down to the end
users one way or another, maybe because an ISP has to charge higher
rates because it has to have more routers to deal with virus scanning,
but that trickle-down may be so far removed from the original purchase
decision that it can't influence the end user's original choice of
software.) If this is what's going on, then the market-based solutions
Schneier's libertarian readership often advocates aren't likely to work
very well, if they even work at all.
Education is expensive. You have the up-front cost of teaching stuff to
people, and a lost opportunity cost because they're learning whatever it
is you're teaching them instead of things that might be more relevant to
their lives--little Janie's learning how to choose a good password or
how to patch her system instead of studying biology or civics.
So you don't reach for education unless it's less expensive than fixing
the problem you're trying to educate people to avoid. And I'm not at all
convinced that's the case.
My gut says at most we have an externality going on, but more likely
people are just poorly informed about the actual cost of software.
Maybe we should start charging a "poor security" tax on the software's
purchase price so their purchase decisions better reflect the quality of
the code. (The fly in the ointment here is trying to fix the problem
without killing the open source movement. Without some very careful
legal footwork, a "poor security" tax might prevent anyone from
distributing free software.)
Assuming it's not overall economically cheaper to have shoddy code, I
put a lot of the blame on the software engineering camp. For example, a
lot of problems happen through buffer overflow attacks. In this day and
age, I can't think of any good excuse for having a buffer overflow bug.
We have programming languages that make buffer overflows extremely
unlikely. I don't care if your program will be 10-20% slower because you
wrote it in Java or used std::string and std::cin instead of
scanf--Moore's law will deal with the slowness eventually, or you can
find a place to cut algorithmic complexity--bottom line is you shouldn't
be using tools that are going to create a security flaw if you screw
up. Of course, if you're writing control code for an aircraft system or
nuclear plant you might not have the option of using one of the more
complex languages, but in that case you adjust your design, coding, and
QA style appropriately because bug reduction and security are even more
important.
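To make the memory-safety point concrete, here's a minimal sketch (Python standing in for "a language that makes buffer overflows extremely unlikely"; the variable names are mine). In a bounds-checked language, an attempt to write past the end of a fixed-size buffer fails loudly instead of silently corrupting adjacent memory, which is what classic scanf/gets-style overflows exploit in C.

```python
# A fixed-size 8-byte buffer and an input that's too long for it.
buf = bytearray(8)
data = b"this input is longer than eight bytes"

overflow_caught = False
try:
    for i, byte in enumerate(data):
        buf[i] = byte        # an out-of-bounds write raises; it can't overflow
except IndexError:
    overflow_caught = True

print(overflow_caught)       # True: the overrun was caught, not exploited
print(bytes(buf))            # only the first 8 bytes ever landed in the buffer
```

The same loop in C with a `char buf[8]` would happily keep writing past the array, which is exactly the flaw a safer toolchain takes off the table.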
Next problem: patching systems. If you solve the engineering problem, a
lot of this issue goes away (as it should because patches inevitably
create a window of opportunity for attackers), but let's assume there's
a transition period where manufacturers are still pushing out patches
every other week, or every week, or (shudder) every day.[1] I think
Microsoft has a good idea here: users shouldn't have to know anything
about how to patch their system beyond allowing it to install the
patch. If it's an operating system you're patching, the version of the
OS on the shrinkwrap DVD should be incapable of accepting a network
connection, or of making one to anywhere except the update distribution
site, until it has checked at least once for updates. And for goodness'
sake, it shouldn't have to reboot every time it installs a patch,
because that annoys users and keeps them from patching. While it's true
that automatic patching can create a security vulnerability, if someone
commandeers the auto-patch system to push out a trojan, the proper way
to deal with that issue is to fix the software engineering process so
there's no need to patch in the first place. It's not to use manual
patching, because manual patching requires user training which, as I
said, is expensive.
Next: configuring systems. I like the OLPC folks' approach here, that
it's a fundamental design principle that an operating system should, by
default, have a secure configuration. In other words, a user should not
have to lock down the operating system. If anything, the user might
have to unlock parts of the operating system. Also, I'm not yet sure
about this, but I'd also consider limiting the ability of software
manufacturers to disclaim consequential damages from any configuration
changes their software makes that reduce security.
Next: bad choice of passwords. You could solve this problem if users
don't get to choose their passwords. Of course, with today's
infrastructure, if you give them a password they're just going to write
it on a sticky note next to their monitor, but we can solve that issue.
Give each person one hardware password wallet. They unlock it using a
master password (which you can drill into them that they never, ever
give away with no exceptions--that's a concession I'll make to
education.) Then individual services can issue and change passwords,
which are probably actually public/private key pairs, to their hearts'
content and the user never sees them because they go in the wallet. The
wallet should have the ability to back up its contents, and it probably
should let the user share passwords for individual services even if the
individual service wants to prohibit password sharing, because otherwise
the user will have an incentive to take the greater risk of sharing the
master password.
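To make the public/private key-pair idea concrete, here's a toy sketch of the challenge-response flow the wallet would hide from the user. The numbers are the classic textbook RSA example and are wildly insecure, and the function names are mine; a real wallet would use a vetted crypto library, not hand-rolled RSA.

```python
import hashlib
import secrets

# Toy RSA keypair (the textbook numbers: p=61, q=53, e=17).  Real keys are
# thousands of bits long; these only illustrate the protocol's shape.
P, Q, E = 61, 53, 17
N = P * Q                            # public modulus, 3233
D = pow(E, -1, (P - 1) * (Q - 1))    # private exponent, 2753

def digest(msg: bytes) -> int:
    # Reduce the challenge to a number the toy modulus can handle.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def wallet_sign(challenge: bytes) -> int:
    # What the wallet does with the stored private key, invisibly to the user.
    return pow(digest(challenge), D, N)

def service_verify(challenge: bytes, signature: int) -> bool:
    # The service checks the signature against its copy of the public key (N, E).
    return pow(signature, E, N) == digest(challenge)

# One login: the service sends a fresh random challenge, the wallet signs it.
challenge = secrets.token_bytes(16)
sig = wallet_sign(challenge)
print(service_verify(challenge, sig))            # True
print(service_verify(challenge, (sig + 1) % N))  # False: tampered signature
```

The user's only interaction is unlocking the wallet with the master password; the challenge, the signing, and the verification all happen out of sight.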
Next: the Lost Laptop problem. Not an educational problem for end users
but for IT departments and manufacturers. Laptops should ship with
encrypted filesystems.
Last but not least, social engineering attacks. It might be necessary
to use an educational campaign to address these attacks. We can also
cut off some of these problems through infrastructure changes. An
attack like "I'm the CEO and I've lost my password wallet, could you
please unlock X for me?" won't work if it's trivial to get a new wallet
and restore it. "I've lost my master password" won't work if everyone
has to use their master password several times a day, so everyone knows
that everyone else inevitably remembers their passwords.
[1] Remember that the number of bugs, and therefore down-time, is
proportional to the number of changes. Bug fixes can introduce bugs.
The more the code changes, the greater the chance of introducing a bug.
The more simultaneous patching there is, the greater the chance of
introducing an interaction bug.
computer security and end users, part 1
Meteorplum and I have been having an interesting e-mail exchange, prompted by a provocative article Bruce Schneier posted, in which he compared insecure end-user computers to a public health hazard and suggested requiring ISPs to provide technical support. I posted a very brief comment, and the next day our exchange began. With Meteorplum's permission, I'll post it here.
From Meteorplum:
1. Did you deliberately misspell your handle on your comment? The link is correct, but you're listed as "Fales Data".
2. How do you think the burden of securing PCs could be shifted?
I'm thinking something along the car/driving model myself, with some form of mandatory PC user education/course and a test. It doesn't prevent anyone without a license from buying or using a PC, but they'd be fully on the hook for what the PC did if they don't have a license. In home environments, parents would be liable for their children's (mis)usage unless the kids are themselves licensed. Libraries and internet cafes will certainly change their policies to something akin to car rental agreements. I can certainly foresee much opposition, but no less so than early car owners (and some current gun owners).
realizing just how far out of touch I am
I'm slowly realizing just how far out of touch with current events I've become. For example, I'm only now learning about the propellant explosion at Scaled Composites that occurred back in July. I've been following progress at Scaled because they're one of the companies leading the charge for significant technological progress in space travel. And because Burt Rutan's work is, well, just plain cool. Now, I have sort of an excuse--July 27 was the day after the bar exam, so I was slowly crawling out of my little legal foxhole and beginning to pick up the many things that fell on the floor during the studying process--but it's not a very good excuse because I learned about Japan's Kaguya lunar probe mission on September 14, the day they launched it. So now I'm faced with the need to figure out how to better plug into current events in the space industry. And probably in the rest of the world as well. I guess I have a lot of learning to do.
Do they make Cliff Notes for current events? (I guess there's Google News, but somehow it lacks the breadth of coverage that I'd like to have.)
Monday, September 10, 2007
getting to the desktop while away from home
While traveling, one of the problems I frequently run into is the inability to do more than access a web site from a remote application. For instance, I might drop into a coffee shop with Internet access only to find out they've locked down everything except port 80.
I now have a partial solution to the problem. We have a home Linux server with sshd and vncserver installed. Based on this post, I've configured our setup to allow ssh connections on ports 22 and 80 and tunneled VNC over those connections. I've also put a copy of porta-putty, a portable ssh client, on a USB thumb drive so I can run it from a borrowed machine for those times when I don't have my own machine available. (In that case, I can connect a web browser to http://localhost:5801 and get the Java-based VNC viewer.)
Here's how to do it:
Configuring Access on Ports 22 and 80
First, make sure you have sshd installed on your Linux server. Most distributions install it by default. Edit /etc/ssh/sshd_config. Uncomment the "Port 22" directive, and just below it add "Port 80". Note that you can't run both a web server and sshd on port 80, which is fine for us because this host handles internal stuff like e-mail and printing--for security reasons, any web pages are on a different host.
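After the edit, the relevant portion of /etc/ssh/sshd_config would look something like this (an excerpt; the surrounding directives are untouched):

```
# /etc/ssh/sshd_config (excerpt)
# sshd listens on both ports, so the same daemon answers whether the
# coffee shop lets traffic out on port 22 or only on port 80.
Port 22
Port 80
```

Restart sshd after saving so it picks up the second port.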
You may have to allow firewall access to ports 22 and/or 80. On our slightly older Fedora machine, you can set the firewall permissions via System->Administration->Security Level and Firewall. Allow TCP connections on port 22 (which it probably already does if you installed sshd) and 80 (where you'd normally have web access.)
If you have an external hardware firewall, you'll also need to set up any necessary port forwarding for those ports to reach your server.
Configuring Porta-Putty
Next, get a copy of portaputty and unpack it onto your USB drive or, if you're a bit more cautious, put it in the hard drive folder you use as a master image for your USB drive.
Run putty.exe. For the host name, type the name you use to connect to your Linux server from the Internet. Leave the port at the default 22. Then give this session a name in the "Saved Sessions" box, something like "Server (22)."
Click on Connection->SSH and check "Enable Compression." (I suggest leaving "Don't start a shell or command at all" un-checked so you have a convenient way to launch the VNC server manually. That way, you can bring up the server only when you need it.)
Click on Connection->SSH->Tunnels. Put "5901" in the Source Port box and "localhost:5901" in the Destination box and click "Add." Then put "5801" in the Source Port box and "localhost:5801" in the Destination box and click "Add."
Click on Session and click "Save."
Now create a second configuration that's identical except that it connects on port 80. Change the Port to 80, change "Saved Sessions" to something like "Server (80)," and click "Save."
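If you're connecting from a machine with OpenSSH instead of PuTTY, the equivalent of the two saved sessions is a pair of entries in ~/.ssh/config. This is a sketch: "server.example.org" stands in for whatever name you use to reach your server from the Internet.

```
# ~/.ssh/config -- OpenSSH equivalent of the two PuTTY sessions
Host server-22
    HostName server.example.org
    Port 22
    Compression yes
    LocalForward 5901 localhost:5901
    LocalForward 5801 localhost:5801

Host server-80
    HostName server.example.org
    Port 80
    Compression yes
    LocalForward 5901 localhost:5901
    LocalForward 5801 localhost:5801
```

Then `ssh server-80` opens the same compressed, tunneled connection that double-clicking the "Server (80)" session does.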
Using It
To use the arrangement, first launch putty.exe. Depending on whether you have access on port 22 or port 80, double-click the appropriate session name to open the connection.
Once the ssh terminal starts and you've logged in, run vncserver to launch your VNC server. Remember that VNC changes the port number based on the display name: if it puts your display on :1, then it'll use ports 5901 and 5801, but if it puts you on :2 it'll use 5902 and 5802, etc. The port forwarding above assumes you'll always get :1, so it's a good idea to verify that's where VNC did, indeed, put your display.
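The display-to-port arithmetic is simple enough to sanity-check. A quick sketch (the helper function name is mine, not part of VNC):

```python
# VNC derives ports by adding the display number to a fixed base:
# 5900 + N for the VNC protocol itself, 5800 + N for the Java-applet
# web viewer.  So display :1 -> 5901/5801, display :2 -> 5902/5802.
def vnc_ports(display: int) -> tuple:
    """Return (vnc_port, http_port) for VNC display :display."""
    return 5900 + display, 5800 + display

print(vnc_ports(1))  # (5901, 5801) -- the ports the tunnels above forward
print(vnc_ports(2))  # (5902, 5802) -- would need different tunnel settings
```

If vncserver reports a display other than :1, either kill the stale session holding :1 or adjust the tunnel's source and destination ports to match.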
If you're running putty on a machine with a VNC viewer installed, launch the VNC viewer and connect to localhost:5901. You should then be able to enter your password and get a VNC display.
Otherwise, if all you have is a web browser, launch the web browser and connect to http://localhost:5801. (Note: 5801 for http, 5901 for VNC viewers.) The VNC server will feed your browser a VNC viewer in the form of a Java applet. Wait a moment for Java to load, and then you should be able to log in and run.
One caveat: So far, I've been testing from inside our network by bouncing the connection through the router. I haven't had time yet to retreat to the local firewalled coffee shop for a test from there. I'll post a follow-up if further testing requires any configuration changes.
Saturday, September 08, 2007
factfinding excursion to Anza-Borrego for Sunrise Powerlink
The Sunrise Powerlink is a controversial project to send a 500 kilovolt transmission line to San Diego. Part of the line's proposed route would go through the middle of the Anza-Borrego Desert State Park. Because the proposal's so controversial, it's hard to get solid information amidst the chaff. So Coppertop and I decided we'd figure out the power line's proposed route, then head over to the desert to see where it goes for ourselves. And take pictures along the way.
It's the off season for the park right now, and with good reason: temps were in the 100F range, and the desert isn't especially exciting now what with the lack of rain and high heat. Apparently wildflower season is February. Also, the rangers recommended against hiking due to the heat, and against driving on the dirt roads due to our lack of four wheel drive, their surplus of ruts and rocks, and the unlikelihood of anyone else being on those roads to give us a lift if we broke down. So we drove a number of highways that intercept the proposed paths and took pictures along them.
OK, hopefully this'll work correctly: if you want to follow along, you can find a Google Map here.
California Highway 78 and County Highway S3
You can get a real sense of the vastness of the desert landscape here.
(map)
Along Highway 78 (33° 8'35.98"N, 116°16'28.22"W)
There is an existing transmission line that goes through the park, along with some distribution lines, but they're much lower voltage, strung on wood poles rather than the steel towers necessary to carry a 500 kV line. You can see some of the poles to the left of the road in this shot.
The park, at least in this area, seems to become flatter as you move east.
(map)
Mid-way to Highway S2
You can see the foothills begin to build as you move west. It was around this point that it started becoming hard to imagine steel transmission line towers interrupting that sky view.
(map)
Highway 78 and County Highway S2
Here's what you see when you get to Highway S2.
(map)
County Highway S2 approaching CA-79
We're now moving north, approaching CA-79.
Mountains and big sky in the background, existing power line in the foreground.
(map)
County Highway S2 at CA-79
Finally, we've reached CA-79. This is one of the two existing substations we saw, and the distribution lines converging on it.
(map)
Sunday, September 02, 2007
report from the front lines: Operation Japan Trade proceeds apace
To: Directorate, Operation Japan Trade
From: False Data
Re: Report from Front Line Operations
As I compose this message on a Toshiba computer while reading about Toyota Motor Corporation's successes in automobile sales and the increasing United States trade deficit, a cursory inspection would seem to show a retreating trade position. However, I am pleased to report significant advances in the Grand Plan of Operation Japan Trade.
The integration of Starbucks into the Japanese economy is nearly complete, and integration into the nation's culture is making significant strides.
Furthermore, Eddie Bauer now has a strong presence.
However, the Directorate will be justly proud to learn of the success of its most audacious move: please note the popularity of the recently-introduced Krispy Kreme:
With strides such as this, it is only a matter of time before the Directorate can declare Operation Japan Trade a resounding success.
Please Enjoy Japanese Public Toilet
The meaning of the word "enjoy" gets stretched a bit by tour guides. "Thanks to hydroelectric power, we enjoy electricity in our homes." "Please enjoy this promotional video." However, the architecture of some of the public restrooms is surprisingly enjoyable.
Here's one in Tokyo shaped like a fountain:
I think I may have posted it earlier, but it's the one that got us looking at others, and started a running joke of "collecting" them. Also in Tokyo:
In the mountains, somewhere around Nikko:
and Kegon Waterfall:
In Takayama:
Finally, this one's only semi-public. It's actually an outhouse attached to the farmhouse I wrote about earlier:
Except for the outhouse, which has a plank and a hole, inside a public toilet you'll find a commode that's either the more familiar western-style sit-upon variety or the Japanese style, which is set into the floor and which you squat over. Sorry, I don't have a picture of the commode itself.
Soap for hand washing is fairly rare, as are hand-drying facilities, which, when present, are invariably electric hand driers rather than paper towels. (They have a high-pressure hand drier here which is considerably more effective than the "press button, wipe hands under warm air" variety common in the U.S.)
Kyoto
Ah, Kyoto, my other favorite city in Japan. A thousand years of history in one glorious jumble. It's a shame we had only a day to tour the city--it's like trying to tour Rome in a day. I would much rather have had two weeks and a small library of Japanese history books to properly appreciate the city.
Only an hour to visit Nijo Castle! They don't allow photos (or sketching) inside, but I could get pictures of parts of the outside of the castle.
And of the surrounding garden:
The castle features the famous "nightingale floors," floors purposely made to chirp as you walk over them to deter assassins and spies. For the mechanically inclined, here's a close-up of the mechanism that does the chirping when there's pressure on the floorboard above:
Then it was off to the shrine and temple grab bag. Kitano Tenmangu:
Kinkakuji, the Golden Pavilion:
(Yes, that's gold leaf.)
Heianjingu:
and the garden around it:
Sanjusangen-do, filled with a thousand and one golden statues of the Buddhist deity Juichimen-senjusengen Kanzeon:
too bad they don't allow pictures inside.
And finally Kiyomizudera Temple:
where we stopped for a much-needed shaved ice.
Saturday, September 01, 2007
parking in Japan
Travel in Japan has given me a whole new appreciation for parking. In this country, you may not purchase an automobile unless you can present a certificate showing you have a place to park it, so you see some pretty amazing parking jobs, and some rather tiny cars. (A Toyota Prius is a large car here.)
For instance, consider this parallel parking job. That's a foot deep trench between the rear wheels.
This structure is a vertical parking lot.
It holds one or two cars per floor. There's a turntable at the bottom to rotate the car to the proper position and an elevator to lift it to the floor with the parking place.
They typically rent out spaces by the month, about 20,000 yen per month (around $200) in Kanazawa. More in Tokyo.
Or how about this arrangement to let you put four cars in two spaces?
As I mentioned, the cars are also smaller. Some of them extremely small. There are two classes of general passenger cars: the normal-sized ones and the "lightweight cars," which have an engine no larger than 660 cubic centimeters. You can recognize the lightweight ones by their yellow license plates. A Honda Fit, which is one of the smaller cars available in the U.S., is too big for the "lightweight" class. Here's a lightweight van next to a tour bus for comparison:
This is what one looks like close-up: very small wheelbase and tall seating position (so you can see over the other traffic).
They're not much bigger than a motorcycle.
unexpected interdependencies, or why a mobile phone makes a poor alarm clock
When I travel, I often don't carry a travel alarm. Instead, I generally rely on my mobile phone's built-in alarm clock feature. During this trip, I discovered that's not such a good idea because of unexpected interdependencies in technology.
You see, my phone sets its current time from the cellular network, which is handy because it means the phone's clock is usually very accurate. But the phone is not an international model. So when I turn on the phone here in Japan, it gets stuck looking for a cellular network, which it's never going to find because the network is on a different set of frequencies. So there's no way to get to the alarm clock function because the phone's stalled on the network search. And there's no way to manually set the time. Rather than degrading gracefully to let me use the other functionality, the phone's basically a very sophisticated electronic paperweight.
My old Palm Pilot has an alarm clock feature as well, but I've been hesitant to use it because, to make sure I wake up at the right time, I'd need to change its clock to local time. But Japan's on the other side of the international date line from the U.S. I have no idea what happens if I advance the clock, make an entry, and then upon returning to the U.S. reset the clock to what the Palm Pilot will believe is the previous day.
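As an aside for the curious: the date line isn't magic, it's just a large UTC offset, which is easy to see with a quick sketch in modern Python (using the standard zoneinfo module, obviously nothing the Palm Pilot itself could run). Morning in Japan is still the previous afternoon back home:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

# Japan is UTC+9; California is UTC-7 in September (PDT).
# 8 AM on Sept 2 in Tokyo is still the afternoon of Sept 1 back home.
tokyo = datetime(2007, 9, 2, 8, 0, tzinfo=ZoneInfo("Asia/Tokyo"))
home = tokyo.astimezone(ZoneInfo("America/Los_Angeles"))
print(home)  # 2007-09-01 16:00:00-07:00
```

So the Palm Pilot's confusion is real: a naive local clock set forward to Japan time and then back again really does appear to jump to "the previous day."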
So far, we've been relying on wake-up calls, which at the hotels where we've been staying are an automated system. They've been working out pretty well, but that's a heck of a lot of technology to rely upon, and the tour guides are sticklers for punctuality.
Maybe next time I'll just bring an old-fashioned mechanical wind-up alarm clock.
Kanazawa
Kanazawa was interesting but not the knockout that Takayama City was. We visited Kenroku Garden, one of the most famous Japanese-style gardens in Japan.
Notice how this island is shaped like a turtle (with a heron perched on the turtle's head . . .):
Originally, this was the private garden of the daimyo (feudal lord) who controlled this area. It costs about five million dollars a year to maintain it.
Next was a pottery studio that has been making kutani pottery for five generations.
We visited a former geisha house, now a museum, and wound up the tour with a shop and manufacturing facility that makes and uses gold leaf.
. . . where they treated us to tea with (real) gold flakes in it.
This place is known for, among other things, having a million-dollar bathroom: the ladies' side is picked out in gold leaf, while the men's is in platinum leaf.