Tuesday, February 27, 2007

a nonrenewable resource

Dear Digital Archaeologist,

I spoke with one of your predecessors at dinner today. She's just finished her undergraduate degree and is planning to become a grad student. She's not a digital archaeologist, of course--she pieces together the past by physically digging up shells, potsherds, or other physical debris of past societies. Right now, she works for a company that does archaeology for profit. As she explained it, we have laws that require anyone building something in an area of archaeological significance to, essentially, have an archaeologist available to tell them to stop their construction work if they find something important--say, shells or potsherds--long enough for the archaeologist to recover it. The laws have created the whole field of Cultural Resource Management, along with an opportunity for companies to hire archaeologists to be available at construction sites. In California, there are a lot of sites of archaeological significance--people have lived here for thousands of years--so these companies get a fair bit of work. I guess whether you think what they do is a net benefit to the economy or simply an extra cost on whoever's doing the building depends on how much you value the archaeology they accomplish.

One of the things she said that stayed with me is that archaeological finds are a nonrenewable resource. She meant, of course, that once a bulldozer (a piece of earth moving equipment used in construction) destroys the site to put a building there, the things it destroys are gone forever, as is the chance to learn from them. That got me thinking about your job. Of course, you might need to rush in before whatever your modern equivalent of a bulldozer might be destroys files. In fact, you might need to rush in before knowledge of how to read a particular file format vanishes. But I wonder if archaeology is a nonrenewable resource in a different sense.

You see, once you make a discovery--you find a piece of a puzzle--you need to assign meaning to it. You will need to interpret that new discovery in light of the context that surrounds you. Eventually, you will need to tell your colleagues about your interpretation. That moment, the moment where you decide what the new discovery means, is itself a nonrenewable resource. Once you have assigned meaning and told the world, that act changes the world, the surrounding context. People may accept your interpretation, or they may assign different meaning than the one you chose, but even if they disagree with you they will also have altered that context in the process of disagreeing. The ripples from these decisions spread and interact.

Today, a man named James Cameron, a film maker, is trying to make one of those assignments of meaning. He claims to have found a container that held the skeletal remains of Jesus, a person of great religious significance who, according to religious texts, should not have had any skeletal remains in the first place. His claim is causing a great deal of controversy. No matter what people ultimately decide the container means, the repercussions will continue for a very long time, maybe forever.

Monday, February 26, 2007

extremism: a follow-up

This is just a quick follow-up to last night's post on extremism. There is, of course, more going on than just intolerance run rampant. Jared Diamond's books Collapse and Guns, Germs, and Steel suggest that one of the things we're seeing is resource constraints--witness the effects of intermittent electricity, extremely high fuel and food prices, and rampant unemployment. Essentially, he makes a pretty good case that, when a human population's resource base shrinks below the level that will sustain that population, one of the things that tends to happen is enough warfare to reduce the population to a sustainable level. Unfortunately, I've lent out my copies of both books, so I don't have them here to go into more depth. The scary implication, though, is that if Diamond's right, there may be no good reason to think it couldn't happen here.

Sunday, February 25, 2007


I recently saw "The Bitter Truth" text. It's that list of reasons a good Muslim can't be a good American. I couldn't find the original source of the text, but you can read essentially the whole thing here. I'm not going to reproduce it in this post. You can also read a pretty good point-by-point response here.

I get concerned when I see text that describes a people as fundamentally different and incorrigible. Too many wrongs have followed painting with that kind of broad brush. What's more, this description of Muslims simply doesn't fit my personal experience. The Muslims I've known personally have been as varied as the Christians, the Jews, the agnostics, the atheists, and those following pretty much every other belief system. In fact, none of the Muslims I've known has been as intolerant as, say, the followers of Fred Phelps, but that's probably just because I'm more likely to run into a Christian than a Muslim in this country, so I also have a better chance of bumping into an extremist Christian than an extremist Muslim.

And it seems to me, when you get right down to it, the real problem is extremism. It's people not willing to tolerate other people who don't look like them, or talk like them, or worship like them, or think like them. It's xenophobia. And it's bigotry. It's been with us for longer than recorded history. And neither a Muslim suicide bomber in Iraq nor a white supremacist American with a Ryder truck in Oklahoma City has a monopoly on it.

Friday, February 23, 2007

you know it's the 21st Century when . . .

You know it's the 21st Century when the neighbor kids are playing Red Light / Green Light . . . while riding around in circles in a two seat electric toy car.

Sure wish I'd had a two-seat electric car when I was playing that game.

OK, back to work. The air is thick with "Red light!" "Green light!"

Thursday, February 22, 2007

digital archaeology

Gordon Bell at Microsoft has been creating some buzz for the MyLifeBits project, an effort to digitize essentially everything in his life. In a Scientific American article, he points out that you can buy a terabyte of storage today for $600, and within a decade you should have that same amount of storage in your cell phone (and four terabytes in a desktop machine). Multiply that kind of storage capacity by the number of machines on the Internet, and you have to wonder what can start happening with the volume of information that expands to fill that kind of space.

Will there be digital archaeologists, people who sift through our society's discarded files and broken web links, carefully brushing away revisions and piecing together antiquated file formats? Will a team of grad students working on their PhDs a thousand, or two thousand, years from now be digging through old blog entries, still archived online in some remote descendant of the Wayback Machine or a copy of Google's backup tapes?

If so, welcome Digital Archaeologist. I'm honored you've paused to read this digital record. I might send you a letter from time to time, just to let you know how things are going here in your distant past. If I can help your research, so much the better. It's a strange world we live in now, one that's rapidly changing in directions that are not entirely clear. Perhaps by trying to explain it to you, I'll understand it better myself.

Tuesday, February 20, 2007

blogging and communities

I recently posted a comment on Feld Thoughts. (The blog's moderated. I don't know if he'll approve the comment, so you might not see it there if you follow the link.) The comment contained a few thoughts about blogs and online communities. I wanted to expand those thoughts a bit, and what better place to think than out loud, on the fly, and in front of the entire Internet? :-)

The gist of the comment was this: blogs have some advantages for forming communities, but they're not ideal. On the plus side, they provide content that acts as fodder for conversations. As word about the blog's theme spreads, people from around the Internet may gather there, so you have a natural nucleus for a community of people with a common set of interests.

They also have some drawbacks. Communities form despite the drawbacks, but if your goal is to build online communities, some design changes may be in order.
  1. Blogs create conversations with unequal power. The blogger, the person who owns the blog, has the blog's full typographical facilities available, the ability to post pictures, and, most importantly, the ability to moderate comments. People writing in the comment space have a much more limited set of abilities. Now, sometimes that's fine. In fact, sometimes having a good moderator can improve the conversation. But more often it seems to impede the conversation and flow of ideas. Of course, if the comments section supports hyperlinks, the comment writer can always link off-site, but that leads to problem #2.
  2. Conversations become fragmented over space. Suppose you have ten friends. Each friend has his or her own blog. Now you have the potential for conversations to sprawl through the comments sections of ten different blogs (times the number of active posts--see #3.) If each blog supports an RSS feed for the comments, you will at least have a notification that something new is there, but it still takes effort to piece it all together. Also, if a comment writer links off-site, people may follow that link, scattering them further.
  3. Conversations become fragmented over time. Conversation threads on an e-mail list tend to rise and fall depending on people's interest in following them. With a blog, though, conversations tend to shift with each new blog post. So when the blogger puts up a new post, that's where most of the energy goes, and people who want to continue a previous conversation are likely to be chastised for posting off topic. It's true that there's really nothing to stop you from continuing to comment on an old post, but it'll get very little attention, especially as the post scrolls off the RSS feed.
So, how do you deal with these problems? #3 seems to be the easiest to tackle, at least in isolation. One way might be to have the comments section be a sort of continuous, threaded forum that wheels along on its own rather than seeing a sharp cut-off every time there's a new post. That's as opposed to today's design where each new post essentially creates its own new forum space. If you wanted to keep a connection between the post and the forum, it might work to have each new post automatically open a new thread in the forum.

Problems #1 and #2 seem interrelated. If you could solve the problem of keeping a coherent conversation across web pages, then you could give each participant his or her own web page, and the distinction between comment writer and blog author might disappear. It would require a few things.
  1. Instead of reading blogs, you'd read authors: you would add authors to your client software instead of adding web sites to it. A community might consist of a centrally-maintained list of authors, much as we have central e-mail lists today, that you could add or drop as you choose.
  2. Your reading software would have the job of locating your authors' comments and posts from around the Internet and collating them into some sort of meaningful conversation structure.
So have I just re-invented the publicly-archived e-mail discussion list with a killfile? Sort of. One difference is that it's a pull design rather than a push design--instead of a server sending e-mail to me, my software goes and finds the entries I want. That means I have complete control over who I do and don't read, decreasing the chance of my e-mail address drifting into a bulk spam list. Another difference is that a properly designed system would let people comment on web sites that don't have their own comment feature.
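As a rough sketch of what item #2 might look like--and this is purely illustrative, with made-up author names and a simplified notion of a "feed" standing in for whatever the client software actually fetched--the collating step could be as simple as a timestamp merge across per-author entry lists:

```python
from dataclasses import dataclass

@dataclass
class Entry:
    author: str
    timestamp: int  # seconds since epoch, kept simple for the sketch
    text: str

def collate(feeds):
    """Merge per-author feeds into one chronological conversation.

    `feeds` maps an author's name to the list of entries the reader
    pulled from wherever that author publishes. The reader subscribes
    to authors, not to sites, so the merge is the reader's job.
    """
    merged = [entry for entries in feeds.values() for entry in entries]
    return sorted(merged, key=lambda entry: entry.timestamp)

# Hypothetical data from two subscribed authors:
feeds = {
    "alice": [Entry("alice", 100, "original post"),
              Entry("alice", 300, "reply to bob")],
    "bob":   [Entry("bob", 200, "comment on alice's post")],
}
conversation = collate(feeds)
```

A real version would need threading smarter than a flat timestamp sort, but even this much shows the pull-model shape: the client, not any one blog's server, owns the conversation view.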

There's another problem I haven't addressed, yet: permanence. Stuff on the Internet tends to be indexed, archived, and searchable for a very long time after it's written. Sometimes that's great. Other times, maybe I want to kick ideas around without their being tied back to me because I'm just exploring and haven't really formulated my thoughts. For instance, consider someone just trying to understand what communism was all about in 1933, discussing its pros and cons. Those words could easily come back to haunt that person years later before the House Un-American Activities Committee. One way to deal with it would be to use pseudonyms, so an author could abandon a pseudonym every so often. There might be a better option, but I'm not yet sure what it'd be, since once you turn something into bits it's very hard to make sure those bits later vanish.

Saturday, February 17, 2007

few things tick me off like spam

I get a lot of spam. Today, since 10:00 this morning, I've gotten 32 spam messages and four legitimate ones--and that's just counting what slipped past Spamassassin. So it was especially interesting to find a spam message advertising spamming services ("Email Marketing- Easy and affordable"). It's interesting for a couple reasons. First, it just plain cheeses me off that spammers are propagating themselves. Second, unlike the more common stock pump-and-dump scheme, if you're going to advertise spamming you have to include some way to contact you. So I dug into it a bit.

The message came from a machine called promailer.prserv.net, which has the same IP address as www.attbusiness.net. Both domain names are registered to AT&T.

The body of the message contains some evil spammer tricks. Basically, it's made up of a set of links (called imagemap links) to images on remote web pages. It's designed in such a way that, if you open the e-mail and your mail software doesn't have the proper protection in place, the mail software will connect to the remote server to display the images. In the process of connecting, it will send information that can uniquely identify that message. The net upshot is that the spammer can tell it's not only a valid e-mail address, but also that someone's reading the mail there.
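For the curious, here's roughly how that kind of tracking works. This is a sketch of the general technique, not the actual spammer's code: each copy of the message embeds an image URL carrying a token unique to that recipient, so when a mail client fetches the image, the server learns exactly which address opened the mail. (The server name below is deliberately invalid.)

```python
import hashlib

def tracking_url(base, recipient):
    """Build a per-recipient image URL (a so-called 'web bug').

    The token is derived from the recipient address; when a mail
    client fetches this image, the web server's access log ties the
    request back to that one recipient. `base` stands in for a
    hypothetical spammer-controlled server.
    """
    token = hashlib.sha256(recipient.encode()).hexdigest()[:16]
    return f"{base}/img/{token}.gif"

url = tracking_url("http://tracker.invalid", "victim@example.com")
```

This is why mail software that refuses to load remote images by default protects you: no fetch, no log entry, no confirmation that your address is live.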

OK, so why is an AT&T server advertising spamming services? And why is it fishing for valid e-mail addresses while doing it? Both good questions that I put to AT&T.

I first tried sending to abuse@attbusiness.net. Turns out that address doesn't exist. Now I'm starting to get annoyed enough to spend a bit more time on it. Some digging around AT&T's web site turned up postmaster@attglobal.net. I sent a message there. A few minutes later, I got a reply summarily closing my trouble ticket:
This is the report of the incident you should receive.  Sev:  4 - Warning
For Account: aotsmail Incident Number: xxxxxxxxxxxxxxx Status: Closed
Thank you for taking the time to inform us of this situation.
However, we cannot take any further action until you provide us with the actual connection logs. These connection logs will include the complete IP address, date, time and time zone associated with the abusive action. Only with this information can we identify the responsible individual.


To find more information on filtering SPAM, please visit
and type the word filter into the search engine.
If you feel we handled this incident improperly or require
assistance providing headers, please call 800-821-4612.
Wrong answer. One of the few things that'll piss me off more than spam is a company that doesn't care that I've taken the time to investigate and report to them that they, or someone there, is spamming. I next called their 800 number, where they told me to send the message's headers to their Remote Access address, RM-RemoteAccess@ems.att.com, to be appended to the trouble ticket.

Now we'll see where things go from here. In the meantime, I will either calm down and get back to the work I need to be doing, or I'll start going through the CAN-SPAM act, 15 U.S.C. §§ 7701-7713, to see whether this spam message matches up with federal law.

Friday, February 16, 2007

beware the peanut butter

Some batches of Peter Pan peanut butter are packing salmonella. Desperate times call for desperate measures, and the Houston Independent School District has begun confiscating home-made peanut butter sandwiches and replacing them with school-made ones:
The Houston Independent School District, the largest in the state, confiscated all peanut butter sandwiches brought by students after a nationwide recall of certain brands thought to be contaminated with salmonella, a bacteria that causes illness.

The district replaced the sandwiches with cafeteria-made ones spread with an untainted brand. Other area school districts, including the state's third largest, Cypress-Fairbanks, did not follow suit.

The headlines from a Google News search reveal the tenor of the times:

Let's be careful out there.

Wednesday, February 14, 2007

bitfrost and you (and your computer, too)

The One Laptop Per Child project has a very interesting security model. Wait! Don't run away just yet! I'm going to explain the nifty bits in plain English, and before I do that I'm going to tell you why it's relevant to your life.

So, why should you care about this thing? No viruses, no spyware and no need for either antivirus software or spyware scanners. And think about this: a lot of spam today gets spread by viruses and worms that turn computers into spamming machines (called bot nets). So if this thing works as advertised, and the design spreads to adult computers, we could see a lot less spam. Not only that, but the system is designed to work with as few software updates as possible, so you wouldn't need the equivalent of Windows Update rebooting your machine on you every few weeks.[1]

The security model is called Bitfrost (a play on the Bifrost Bridge of Norse mythology). It's designed to be so simple that a five-year-old child, who can't read, can use it. That means it's not popping up those annoying dialog boxes all the time--you know, the ones where you just click "yes" and keep going. Because they're planning on having tens of millions of these computers in the world, it also has to be very tough to crack. That combination of requirements is what's driving the innovative design.

The basic idea behind it is to run every program in the computer inside its own separate little security box. That's very different from most of today's computers, where each user is in a security box, but all the user's programs are pretty much free to talk to each other, and any of the user's programs can access any of the user's files. The common design encourages virus attacks because a virus that can take over just one of the user's programs can access all the user's data, try to corrupt the user's other programs, and eventually work its way out to the Internet. With Bitfrost, even if the virus takes over one user program, it will have a very hard time spreading to another one, getting to any files except those the program is normally allowed to use, or using any resources except the ones the program's normally allowed to use (including talking to the Internet.)[2]
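To make the per-program idea concrete, here's a toy model. This is not the actual Bitfrost implementation, and the program and resource names are invented; the point is just that each program carries its own allow-list, so compromising one program doesn't grant you another program's permissions or the user's whole world:

```python
class Sandbox:
    """Toy model of per-program isolation in the Bitfrost spirit.

    Each program is created with a fixed set of resources it may
    touch; everything else is denied, even for programs owned by
    the same user on the same machine.
    """
    def __init__(self, program, allowed):
        self.program = program
        self.allowed = frozenset(allowed)

    def access(self, resource):
        # No prompting the user, no "click yes to continue":
        # the answer is just yes or no, decided at install time.
        return resource in self.allowed

# Hypothetical programs with hypothetical resource names:
browser = Sandbox("browser", {"network", "browser_files"})
editor  = Sandbox("editor",  {"document_files"})
```

Under the common design, a virus in the editor could read the browser's saved passwords and phone home; here, `editor.access("network")` and `editor.access("browser_files")` both come back false, so it can do neither.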

They've built in some other features, too, that are interesting, though I'm still deciding how well I think they'll work. One is an anti-theft system. If someone takes your laptop, you report that it's stolen. The next time the laptop checks into the network, it gets disabled. If the thief never connects it to the network, then it'll still automatically disable itself after a few weeks. This system strikes me as something that could misfire if someone screws up badly and could also be misused, but those risks might be worth the social benefit of reducing laptop theft.

[1] Because OLPC is Linux-based, reboots will be very rare anyway, but this means even fewer security updates will be necessary, and they'll be less critical.

[2] I'm still thinking through one situation. If the web browser or e-mail program gets infected, those programs already have access to the Internet, so they might be prime targets for incoming viruses.

Monday, February 12, 2007

sign in class

"Death penalty presentation. Free food!"

Friday, February 09, 2007

more on the CO2 challenge

So I was just playing with the numbers to try to get a handle on the size of the challenge of sequestering a billion metric tonnes of carbon dioxide per year. My first thought was "hey, how about building materials?" After all, we build a lot of buildings around the world. Could we store carbon dioxide in our walls, essentially by growing a fast-growing plant (like, say, bamboo) and using it as a construction material? In that case, all the technology's already there--all we'd need to solve is the (probably much harder) problem of changing building codes.

Well, maybe not. Or at least, maybe not enough. According to Bamboo Living, some bamboo species sequester up to 12 tons of carbon dioxide per hectare per year. I dropped that number down to 5 because I figured we wouldn't be lucky enough to have the species all the builders want to use absorb that much. Then I pulled in some numbers from the CIA World Factbook to see how much land would be available for this project. Here's what I wound up with:
annual tons CO2 per hectare that bamboo absorbs: 5
convert to annual metric tonnes CO2/hectare: 4.55
target metric tonnes CO2 per year: 1,000,000,000 tonnes/yr
hectares necessary: 220,000,000 hectares
square km per hectare: 0.01
square km necessary: 2,200,000 km^2
So we need 2,200,000 square kilometers of bamboo. Where shall we put it? How about the U.S., since it's the world's largest producer of carbon dioxide?
total land area of U.S.: 9,161,923 km^2
total arable land in U.S.: 1,650,062 km^2
Oops, looks like it won't fit. Just for comparison, let's look at the total arable land in the world:
total land area of world: 148,940,000 km^2
total arable land: 19,823,914 km^2
total permanent cropland: 7,015,074 km^2
So we'd be looking at a sizable chunk of the world's cropland.

That's a lot of carbon dioxide.
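For anyone who wants to check my arithmetic, here's the same back-of-the-envelope calculation as a few lines of Python, using the figures above (the short-ton-to-tonne conversion factor is approximate, and the 5 tons/hectare uptake is my deliberately pessimistic assumption):

```python
TONS_TO_TONNES = 0.907  # short tons -> metric tonnes, approximate

tons_co2_per_hectare = 5                # assumed bamboo uptake, tons/ha/yr
tonnes_per_hectare = tons_co2_per_hectare * TONS_TO_TONNES  # ~4.5
target_tonnes = 1_000_000_000           # 1 billion tonnes CO2 per year

hectares = target_tonnes / tonnes_per_hectare
km2 = hectares * 0.01                   # 100 hectares per square km

us_arable_km2 = 1_650_062               # CIA World Factbook figure above
print(round(km2))                       # roughly 2.2 million km^2
print(km2 > us_arable_km2)              # exceeds all U.S. arable land
```

Changing the assumed uptake just rescales the answer: even at the optimistic 12 tons/hectare, you'd still need on the order of a million square kilometers.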

Sir Richard Branson, dude, you rock!

Sir Richard Branson is now somewhere near the top of the list of people I'd like to meet. He runs an airline that, by all reports, actually gives its passengers enough leg room; he's started a space tourism company; and now he's offering a $25M prize for technology that absorbs carbon from the atmosphere.

Now, as for the prize, they're looking at removing 1 billion metric tonnes of carbon dioxide from the atmosphere per year. Hmm. Need to run some back-of-the-envelope calculations . . .

avoid violent braking next 125 miles

I just got the car back from the shop, where they put in new rear brake pads. On the stick they put this little tag: "Attention: Avoid violent braking next 125 miles." Huh? So now I'm wondering.

(1) If it's a choice between braking violently or colliding violently, I think braking violently has to win.

(2) What happens if I do brake violently? I mean, the whole reason I took the car in is to make sure the brakes are going to stop me when I really need them to. Will the back end of the car fall off? Or will the brakes just go on strike?

(3) Maybe that tag is telling me to avoid gratuitous violent braking--you know, violent braking for fun. If so, that suggests the rather scary possibility that there must be enough people in this world who brake violently just for the Hell of it that it makes sense to put little tags on the gear shift telling them to lay off for the next 125 miles.

Thursday, February 08, 2007

when events come to visit

One of the effects of living in a fairly dense neighborhood is that there's frequently stuff going on. One day I woke up to find the neighbor's car, which was parked on the street in front of our house, had been banged up--it looked like someone had smashed in the windows with a baseball bat. Another day, the cops were parked in front, twice that day, talking to one neighbor or another. Now, an ambulance and a full-sized paramedics fire truck are parked in front of the apartment building across the street. They walked in and a bunch of people inside walked out. I hope whoever's in there is OK.

With all the web cams and surveillance cameras popping up everywhere, I've often threatened--not seriously, at least so far--to put our own web cam in the window and tie it back to the home server so it remembers the last 24 hours of whatever it sees. Setting it up would be more trouble than I'd be willing to take right now, but it's an appealing idea (especially after the neighbors' car got smashed up.)

OK, time to get back to work. The folks across the street are pros and know what they're doing, and they don't need me in the way (or staring out the window, for that matter.)

Sunday, February 04, 2007

my Japanese written blog site

Just so you know, I started my Japanese-language blog, Gomihead, a month ago. I have been posting, on and off, my opinions about issues that catch my eye. I am not sure how many people can or will read them, but please stop by if you are interested.

Saturday, February 03, 2007

a conversation earlier today

Coppertop: I can't wait till you're done with this school thing so you can help me with grocery shopping.

Me: Believe me, it would be tremendously easy to distract me from this stuff (which has to get done) with grocery shopping. It's that much more exciting.

Coppertop (laughing): That's pathetic.

Me: Yeah, that thought's crossed my mind.

Friday, February 02, 2007

reusable highlighter, part 3

I might not be too bright, but at least I don't give up easy. Behold the implements of destruction for the next assault on the problem of the reusable highlighter. In this episode, our hero attempts to reload a Bic Brite Liner liquid-filled highlighter using Noodler's Firefly ink. Here's a picture of the Firefly ink in natural light. Since I don't have an eyedropper, I poured some into a Nalgene bottle with a squeeze top. Be prepared to have your fingers highlighted during this process.

Here's what it looks like with the flash turned on. Notice the unearthly glow. Ooh! Aaah!

Right, let's get down to business. Do not take the back end off the highlighter (the bit the pliers are pointing to.) The barrel is sealed inside, so if you try to pour ink in from that end, you'll just get ink on yourself. Trust me on this one.

Instead, take the front end off. Not the felt bit--which, incidentally, pops out easily enough but doesn't give you enough of a hole through which to pour ink--but the housing around the felt bit. Use your pliers to grasp the yellow plastic just behind the felt. If you turn it a little, it will rotate inside the barrel. So rotate it back and forth and pull on it. Eventually, you can work it out of the barrel.
Now you can pour ink into the barrel. Be careful about over-filling it. You don't want it squirting out as you stick the tip back in. After you've filled the barrel (over the sink, just in case you spill--not that I ever spill(ed)), you can re-insert the tip of the highlighter. At that point, it should work good as new. Or almost new, anyway.

So, is it worth it? Honestly, probably not. It's kind of a pain. I may keep doing it just because throwing away the empty highlighter after only a week or two of use offends my rather peculiar sense of aesthetics, but for most people it'd be more trouble than it's worth. On the other hand, if someone would make a nice felt nib for a cheap fountain pen with a converter, that'd simplify the whole process quite a bit.

IPCC report on global warming

The Intergovernmental Panel on Climate Change has released a report on global warming. So far, I've been able to find only the Summary for Policymakers (pdf) available for download. Here are some things that jumped out at me during a quick read-through.
  • There's a very high confidence (90%+ probability) that the net effect of human activities has been one of warming, with a net forcing effect of +1.6 W/m^2.
  • The combined greenhouse gas effect is likely around +2.3 W/m^2 (with a 90% confidence interval of +2.07 to +2.53).
  • Offsetting the warming is a global dimming effect due to aerosols and increased cloud cover of -1.2 W/m^2.
  • The balance comes from other warming and cooling effects. The warming due to increased solar radiation is +0.12 W/m^2, about half the decrease of -0.2 W/m^2 due to changes in surface albedo (which I'm guessing means effects like desertification--don't know what happens to that number as ice melts.)
  • Measurements show warming trends, increase in water vapor present in the atmosphere, and warming oceans (which have been absorbing 80% of the heat added to the system, causing the water to expand).
  • Sea level rise from 1993-2003 is significantly higher than the rise from 1961-1993, driven mostly by thermal expansion and the Greenland and Antarctic ice sheets.
  • Observed shifts in weather patterns are consistent with the temperature changes.
  • From a paleoclimate point of view, "[t]he observed widespread warming of the atmosphere and ocean, together with ice mass loss, support the conclusion that it is extremely unlikely that global climate change of the past fifty years can be explained without external forcing, and very likely that it is not due to natural forces alone." (emphasis omitted)
  • Projected temperature increases for the period from 1990-2005 were between 0.15 and 0.3 deg. C per decade. The observed increase for that period has been 0.2 deg. C per decade, strengthening confidence in the projections.
  • Warming tends to reduce the CO2 uptake by the land and oceans, leaving more CO2 in the atmosphere.
  • Even at the current CO2 concentration, by the end of the century the best estimate of sea level rise is +0.3-0.9 meters. Other, greater CO2 scenarios show a rise of up to about 6 meters.
  • Due to the time it takes natural processes to remove carbon from the atmosphere, both past and future carbon emissions will continue to contribute to warming for a millennium.

Thursday, February 01, 2007

software liability

Periodically, Bruce Schneier proposes establishing liability for software. I've responded with some concerns--I wasn't sure a liability rule was necessarily the wrong approach, but there are a lot of alternate ways to deal with the issue, and liability might be the wrong hammer to turn this screw. But it's hard to discuss an abstraction, so I took a few minutes to try to write a statute that would impose liability for consequential damages on software manufacturers, just to see what sorts of issues I ran into. After going through the process, I think either it'd take a better statute writer than me to do the job well, or a liability rule may be overkill at this point in the industry's development and risks some fairly significant damage to the software industry, in which case some of the other possibilities might be better choices.

Anyway, here's a blow-by-blow. First off, let's try to exclude open source projects.
(a) Except as provided in subparts (b) through (d) of this section, any entity that manufactures computer software for sale shall be liable for consequential damages caused by defects in that software, where such defects arise through the negligence of the manufacturer, its employees, or agents.
"For sale" excludes open source projects and people who give their software away for free, but it has some unintended consequences, too. Shareware authors will face liability under this statute, and adware (free software supported by advertising) probably faces no liability. Offering web-based applications, like the stuff all the rage at Google, Yahoo, and Hotmail right now, probably also doesn't constitute a "sale." It's not clear what happens if you give away the software for free but make up your development costs on support contracts.

In fact, there's another, fatal problem in this language: very little software today is offered "for sale." Almost without exception, it's offered under a license to use it and specifically not to sell it. But if we write "for sale or license," the statute suddenly will apply to the GPL, LGPL, BSD, Artistic License, and anything else that's not public domain. In the end, it might be necessary to re-cast the statute in terms of generating revenue rather than selling software.

The plaintiff needs to show negligence, which imports the industry's ordinary standard of care into the statute. Notice that there's no limit on liability. That's a potential problem because it could either chill development or move manufacturers to jurisdictions that don't have liability rules. There's a lot of literature on product liability that may have some helpful suggestions for these issues.

Now, we want to encourage software houses to find and fix their bugs, so let's give them a safe harbor for doing that.
(b) If, within sixty days of learning of a defect, the manufacturer corrects such defect and makes the correction freely available to purchasers of the software, this section shall not apply to damages that arise from such defect after the date on which the manufacturer learns of such defect.
So if you learn about a bug, and you fix it within 60 days, you don't face liability for problems caused after the day you learn about it. It's designed to encourage manufacturers to learn about their bugs as early as possible and to fix them within a reasonable amount of time. After all, we don't necessarily want them rushing those patches out the door. On the other hand, the statute doesn't say anything about bugs that the bug fixes introduce, and it says precious little about how they have to do the distributing. I'm having a tough time figuring out how to define a self-installing binary patch in statutory language, and freezing that definition into a law that might be around for 50-100 years is likely to be a really bad idea.
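To check my own drafting, here's a minimal sketch of the sub-part (b) safe harbor as a decision rule. All the names and dates are illustrative, not drawn from any real statute, and this is just how I read my own language:

```python
from datetime import date, timedelta

def damages_covered(damage_date, learned_date, fixed_date):
    """Is the manufacturer liable for a damage event under sub-part (b)?

    Damages arising before the manufacturer learns of the defect are
    always covered; damages after that date are excluded only if a fix
    shipped within the sixty-day window.
    """
    if damage_date <= learned_date:
        return True
    # Fix shipped within sixty days of learning: post-learning damages excluded.
    if fixed_date is not None and fixed_date <= learned_date + timedelta(days=60):
        return False
    return True

learned = date(2007, 2, 1)
print(damages_covered(date(2007, 1, 15), learned, fixed_date=date(2007, 3, 1)))  # True: arose pre-learning
print(damages_covered(date(2007, 2, 20), learned, fixed_date=date(2007, 3, 1)))  # False: fixed within the window
print(damages_covered(date(2007, 2, 20), learned, fixed_date=None))              # True: never fixed
```

Even this toy version surfaces the ambiguity: a fix that itself introduces a new defect just starts the clock over on a different bug, and the statute says nothing about it.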

Now, we don't want software houses to have to maintain support for their creaky old software forever--I mean, do you really want Microsoft to spend its time keeping Windows 3.1 on life support rather than chucking it and writing something with real security? On the other hand, we also don't want them to drop support the day before the software rolls out the door so they can avoid liability altogether. So how about a hold-down period?
(c) If the manufacturer publicly disclaims support for such software, this section shall not apply to damages arising after the Final Support Date, where such Final Support Date is determined as provided in this sub-part and sub-part (d) of this section:
(1) the Initial Sales Date shall be the date of first public sale of such software;
(2) the Final Offering Date shall be the latest of
(i) the date of the last public sale of such software,
(ii) the date on which the manufacturer publicly disclaims support for such software, or
(iii) the date one year after the Initial Sales Date;
(3) the Final Support Date shall be the later of
(i) the date one year after the Final Offering Date, or
(ii) such later date as the manufacturer may establish by agreement.
So if you sell software, you have to support it for at least two years. That might be a problem if you're a shareware writer. Individual consultants may get away without liability because there's no "public sale," though their customers can establish liability in the consulting contract. But suppose the customer puts that code into a product and then sells the product--then the consultant will be on the hook to provide support.
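The date arithmetic in sub-parts (c) and (d) is easier to sanity-check as a calculation. Here's a hypothetical sketch, approximating each one-year period as 365 days; the function and variable names are mine, not statutory terms:

```python
from datetime import date, timedelta

def final_offering_date(initial_sale, last_sale, support_disclaimed):
    """Per (c)(2): latest of the last public sale, the public disclaimer
    of support, or one year after the first public sale."""
    one_year_after_initial = initial_sale + timedelta(days=365)
    return max(last_sale, support_disclaimed, one_year_after_initial)

def final_support_date(offering_date, agreed_date=None):
    """Per (c)(3) and sub-part (d): later of one year after the Final
    Offering Date, or any later date the manufacturer agrees to."""
    statutory_minimum = offering_date + timedelta(days=365)
    if agreed_date is None:
        return statutory_minimum
    return max(statutory_minimum, agreed_date)

# Worst case for the manufacturer's exposure: software first sold
# 2007-02-01, last sold and support disclaimed that same day.
initial = date(2007, 2, 1)
fod = final_offering_date(initial, last_sale=initial, support_disclaimed=initial)
fsd = final_support_date(fod)
print(fod)  # 2008-02-01: one year after first sale
print(fsd)  # 2009-01-31: roughly two years of support, minimum
```

Running the numbers this way confirms the two-year floor, and also shows how (c)(2)(i) works against the manufacturer: bump `last_sale` forward to whenever that mom-and-pop store moves its last dusty box, and both dates slide with it.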

And there's another problem, here: (c)(2)(i) makes the Final Offering Date dependent on the date the software's last sold, so that mom-and-pop store with the dusty rack of ancient shrink-wrap code can keep the manufacturer in the support business. On the other hand, we don't want the manufacturer to say "no, really, we're dropping support" and keep pumping the product out the door and bringing in revenue from it. We also don't want the manufacturer to be able to disclaim support and then farm out the process of stamping disks to an affiliate that keeps selling the software. I haven't figured out a good answer to this problem, yet.

Anyway, let us forge onward. It sure would be nice if our manufacturer had the ability to sign separate support contracts with different people. ("OK, government entity, we'll keep supporting Windows 3.1 just for you, but it's gonna cost you because everyone else is moving to Vista.")
(d) A manufacturer may, by agreement, establish different Final Offering Dates and Final Support Dates for different users of the software, but in no event shall the manufacturer establish a Final Support Date which is earlier than the earliest date provided in subsection (c) of this section.
The "in no event" language is to keep someone from putting a 1 day Final Offering Date and Final Support Date in the EULA. Notice, though, that the "date of the last public sale" language may cause problems here, too. If you sell a copy of Win 3.1 to the government as part of a private support contract, is that a public sale? Also, the language is really clunky and could lead to differing court opinions on what the "earliest date" means.

Finally, we need to make it clear that you can't contract out of the liability. Otherwise, the software house will just put a clause in the EULA where the user agrees to waive liability.
(e) No manufacturer may waive liability under this section except as provided in subparts (b) through (d) of this section.
Hmm. This one needs some refinement for sure. Could you contract your support out to a dedicated support firm and send your liability along with it? What about to an undercapitalized support company that, if someone sues them, won't have any money to pay damages? If you do contract out support, can you get them to indemnify you for damages? If so, could you put indemnification in the EULA, so that the user suing you has to indemnify you for any damages that user might recover?

Finally, what happens if 47 states adopt this statute but Delaware, New Mexico, and, say, California don't? Notice that every EULA includes a choice of law provision ("the laws of the state of X shall govern this license"). You might expect those provisions to quickly swing over to whichever states don't impose liability.

Anyway, these are all issues to consider. As I said, I'm no expert at drafting statutory language, but the process of going through it has exposed a lot of ideas to consider with any push for software liability. These aren't just issues for the lawyers. Many of them are policy issues that determine what incentives we want to create and how we want the software industry to evolve.