Orcmid's Lair
Welcome to Orcmid's Lair, the playground for family connections, pastimes, and scholarly vocation -- the collected professional and recreational work of Dennis E. Hamilton
2006-02-24 Open is Different than Stolen: The Importance of Social Trust in Honoring IP
IPcentral Weblog: Coming Soon: Son of Grokster! James DeLong makes a fascinating observation about social trust, one that also invites us to understand the basis for reciprocity and the building of social capital.
{tags: intellectual property economics free riders reciprocity social contract orcmid}

I think that attitudes and stories about free-riders are important to pay attention to. Distrust of the free-rider was, after all, built into the assumption about the fundamental defect of communism and socialism that was common currency in the US during the earlier stages of the Cold War. The same argument shows up in contemporary discussions of social welfare policies, health insurance provisions, and Social Security and Medicare programs, with “moral hazard” being the tip-off expression.

Calling a producer a free-loader. What I find fascinating is that Richard Stallman’s argument in favor of the GNU General Public License is that it prevents closed-source ventures from free-riding on the freely given work of others (except, if it is truly a gift, what are all of these strings attached to it?). It also creates an IP regime, established by someone who simultaneously refuses to grant standing to the term “intellectual property.” It is also the case that illicit file-sharing is justified by the argument that the producers of mass media are free-loading on the backs of the artists (although so is the file-sharing community, big time). The other popular rationalization is that the consumer gets to fix the price and the currency of exchange, and the producer doesn’t have any say in the matter. We have some odd ideas about the social contract on both sides of that deal.

Flavors of reciprocity. Although I disagree that open-source development and distribution are insane ideas, I do think that DeLong’s earlier commentary on reciprocity is also important.
The free-rider in the punchbowl. It is useful to recall that Microsoft was driven to an OEM-based licensing regime and effective DRM on the software (e.g., Microsoft Basic in ROM) after nearly going out of business while the hobbyist community (and hardware builders) were ripping off Microsoft software like crazy. Microsoft seems to have solved its problem, but we haven’t come to grips with ours.

I mention this because it is striking how much distrust of free-riders figures in our justification for various arrangements, including ones that appear to be diametrically opposed to each other. You can extend this to the political and economic climate as well as the on-going concerns about culture, creative arts, and intellectual property of various kinds.

2006-02-20 What Was Y2K All About?
FREAKONOMICS BLOG » Was the Y2K threat real, imagined, or invented? Steven Levitt questions the threat and emergency generated around Y2K. For Levitt, the resounding fizzle of the actual event and our collective forgetting of the cases that did arise (of which Levitt has no record) are evidence that the whole scare scenario was a false prophecy. I want to suggest that, whatever the degree of hype and popular fear created around Y2K, the situation is more complicated than that.

{tags: complexity Freakonomics Y2K Michael Crichton State of Fear scientific speculation orcmid}

Levitt had earlier used Y2K as an example of false predictions going unpunished. A few commenters suggested that Y2K doesn’t fit, and one pointed out how IT organizations were able to determine that their systems would fail (by changing the computer’s calendar under test conditions) and then made the necessary repairs. In rejoinder, Levitt cites a 5th anniversary commemoration of Y2K by Larry Seltzer in eWeek. Seltzer pooh-poohs the whole event, crediting cynical motivations for the whole brouhaha. Levitt finds this an appropriate echo for his own mildly-stated cynicism about the avarice of programmers.

I think there are three levels to this question: the catastrophe predictions, the situation in computer-based systems and information technology at the time, and the specific technical matter and the role of programmers.

Public Awareness and Predictions of Disaster

In Michael Crichton’s State of Fear, there is a remarkable passage where one of the characters talks about how public conversations (the socially-held existence of ideas) move through a series of disaster fads. In the context of the book and Levitt’s questioning of the veracity of the predicted Y2K catastrophe, one might start keeping track of the current global-warming and bird-flu pandemic conversations and see where they go. For more clarity on the themes around which Crichton built his novel, there is an excellent November 2005 speech (and great videos of its delivery and follow-up questions) also documented on Crichton’s web site. The speech gives clear attention to the Y2K global-disaster hype, suggesting that over $100 billion was spent by industry and organizations on mitigation in the years before the clock turned over.

On page 446 of the novel, Crichton introduces us to Professor Norman Hoffman (a part crying out for Christopher Lloyd, unless Irwin Corey is available). Hoffman is a sociologist whose field of expertise is the ecology of thought “and how it has led to a State of Fear.” His point is that ideas have a life that is akin to fashion: they become the conventional wisdom and then disappear, being completely forgotten while different ideas take their place. The key aspect for Y2K is the way that scientific and technical matters are launched into public conversations, fed by expert reports and media accounts as matters deserving of public attention. These ideas tend to be extrapolated and exaggerated far beyond their initial context and any careful phrasings, and they seem never to be overtaken by later information and corrective facts. Hoffman speaks of the left-brain/right-brain dichotomy as an illustration of ideas that “hang on past their time,” supposing that it will take until 2000 (twenty years after the notion was debunked) for it to die. It seems that the twenty-year estimate in the novel was optimistic.

Not being actively engaged in IT by the time of Y2K, I don’t recall being particularly concerned, expecting any disruptions to be quickly repaired.
I had family members who were concerned, one of them expecting great social breakdowns, descent into lawlessness, and something akin to Isaac Asimov’s “Nightfall.” Others managed to spend New Year’s Eve 1999 in the countryside. I had no idea where those ideas came from. That was startling for me, though I and my mate stayed quietly at home. That’s not an unusual choice for us. In this regard I notice that people generally don’t bring me scary predictions, so I have no expertise on how they get around.

It is interesting to me that I once spent a fair amount of time fact-checking outrageous e-mail urban legends and virus-danger messages. Now I rarely receive those, not because they aren’t still being propagated by some of my acquaintances but because they are no longer being propagated to me. My speculation is that this urge to spread warnings, even baseless ones, stems from our desire for soap opera and drama. Maybe we have fear as a way to know we haven’t died yet. At the level of public ideas of impending doom, the speculation of extreme consequences was indeed unwarranted and uninformed.

The IT Response

Meanwhile, in the run-up to Y2K, there was a lot of activity to mitigate the kinds of breakdowns that would inevitably arise were no action taken. One of the key problems with Y2K was that functioning data-processing systems were rapidly approaching a point in time that had not been provided for in their early development. Basically, we had been keeping systems alive and running with software that was developed under the assumption that it would no longer be in use by the time Y2K was a concern. The endurance of old legacy software and the databases it worked with was seriously underestimated.

The second problem with enduring code was that organizations did not know in how many places and under how many conditions Y2K would be a factor. In some cases, the original programs and their programmers were unavailable, and it would be time to endure their replacement: fixing something that was not broken but might fail any day.

There was no doubt that Y2K was a problem. The fundamental defect was designed into the digital records themselves. There was no question that some sort of modification was required to prevent miscalculation and errors in the behavior of computer software. What was not always known was how many places in the software carried Y2K-defective forms of intermediate data, along with programmed “logic” that would fail when dates beyond 1999 were introduced (a typical instance is sketched below). It was sometimes known that the defect was there when the programs were created, starting in the great expansion of computer systems in the 1960s. It was economically and personally convenient to ignore the prospect of future failures. Later, it may have simply become habit, along with an aversion to forced conversions of systems. This was in many ways a great demonstration of the “you can pay me now or you can pay me later” maxim and the unerring arrival of “later” in the approach of the year 2000.

It must also be clear, as Crichton mentions, that many systems involving long-lived data and records (sometimes about long-lived people) had been upgraded well before the imminence of 2000. But there was always a concern for latent defects that wouldn’t surface until the date actually approached. This led all serious organizations to conduct audits of their software and to require similar audits by their suppliers.
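To make the kind of defect concrete, here is a minimal, hypothetical Python sketch (not taken from any system discussed here) of the classic two-digit-year comparison: it behaves correctly while both dates fall in the same century and quietly gives wrong answers once the calendar rolls past 1999.

```python
# Hypothetical illustration of the classic two-digit-year defect.
# Dates are carried as six-character "yymmdd" strings, and the logic
# assumes the omitted century digits are the same for both dates.

def is_expired_yymmdd(expiry_yymmdd: str, today_yymmdd: str) -> bool:
    """Defective logic: plain string comparison of 'yymmdd' values."""
    return expiry_yymmdd < today_yymmdd

# Works as intended while both dates are in the 1900s:
print(is_expired_yymmdd("991231", "990115"))   # False: Dec 1999 not yet past in Jan 1999

# Breaks once the clock rolls over: "000115" (Jan 2000) sorts *before* "991231",
# so an item expiring in Dec 1999 still looks valid in Jan 2000 ...
print(is_expired_yymmdd("991231", "000115"))   # False, although the expiry date has passed

# ... and anything dated in 2000 looks long expired when viewed from 1999.
print(is_expired_yymmdd("000630", "990115"))   # True, although June 2000 is in the future
```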
Programming and the Y2K Bug

It is a rare programmer who delights in maintaining old code. It is no fun, there is no excitement, and there is little sense of accomplishment. It is tedious work complicated by time constraints and the danger of introducing destabilizing changes as the result of sketchy understanding and the accreted state of the software. In the case of Y2K, there needed to be careful auditing of all date-related operations and the data elements they employed. These could occur in many places in a program, and finding the minimal upgrade to alleviate the “bug” (a kind of software time bomb) was tricky. Just as tricky was the problem of converting data to allow for a wider range of dates. All of this remedial effort got in the way of new work and improvements for other purposes. One price of the Y2K remediation effort was deferral of new development work.

In the early years of data processing, dates were often carried in four or five digits, using formats like “yddd” (Julian day) or “ymmdd” (within-decade calendar date). This was done because storage was precious, especially on limited media such as punched paper cards. For electronic digital records, there was a widespread move to six-digit dates, employing data elements with forms like “yymmdd”. Given a six-digit element, there is nothing to indicate whether the omitted and understood century is “18”, “19” or, as the twenty-first century approached, “20”. There simply isn’t anything inherent that can be relied upon. Computer software that employed these dates in date-related comparisons and calculations was designed on the assumption that the missing element is always the same (so we don’t care what it is), or is always “19”, or is, say, “19” for yy of 38 or more and “20” for smaller values of yy. There were many other algorithmic adjustments introduced to carry a hundred-year span of dates without actually specifying the number of the century. Y2K compelled isolation and examination of all of those devices. (A sketch of one such windowing rule appears at the end of this section.)

You or I might look at an xx-xx-xx date and recognize its values as the coding of an mm-dd-yy date rather than dd-mm-yy or yy-mm-dd. It requires additional context to know which is the case for those elements that could be any one of those forms. It requires even more context to know whether yy is short for 18yy, 19yy, or 20yy. The only way to be certain, in a data-processing situation where date elements may span a century or more, is to provide the full four-digit Gregorian year, yyyy. (Other calendars involve different requirements.) For date-related calculations, the actual range to be tolerated must be known, especially for calculations that involve the number of days, or even the number of hours or minutes, between two dates that might be centuries apart.

Fortunately, storage capacity and calculation ability are not the concerns that they were when we clamped dates to six or fewer digits. The need for care in the programmatic handling of dates remains. For some organizations, Y2K is not over. There may be legacy records and files that have not been converted. Whenever those records are accessed, some conversion must take place for any carried dates to be related properly in post-Y2K calculations that assume post-Y2K date elements. For some organizations, Y2K wasn’t anything special.
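Here is a minimal Python sketch of the fixed-window century rule mentioned above, using the pivot of 38 stated in the text purely as an example; real remediation efforts chose their own pivots (or sliding windows) to suit their data, so this is an illustration rather than anyone’s actual implementation.

```python
# Sketch of a fixed-window century rule: yy >= 38 is read as 19yy, yy < 38 as 20yy.
# The pivot 38 is just the example used in the post; real systems picked their own.

def expand_yymmdd(yymmdd: str, pivot: int = 38) -> str:
    """Expand a six-digit 'yymmdd' date to an eight-digit 'yyyymmdd' date."""
    yy = int(yymmdd[:2])
    century = "19" if yy >= pivot else "20"
    return century + yymmdd

print(expand_yymmdd("991231"))   # '19991231'
print(expand_yymmdd("000115"))   # '20000115'
print(expand_yymmdd("371231"))   # '20371231' -- the last year this window can represent
```

The sketch also makes the limitation plain: a fixed window only covers a hundred-year span (here 1938 through 2037), so it postpones the problem rather than solving it, which is why the full four-digit year is the only safe representation for long-lived records.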
Who Knows Whether It Was Worth It

It is hard to know whether the economic and financial considerations that have us improve systems as late as possible were reliable in the case of Y2K’s arrival. Enterprise finance and information officers would be able to tell us whether it was worth it and whether there is any continuing benefit from the auditing and reworking that was done to mitigate Y2K.