Orcmid's Lair
2004-03-27

ZDNet TechUpdate: Six barriers to open source adoption. Here's Dan Farber's 2004-03-20 Tech Update article on the topic. The need for service-level agreements, and some business model that provides them, is emphasized. An interesting section has to do with the need for roadmaps and transparency, so that enterprise users can perform necessary planning and also influence the process where possible. It strikes me that open-source projects are going to have to deal with risk management as well, in that case. I am also struck by the need for clear licensing and intellectual-property treatment. And finally, for now, reference selling is what matters most: enterprises want to know about the experiences of other enterprises. Then the support of independent software vendors comes into play.

ACM News Service - Six Barriers to Open Source Adoption. This blurb is about the current barriers to adoption of open-source software at the enterprise level. The subscription model comes up again, too.

Collaborative and Cooperative Computing
Wikis and Blogs
The InterWiki Effort

There is now work on creating some sort of InterWiki agreement. Areas of concern include sharing of WikiText, including import and export; InterWiki connection (that is, transclusion and interlinking); and other federative schemes. There is also a harmony between Wiki and Blog models, and that may crop up in the InterWiki space too. There are more places where this fledgling activity is popping up. I'm still researching ...

TheInterWikiMarkupStandardShouldBeXhtml - InterWiki. This is the page that Danny Ayers started on the InterWiki Wiki. There are some other links and discussions here, and these are all fodder for requirements and also understanding of the gotchas. One gotcha has to do with features that aren't (or haven't been) mapped, and whether an escape mechanism is required to preserve them. There is the usual problem about what happens with an encapsulated extension in the face of annealing of the containing WikiText. I don't see how these can be preserved where they have no interpretation, and they are likely to be inconsistent with the edited surround. Any provenance could identify that there were extensions, and that they were dropped, perhaps. More musing ...

Raw: InterWiki. This is a page on Danny Ayers' blog. The article is about InterWiki, and there is other material here that I would like to review also. His view is that InterWiki interchange should be RDF and REST for metadata and XHTML for content.

FrontPage - InterWiki. Here's an InterWiki Wiki on the creation of InterWiki tools and such. I managed to crash MoinMoin while creating my own profile here, so I must remember to post that bug report somewhere. This site is proposed to grow into a nexus on InterWiki and how to do that. Hmm, I suppose it is related to my feuding-lexicographers scenario too. Hmph. I suppose that is a good way to look at the different perspectives Murray Altheim and I bring to this topic as well. My, my ...

Parsing, Analysis, and Generation/Conversion

This is a large topic, but I found ANTLR in the context of application to WikiText and, indeed, Terence's own TML 0.1 simple markup language. I didn't think of this as part of Situating Data, but of course it is. It is about Information Processing.

freshmeat.net: Project details for ANTLR. Here's what looks like an out-of-date Freshmeat listing for ANTLR. I find it interesting that there is an optional dependency from a VRML project.
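Since the InterWiki items above keep coming back to XHTML as the interchange form for WikiText, and the parsing topic is exactly about that kind of conversion, here is a minimal sketch of the mapping at toy scale. The two-construct syntax ('''bold''' and [[page]] links), the /wiki/ URL scheme, and the class name are my own assumptions for illustration, not anything the InterWiki effort or TML specifies.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Toy converter from a made-up two-construct WikiText subset to XHTML.
    // The syntax ('''bold''' and [[PageName]]) and the output conventions are
    // assumptions for illustration only, not the InterWiki proposal.
    public class ToyWikiToXhtml {

        private static final Pattern BOLD = Pattern.compile("'''(.+?)'''");
        private static final Pattern LINK = Pattern.compile("\\[\\[([A-Za-z0-9 ]+)\\]\\]");

        // Escape characters that are significant in XHTML before adding markup.
        static String escape(String s) {
            return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
        }

        // One paragraph of wiki text in, one XHTML paragraph out.
        static String convert(String wikiLine) {
            String out = escape(wikiLine);
            out = BOLD.matcher(out).replaceAll("<strong>$1</strong>");
            Matcher m = LINK.matcher(out);
            StringBuffer sb = new StringBuffer();
            while (m.find()) {
                String page = m.group(1);
                // A hypothetical local URL scheme; a real InterWiki link would
                // need the prefix-to-site mapping the InterWiki pages discuss.
                m.appendReplacement(sb,
                        "<a href=\"/wiki/" + page.replace(" ", "_") + "\">" + page + "</a>");
            }
            m.appendTail(sb);
            return "<p>" + sb.toString() + "</p>";
        }

        public static void main(String[] args) {
            System.out.println(convert("See [[Front Page]] for the '''InterWiki''' notes & links."));
        }
    }

Even at this toy scale the gotcha shows up: a construct with no mapping either has to be escaped into the XHTML somehow or silently dropped, which is the provenance question raised above.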
ANTLR Adder Tutorial. Here is a nice article that shows how to get better error-reporting control in an ANTLR grammar. It looks like these are the kinds of error messages one wants to be able to feed back into an IDE, so that is important. I am wondering if this is not a cool way to build Frugal on the cheap. I will have to wait to think about that.

ANTLR Parser Generator and Translator Generator Home Page. I saw some raving about ANTLR on tools-yak, so I looked it up. What is fascinating there is the definition of TML 0.1, Terence's Markup Language, which is a flavor of WikiText. It is apparently parsed using an ANTLR grammar. ANTLR compiles to actions in various programming languages, and TML 0.1 allows for macros, in effect, that permit introduction of additional processing (not sure when or where, though).

2004-03-26

Blog Clients and Interoperation

Blog tools are starting to converge in some respects. That will be important to follow. It is also important to harmonize Blog and Wiki operation, so that is another angle from which to appraise these efforts.

SourceForge.net: Project Info - Semagic. This is the open-source client for a specific blog service. The value is in having production Win32 (Unicode-supporting) code, though it is in C++ and Russian (in part).

Developer Information. Here is another blog client and server pair, with open-source support for both. It doesn't seem to interoperate with other servers the way w.bloggar does, but it might provide some clues about things.

:: w.bloggar ::. Here's an interesting tool. I found this while reading up on the MS Media Player 9 blogger support using a "What I'm Listening To" link. w.bloggar is a Visual Basic application that provides for editing and preview on the desktop, and then publishing to the blog site. This is an addition to my benchmarking of blog and wiki interface tools.

2004-03-25

System Architecture and Design
Model-Driven Development

Model-driven approaches, including code generation from models, raise interesting questions about where design rules come from, or whether one does something closer to the metal and forgets about abstractions and independence of layers. This topic is yet to be played out, but the featuring of different paths by OMG and friends, versus Microsoft, will be played out in the world of big enterprise. I have no idea how it could go.

Sidebar: Waiting for UML 2.0 - Computerworld. This second sidebar to the 2004-03-22 article from Carol Sliwa is longer than the rest of the material put together. It features interviews with Grady Booch and Bran Selic, and I find it, ... well, ... incoherent. It raises my concern whether UML 2.0 has any "semantics" the way people want that understood, in any way whatsoever. But then, maybe it has nothing to do with whether you use pictures instead of text. It might have more to do with who is using these as a medium of communication, and how that is going.

Sidebar: A Different Model for Microsoft - Computerworld. This sidebar to the 2004-03-22 article by Carol Sliwa covers Microsoft's charting of a different course than UML and MDA. The Microsoft model-driven approach will appear with Visual Studio 2005 in mid-2005, and it will take a more platform-centric approach. My earlier appraisal was that Microsoft seems to be confusing code with design, and I suppose it comes with confusing architecture and packaging (not that they are independent). The MDA proposes greater decoupling between model, business abstraction, and implementation design. We will see how it plays out.
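To make "code generation from models" concrete at toy scale, here is a minimal sketch assuming a hypothetical one-line model notation (a class name followed by name:type fields). Real MDA tooling generates from UML/XMI models and does vastly more; this only shows the shape of the idea.

    // Toy "model-to-code" generator: reads a one-line model description and
    // emits a Java value class. The model format (Name field:Type ...) is an
    // assumption for illustration; MDA tools consume UML/XMI, not this.
    public class ToyModelCodegen {

        static String generate(String model) {
            String[] parts = model.trim().split("\\s+");
            String className = parts[0];
            StringBuilder fields = new StringBuilder();
            StringBuilder accessors = new StringBuilder();
            for (int i = 1; i < parts.length; i++) {
                String[] nameType = parts[i].split(":");   // e.g. "amount:double"
                String name = nameType[0];
                String type = nameType[1];
                String cap = Character.toUpperCase(name.charAt(0)) + name.substring(1);
                fields.append("    private ").append(type).append(' ').append(name).append(";\n");
                accessors.append("    public ").append(type).append(" get").append(cap)
                         .append("() { return ").append(name).append("; }\n")
                         .append("    public void set").append(cap).append('(')
                         .append(type).append(" v) { this.").append(name).append(" = v; }\n");
            }
            return "public class " + className + " {\n" + fields + accessors + "}\n";
        }

        public static void main(String[] args) {
            // Hypothetical model: an Invoice with two attributes.
            System.out.println(generate("Invoice number:String amount:double"));
        }
    }

The questions in these articles are the ones the toy dodges: where the generation rules live, who maintains them, and what happens when the generated code and the model drift apart.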
Blueprint for code automation - Computerworld: Early adopters of Model Driven Architecture face cultural barriers, but the payoff promises savings in time and money and better code quality. This is Carol Sliwa's 2004-03-22 Computerworld article on the ramp-up of MDA and how it has succeeded with staged introduction in organizations. There is broader and deeper coverage than in the blurb, and also some interesting related stories.

ACM News Service - Blueprint for Code Automation. This blurb is about successes with Model Driven Architecture (MDA) and its use to provide more effective solutions and communication of requirements to developers.

Computing Milieux
Economic Systems
Subscription Licensing

Yahoo! News - Major Shifts in Software Licensing Expected. This article identifies subscription approaches as an accelerating trend, along with customer demand for simplicity and predictability.

Computer Networking
Network Security

Our examination of this topic in my class has ended, but here are some follow-on findings I ran into while still paying attention to it. Some companies provide slick supporting information, and here are some of the commercial approaches.

Nortel Networks: Products, Services & Solutions - Secure Networking - Network Security 101. This is another compendium page that has great links and overview information. I encountered this when nosing around a link that Keith Richards supplied. I wonder if this page has been linked by other classmates and I missed the import because of the modest title!

ICSA Labs Cryptography Community. There are book lists and other links here. This is a marker for digging deeper. Standards for commercial security products are set by ICSA Labs. OK, I got it. ICSA Certified is a commercial-consortium mark, and TruSecure Corporation operates ICSA Labs. I keep getting the idea that ICSA should be some independent association. Silly me.

TruSecure, Intelligent Risk Management, Managed Security Solutions. I don't quite know how I got here; I was looking for something else. On the other hand, this is something to look at as we dig into network management this final week of the Computer Networking class. I see that the month of March included a "Heavy Damage" exploit against SNMP, for example.

Nortel Networks: Secure Networking. I was looking at the little certified-by, secured-by logos on the previous page, so I went looking for a rundown on security. Wow, this would have been a good resource in the week just concluded on Network Security. This summer I take a full course on the topic, and this page will be a keeper. Just the list of industry links is a great tool, but the on-line information is also appetizing.

Nortel Networks: Alteon Portfolio - Alteon Switched Firewall (ASF). Classmate Keith Richards just posted a great observation about firewalls and multiple lines of defense, multiple vulnerabilities, and the greatest weakness of current systems. He also raved about this product, which provides extraordinary enterprise service for its piece of the security tapestry. Checking it out, I realize that from my SOHO perspective I have next to no "feel" for the state of industrial-grade approaches in serious production network systems.

2004-03-24

CompleteWhois Main Whois Web Form. Here's an interesting site found by classmate Pete Kelly. It is designed to provide complete information about one's DNS records, errors in them, and so on.
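Since a site like CompleteWhois is ultimately layered over very simple lookups, here is a minimal sketch of a raw whois query from Java: the protocol is just the query string and CRLF sent to TCP port 43. The choice of whois.internic.net as the server is an assumption; registries differ, and a real tool follows referrals between them.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.io.PrintWriter;
    import java.net.Socket;

    // Minimal whois client: send the query string plus CRLF to TCP port 43
    // and print whatever the server returns. Server name is an assumption.
    public class ToyWhois {
        public static void main(String[] args) throws Exception {
            String domain = args.length > 0 ? args[0] : "example.com";
            try (Socket s = new Socket("whois.internic.net", 43)) {
                PrintWriter out = new PrintWriter(s.getOutputStream(), true);
                out.print(domain + "\r\n");   // the entire whois "protocol"
                out.flush();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(s.getInputStream()));
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }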
Computing Methodology
Computational Randomness
Quantum-Statistical Approaches

OK, I can't resist. Here is a sequence of supposedly-random numbers produced by quantum-statistical means.

Trial Run of Quantum Randoms. I just requested 100 4-digit random numbers from the new generator. I don't have any reason to trust these, and if I were going to use them, I would get a different batch or two and transform them somehow to prevent any weird coupling between my numbers and anyone else's. Here they are:

5290 0420 7289 9291 0261 3697 2906 6693 1155 7322 0004 7894 5523 9903 1349 2479 2805 2347 0374 0591 1298 2674 1528 1472 9466 0688 6342 9506 8727 9057 1164 5419 9574 0821 0048 6185 8128 1095 6985 3534 8179 5454 4593 8914 2179 8349 9059 3718 4068 1969 7836 3770 1164 9770 7684 4626 6707 3561 6158 9279 4202 2615 1213 4022 1756 2691 3863 3845 8149 4014 4842 0371 9381 9327 8687 1083 3687 5526 2869 3674 3891 0408 7693 1103 8556 5834 4369 0525 8594 0536 9325 2623 0195 6872 9596 9092 4079 8100 2785 2108

ID Quantique: cryptography, photon counting, random numbers. Here's the quantum random-number-generator system and a variety of applications, including a PCI card as well as an integrated component.

2004-03-23

Trust and Trustworthy Computing
Safety-Critical Systems
Software Quality

Why Software Quality Matters. This is a lengthy 2004-03-04 article by Debbie Gage and John McCormick at Baseline: The Project Management Center. It addresses serious concerns of product liability for software in safety-critical systems. There are a number of tables and reference materials at the end. However the product-liability issues over software-related failures turn out, there are important matters here on how to develop for safety.

Programming Systems
Functional Programming
Haskell Model and Education

I know that I must understand Haskell better because of the use of "monads" as a way to deal with interactions and procedure effects. oMiser is purely functional in that there are none of those (which makes driving an oMiser awkward for us procedural-computation types who require an interactive scheme that does not poison the functional character of oMiser). I can punt with oFrugal, the shell that does the imperative housekeeping that makes oMiser more usable. But I do want to get to iMiser, with interactive capability comprehended in the mechanism. So how this is done for Haskell seems important to me. Also, the use of graphical models and interactions with them is intriguing, because the models may be valuable for reuse in other settings, including oMiser and also Situating Java.

Slashdot | Learning Functional Programming through Multimedia. Wow, the ACM TechNews blurb on this topic links to a Slashdot review. Wowza. Lots of links and comments, of course. There is more for me to learn about Haskell, and this Andrew Cooke review of Paul Hudak's The Haskell School of Expression: Learning Functional Programming through Multimedia is a winner. I am saving this page and figuring out what to do about all of this for nfoWare and the Miser Project.
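As a crude picture, in the language I use elsewhere here, of what the monad machinery buys: keep the evaluation step pure and let an outer shell (the oFrugal role, roughly) do the imperative reading and printing. The EvalStep interface and the toy transcript-as-state are hypothetical illustrations, not Haskell and not the actual Miser design.

    import java.util.Scanner;

    // Sketch: a pure "step" function threaded by an imperative shell.
    // The names (EvalStep, toyStep) are hypothetical; this only illustrates the
    // separation of a pure core from the interactive housekeeping around it.
    public class PureCoreShell {

        // The pure part: given the current state and one input, return the new
        // state and the text to show. No I/O, no mutation of anything shared.
        interface EvalStep {
            Result apply(String state, String input);
        }

        static final class Result {
            final String newState;
            final String output;
            Result(String newState, String output) {
                this.newState = newState;
                this.output = output;
            }
        }

        // A toy core: the "state" is just a transcript it keeps folding into.
        static final EvalStep toyStep = (state, input) ->
                new Result(state + " " + input,
                           "echo(" + input + ") after [" + state.trim() + "]");

        public static void main(String[] args) {
            // The impure shell: loop, read, call the pure step, print, repeat.
            Scanner in = new Scanner(System.in);
            String state = "";
            while (in.hasNextLine()) {
                String line = in.nextLine();
                if (line.equals("quit")) break;
                Result r = toyStep.apply(state, line);
                System.out.println(r.output);
                state = r.newState;
            }
        }
    }

Haskell's monads let that threading be expressed inside the language instead of being bolted on by a shell, which is what makes them interesting for an iMiser that comprehends interaction in the mechanism.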
ACM News Service - Learning Functional Programming Through Multimedia. It is hard to see something exciting in this newsblurb, unless one is already a Haskell fan. On the other hand, the use of multimedia for learning is what I want to do with nfoWare, and I had this thought that it might be cool to develop Miser to the point where I could use it for delivery and its own explanation, though I think it is a pretty big gap to where iMiser (one with an interactive computational model) is more than a gleam in my eye. I think this could be useful as a way for me to become more familiar with key aspects of the Haskell model (especially monads) and also get some ideas for scripted presentations in nfoWare.

Trust and Trustworthy Computing
Economic Considerations
The Antivirus Industry

Wired News: Cashing In on Virus Infections. This Michelle Delio 2004-03-18 Wired News article has Jimmy Kuo's statements in a different context than the newsblurb. It is not clear how to deal with the e-mail situation, but it looks like we could reduce risk considerably by different user practices and by becoming feature-resistant.

ACM News Service - Cashing in on Virus Infections. This is an interesting blurb about how the business model of antivirus firms has driven out what might be more promising approaches. I agree that the updated, signature-based systems are very convenient, and the subscription model is to the advantage of AV firms. But, because of the constant work on new threat approaches by intruders, I don't see how something like subscriptions can be avoided. What struck me is the observation that e-mail programs are too feature-rich, and made feature-easy rather than safety-easy. I would be willing to revert to plaintext-only e-mail as a way to be more secure from viruses. It won't help with all spam, but it would filter out much commercial and porn spam as well.

Computing Methodology
Computational Randomness
Quantum-Statistical Approaches

Services and products for providing "pure" random sequences based on quantum physics are now being offered. I'm not sure how one certifies such a thing, and it will be interesting to see how operation can be confirmed.

randomnumbers.info. The University of Geneva page that carries an item about the quantum-based random numbers is in French, and the English home page does not have the announcement. Nevertheless, I was able to pick this URL out of the French text, and here is the site that is being used to promote this activity. There is more information here, and you can get it to generate sequences of 4-digit numbers. Here's a cute item: "Note: In case you use random numbers downloaded from this site to play lotteries and you win, we recommend you to donate half of the sum to www.randomnumbers.info!" The site needs a little work, figuring they will be slashdotted any day now, but it is interesting. To get a bigger random number, ask for a sequence of several 4-digit ones, and then run a cryptographic hash or something! [;<)

True randomness upon request. This is the Innovations Report on the University of Geneva team that worked with id Quantique to launch the quantum-based random-number generator site. There is no direct link in the article. It looks like the application will involve licensing of a client-server application that downloads random numbers for use by applications, and there is API code in popular languages for accessing the numbers and using them. Oddly, because of the Ads by Google on the same page as this announcement article, there are ads for random number generators.

ACM News Service - True Randomness Upon Request. This blurb covers the announcement of a quantum-based random-number generator system that is accessible on the web and can be used as a seed for calculations. An interesting challenge is to find a way to use this that does not allow possible discovery of a secondary generator that is used to produce random numbers, as when seeking probable primes for employment in cryptographic systems.
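Following the "run a cryptographic hash or something" idea, here is a minimal sketch of folding a batch of the 4-digit values through SHA-1 and using the digest as seed material. It only illustrates mixing; it is not a vetted procedure for conditioning such numbers for cryptographic use.

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    // Sketch: condense a batch of downloaded 4-digit values into one larger
    // value by hashing, then use the digest as seed material for a local
    // generator. Illustrates mixing only; not a vetted whitening procedure.
    public class MixQuantumBatch {
        public static void main(String[] args) throws Exception {
            // First ten values from the trial run above (leading zeros dropped).
            int[] batch = { 5290, 420, 7289, 9291, 261, 3697, 2906, 6693, 1155, 7322 };

            MessageDigest sha = MessageDigest.getInstance("SHA-1");
            for (int n : batch) {
                sha.update((byte) (n >> 8));
                sha.update((byte) n);
            }
            byte[] digest = sha.digest();                 // 20 bytes from the batch

            System.out.println("160-bit mix: " + new BigInteger(1, digest).toString(16));

            // Feed the digest into a local generator as additional seed material.
            SecureRandom prng = new SecureRandom(digest);
            System.out.println("derived int: " + prng.nextInt());
        }
    }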
Computer Networking
Network Security
Cryptographic Approaches

Kenneth Castelino. Castelino produced a nice example of SSL and of DES operation. He did a graduate-course group project in which a form of encrypted chat was created, although the use of certificates and authorities wasn't introduced, so the demo does not show authentication. However, key exchange and implementations of RSA and Rijndael were done. Kenneth has some other interesting projects, but he turned up here because of the security project undertaken in a computer networking course.

2004-03-22

Schneier.com: Why Cryptography Is Harder Than It Looks. This is a great essay. It expands on a topic that Guido Hollander raised in our class on Computer Networks. So I now have the pleasurable duty to look up more of what Schneier has to say on these topics.

Crypto expert: Most encryption software is insecure - Computerworld. This is a July 9, 1999 article by Ann Harrison. It is based on an address by Bruce Schneier about the quality of encryption software. Classmate Nigel Briant turned it up.

Encryption Software - Encryption Software Toolkits - 3DES. This organization was located by classmate Nigel Briant. I am not so sure that I want to buy a library or toolkit that I can't vet. But this is useful to know about.

Computing Milieux
Economic Systems
Preservation of Systems and Data

This is an odd classification, probably based more on how I arrived at this topic in cleaning up my clippings than on the content itself. It is also about IT economics and the maintenance of stability and usability of systems in a non-disruptive way. The article also recognizes, in a tacit way, the disruptive influence of early adopters and power users who end up making it necessary for others to upgrade their software in order to preserve network membership.

Refreshing the Desktop - Computerworld. This article looks at how people deal with two issues: maintaining stability and consistency in laptop and desktop deployment, to prolong system life and minimize support; and dealing with the network effect driven by upgrades of operating systems and primary office-productivity software. The price points and configuration choices are interesting, as are the components that are appealing (such as USB key storage).

Open-Source Competition

It is fascinating to see who stands up on which side of the open-source versus closed-source models and the so-called software economy.

Microsoft exec: Open-source model endangers software economy - Computerworld. Paul Krill filed this 2004-03-16 InfoWorld story on the Software Development Conference underway in Santa Clara. What Jim Gray said was, "The thing I'm puzzled by is how there will be a software industry if there's open-source," followed with "The key thing is [with] people who are selling their software, the software has to somehow be better than the free software, and [if] it's not better, I'm puzzled as to what the business model is because they can't sell it." Gray does not think that service-added is a very good competitive model as a substitute for commercial software development, because people in China could do better. I still think there is some confusion here. In the old days (as Gray would remember), software was either bundled, proprietary, or free. There was a lot of open-source or mildly-encumbered software before the PC model and a software-development business producing proprietary commodity products came along. So there is a good point. I don't have the answer.
And I think that open-source commodities with appropriate support models will become important. It just makes the commodities more like free or community goods than the (quite young) software industry expected. So it could be viewed as a natural progression for something that has no manufacturing costs. The other aspect of open source in the commoditization space is that there may be some stability at the foundation, because there is no advantage to being too innovative, either for users or for developers. Reuse may become more economical, though integration will become the next important service. There are probably new business models in these areas, at least until they become a matter of competition on price alone. It may be that there will never be another Microsoft, even for Microsoft. And I mean that with all due respect to what Microsoft has achieved. Gray's concern about where standards come from, and who does the work to develop them, seems important. It is not clear how one can maintain a standards-development community where there might not be commercial self-interest, although I think we do see some aspects of that in the W3C and especially the IETF models. Maybe there are new collaboration models and funding models for the maintenance of community-beneficial collaborative activities.

Trust and Trustworthy Computing
Security Models
Exploits

Experts publish 'how to' book for software exploits - Computerworld. This is Paul Roberts' 2004-03-15 Computerworld article about The Shellcoder's Handbook: Discovering and Exploiting Security Holes. Although it exposes some zero-day topics, it provides worked examples for creating exploits that take over systems and/or compromise their security and extract data. I care about this for nfoWare and the development of trust-point analysis. There are a wide variety of failure modes that one must pay attention to, and this might provide a good basis for analyzing them.

2004-03-21

Computer Networking
Network Security
Denial-of-Service Attacks

The Distributed Reflection DoS Attack. This is one of those great Gibson Research articles, on an attack that occurred on January 11, 2002, exposing Gibson to a distributed-reflection denial-of-service attack. It is a lovely account.

DNS Integrity

Help Net Security - Attacking the DNS Protocol. This blurb is for an interesting site, and it offers a 2003-11-12 PDF that gathers together DNS security issues and topics.

DNS Security Algorithm Numbers. This is the IANA registration list for DNS KEY and SIG security algorithm identifications.

Programming Systems
Java
Java Package Namespace

I see how this particular site has made Java packages available, and it reminds me of what I must do for publishing packages and unpackaged code and documentation. I think there is an advantage here in having java as a "root" in a Web hierarchy, but I need to keep mulling over all of the considerations.

dnsjava. Here's the "root" of information on the dnsjava package. It is a nice collection of utility classes and ways to exercise a DNS, as well as serve one up. My theory about names has been munged by this material now also being at dnsjava.org.

Overview (dnsjava documentation). Here's someone after my own heart, when it comes to figuring out where to put the documentation for Java packages based on a domain name. Package org.xbill.DNS descends from here, in terms of the reverse directory path. I keep thinking I will have a different top, with multiple packages underneath, but the principle is the same.
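For the record, the reverse-domain convention under discussion is only this: the package name mirrors a domain name read backwards, and the source (and javadoc) tree mirrors the package. The org.example.nfoware.situating name below is a hypothetical placeholder, not a published package.

    // File: org/example/nfoware/situating/Note.java
    // The directory path mirrors the package name, which mirrors a domain name
    // read in reverse (example.org here is a placeholder, not a real assignment).
    package org.example.nfoware.situating;

    public class Note {
        private final String text;

        public Note(String text) {
            this.text = text;
        }

        public String text() {
            return text;
        }
    }

Generated javadoc then lands under the same org/example/nfoware/... path, which is the correspondence the dnsjava site exploits.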
Computer Networking
Network Security
DNS Integrity

We looked at DNS spoofing and the consequences of insecure DNS activity: injection of improper cache values, counterfeiting of DNS query responses, and putting a man in the middle in DNS or in the Internet communications from some end-point (domain).

org.xbill.DNS.security (dnsjava documentation). Here's some Java code for verifying DNS responses that have signatures in them.

dnswalk - a DNS database debugger. Packages like this are recommended by Cricket Liu for debugging DNS systems. I am not so sure this is the proper tool, but it puts me in mind of the value of having a tool for making DNS queries and capturing responses. This would be easier than doing it all by watching my sniffer! The latest Perl code (2.0.2) from 1997 is on SourceForge and linked from this page.

Networking: DNS Security Vulnerabilities. This is Beth Cohen's 2003-05-12 article for Networking.EarthWeb.com. The last of the four pages provides a checklist. The list of vulnerabilities is consistent with others I have found.

Men & Mice - What is DNS spoofing?. A nice little page on what DNS spoofing is, with a reference to a paper by Cricket Liu that describes what can be done with the Microsoft DNS Server and with BIND.

Using DNS Security Extensions (DNSSEC). This Microsoft TechNet article suggests that Windows Server 2003 DNS Server support is passive with regard to the retention and serving up of DNSSEC records: it does not check those records. The article also suggests that Windows XP clients do not check DNSSEC information on the records received.

DNS Security: Present and Future. These are Edward Lewis's slides for the November 13, 2001 ICANN Panel. The spoof points for injecting false DNS information are identified, along with approaches to poison a cache server or even to spoof a cache.

ICANN | Committees | Security and Stability Advisory Committee. This committee is addressing the security and stability of the Internet. The last reported meeting was ensnarled in the wildcard situation, with VeriSign creating a wildcard registration that diverted all name errors to a commercial page of their own. Security-related aspects are also addressed.

ICANN | DNS Security Reading List. A handy reference of links on the DNS system, the Internet itself, and security considerations, especially for DNS.

ICANN and Internet Security. This is Steven Bellovin's keynote address to the November 2001 ICANN meeting and workshop. A key slide is the identification of the major components, and the difficulty of securing a total system (i.e., the Internet completely) with distributed responsibilities.

Trust In Cyberspace, Nat'l Academy Press, 1999. This book is online, and the definitions and illustration of trustworthiness may be instructive. The report was produced in 1999 and has 352 pages.

Steven M. Bellovin. This is Steven M. Bellovin's web page at AT&T Labs Research. It is a nicely cool page, especially with the customs documents passing the returnees from Apollo 11 through Honolulu U.S. Customs along with their rocks. Also of interest is the work of an NRC committee on Information System Trustworthiness and a current committee studying the privacy implications of authentication technologies.
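Picking up the dnswalk comment about wanting a quick tool for making DNS queries and capturing responses, here is a minimal sketch using the JNDI DNS provider in the JDK. The record types queried and the reliance on the platform's default resolver are just choices for illustration; dnsjava's own classes would be the richer route.

    import java.util.Hashtable;
    import javax.naming.Context;
    import javax.naming.NamingEnumeration;
    import javax.naming.directory.Attribute;
    import javax.naming.directory.Attributes;
    import javax.naming.directory.InitialDirContext;

    // Minimal DNS record dump using the JNDI DNS provider shipped with the JDK.
    // Prints the A, NS, and MX record sets for a name given on the command line.
    public class DnsDump {
        public static void main(String[] args) throws Exception {
            String name = args.length > 0 ? args[0] : "example.com";

            Hashtable<String, String> env = new Hashtable<>();
            env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.dns.DnsContextFactory");
            // No PROVIDER_URL set: the platform's configured resolver is used.

            InitialDirContext ctx = new InitialDirContext(env);
            Attributes attrs = ctx.getAttributes(name, new String[] { "A", "NS", "MX" });

            NamingEnumeration<? extends Attribute> e = attrs.getAll();
            while (e.hasMore()) {
                Attribute a = e.next();
                NamingEnumeration<?> vals = a.getAll();
                while (vals.hasMore()) {
                    System.out.println(a.getID() + ": " + vals.next());
                }
            }
            ctx.close();
        }
    }

Capturing the raw request and response bytes (the sniffer-replacement part) needs something lower level; the sketch after the threat-analysis notes below takes that route.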
ICANN | DNS Security Update #1. These notes are from a November 2001 ICANN meeting on the subject of securing the DNS root servers. There are a number of useful discussions, but the most interesting point is that DNSSEC was not yet deployed at that time. There was also an important presentation by Steve Bellovin.

Internet-Draft: Threat Analysis of the Domain Name System. This February 2004 Internet Draft provides an analysis against which DNSSEC provisions can be rated. From the beginning, DNSSEC was not intended to limit access to DNS information; the policy is that DNS information is public. Similarly, DNSSEC does not provide access control between clients and DNS servers. The key provisions are data integrity and data-origin authentication.

Packet interception (2.1) is a known threat, also known as monkey-in-the-middle: eavesdropping on requests combined with spoofed responses that beat the real response back to the resolver, and so on. The response can also be a set-up for a more complicated attack, returning the correct response but altering other aspects. It is suggested that link-level security (something like TSIG) is too costly for the basic queries. It is also claimed that DNSSEC, properly used, provides end-to-end data integrity. DNSSEC does not provide protection against rewriting of the message header, so a "properly paranoid resolver" must perform all of the DNSSEC signature checking on its own, use whatever is available to ensure the integrity of connections with trusted nameservers, or resign itself to being attacked via packet interception.

ID guessing and query prediction (2.2) is a tough one. The idea is that an intruder who is not able to intercept packets attempts to address a fake response to a client, based on some aspect of client behavior that is predictable enough that brute-force insertion is possible. The correct UDP response port is found for interjecting a bogus response (to an unknown query!). The resolver should validate the integrity of the data or, if it expects that the responder has done so, employ a secure connection to that responder.

Name games (2.3) involve poisoning the resolver's cache, especially in a way that has future DNS requests go to a server of the attacker's choice. This can also happen by poisoning a trusted DNS server, so there is also the prospect of the chain of trust being breached. And it is easy to provoke DNS requests for names chosen by the attacker using various HTML-carried exploits, such as a 1-pixel web-bug graphic. DNSSEC is helpful in this regard, but it requires that the resolver check signatures. This seems weak, although the comment about glue records seems all right.

Untrustworthy trusted server (2.4) is not much different. The conclusion is that the client must be able to confirm DNSSEC signatures itself, and it must have the certificates that it needs to accomplish that. I wonder what WinSock does about this?

Denial of service (2.5): a DNS server can be used as a DoS amplifier (the response being larger than the query).

Denial of domain names (2.6) would be to indicate that a name does not exist or (technically) has no record. An intruder can inject that response. The signing of a response that indicates no hit will work, but if a list of records is retrieved, there will be no indication that the list has been altered by deleting records from it. There are also problems with agreement on times, and with ways to convince a resolver to accept an expired signature. Also, there may be ways to convince an authoritative server to generate a signature valid for a time period different than that intended.

DNSSEC - DNS Security Extensions. This is the DNSSEC.NET site, providing a comprehensive treatment of DNS-related security.
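The ID-guessing discussion (2.2) is easy to make concrete: for a classic resolver, an off-path attacker has to guess little more than the 16-bit query ID and the UDP source port. Here is a minimal sketch that builds an A query by hand, sends it over UDP, and accepts a reply only if the ID matches. The resolver address is a placeholder, and this is an illustration of the wire format and the ID check, not a hardened resolver.

    import java.io.ByteArrayOutputStream;
    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;
    import java.security.SecureRandom;

    // Hand-built DNS A query over UDP, accepting a reply only when the 16-bit
    // query ID matches. The resolver address below is a placeholder; substitute
    // a reachable recursive resolver. Illustration only, not a hardened resolver.
    public class DnsIdCheck {
        public static void main(String[] args) throws Exception {
            String name = args.length > 0 ? args[0] : "example.com";
            InetAddress resolver = InetAddress.getByName("192.0.2.53"); // placeholder

            int id = new SecureRandom().nextInt(0x10000);  // unpredictable query ID

            ByteArrayOutputStream q = new ByteArrayOutputStream();
            q.write(id >> 8); q.write(id & 0xFF);          // ID
            q.write(0x01); q.write(0x00);                  // flags: recursion desired
            q.write(0); q.write(1);                        // QDCOUNT = 1
            q.write(0); q.write(0);                        // ANCOUNT
            q.write(0); q.write(0);                        // NSCOUNT
            q.write(0); q.write(0);                        // ARCOUNT
            for (String label : name.split("\\.")) {       // QNAME as length-prefixed labels
                q.write(label.length());
                q.write(label.getBytes("US-ASCII"));
            }
            q.write(0);                                    // root label ends the QNAME
            q.write(0); q.write(1);                        // QTYPE = A
            q.write(0); q.write(1);                        // QCLASS = IN

            try (DatagramSocket sock = new DatagramSocket()) {  // OS picks the source port
                sock.setSoTimeout(3000);
                byte[] query = q.toByteArray();
                sock.send(new DatagramPacket(query, query.length, resolver, 53));

                byte[] buf = new byte[512];
                DatagramPacket reply = new DatagramPacket(buf, buf.length);
                sock.receive(reply);                       // connectionless: anyone could send this

                int replyId = ((buf[0] & 0xFF) << 8) | (buf[1] & 0xFF);
                System.out.println(replyId == id
                        ? "reply ID matches; answer count (low byte): " + (buf[7] & 0xFF)
                        : "reply ID mismatch; discarding (possible spoof)");
            }
        }
    }

A real resolver also checks the source address and port and the question section, and the draft's argument is that only DNSSEC signature checking gives end-to-end integrity once an attacker can see or beat the UDP exchange.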
Bind Vulnerabilities: Internet Systems Consortium, Inc. This is an account of vulnerabilities in implementations of BIND that make it open to security exploits and DoS attacks. A number of them are related to defects in security algorithms and libraries.

DNS and BIND, 4th Edition: Chapter 11: Security. This is the on-line chapter from Paul Albitz and Cricket Liu, DNS and BIND, 4th ed., O'Reilly (Sebastopol, CA: 2001), ISBN 0-596-00158-4 pbk. The chapter provides information on TSIG and the DNS Security Extensions, and it describes an exploit in which the InterNIC root server was rerouted to AlterNIC. TSIG, which provides pairwise security between two DNS servers, is defined in RFC 2845, an update to RFC 1035, the Domain Name standard. This proposed standard remains in effect, though it has been updated. The DNS Security Extensions, which employ digital signatures, were defined in RFC 2535. It is not obsolete, but there are many updates.

SHA-1, SHA-224, SHA-256, SHA-384, and SHA-512 (FIPS 180-2). There is a new edition of FIPS 180-2 on SHS dated 2004-02-25. This is the family of Secure Hash Standards (SHS) that are used in DSA and its flavors. This page summarizes the available material and testing requirements.

DSA, RSA, ECDSA (FIPS 186-2). The only asymmetric-key (PKI) algorithms approved for digital signature as Federal Information Processing Standards (FIPS) are DSA, rDSA (ANSI X9.31), and ECDSA (Elliptic Curve DSA, ANSI X9.62). All make use of the Secure Hash Standard (SHS).

AES, DES, 3DES, and Skipjack Algorithms. This page summarizes the symmetric-key standards that are supported in current Federal Information Processing Standards (FIPS) publications. There are links to source documents and to specifications for testing and validation of implementations of the algorithms.

FIPS 140-1: Security Requirements for Cryptographic Modules. This is the overall guidance, revised on 2002-12-03, on security requirements. There is also a set of approved annexes listed on this page. A key caveat is found in the following: "It is important for vendors and users of cryptographic modules to realize that the overall rating of a cryptographic module is not necessarily the most important rating. The rating of an individual area may be more important than the overall rating, depending on the environment in which the cryptographic module will be implemented (this includes understanding what risks the cryptographic module is intended to address)."

Cryptographic Standards and Validation Programs at NIST. The Cryptographic Module Validation Program (CMVP) at CSRC NIST provides the relevant specifications and related information on the validation of modules.

C S R C - Cryptographic Toolkit. The Computer Security Resource Center (CSRC) at NIST provides a Cryptographic Toolkit, a compilation of publications and materials on the application of cryptographic techniques. Most of the NIST publications on cryptographic algorithms are for cryptographic protection of non-classified materials. That is, the methods are deemed effective for commercial and personal use.

Advanced Encryption Standard - Wikipedia. The AES was adopted in November 2001. It has a particular mathematical basis, and it is not known whether that will ultimately be a vulnerability. There have been attacks on variations with fewer "rounds," and the margin between those and the number of rounds in the actual algorithm is not great. As of October 2002, the algorithm has not been defeated.
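As a reminder of what using the approved symmetric algorithms looks like from Java, here is a minimal AES round trip with the standard javax.crypto classes. The mode, padding, key size, and handling of the IV are illustrative choices, not recommendations.

    import javax.crypto.Cipher;
    import javax.crypto.KeyGenerator;
    import javax.crypto.SecretKey;
    import javax.crypto.spec.IvParameterSpec;

    // Minimal AES (Rijndael) round trip using the standard JCE classes.
    // CBC with PKCS5 padding and a freshly generated key are illustrative only.
    public class AesRoundTrip {
        public static void main(String[] args) throws Exception {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);                                  // 128-bit key
            SecretKey key = kg.generateKey();

            Cipher enc = Cipher.getInstance("AES/CBC/PKCS5Padding");
            enc.init(Cipher.ENCRYPT_MODE, key);            // provider generates the IV
            byte[] ciphertext = enc.doFinal("attack at dawn".getBytes("UTF-8"));
            byte[] iv = enc.getIV();                       // must travel with the ciphertext

            Cipher dec = Cipher.getInstance("AES/CBC/PKCS5Padding");
            dec.init(Cipher.DECRYPT_MODE, key, new IvParameterSpec(iv));
            System.out.println(new String(dec.doFinal(ciphertext), "UTF-8"));
        }
    }

DES looks the same with "DES" and its 56-bit key, which is exactly the brute-force weakness described in the DES article below; "DESede" (3DES) is the stopgap name in the same API.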
Data Encryption Standard - Wikipedia. This article provides a history of the DES algorithm and gives a summary of the methodology. Because of its short key size, DES is vulnerable to brute-force attack. Users switched to 3DES, a particular way of encrypting each data block three times with different keys. DES has been replaced by the Advanced Encryption Standard.

RSA - Wikipedia. Here is the article on the Rivest-Shamir-Adleman public-key cryptography algorithm. The article provides some information on the varieties of attacks that can be made on the RSA algorithm, and on its dependence on a condition that, once it fails, can be exploited retroactively. The greater difficulty with retroactive learning of a private key is not so much that messages become readable as that they can be forged. So the timing of material is important, including any counter-signing that occurred at a time when the public key was presumed to be secure.
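The forgery point is easiest to see with a signature round trip: anyone who learns the private key below can produce signatures that verify just as well as the legitimate ones. Here is a minimal sketch with the standard java.security classes; the key size and the SHA1withRSA choice are illustrative.

    import java.security.KeyPair;
    import java.security.KeyPairGenerator;
    import java.security.Signature;

    // Minimal RSA sign/verify round trip. Whoever learns the private key can
    // produce signatures that verify just as well, which is the forgery risk
    // discussed above. Key size and algorithm names are illustrative choices.
    public class RsaSignDemo {
        public static void main(String[] args) throws Exception {
            KeyPairGenerator kpg = KeyPairGenerator.getInstance("RSA");
            kpg.initialize(2048);
            KeyPair pair = kpg.generateKeyPair();

            byte[] message = "signed on 2004-03-21".getBytes("UTF-8");

            Signature signer = Signature.getInstance("SHA1withRSA");
            signer.initSign(pair.getPrivate());
            signer.update(message);
            byte[] sig = signer.sign();

            Signature verifier = Signature.getInstance("SHA1withRSA");
            verifier.initVerify(pair.getPublic());
            verifier.update(message);
            System.out.println("verifies: " + verifier.verify(sig));

            // A date inside the signed material is only meaningful if the key was
            // still trusted at that date: the signature itself carries no time.
        }
    }

Only counter-signing or timestamping by a party whose key is still trusted pins the signature to a time; the signature by itself carries none.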