Blunder Dome Sighting

Professor von Clueless in the Blunder Dome


Hangout for experimental confirmation and demonstration of software, computing, and networking. The exercises don't always work out. The professor is a bumbler and the laboratory assistant is a skanky dufus.




2007-03-11

Recipe for Nano-ISV Success?

J. D. Meier just posted a précis of agile-manager David Anderson’s recipe for success:

  • “Focus on Quality
  • “Reduce Work-in-progress
  • “Balance capacity against demand
  • “Prioritize”

It is Meier’s description of these critical elements that caught my attention.  I can see how, in my latest solo-developer, nano-ISV project, I have been providing precisely these elements and with more consistency than I first realized.  That’s startling for me and I am going to want to come back and develop my report card more thoroughly as a record on which to base future work.

For one thing, I thought that these practices wouldn’t scale down to sole-developer nano-projects, because of the problem of attention and focus in a human being (especially when that’s myself).  My experience says otherwise, and that is very exciting to see unfolding into full bloom.

It’s great to be led back to Anderson’s work via Meier’s links in the article.  I subscribe to the blog, but sometimes another perspective drives me back with new eyes for what’s there.  This time, I even resorted to Wikipedia so that I could figure out what a kanban system is.  I also wonder why I resist applying that specific technique, now that I understand how it provides critical focus.  I do know that right now I am risking the “overhead of running each iteration like a mini-project.”

Experiencing Focus on Quality

One of the ways that “Focus on Quality” and pushing quality upstream showed up for me is in visualizing the end-point qualities from the beginning and coming up with an evolutionary development approach that I could envision constantly reinforcing quality in my particular project.  This has given me surprising adaptability, and I now have a safety net that permits me some aberrations and adjustments that I could not have imagined at the start. 

As I approach public beta and an opportunity to report more of the experiences when the open-source code comes out of embargo, I have found myself in an interesting situation.

Inside a Relentless Process, You Can Permit Shortcuts

There is a sponsoring customer that is putting my software to work inside of a specific product, on behalf of a specific customer of their own.  As I approached the end of alpha-level deliverables and moved into private beta deliverables, I modified my approach to releases in ways I did not expect. 

Having achieved the major design and implemented key functionality, I have now begun to short-circuit some of my testing and deployment investments in order to deliver the few remaining features that will provide the greatest first-customer utility.  This isn’t good enough for a public distribution of the software for reuse.  It is more than good enough for a specific integration inside of a specific release of application software.  So my attention at this time is focused on the least that could possibly work for the initial customer, accomplishing earlier availability and opportunity for further experience and correction.  (It is also remarkable to me how much “the least that could possibly work” arises in the selection of iterations.)

The point I want to make is that I have designed and I am evolving toward a public product with all of the stability and conceptual coherence such an effort requires.  However, for the first use, I can cover a point space that is good enough for the initial application and in which my later-hardened software can be slipped underneath without impact and with greater confidence in continuing stability in the face of changes in the application software. 

What’s clear to me in this experience is the deficiency of code-and-fix point-solution development, the kind that often happens inside of an application-software development setting and for which testing and hardening is usually just enough to get things working in the context in which the software has been conceived.  This variant of code-and-fix creates two well-known prospects for later failure: the cost of maintenance/re-engineering and the illusion of reusability (potentially for resale of what was originally an internal software development).  The second effort often dies a grisly death in consequence of the unaffordable maintenance and support costs.

The Bet with Myself

I claim that I have avoided that limitation because the points I fix are inside of a coherent design.  That design dominates the development, as does the quality progression for which there is always a way to recover from (intentional) deviations and continue.

As I cross over to public beta, I know what cracks I must seal and I will do so.  What struck me on reading Meier’s post today was realizing that I have permitted myself some code-and-fix expedients simply to have my sponsor able to satisfy their immediate needs.   So, for a short time, I am actually doing one-feature-fix-at-a-time, clean-up mini-drops (0.56, 0.57, …, 0.59) before catching my breath and declaring the 0.60 public beta.

With the logistics arrangements between my working in Seattle and the sponsor in Europe, I do have an incentive to package the deliverables in a way that has them be usable without my presence.  This has been critically useful although I still probably overdo it.  I am trusting that to pay off big in the home stretch, though.

So now, back to work.  I have a roadmap to update and continue.

Tags: orcmid, software+engineering, nano-ISV, personal+software+productivity, David+Anderson, agile+development

 

The Difference Between Resolution and Size: Or, My Abstraction Leaks More Than Yours, so there!

Speaking of leaky abstractions, I just ran across this interesting problem in Doug Mahugh’s post, Doug’s World » What Resolution is best?.

What resolution is best for images on blogs? Everyone has an opinion, and mine is that 1024×768 is ideal.

I love (that is, hate with intense envy) the photographs that Doug posts on his site.  If he doubled the pixel dimensions of his images, I’d still find some way to look at them.

The problem with Doug’s analysis, of course, is that 1024 x 768 is not a resolution.  Resolution has to do, roughly, with pixel coverage in a unit area.   (In photography and imaging that’s not quite right, which shows that even straightening this out is an improvement but not the whole story.)  There needs to be something about the quality of those pixels too (8 bit, 24 bit, 32 bit color for example) and the fidelity with which they are imaged on a display surface.

A second problem is that any browser report on “resolution” is presumably about the pixel coverage of the entire display, and we have no idea (1) what the physical dimensions of that display are, (2) how much of it is being used by the browser, or (3) how well the browser is rendering the image.

This handling of abstractions is so leaky (and so commonplace) it is not clear there is any abstraction at all.  It’s just noise about two numbers that are taken to be measures of something.

Of course, I could simply be objecting because my 19-inch LCD monitor provides 1280 by 1024 pixels and, in my normal way of working, there are browser pages that don’t fit properly even when I go to maximum screen and I usually don’t want all of my screen consumed by a single application.  When I am on Quadro, my tablet PC, 1024 by 768 is the best I get, making it even more difficult to browse some sites (and use some applications).  If I change to a portrait view (that is, 768 by 1024), it may be even more difficult.
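To make the distinction concrete, here is a small sketch of the arithmetic: the same 1280-by-1024 grid yields very different pixel densities depending on the physical diagonal, which is exactly why a pixel count alone is not a resolution.  The 19-inch figure matches my monitor; the 12-inch panel is an invented comparison.

```java
// Sketch: pixel dimensions alone are not a resolution.
// Density (pixels per inch) also needs the physical display size.
public class PixelDensity {
    // Diagonal pixel count for a width x height grid.
    static double diagonalPixels(int w, int h) {
        return Math.sqrt((double) w * w + (double) h * h);
    }

    // Pixels per inch, given the pixel grid and the physical diagonal in inches.
    static double ppi(int w, int h, double diagonalInches) {
        return diagonalPixels(w, h) / diagonalInches;
    }

    public static void main(String[] args) {
        // Same 1280x1024 grid, two different physical sizes:
        System.out.printf("19-inch desktop: %.0f ppi%n", ppi(1280, 1024, 19.0)); // ~86 ppi
        System.out.printf("12-inch panel:   %.0f ppi%n", ppi(1280, 1024, 12.0)); // ~137 ppi
    }
}
```

Even this leaves out pixel quality (bit depth) and rendering fidelity, but it already shows that a bare 1024×768 tells you nothing about density.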

So long as Doug puts up decent thumbnails and lets me choose which ones I want to see done large, I am very happy.  As a regular visitor to his blog, I also know what to expect: Viewing his pictures is invariably worth the effort to bring them to my browser.  I can also compensate for the browser changing the dimensions (resizing) and then resampling the image on me, especially now that Internet Explorer 7 makes it easy to detect and override that behavior.  It doesn’t hurt that I am getting 4 Mbps downloading on my ADSL broadband connection.

Here’s the lesson I want to emphasize:  What Doug is doing is like guessing the ideal bench seat setting for everyone’s car, based on some averaging of ergonomic statistics.  This does not mean his choice is a bad one.  It is just that his analysis doesn’t answer the real question, which is what really works for his readers.  (My using two computers makes it more like bucket seats for very different people, so now what?  Being on wireless on the Tablet changes everything, of course, as if I’ve got a Ferrari and a Vespa in the same garage.)

This is a common pitfall in the design of software and computer-based systems generally.  Useful examples are important to notice.  And yes, the difference between coverage/density, size, and resolution will be on the quiz.  The difference between those technical metrics (gotten right) and what works for the user will be on the final.

 

Tags: orcmid, Doug+Mahugh, software+abstractions, usability, image+resolution, image+size, image+density

 

Specialization is for Insects

I’ve become a regular reader of Oren Eini’s blog, “Ayende @ Rahien.”  Although I don’t toil in the same developer space as Oren, I find that his introspective illustrations of methodology and technique yield little diamonds every day.  His attention to testing is heart-warming.

Today, in a theme-titled post, there is this great observation:

“I expect to see a lot more work going into building non leaky abstractions in the future, and I think that we are getting better and better at it. Furthermore, I believe we will see a lot more emphasis on Not Surprising The Developer.”

I think it is more than that.  We must stop Surprising the User (including the Developer special-case), for all of the same reasons.  It seems to me that plugging-up the abstractions, and knowing how to have them fail appropriately when fail they must, is a perfect agenda for conquering the complexity that we have unleashed on the world in the name of mastering complexity.

There’s a lot more in Oren’s post and I recommend that you digest all of it (and then subscribe to his RSS feed).

Tags: orcmid, Oren+Eini, Ayende+Rahien, abstractions, software+testing, software+fundamentals

 

2007-02-12

Getting to Unicode: The Least That Could Possibly Work

I’m in the process of stabilizing the first beta release of a project.  I’m doing mini-drops of patches that move from 0.50beta (the first beta achieved) to 0.60beta.  Getting from 0.52 to 0.54 involves adding code-page sensitivity to conversion from some native Windows interfaces that are hard-wired for single-byte codes.  I must produce Unicode for use in Java and any other wrapper layers that must work in internationalized settings.

{tags: orcmid software engineering software testing evolutionary development}

In considering this update, I looked at four solutions.  The first solution leaves the single-byte codes exposed, delivering them into buffers of whatever wrapper surrounds my lowest-level native Windows layer.  Solution #1 basically punts the entire problem of correct conversion to all higher levels.   I have a long list of reasons why that is unsavory and putting the job in the wrong place.   Launching myself into architecture orbit, I considered three other solutions.  The fourth completely encapsulates the conversion to Unicode at my deepest integration layer, making it a general solution for whatever kind of wrapper sits above me, whether to interface Java, plain C++, .NET, who knows.  Naturally, I am in love with solution #4.

Last night, I went to sleep with the one last concern on my mind: all of the current unit and regression tests for the bottom layer will no longer work.  They will have to be completely redone for Unicode: all of my tests, their displays and results, filenames, everything that is now conveyed in single-byte code.

This morning, I found the trump card.  With solution #1, the conversion to Unicode with code-page sensitivity happens in exactly the place where I am converting to Unicode without code-page sensitivity.  So no black-box tests have to change.  They simply become regression tests and demonstrations that the single-byte codes outside of the basic ASCII set are coming through properly, something that really matters for the European ISV that is using the result of this work.

So, I am back to solution #1 and its winning qualities:  It is the least change that can possibly work.  It provides running code in the hands of an integrator as early as possible with the least possible destabilization.  It requires additional testing to introduce interesting character codes into the test cases, but all regression-test code works without change.
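The effect of code-page sensitivity can be illustrated in a few lines of Java, even though my actual conversion happens down in the native layer.  The windows-1252 and ISO-8859-1 names below are example code pages for illustration, not necessarily the ones the project deals with.

```java
import java.nio.charset.Charset;

public class CodePageDemo {
    // Decode single-byte text using an explicit code page,
    // rather than assuming the bytes are plain ASCII/Latin-1.
    static String toUnicode(byte[] singleByte, String codePage) {
        return new String(singleByte, Charset.forName(codePage));
    }

    public static void main(String[] args) {
        // 0x93 is a left double quotation mark (U+201C) in windows-1252,
        // but a control character (U+0093) when naively decoded as ISO-8859-1.
        byte[] bytes = { (byte) 0x93 };
        System.out.println(toUnicode(bytes, "windows-1252").equals("\u201C")); // true
        System.out.println(toUnicode(bytes, "ISO-8859-1").equals("\u0093"));   // true
    }
}
```

Bytes in the basic ASCII range decode identically under either code page, which is exactly why the existing black-box tests survive unchanged: only the characters outside ASCII exercise the new sensitivity.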

I wasted a week figuring this out.  I wonder if my hesitancy was because of some nagging sense that I was going down a dangerous path?


I will, at a more convenient later time, be refactoring the lower and intermediate layers of my code as part of hardening and getting as much of the work as possible done at the native, high-performance layer.  This will be at a point where my top-level component interfaces will be locked down and no refactoring will be visible to applications that use the components.   It’ll still be risky to make those changes, but I’ll have painfully-solid regression tests by then.  At that point, I’ll look at approach #4 once again.  I’ll let you know what happens.

 

2007-02-10

Dear Microsoft: No Thanks for the Updates

This morning, I declined to accept the “High Priority” update that Microsoft Update wanted to download and install for me: Visual Studio 2005 SP1.  I already knew this update was available, and I have been waiting for the smoke to clear on how well it handles the Express Editions, especially the Visual C++ 2005 Express Edition. 

{tags: orcmid Microsoft Update Visual Studio VC++ 2005 Express software engineering}

There are enough reports of instabilities and screw-ups around redistributables and other integration edge cases that I want to wait.  In truth, I would wait in any case.  I am in the middle of a project and all of my development tools are working just fine thank-you-very-much.  I am not about to risk destabilization of anything until the project is at a point where everything is delivered, buttoned down, and I have an easy fall-back mitigation in place.  That means no installing hot graphics adapters, new sound fixtures, or upgrading to Vista on my main development machine.

What I will do is obtain the download/installable version, not the one that Microsoft Update installs automatically, and keep it around until I am ready to use it.  Since it must be run once for each VS 2005 flavor that is installed, including each separate Express Edition on each machine, I want one I can keep, backup, move around, and re-run if I do any re-installs in the future.  Hmm, it is interesting that the SP1 does not appear to be in my MSDN distribution for January 2007.  I think I’ll just wait for the little treasure to show up in my postal mailbox, no downloading required.

Someplace around my project’s 0.90 beta (I’m building 0.60beta right now) and the hardening of my project and its regression tests, I will use variations of VC++ 2005, Platform SDKs, and Java SDKs so that I can verify that the source code, scripts, and builds all work on the latest-available tool bases as well as the ones I started with.  After that I can give myself leave to make other interesting upgrades to my systems.

Oh oh, I did download some non-priority upgrades, including .NET 3.0 and new root certificates.  I will know shortly whether that was prudent or not.

[update 2007-02-10T20:37Z: added tags and a cross-reference on security.]

 

2007-02-04

Raymond Chen: What Feature Did You Remove Today?

Raymond Chen’s second book-promotion interview is available.  He hasn’t arranged to make an author book-signing appearance yet, but the topic comes up in this podcast.

{tags: orcmid Raymond Chen Windows software development}

Raymond is a great advocate for all of the reasons that compatibility is maintained between versions of Microsoft Windows.  His stance is so well-articulated that Joel Spolsky used Raymond as an archetype for two kinds of Microsoft Developers: the Raymond Chen Camp (representing backward-compatibility religion) and the MSDN Magazine Camp (representing the out with the old, in with the new spirituality of the day).  You might surmise where my allegiances are. 

I mention this because the interview is by friends of MSDN and Raymond is not quite that enamored of being the poster child for the backward-compatibility camp.  So, naturally, the topic comes up in this interview; for a moment it was more about Joel on Software than The Old New Thing.

Raymond confesses that he has matured beyond building great features to discovering features that can be removed.  (Yes, I am also thankful that the “unused desktop icons” sheriff is banished from Vista.)  This is not exactly a nod to the “other camp,” and it is a very interesting notion.  Anything that reduces the code surface simplifies testing and documentation and deployment and, most-of-all, support.  That sounds like goodness to me and another fine way to have earned a day’s pay.


I don't listen to podcasts much and many are longer than I want to listen through.  I just discovered something, however, about the little Windows Movie Maker utility that comes with every Media Center PC and every useful version of Vista (oops, it won’t run on my only Vista machine right now because I don’t have the right/enough graphics hardware acceleration, which I guess is why my Toshiba Satellite Tablet PC has a Windows Experience base score of 1.0).  Ahem.  What I learned about Windows Movie Maker is that it is perfectly capable of editing audio files (e.g., podcast MP3 files) and resaving them.  In this case, I chopped the first ten minutes off of the TechNet Podcast right at the pause before the Raymond Chen interview begins.  I can see doing some chopping of audio and video downloads where I want to preserve a particular piece.   I’m always happy to save on disk and backup space.

 

More Spolsky Gems: Open-Source, the Desktop, and Supporting Customers

I needed to get back on the rowing machine after avoiding exercise for almost two weeks.  This gave me a chance to listen to the Joel Spolsky interview on the micro-ISV Podcast.

{tags: orcmid Channel 9 Joel Spolsky Michael Lehman Bob Walsh micro-ISV nano-ISV}

Early in the interview, Joel talks about having now graduated to mini-ISV with a dozen employees, growing to 24 when the interns arrive in June.  It is fascinating how much, and how well, Fog Creek does with interns.  I think it provides great resume chops for the interns too and most of all an early experience at successfully delivering product.

In the course of the conversation, Spolsky explains that the original vision for Fog Creek was to produce a comprehensive software [I thought he said “content”] management solution.  Joel now thinks that there is no such distinct category as content management, but out of that vision FogBugz, the main Fog Creek product, emerged.  FogBugz is a bug and feature-tracking package that is an instance of workflow management and was, it seems, intended to be part of the infrastructure for the grander product.

I am fascinated by the following harmonies in how Fog Creek grows as a business.  The first product was something that was useful in its own development and deployment.  It is also valuable to other companies like Fog Creek itself, in providing an important workflow package for their own bug and feature tracking.

The Copilot package for remote on-line assistance came out of a similar situation: making it easier for their customers to install an on-line assistance package so that on-line troubleshooting and assistance could take place.  Copilot exists so that the setup for assistance doesn’t itself become a problem requiring assistance and getting in the way of resolving the customer’s original problem.

The key is how this is all around supporting users and customers and part of establishing a reliable relationship with adopters of Fog Creek products.  It also underscores the message at the end of the interview about needing the business person as well as the developer in a micro-ISV, and needing a clean vertical focus for the initial product(s).

In the discussion of Copilot, Joel talks about the value of developing Copilot as an open source product and how they learned that there is nothing to fear from people cloning the product, after seeing clones of FogBugz come and go.  What cloners failed to copy was the know-how related to application and support of the product.  (I once observed a team’s attempt to clone a purloined compiler onto a different computer.  With no understanding for the principles of operation of the original code and how to unravel and re-engineer the code-generation model for the second computer, there was no useful result of any kind, not even understanding of what failed to be understood.)

There is more discussion about how much ISV staff (at least half, in Joel’s view) must be devoted to non-development activities and how that ratio grows ever greater with larger companies. 

The discussion also turned to Joel’s current view that the web, not the desktop, is the place to do application development these days.  There is presumed to be a hefty environment provided by the browser of course.  He wasn’t asked about the “smart client” case, where deployment is supposed to be as easy as using a browser, but he’s probably right in supposing that Ajax (and perhaps WPF/E, I wonder) makes dedicated desktop applications increasingly unnecessary.  This doesn’t dissuade me as a user, but it gives me pause as a developer.

 

2007-02-03

Is There an MVC in the House?

I think I need to understand Model-View-Controller.  No, I’m pretty certain that I do.

{tags: orcmid Java Windows MVC Model-View-Controller modal dialogs Swing JNI}

My MVC ignorance came up because I have to build Java GUI applications to test some features of a middleware package that impact GUI behavior.  This is a consequence of a highly-successful separation of high-level concerns that leads to a little low-level coordination breakiness.

The two ends of the middleware connection don’t have to know anything about each other, but the back end (which runs in the same process as the front) can put up modal dialogs that apply to the window that belongs to the application on the front end.  The modal part means that the main Window is supposed to not accept input until the dialog is dismissed.  The dialog is supposed to stay atop the application Window until that happens.  It’s not working properly right now, and that is a very serious usability matter.

The breakiness comes in because the front end is a Java application and the back end is operating in native Windows.  The middleware functions are delivered to the application by Java components using JNI.  At the back-end connection, it is all native Windows non-GUI code.

I needed a simple GUI application to use as a front end so I could observe the difficulties with this arrangement and also confirm the repair that I have in mind.

I started out fussing with the first real GUI application in the Swing Tutorial.  As far as GUIs go, the little application with one button and one label is enough.  But I have more elaborate things happening under the covers than just updating a counter every time the button is pressed.

I thought I needed to develop a more elaborate GUI too.  Fortunately, I started explaining this in a buddy call with Bill Anderson.  As the words came out of my mouth, I realized that I was making it too complicated.  I could do just fine with one button and one label to provide a test of what happens when the application underneath the Java GUI does something that provokes a dialog from the back end.

I still needed to refactor the Swing Tutorial example so that I could make sense of it for myself and be clear on where it is appropriate to add the application and extend the Controller part.  I often learn an application by refactoring until it makes sense to me and I can explain it to myself and others.  I did that with the single Java file and it is now larger by the amount of annotation I added, along with the logic that my test application requires. 

One problem with the example being in a single Java file is that it obscures the separation among the methods and my (or your) understanding which of them are only used in particular levels of the GUI operation.  A large part of the code is for initialization, and only one method and some instance data are involved in operation of the running application.  Sorting out the roles of the code is exacerbated by the little trick of having a static method of the single-class application fire up an instance of that very class and use it to constitute the view and also provide the Controller (i.e., the ActionListener implementation).   This is a useful device, here, but it further obscures the separation of function and the relevance of individual methods.  (People who see this trick tend to use it everywhere, and that makes for some really stupid Hello World demonstrations where it adds no value to understanding Java whatsoever.)

With the refactoring completed, I think I have a sense for a simple Model-View-Controller situation.  At least my test application works properly and demonstrates the known coordination problem in the middleware. 
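Stripped of Swing, the shape I ended up with can be sketched as plain classes.  Every name below is invented for illustration; in the real tutorial example the view is a JLabel updated from the observer and the controller is an ActionListener wired to the button.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Model-View-Controller sketch of the one-button, one-label shape.
public class MiniMvc {
    // Model: owns the state; knows nothing about views or controllers.
    static class CounterModel {
        private int count;
        private final List<Runnable> observers = new ArrayList<>();
        void addObserver(Runnable r) { observers.add(r); }
        void increment() {
            count++;
            observers.forEach(Runnable::run); // notify views of the change
        }
        int getCount() { return count; }
    }

    // View: renders the model; here it just maintains a label string.
    static class LabelView {
        String text = "presses: 0";
        LabelView(CounterModel model) {
            model.addObserver(() -> text = "presses: " + model.getCount());
        }
    }

    // Controller: translates a UI event ("button press") into a model call.
    static class ButtonController {
        private final CounterModel model;
        ButtonController(CounterModel model) { this.model = model; }
        void onButtonPressed() { model.increment(); }
    }

    public static void main(String[] args) {
        CounterModel model = new CounterModel();
        LabelView view = new LabelView(model);
        ButtonController controller = new ButtonController(model);
        controller.onButtonPressed();
        controller.onButtonPressed();
        System.out.println(view.text); // presses: 2
    }
}
```

The point of the separation is that the Model is the one place to graft on the real application behavior, with the view and controller left untouched.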

My buddy Bill says MVC breaks his brain, and examples seem to fall down somehow.  He’s not the only one that is puzzled when MVC is claimed yet you can’t find one of the pieces (usually the M) nicely separated out.  I think I get it for my simple case.  The sense I make of it also seems to conform to the situations Ayende describes.  Having struggled to separate out the parts of an actual, simple case, even the Smalltalky explanation kinda sorta makes sense, although that is very abstract in contrast with the perhaps-overly-concrete instances that come up in comments to Ayende’s posts.  In contrast, the effort to characterize MVC in Design Patterns leaves me with my eyes crossed and ringing in my ears.

I am happy for now, although I think I will have to know far more when I write the guidance for use of my middleware fixture by Java applications. 

I already see where I am going to have a serious challenge in raising my understanding to a level where I can provide realistic examples that others can grok.  Because operations of the capital-M Model can lead to dialogs and user interaction at the back end, calls into the Model from event-loop entries of the Controller (although nicely thread safe) can take an indefinite time before returning control to the event loop.  This will also block the Java event-handler thread while a back-end dialog is up, and that means no front-end windows will be refreshed.  So I might have to provide guidance on having Model operations carried out on a separate thread from the Controller procedures. 
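That guidance might be sketched like this, with a plain worker thread and a callback standing in for what Swing would do with SwingWorker and invokeLater.  All names are invented for illustration and the Model operation is a stub for the kind of call that can block on a back-end dialog.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: run a (possibly blocking) Model operation off the event-handling
// thread, handing the result back through a callback, so the event loop is
// never stalled while a back-end dialog is up.
public class OffThreadModelCall {
    interface Callback { void done(String result); }

    // Stub for a Model operation that may block indefinitely
    // (e.g., awaiting user interaction with a back-end dialog).
    static String slowModelOperation() {
        return "model result";
    }

    // Controller helper: dispatch the Model call to a worker thread.
    static void callModelAsync(Callback cb) {
        new Thread(() -> cb.done(slowModelOperation())).start();
    }

    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);
        AtomicReference<String> out = new AtomicReference<>();
        callModelAsync(result -> { out.set(result); latch.countDown(); });
        latch.await(); // a real event loop would keep pumping events instead
        System.out.println(out.get());
    }
}
```

In a Swing setting the callback would be wrapped in SwingUtilities.invokeLater so that any view update happens back on the event-dispatch thread.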

In that event I’ll get to worry about providing guidance on the thread-death oblivion chasm and how to avoid falling into it while also preserving dialog modality.

Aieeeeeeee.


You might surmise that I am not a GUI kinda guy.  You’d be right.  Web pages, yes.  GUI, no.

All of my work on this project has been tested and demonstrated using console applications, whether Java-based or for native Windows.  I have used command-line compiles and utility operations for the entire project.  My one concession to the modern era is to use jEdit for my Java, C++, and batch scripts.  I’m using the Visual C++ 2005 Express Edition command-line compiler and running naked through the Windows Platform SDK.  I could have used the VC++ IDE, but I was using jEdit for the Java bits anyhow and it was simpler to stay in one editor. I also use Visual SourceSafe and the VS 2005 Express Editions don’t integrate with VSS so there is not much gain to using VC++ projects (although my file organization is compatible with someone doing that).  When I put the software up on SourceForge, I will add Subversion to the mix.

I always wanted to do GUI work, and I will be doing some more-serious forms of it.  I just didn’t want that learning curve in the middle of this project.  I am thankful that, for my testing requirements, the modified one-button tutorial example is the least that actually works.  I shall now dive into my hand-rolled custom-factory COM code where I am completely at home.  Different strokes for different folks, as they say.

[update: 2007-02-04T17:57Z the last paragraph needed some tweaking and I couldn’t stand to leave it alone.]

 

Nano-ISV Are I, Are I

I’m a nano-ISV.  I don’t have a better way to define it.   My ISV-ness is smaller than what a micro-ISV endeavors to become.  I’m uncertain that I have the moxie or the ambition to move to the micro-ISV level at this stage of my vocation.

I have an open mind about it, although I have to be careful to keep my eye on what is important to me.  That is being a scholar, writer, and software contributor, not necessarily a rip-snorting product-selling micro-ISV operation.  But having a self-sustaining operation (and more) is important, and that means having customers of some fashion.

{tags: orcmid Channel 9 Joel Spolsky Michael Lehman Bob Walsh micro-ISV nano-ISV podcast}

Thanks to a tip-off from Joel Spolsky, I learned that podcaster and Microsoft developer evangelist Michael Lehman specializes in micro-ISVs and is co-hosting the MicroISV show on Channel 9.  How come I have to read a blog to find out the neat things my acquaintances are up to? 

The great thing about the MicroISV show is that their 10th show (who knew?) launches a new format and the interviewee is none other than Mister Spolsky. 

About the format: There are published transcripts as well as downloadable versions of the podcasts.  That is very great because I can scan text a lot faster.

About Joel’s advice to micro-ISVs at the very end:

  1. There needs to be at least two of you.  I get that.  As a solitary developer, even at the nano level, it is very difficult to stay on top of important matters that require different kinds of attention: writing code for that next beta drop versus paying the monthly bills and moving a domain registration, for example, or managing the project rather than doing the project.  I need to do something about that.  It might not be an additional person in the business; maybe it is some sort of associate arrangement, buddying up for mutual assistance.  I have a critical need for a sounding board and for someone to help me balance my attention.  I’d be willing to share some revenue to make that happen, once there’s any revenue worth mentioning.  
      
  2. Find a niche.  I’m fortunate in that I have a couple of those.  But it is reassuring that this is an appropriate way to maintain focus.  I also notice that it provides a way to stay concrete and not get lost in the conceptual clouds that I am easily prone to (and that I owe Joel for too).

 

2006-12-15

Amazon.com: Your Order Has Shipped - Joy to Book Rats

I, like Bill Gates, am hard to buy gifts for.  Mostly because I just go ahead and order what I want for Christmas, not because I already have everything, although in my world I want for little.  At the moment I am struggling with tech envy and saving up my American Express reward points for a Nikon digital camera, or a digital video camera, or a podcast-quality portable recorder, or a SanDisk MP3 player, or …  Fortunately I am hoarding my points, and that is keeping me from foolish impulse purchases.  Well, that Sansa e280 is giving me the eye and it takes courage to ignore it.  I wonder if it qualifies as a thumb drive for Vista ReadyBoost? …

Then there are books.  Books are my downfall.   And certain software.  Books and software.

{tags: orcmid software books Charles Petzold Jeffrey Richter Kathy Sierra Bert Bates Raymond Chen}

Two days ago I received one of those wonderful non-spam notices that a book order had shipped.  I was particularly thrilled because I had it in my mind that Raymond Chen’s new book, The Old New Thing: Practical Development Throughout the Evolution of Windows, must have shipped early.  Not so.  But I had forgotten about the other books that I had gleefully ordered and that had been consolidated in one super-saver shipment.  They arrived this morning via an early USPS delivery:

  • Charles Petzold, Applications = Code + Markup: A Guide to the Microsoft Windows Presentation Foundation.  I’m a big fan of Petzold’s minimalist approach to mastery of foundation concepts and this is one of those “kids, collect the whole set” exercises on my part.
  • Jeffrey Richter, CLR via C#, ed.2.  As much as I may understand the CLR intellectually, I need someone to hold my hand through the reality of coping with .NET development of deployable software, and this looks to be exactly what I need.  I ordered the book entirely on speculation based on the glowing recommendation of Charles Petzold.
  • Kathy Sierra and Bert Bates, SCJP Sun Certified Programmer for Java 5 Study Guide (exam 310-055).  I have not managed to get my head around Head First.  For me the Java documentation and tutorials are perfect.  Exam study guides (when they are not terrible) are some of the best resources for technical mastery of a subject that I have found.  Now that Java 6 is shipping, it is time for me to get on top of Java 5.  I trust Kathy Sierra and the amazon.com customers on this one.

This may give you some sense for the variety of my interests, but probably not much insight into what I might actually be working on at the moment.  And if my sponsor is reading this, be assured that I took time out to blog with one ear tuned to a webcast on 2007 Office System server-side capabilities.

According to another friendly announcement in this morning’s e-mail, I am about to receive delivery of Microsoft Expression Web (upgrade edition).  This will be the test for whether Vicki and I can really abandon Microsoft FrontPage for our various web sites.  They’ve all been rehosted on Apache servers recently, so it will be interesting to see what works (although the development server is still IIS with FrontPage extensions and Visual SourceSafe).

I will be excited to report on what I’m busily at work on at another time.  I needed mostly out-of-print books for the project (and Raymond’s blog, of course), and those I didn’t have already were found through amazon.com referrals to sellers of remaindered and used books.

 

2006-11-30

Tweaking, Tweaking, Tweaking, Roll-on ...

This is a brief post to tweak my technorati blog claim.  I made similar adjustments to the companion Orcmid’s Lair and Numbering Peano sites.

This post exists to cause the adjusted template to be used in regenerating the main page and, from now on, the individual-article and archive pages.

Because I don’t republish previous pages except for comments and content updates, there are still many technorati tags that involve the now-unclaimed technorati locations.

It may be that technorati is smart enough to find the right places.  On the other hand, this is an opportunity to do something I always wanted to try: hook into the FrontPage extensions on my home-office development server and make content changes by search-and-replace in the blog-directory HTML files.  I don’t know when I will get around to that, so there may be considerable but less-than-visible breakage until I do.
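For the record, the kind of pass I have in mind could be sketched roughly like this.  This is a hypothetical Python sketch, not the actual procedure: the folder layout, the function name, and the old and new tag URLs are all stand-ins.

```python
from pathlib import Path

# Stand-in URLs: the real old and new technorati locations would go here.
OLD_TAG_BASE = "http://technorati.com/tag/"
NEW_TAG_BASE = "http://technorati.com/tags/"

def retag(blog_root: str, dry_run: bool = True) -> int:
    """Rewrite old technorati tag URLs in every .htm/.html file under
    blog_root.  Returns how many files contain (or would contain) changes.
    With dry_run=True nothing is written, so the scope of the damage can
    be surveyed before committing to the rewrite."""
    changed = 0
    for page in Path(blog_root).rglob("*.htm*"):
        text = page.read_text(encoding="utf-8", errors="replace")
        fixed = text.replace(OLD_TAG_BASE, NEW_TAG_BASE)
        if fixed != text:
            changed += 1
            if not dry_run:
                page.write_text(fixed, encoding="utf-8")
    return changed
```

One run with dry_run=True reports how many pages would be touched; a second pass with dry_run=False applies the rewrite.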

Meanwhile, there is visible breakage on my sites as a result of moving from Windows-based servers to Apache, going from a case-insensitive web host to a case-sensitive one.  I am slowly getting my head around that.  There is no automatic solution, because many pages are cross-linked using different capitalizations, and I will have to think much harder about

  • not adding to the grief and cleaning up as I update pages in the normal course of events
      
  • finding some scheme for a mass review and identification of pages, and of links to them, that require some link straightening and orthodonture.  Hmm, maybe I should look in Dr. Dobb’s?
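As a first cut at that mass review, a small script could inventory the files on disk and flag every internal link whose capitalization disagrees with the actual file name: harmless on IIS, a broken link on case-sensitive Apache.  A hypothetical Python sketch, not the actual tooling; it assumes plain href="..." links and resolves each one against its page’s own directory:

```python
import re
from pathlib import Path

# Internal links only: anything containing ":" (http:, mailto:) is skipped.
HREF = re.compile(r'href="([^":]+)"')

def case_mismatches(site_root: str):
    """Return (page, link-as-written, actual-name) triples for internal
    links whose capitalization differs from the file actually on disk."""
    root = Path(site_root)
    # Map lowercased relative path -> the path's real capitalization.
    actual = {}
    for p in root.rglob("*"):
        if p.is_file():
            rel = p.relative_to(root).as_posix()
            actual[rel.lower()] = rel
    problems = []
    for page in root.rglob("*.htm*"):
        base = page.parent.relative_to(root)
        for link in HREF.findall(page.read_text(encoding="utf-8",
                                                errors="replace")):
            # Resolve the link relative to the page's directory and
            # normalize "." and ".." without touching the filesystem.
            parts = []
            for part in (base / link).as_posix().split("/"):
                if part == "..":
                    if parts:
                        parts.pop()
                elif part not in ("", "."):
                    parts.append(part)
            target = "/".join(parts)
            real = actual.get(target.lower())
            if real is not None and real != target:
                problems.append((page.relative_to(root).as_posix(),
                                 link, real))
    return problems
```

The output is a worklist rather than an automatic fix, which suits the situation: each mismatch still needs a human decision about whether to rename the file or straighten the link.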

 
 
Construction Zone (Hard Hat Area) You are navigating the Blunder Dome

template created 2004-06-17-20:01 -0700 (pdt) by orcmid
$$Author: Orcmid $
$$Date: 06-11-30 21:40 $
$$Revision: 20 $

Home