Genii Weblog

When is a bug not worth fixing?

Tue 28 Sep 2004, 01:11 PM

by Ben Langhinrichs
It seems so simple to software users.  There should be no bugs, no limitations, and no difficult workarounds.  I have been a software user for years, and it certainly feels like that to me.

But how does a software vendor decide whether a bug is worth fixing?  That is something I struggle with frequently.  The hard cases are usually ones where few users ever hit the bug, a fairly clean workaround exists, and the fix is likely to cause more trouble than the original bug.

A customer's agent was copying tables and then deleting rows from them.  The agent was not crashing, but it reported an odd error:

MIDAS: Unable to save additional deleted PAB definitions.

and the rows were losing justification after the delete.  Now, this sounds like a concrete, repeatable bug, the kind I like to fix right away.  So the customer sent a sample, which included the code:

Set rtchunk = rtitem.DefineChunk("Table *;Row 1-" +Str( FIRST_ROW-1 ))
Call rtchunk.remove

My first reaction is, "Cool!", because I love it when somebody actually uses the more complex variations of chunk definitions.  Unfortunately, that is also the problem.  FIRST_ROW can be 8, and there are as many as 12 columns, so deleting rows 1 through 7 touches 7 x 12 = 84 cells in each table, with an unknown number of tables.  Internally, we store paragraph definitions for the separate cells to determine whether they will be re-used later, and the limit is 100, which is almost always plenty.  Just not this time.
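To make the arithmetic concrete, here is a toy model of the situation in Python.  This is not Midas internals, just a sketch under the post's stated assumptions: a parallel-array store of per-cell paragraph (PAB) definitions with a hard capacity of 100, which overflows once two 84-cell deletions pass through it.

```python
PAB_LIMIT = 100  # the internal limit described in the post

class PabStore:
    """Toy stand-in for the parallel-array store of deleted PAB definitions."""
    def __init__(self, limit=PAB_LIMIT):
        self.limit = limit
        self.pabs = []

    def record(self, pab_def):
        # Once the fixed-size array is full, nothing more can be saved.
        if len(self.pabs) >= self.limit:
            raise OverflowError(
                "MIDAS: Unable to save additional deleted PAB definitions.")
        self.pabs.append(pab_def)

# Deleting rows 1-7 of a 12-column table touches 7 * 12 = 84 cells.
store = PabStore()
cells_per_table = 7 * 12
try:
    for table in range(2):                  # two such tables in one pass...
        for cell in range(cells_per_table):
            store.record((table, cell))     # ...would need 168 slots
except OverflowError as err:
    print(err)  # the second table blows past the 100-slot limit
```

Running this prints the error after exactly 100 definitions are stored, which is why one table (84 cells) works fine and a second one does not.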

The workaround is simple for this user, which is to cycle through the tables:

Set rtchunk = rtitem.DefineChunk("Table 1")
Do
   Set rtrow = rtchunk.DefineSubchunk("Row 1-" + Str( FIRST_ROW-1 ))
   Call rtrow.remove
Loop Until Not rtchunk.GetNextTarget

and if that weren't enough, you could break it down further with an inner loop over each row.  This works, but obviously I wish customers didn't have to worry about this stuff.

Possible solutions
I could simply raise the limit from 100 to 255.  That would still only allow the user three tables, and going any higher would add a lot of overhead to deletions.  I could switch to a linked list approach, but that is a lot of change for this purpose, since I use parallel arrays.  I could rewrite the logic so that duplicate PAB references are not stored multiple times, which is the correct solution, but I worry about implications I haven't thought of.  So for now, I'll swallow my pride, accept another limitation, document it somehow, and hope that one day soon I'll get tired of it and fix it properly.  In the meantime, like all the Notes "oddities", this remains a Midas "oddity".  I don't imagine the IBM engineers like leaving theirs around either, but the risk of change may outweigh the risk of leaving bad enough alone.
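To illustrate why deduplication is the correct solution, here is a small sketch.  The `record_dedup` helper is hypothetical, not Midas code; it assumes that most deleted cells share a handful of paragraph formats, so storing each distinct definition once and keeping only indices into the store stays far under any fixed limit.

```python
def record_dedup(store_list, index_list, pab_def):
    """Store pab_def only if unseen; always record an index reference to it."""
    try:
        idx = store_list.index(pab_def)   # already stored? reuse its slot
    except ValueError:
        idx = len(store_list)             # otherwise store it once
        store_list.append(pab_def)
    index_list.append(idx)

pabs, refs = [], []
# 2 tables x 84 cells = 168 deletions, but only 3 distinct formats among them
for cell in range(168):
    record_dedup(pabs, refs, cell % 3)
print(len(pabs), len(refs))  # prints: 3 168
```

With duplicates collapsed, 168 deleted cells consume only as many slots as there are distinct formats, so the 100-entry array would never have filled in this customer's case.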

Update: A fix has been made (that certainly didn't take long, did it?).  See Fewer bugs, less sleep for details.

Copyright 2004 Genii Software Ltd.
