The same 'problem' occurs in symbolic computation: the zero-equivalence problem for constants is well known to be undecidable. In practice, however, the constants that appear in real problems (i.e., ones that arise from physical models) are very easy to prove non-zero. The constants that are difficult to handle tend to arise in idealized settings.
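(For concreteness, a standard illustration of why purely numerical testing cannot settle such questions: e^(pi*sqrt(163)) differs from the integer 262537412640768744 by less than 10^-12, so the constant

    e^(pi*sqrt(163)) - 262537412640768744

is non-zero even though any modest-precision evaluation will report it as zero.)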
Now that I have also started using formal methods to prove my code correct, I have encountered the same thing: most meaningful programs can be proven correct (given the right tools), but I can also construct short, silly programs that none of the standard tools can handle.
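For concreteness, here is a minimal OCaml sketch of the kind of short program I have in mind (the classic Collatz example, not tied to any particular tool): proving that it terminates for every positive input would settle the Collatz conjecture, so automatic termination checkers cannot handle it, even though it runs fine on any input you try.

  (* A sketch: termination of this function for all n >= 1 is
     exactly the Collatz conjecture. *)
  let rec collatz n =
    if n <= 1 then ()
    else if n mod 2 = 0 then collatz (n / 2)
    else collatz (3 * n + 1)

  let () = collatz 27  (* terminates quickly, but a proof for all n is open *)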
I also see an analogy with type systems such as OCaml's: in theory, type inference is exponential in the worst case, while in practice it is very fast. This is because the worst cases are highly degenerate and do not tend to occur in common, meaningful programs.
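A minimal sketch of such a degenerate case (standard ML folklore, nothing exotic): each definition below squares the size of the inferred type, so the principal type of f4 already mentions the type variable 'a 65536 times, while the program itself grows only linearly.

  (* f0 : 'a -> 'a * 'a; each f_(i+1) applies f_i twice,
     so the inferred result type squares in size at every step. *)
  let f0 x = (x, x)
  let f1 x = f0 (f0 x)
  let f2 x = f1 (f1 x)
  let f3 x = f2 (f2 x)
  let f4 x = f3 (f3 x)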
Consider the above as a meta-observation based on my experience, first as
Sr. Architect at Maplesoft, now as a researcher applying formal methods to
computer algebra.
Jacques
I have just been reviewing some papers by Greg Chaitin on Algorithmic Complexity Theory, in which he boldly states:

"Similarly, proving correctness of software using formal methods is hopeless. Debugging is done experimentally, by trial and error. And cautious managers insist on running a new system in parallel with the old one until they believe that the new system works."

from http://www.cs.auckland.ac.nz/CDMTCS/chaitin/omega.html
He goes to great lengths to discuss the halting problem and its implications for proving the correctness of algorithms.

I wonder, as a non-specialist in this area, how the goals of FPL square with this result?
David McClain
Senior Corporate Scientist
Avisere, Inc.
david.mcclain@avisere.com
+1.520.390.7738 (USA)