I’ve been working through Handbook of Practical Logic and Automated Reasoning by John Harrison. I enjoy translating its OCaml listings to Haskell: the two languages have much in common, so I can devote most of my attention to experimentation and exploration.
I largely focused on aesthetics. With the help of pattern synonyms, recursion schemes lead to succinct code for manipulating abstract syntax trees. Monads and typeclasses reduce clutter. Lazy evaluation simplifies tasks such as enumerating all ground instances: we produce a never-ending list rather than managing a drip-fed enumeration with a counter.
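For instance, here is a minimal sketch of that lazy enumeration. The `Term` type and `groundTerms` function are simplifications I’m assuming for illustration, not the book’s definitions: each function symbol is paired with its arity, and the constants are the arity-0 symbols.

```haskell
import Control.Monad (replicateM)
import Data.List ((\\))

data Term = Fn String [Term] deriving (Eq, Show)

-- All ground terms as a single never-ending list, shallowest first.
groundTerms :: [(String, Int)] -> [Term]
groundTerms funcs = concat (zipWith (\\) cumulative ([] : cumulative))
  where
    constants = [Fn c [] | (c, 0) <- funcs]
    -- cumulative !! n = every ground term of depth at most n.
    cumulative = iterate deepen constants
    deepen ts =
      constants ++ [Fn f args | (f, k) <- funcs, k > 0, args <- replicateM k ts]
```

A consumer simply takes as many instances as it needs, e.g. `take 5 (groundTerms [("0", 0), ("s", 1)])`; no counter needs to be threaded through the code.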
As I’ve come to expect from Haskell, my code frequently just worked, with little or no debugging. But success bred suspicion: my code worked too well!
Flattening the competition
My first MESON port solved Schubert’s Steamroller in about 15 seconds on my laptop in GHCi. I was pleased, but the end of section 3.15 showed how to distribute the size bound over subgoals more efficiently, yielding a faster MESON that proves the steamroller "in a reasonable amount of time".
Wow! Did the author feel 15 seconds was sluggish? Would this optimization bring the time down to 5 seconds? Half a second? I eagerly implemented it to find out.
The change ruined my program. It crawled so slowly that I interrupted it, afraid it would eat all my system’s resources. In desperation, I downloaded the original OCaml code to investigate why I was experiencing the opposite of what the book said. I expected to be awed by its speed, after which, in a fit of jealousy, I’d figure out how I’d botched my rewrite.
Instead, I was shocked to find the OCaml version was even worse. In other words, my supposedly unoptimized MESON was an order of magnitude faster than the most advanced MESON in the book. But how? I had merely translated from one language to another, almost mechanically. Surely bugs were to blame.
After spending hours looking for them in vain, it dawned on me that my code might be correct after all, and I had fortuitously stumbled upon effective optimizations. Further analysis supported this: I now believe my implementation of MESON legitimately outperforms the book version due to lazy evaluation.
Because of how we use continuations, lazy evaluation memoizes expensive computations, so they are not repeated during backtracking. It calls to mind a suggestion in the book to "somehow remember lemmas encountered earlier in proof search". The sophisticated size-bound distribution threads an extra parameter through the continuations, hampering the reuse of those memoized results; this explains why a purported optimization crippled my program.
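The actual continuations are too involved to show here, but a toy sketch conveys the underlying sharing. Suppose a subgoal’s candidate proofs arrive as a lazy list (hypothetical names; nothing below comes from the book): once a prefix of the list has been forced, revisiting it costs nothing.

```haskell
import Debug.Trace (trace)

-- Each candidate is expensive to produce; `trace` logs when it is computed.
solutions :: [Int]
solutions = [trace ("expanding candidate " ++ show n) n | n <- [1 ..]]

main :: IO ()
main = do
  print (take 3 solutions)  -- forces candidates 1..3; traces appear
  print (take 3 solutions)  -- reuses the already-forced prefix; no traces
```

In a strict language, the analogous function call would recompute each candidate from scratch; in Haskell, `solutions` is an ordinary value whose forced elements persist across revisits.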
Normally, lazy evaluation surprises me with an unpleasant space leak. I’m grateful that for once it surprised me by dramatically boosting performance!
Got something to prove?
Thanks to the Asterius GHC WebAssembly backend, we can use a web browser to confirm the unreasonable effectiveness of a lazy MESON:
Click on "Presets", select "steamroller", then click "Lazy MESON".