## Thursday, March 14, 2013

### MathJax

About a decade ago, I began putting my notes on my homepage for the reasons cloud computing proponents love to spout (though I did it without uttering any buzzwords).

But I hit a snag. How do I put equations on the web? Among the many awful workarounds, I picked the one which I thought was noblest: MathML. My pages would be static content, operable without JavaScript. Text is far slimmer than images, and far more agreeable to things like searching. As for PDF? Over my dead <body> element!

I was optimistic back then. Mozilla supported MathML provided you also downloaded a font or two, and despite the crushing dominance of Internet Explorer, I felt that righteous Free Software would ultimately triumph. One day, I hoped, a typical browser would render my site perfectly, out of the box.

Turns out my predictions were half right. The web broke free of Internet Explorer’s chokehold. Now, more often than not, we use open source browsers. And one of them, Firefox, supports MathML out of the box.

However, my mathematics notes still render incorrectly on most browsers. Popular search engines appear to shun them, possibly because I zealously followed the arcane XHTML 1.1 plus MathML guidelines. And everything supports JavaScript.

Maybe they’re all going to support it real soon, but ten years is too long for me. I switched to MathJax, a clever JavaScript library that figures out what your system can do, then renders the equations using an appropriate technique. It just works.

## Monday, February 18, 2013

I’ve been leafing through a few books on probability, a subject I’ve mostly avoided since undergrad. I originally thought I’d just refresh what I had already learned, but to my surprise I was led to reconsider fundamental beliefs. What follows is my journey, told via book reviews.

### Hexaflexagons and Other Mathematical Diversions by Martin Gardner

As a kid, I devoured this book and the others in the series, which I later learned were collections of Mathematical Games columns from Scientific American magazine. I didn’t always understand the material, and the puzzles were often too difficult, but Gardner’s writing skill kept me reading on.

Among the many fascinating chapters was “Probability Paradoxes”. Gardner’s ability to communicate was so strong that after many years I still remember much of the content. In particular, he asked:

> Mr. Smith says, "I have two children and at least one of them is a boy." What is the probability that the other child is a boy?

and his explanation of 1/3 being the correct answer not only stuck in my mind, but shaped my early views on probability. For the details, see this New Scientist article on a Martin Gardner convention.

Only a few years ago, after a debate with a friend, did I reconsider the reasoning. It turns out Gardner’s statement of the problem is ambiguous. This revelation sparked a desire to hit the books and brush up on probability one day.

### A Primer of Statistics by M.C. Phipps and M.P. Quine

The second edition of this slim volume was the textbook for my first course on probability. I used it to cram for exams. For this purpose, it was good: I got decent grades.

Sadly, it wasn’t as good in other respects. I acquired a distaste for the subject. Why did Probability and Statistics seem like a bag of ad hoc tricks, with few explanations given? Do I have poor intuition for it? Or is it glorified guesswork that seems to work well enough with real-life data? Whatever the reason, I decided that for the rest of my degree I’d steer towards the Pure Mathematics offerings.

### The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t by Nate Silver

My renewed interest in probability was sparked by the United States presidential election of 2012, or rather, its aftermath. Many had predicted its outcome but few were accurate.

It was only then I read about Nate Silver, who turned out to have been famous for his prowess with predictions for quite some time. Eager to learn more, I thumbed through his bestseller.

Though necessarily light on theory, the equations that do appear are correct and lucidly explained. Also, the pages are packed with interesting data sets and anecdotes. General pronouncements are often backed up with concrete tables and graphs, though, as Silver readily admits, some qualities are difficult to quantify, resulting in potentially dubious but novel yardsticks (such as measuring scientific progress by average research and development expenditure per patent).

But most of all, I was intrigued by the tale of an ongoing conflict that I never knew existed, with frequentists on one side and Bayesians on the other. They never told me this in school!

I soon found out why: Silver states that Fisher may be almost single-handedly to blame for the dominance of frequentism, the ideology foisted upon me when I was just out of high school. Sure enough, I went back and confirmed that Phipps and Quine list Fisher in the bibliography.

### Against the Gods: The Remarkable Story of Risk by Peter L. Bernstein

My dad told me about this book. Technical details are scant, as it is also aimed at the general public. But in contrast to Silver’s work, the little that appears is laughably erroneous. In some sections, I felt the author was trying to trick himself into believing fallacies.

The misinformation might be mostly harmless. Those with weak mathematical ability are going to skip the equations out of fear, and those with strong mathematical ability are probably also going to skip them because they already know them.

But conceivably this book could be a gifted reader’s first introduction to probability, and it’d be a shame to start off on the wrong foot. As a sort of public service, I’ll explain some of the gaffes.

#### Exercises

Chapter 6 contains an example expected value calculation involving a coin flip.

> We multiply 50% by one for heads and do the same for the tails, take the sum---100%---and divide by two. The expected value of betting on a coin toss is 50%. You can expect either heads or tails, with equal likelihood.

Why is this wrong? How can we fix it?

The next example involves rolling two dice.

> If we add the 11 numbers that might come up…the total works out to 77. The expected value of rolling two dice is 77/11, or exactly 7.

Why is this wrong? How can we fix it?

#### What’s the difference?

Bernstein and Silver offer competing reasons why modern civilization differs from the past. Bernstein singles out our relatively newfound ability to quantify risk, and also suggests that key intermediate steps could only have occurred at certain points in history due to the overall mood of the era.

In contrast, Silver seems to place most importance on the printing press. In an early chapter, Silver suggests that after some teething trouble (lasting 330 years), the printing press paved the way for modern society. Apart from distribution of knowledge, perhaps more importantly the printing press helped with the preservation of knowledge; previously, writing would often be lost before it could be copied.

I’m inclined to side with Silver, partly because of Bernstein’s basic technical mistakes. After observing how fast and loose Bernstein was playing with mathematics, I’m tempted to believe some of his statements are gut feelings.

There is another glaring difference. Bernstein’s book lacks any mention of the frequentist-Bayesian war. Fisher’s name is conspicuously absent.

#### For or Against?

Against the Gods is riveting. My favourite feature is the backstories of famous scholars. For some of them, before reading the book, the only thing I knew about them was their names, and I would have known even less if those names weren’t attached to their most famous discoveries (or at least, discoveries vaguely connected with them). Learning about their lives, motivations, temperaments, beliefs, and so on was illuminating. An intellectually superior form of gossip, I suppose.

However, the elementary mathematical mistakes ultimately cast a cloud of suspicion over the book. How reliable are the author’s assertions in general? Although I heartily recommend Against the Gods, I also recommend thorough fact-checking before using it as a reference.

So a tip for bestseller authors: if a section is technical, then ask an expert, be an expert, or cut it out. Too many howlers make readers like me wary of the whole, no matter how well-written and accurate the non-technical parts are.

As Bernstein himself implies, an expected value is a weighted average. We need weights, and we need numbers to sum. It takes two to tango; the expected value dance can only proceed if probabilities are accompanied by values.

One example neglects the values, and the other neglects the probabilities. The author only computes the sum of the weights for the coin flip, and the sum of the values for the dice roll. In both cases the author divides by the number of outcomes, which might be considered another error: we already divided by the number of outcomes to compute the weights (probabilities) in the first place.

Why are these blunders amusing? For the coin example, let’s ignore that the expected value is confused with a probability. Instead of a coin, consider winning the lottery. The probability of winning the lottery plus the probability of not winning the lottery sums to 100%. Dividing this by the number of outcomes, i.e. 2, yields 50%, so apparently we win or lose the lottery with equal likelihood! It’s almost like saying “either it happens or it doesn’t happen, so the chance it happens is 50%”.

For the dice example, imagine rolling 2 loaded dice, both of which almost always show 6. The expected value should be close to 12, but because the probabilities are completely ignored, the author’s procedure leads to the same expected value of 7. Surely your calculation should change if the dice are loaded?

How do we fix these problems? For the dice example, the author supplies the correct method in the very next paragraph. At last, both the probabilities and values are taken into account. Unfortunately, the author then concludes:

> The expected value…is exactly 7, confirming our calculation of 77/11. Now we can see why a roll of 7 plays such a critical role in the game of craps.

This should have never been written. The first sentence suggests both methods for computing the expected value are valid, when of course it just so happens the wrong method leads to the right answer.

The second sentence is difficult to interpret. Perhaps uncharitably, I’m guessing the sentence is an upgraded version of: “Look! Here’s a 7! Didn’t we see a 7 earlier?” What would have been written if we rolled a single die? The expected value is 3.5, but a roll of 3.5 obviously has no role in any game we play with one die.

As for fixing the coin example: computing an expected value requires us to attach a numerical value to each outcome. One does not simply plow ahead with “heads” versus “tails”. We need numbers; any numbers. We could assign 42 to heads, and 1001 to tails; here, the expected value of a fair coin toss would be 50% of 42 plus 50% of 1001, which is 521.5. Typically we pick values relevant to the problem at hand: for instance, in a game where we earn a dollar for flipping heads, and lose a dollar for tails, we’d assign the values 1 and -1 (here, our expected winnings would be 0).

[It may be possible to reinterpret the coin example as assigning the value 1 to both heads and tails. But if this were done, the expected value should also be 1, not “50%”. Furthermore, we learn nothing if the outcomes are indistinguishable.]
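To make the correct recipe concrete, here is a small Go sketch of my own (the loaded-die probabilities are made-up numbers for illustration): the expected value pairs every probability with a value, and changing the probabilities changes the answer, as it should.

```go
package main

import "fmt"

// expect returns the weighted average: the sum of probability × value.
func expect(probs, vals []float64) float64 {
	e := 0.0
	for i, p := range probs {
		e += p * vals[i]
	}
	return e
}

func main() {
	// Coin game: win $1 on heads, lose $1 on tails, each with probability 50%.
	fmt.Println(expect([]float64{0.5, 0.5}, []float64{1, -1})) // 0

	// Sum of two fair dice: weight each total 2..12 by its count out of 36.
	counts := []float64{1, 2, 3, 4, 5, 6, 5, 4, 3, 2, 1}
	probs := make([]float64, 11)
	vals := make([]float64, 11)
	for i, c := range counts {
		probs[i] = c / 36
		vals[i] = float64(i + 2)
	}
	fmt.Printf("%.1f\n", expect(probs, vals)) // 7.0

	// Loaded dice (made-up numbers): each die shows 6 with probability 0.9,
	// and each other face with probability 0.02. One die's expectation is
	// 0.9×6 + 0.02×(1+2+3+4+5) = 5.7, so the pair's expected sum is 11.4,
	// not 7: ignoring the probabilities gives the wrong answer here.
	one := []float64{0.02, 0.02, 0.02, 0.02, 0.02, 0.9}
	faces := []float64{1, 2, 3, 4, 5, 6}
	fmt.Printf("%.1f\n", 2*expect(one, faces)) // 11.4
}
```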

### Probability Theory: The Logic of Science by E. T. Jaynes

If only Jaynes' book had been my introduction to probability. Like a twist ending in a movie, reading it was a thought-provoking, eye-opening, earth-shattering experience that compelled me to re-evaluate what I thought I knew.

Whereas Silver presents whimsical examples that demonstrate the Bayesian approach, Jaynes forcefully argues for its theoretical soundness. From a few simple intuitive “desiderata” (too ill-defined to be axioms), Jaynes shows step-by-step how they imply more familiar probability axioms, and why the Bayesian approach is the natural choice. And all this happens within the first 3 chapters, which are free online.

I had been uneasy about probability because I thought it was a collection of mysterious hacks, perhaps because it had to deal with the real world. I was flabbergasted to learn probability could be put on the same footing as formal logic. All those hacks can be justified after all. Probability is not just intuition and duct tape: it can be as solid as any branch of mathematics.

Since there still exist competing philosophies of probability, presumably others find fault with Jaynes' arguments. I’m still working through it, but I’m convinced for now. If there’s another twist in this story, I’ll need another great book to show it to me.

Washington University in St. Louis maintains a page dedicated to Jaynes. It’s a shame he died before he finished writing. The remaining holes have been papered over with exercises, which explains their depth and difficulty.

It’s also a shame Jaynes left Stanford University many years ago. Had he stayed, with luck I would have discovered his work earlier, or even have met him. “A Backward Look to the Future” describes his reasons for departure.

In short, Jaynes felt the “publish or perish” culture of academia was harmful and was taking over Stanford. I can’t tell if Jaynes was right because by the time I got into the game, this culture seemed universally well-established. I had no idea an alternative ever existed.

## Sunday, September 16, 2012

### Programming Dominion

I recently learned to play Dominion, a game that spawned a genre known as deck-building card games. I’m a terrible player. While suffering defeats at the hands of a simple AI, I realized I might have more fun writing a Dominion-playing program.

Implementing just the basic rules is a boring exercise. Luckily, Dominion is a self-modifying game. For example, each turn, you’re supposed to start with one Action and 5 cards in your hand, but there are ways of increasing your Action count, or changing the number of cards in your hand.

Moreover, rule modifications interact with one another, further increasing complexity. For example, playing Witch causes other players to gain a Curse card, but not if the supply of Curse cards is exhausted, or a player is holding a Moat. Or take Throne Room, which plays another Action card twice. How can we design software to handle so many special cases?

Of course, sufficient spaghetti can get anything working. But we should try to minimize mess; ideally the logic for each card should be as isolated as possible. It’d be awful if, say, Throne Room required us to bury code somewhere in the Action-playing routine so it runs twice instead of once.
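For instance, one way to keep Throne Room’s logic out of the Action-playing routine is to make each card’s effect a first-class function; Throne Room is then just an effect that invokes another effect twice. A toy sketch of the idea in Go (the types and names here are hypothetical, not from any real implementation):

```go
package main

import "fmt"

// Game is a stripped-down stand-in for real game state.
type Game struct {
	actions, cards int
}

// An effect is the whole of a card's logic: a function that mutates the game.
type effect func(*Game)

var effects map[string]effect

func init() {
	effects = map[string]effect{
		// Village: +1 Card, +2 Actions.
		"Village": func(g *Game) { g.cards++; g.actions += 2 },
		// Throne Room: run some other card's effect twice. Nothing outside
		// this entry needs to know; it is called like any other effect.
		"Throne Room": func(g *Game) {
			e := effects["Village"] // a real game would let the player choose
			e(g)
			e(g)
		},
	}
}

func main() {
	g := &Game{actions: 1, cards: 5}
	effects["Throne Room"](g)
	fmt.Println(g.actions, g.cards) // 5 7
}
```

Throne Room hardwires Village here only for brevity; the point is that doubling an effect needs no special case anywhere else.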

#### Dominion in Go

I’m reasonably pleased with my first attempt. For the simplest cards, the logic is completely contained in a string, in a tiny domain-specific language:

```
Village,3,Action,+C1,+A2
Woodcutter,3,Action,+B1,$2
```

Less trivial cards require a bit more:

```go
case "Feast":
  add(func(game *Game) {
    p := game.NowPlaying()
    game.trash = append(game.trash, p.played[len(p.played)-1])
    p.played = p.played[:len(p.played)-1]
    pickGain(game, 5)
  })
```

And that’s it! To add a card, just one string, and maybe one block of code. As time passed, it became easier to add new cards. For some cards, it was more like data entry than programming.

Moat is an exception. As the only Reaction card in the Base set, rather than figure out a clean way to implement it, I sprinkled ad hoc code here and there to get it working. If I were to add more Reaction cards, I’d factor out the common parts. There’s no reason to do so pre-emptively. In fact, that’s what happened with other cards: I would only refactor once there was duplicate code to eliminate.

Intrepid readers can browse my git repo: https://github.com/blynn/gominion.git

But beware. It’s all in one untidy monolithic file, the UI is horrible, and the AI is stupid, though it still beats me when I get too greedy with Action cards! The game state is shared by all players. If network play were added, to prevent cheating, information would need to be more tightly controlled.

I have no plans to work much more on this, as many mature implementations already exist, and Rio Grande Games plans to release an official online version soon. All the same, I highly recommend learning to play Dominion, and then trying to program it. Both are enlightening experiences.

## Sunday, August 26, 2012

### Smashing the non-executable stack for fun and profit

In 1996, Elias Levy ("Aleph One") published "Smashing The Stack For Fun And Profit" in Phrack magazine. The article showed how to overflow a buffer to launch a shell.

I’m almost ashamed I never took a closer look for over a decade. My background would suggest I’d be one of the early adopters. As a kid, I loved messing with assembly language and poking around the system. I collected computer viruses. I bypassed copy protection systems. I knew how to make free phone calls. In grad school, my advisor and my colleagues taught a computer security class, where rooting a system by smashing the stack was a homework assignment.

With pride, and relief, I can now announce that at long last, in 2012, I have exploited a buffer overflow. Moreover, I have written a truly marvelous step-by-step guide to this, which this post is too narrow to contain. (I’m afraid of overflowing it.)

I took notes because I encountered difficulties with other tutorials:

• 32-bit systems are often assumed. My system is 64-bit.
• Various countermeasures are now enabled on stock installs.
• I wanted to try a newer variant of the attack known as return-oriented programming, which defeats one of the countermeasures.

Luckily my website has ample room. Read now, and get a bonus shell script that demonstrates the attack!

## Wednesday, August 8, 2012

### Isn't Algebra Necessary?

A recent New York Times article ponders if we should downgrade the mathematics taught to high school and college students, and in particular, cut basic algebra.

Seriously? A horizontal line may represent an unknown word in those fill-in-the-blank primary school comprehension tests ("The dog’s name is __."), but a letter should never represent an unknown number lest it cause undue mental stress?

Among my first thoughts was that the article was a professional troll posting. After all, The New York Times is sadly going through a rough patch, and I sympathize if they must occasionally stoop lower to catch some extra cash. (If it is a troll posting, hats off! You got me.) But the truth is probably mundane; it seems the author genuinely believes that algebra should be dropped.

On the one hand, this benefits me. If the article is taken seriously, and algebra is withheld from the masses, then those of us who know it possess formidable advantages. (The conspiracy theorist in me wonders if the author actually finds elementary algebra, well, elementary, and the true intent is to get ahead by encouraging everyone else to dumb down.)

On the other hand, the piece smacks of ignorance-is-strength propaganda, and thus is worth smacking down.

#### Inflation

The article suggests that, instead of algebra, classes should perhaps focus on how the Consumer Price Index is computed. I agree studying this is important: for example, I feel more attention should be drawn to the 1996 recommendations of the Boskin commission. If the Fed did indeed repeat the mistakes of the 1970s, then I should bump up the official US inflation rate when analyzing my finances. However, this stuff belongs to disciplines outside mathematics.

More importantly, what use is the CPI without algebra? Take a simple example: say I owe you $1000, and the inflation rate is 5%. If all you care about is keeping up with inflation, is it fair if I pay you back $120 annually for 10 years? If not, what is the right amount?

Without algebra, you might be able to figure out that $1000 today is the same as $1000×(1.05)^10 = $1628.89 in 10 years. But how are you going to figure out that the yearly payment should be 0.05×1628.89/(1.05^10 − 1)? The easiest way to arrive here is to temporarily treat 1.05 as an abstract symbol. In other words, elementary algebra. One does need to play this ballgame for personal finance after all.

You might counter that an amortized loan calculator can work out the answer for you; there’s no need to understand how it works, right?

#### Ignorance begets fraud

In the above calculation, do I make my first payment today, or a year from now? Don’t worry, I’ll figure it out for you. Or perhaps I’ll claim you’re using the wrong mode on the calculator and helpfully retrieve the "right" formula for you.

Maybe you’d avoid these shenanigans by entrusting an accountant to oversee deals like this. Okay, but what if it’s not a loan? Say you’re making a policy recommendation and I’m a disingenuous lobbyist: can you tell if I’m fudging my figures?

I heard a story about Reagan’s SDI program. Scientists estimated a space laser required 10^20 units of energy, and current technology could generate 10^10 units. They got funding by saying they were halfway there.

I hope this tale is apocryphal. Nevertheless, one can gouge the mathematically challenged just as unscrupulous salesmen rip off unwitting buyers. Unfortunately, with finance and government policy, damage caused by bad decisions can be far worse and longer lasting.

#### Fermat’s Last … Dilemma?

One bright spot in the article was the mention of "the history and philosophy of [mathematics], as well as its applications in early cultures". While not required to solve problems, knowing the background to famous discoveries makes a subject more fun.

It is inspiring that within a few short school years we enjoy the fruits of thousands of years of labour. Perhaps a student struggling with negative numbers would feel better knowing that it took many generations for them to be socially acceptable. For instance, the Babylonians were forced to divide the quadratic equation into different cases because they rejected negative numbers on philosophical grounds.

But at the same time, we see a mention of "Fermat’s dilemma", which charitably is a creative renaming of "Fermat’s Last Theorem" (though more likely there was some confusion with the "Prisoner’s Dilemma" from game theory). The author chose this example poorly, because the history of Fermat’s Last Theorem actually bolsters the case for algebra. It shows how a little notation goes a long way.

For Fermat did not use symbolic algebra to state his famous conjecture. Instead, he wrote:

> Cubum autem in duos cubos, aut quadrato-quadratum in duos quadrato-quadratos, et generaliter nullam in infinitum ultra quadratum potestatem in duos eiusdem nominis fas est dividere cuius rei demonstrationem mirabilem sane detexi. Hanc marginis exiguitas non caperet.

(Roughly: “It is impossible to separate a cube into two cubes, or a fourth power into two fourth powers, or in general any power beyond the square into two like powers. I have discovered a truly marvelous proof of this, which this margin is too narrow to contain.”)

(If it took him that many words to state the theorem, no wonder he had no space for a proof!)

We have it easy today. Mathematics would be considerably harder if you had to compute amortized loan payments with Latin sentences instead of algebra.

How could a writer fail to appreciate algebra? Strunk taught that "vigorous writing is concise." Which is more concise: the above, or "x^n + y^n = z^n has no positive integer solutions for n > 2"?

#### What should we learn?

Some time ago, I arrived at the opposite conclusion of the author, after reading confessions of professional academic ghostwriters. Algebra is fine; the courses that need reform are those far removed from mathematics.

According to "Ed Dante", who is hopefully exaggerating, you can pass such courses so long as you have Amazon, Google, Wikipedia, and a decent writing ability. You get the same results and save money by paying for an internet connection instead of university tuition.

I suppose I should also end on a positive note: I propose introducing ghostwriting courses, where the goal is to bluff your way through another course in the manner "Ed Dante" describes. The library would be off-limits, and you must not have previously studied the target subject. Perhaps the first 3 assignments can be admissions essays: one each for undergraduate, master’s and doctoral programs. Grading would be easy: if they fall for it, you get a good score.

With luck, universities would be forced to either beef up the victim degrees (perhaps by assessing students with something besides essays, or by teaching something that cannot be immediately learned from the web), or withdraw them. Additionally, the students would learn the importance of writing, and be harder to fool.

## Sunday, August 5, 2012

### Keeping up with yesterday

My to-do list has grown frighteningly large. Perhaps I'll be more motivated to tackle it by publicly announcing a few of its entries.

• Apologies to those who sent me patches to my Git tutorial, or are awaiting email responses about the PBC library. I'll try to get around to them soon. And perhaps I'll even get back to working on the second edition of the printed version, which I originally planned to release 2 years ago!
• I took notes on return-oriented programming on 64-bit Linux that I want to put up on my site somewhere. They've been almost ready for months.
• Months ago, I also coded a logic puzzle solver that takes its input in a concise format. It's about ready for release.
• In general, I want to rant and rave more over petty technical issues.

I'd better stop here, otherwise this list may also become too scary for me to look at.

## Thursday, April 12, 2012

### Sudoku: Cohen vs my reading ability

I was wrong about being wrong in my last post. My original predictions were correct after all.

I had skimmed Bram Cohen’s post too quickly, and when I tried following his approach, my branching heuristic was simply to take the clause with the fewest literals.

What happens if we do this? Initially, there are many clauses consisting of 2 negative literals, and a few consisting of 9 positive literals. Thus most of the time we pick a negative clause. We’ll only ever pick a positive clause if only 2 literals are left. By this stage, it’s no different from a negative clause, because assigning true to one of the variables is the same as assigning false to the other.

In other words, apart from unit propagation, each step simply eliminates a particular digit from a particular cell, depending on the order of the clauses. That is, apart from unit propagation, this is no different from brute force!

I failed to see this when I tried writing a SAT solver, and wound up surprised at the results. It was only later I realized I should have been ignoring negative clauses to make Cohen’s program behave like Knuth’s: the positive clauses correspond exactly to the columns in the exact cover problem, and the literals within a positive clause correspond to 1s in a column.

The only difference is performance: manipulating lists or arrays to find which negative clauses affect which positive clauses is much slower than manipulating a few links to find which rows affect which columns.

When I returned to Cohen’s post, I realized he had explicitly mentioned skipping negative clauses during branching.

Below is my C translation of Cohen’s program. It is more efficient than the original because of in-place deletion and undeletion of clauses and literals. (I’ve used “dancing arrays” instead of Python lists.) As expected, this solver is orders of magnitude slower than my dancing links solver, but handily beats Norvig’s program.

```c
#include <ctype.h>
#include <limits.h>
#include <stdio.h>

#define F(i,n) for(int i = 0; i < n; i++)

int main() {
  int a[9][9] = {{0}}, x[4*9*9*(1+9*8/2)][10] = {{0}}, n = 0, m;
  F(i, 9) F(j, 9) do if (EOF == (m = getchar())) return 1; while(
      isdigit(m) ? a[i][j] = m - '0', 0 : m != '.');
  int enc(int a, int b, int c) {return 9*9*a + 9*b + c + 1;}
  void add(int n, int a, int b, int c) {x[n][++*x[n]] = enc(a, b, c);}
  F(i, 9) F(j, 9) F(k, 9 || (n += 4, 0)) {  // At least one digit per:
    add(n  , k, i, j);  // ...box
    add(n+1, i, k, j);  // ...column
    add(n+2, i, j, k);  // ...row
    add(n+3, i, j/3*3 + k/3, j%3*3 + k%3);  // ...3x3 region.
  }
  for(int i = n-1; i >= 0; i--) F(j, x[i][0]) F(k, j) {
    x[n][1] = -x[i][j+1];  // At most one digit per positive clause.
    x[n][2] = -x[i][k+1];  // (Hence the 9 choose 2 factor above.)
    x[n++][0] = 2;
  }
  int y[n], out[9*9*9];
  F(i, n) y[i] = i;
  int assign(int n, int v) {
    F(i, n) {
      int k = y[i];
      F(j, x[k][0]) {
        if (x[k][j+1] == v) {  // Satisfied clause:
          y[i--] = y[--n];     // Swap with last one
          y[n] = k;            // and decrement array count.
          break;
        } else if (x[k][j+1] == -v) {  // False literal:
          x[k][j+1] = x[k][x[k][0]];   // Swap with last one
          x[k][x[k][0]--] = -v;        // and decrement clause size.
          break;  // Assume literals are unique in a clause.
        }
      }
    }
    return n;
  }
  void solve(int n) {
    int s = INT_MAX, t = 0;
    if (!n) {  // Print solution.
      F(i, m) if ((t = out[i] - 1) >= 0) a[t/9%9][t%9] = t/9/9 + 1;
      F(r, 9) F(c, 9 || 0*puts("")) putchar('0'+a[r][c]);
      return;
    }
    F(i, n) if (x[y[i]][0] < s) {  // Find smallest positive clause.
      if (x[y[i]][0] > 1 && x[y[i]][1] < 0) continue;
      if (!(s = x[y[i]][0])) return;  // Empty clause: no solution.
      t = y[i];
    }
    void try(int v) {
      solve(assign(n, out[m++] = v));
      F(i, n) {  // Undo any clause deletions.
        int k = y[i];
        if (x[k][0] < 9 && x[k][x[k][0]+1] == -v) x[k][0]++;
      }
      m--;
    }
    try(x[t][1]);
    if (s > 1) try(-x[t][1]);
  }
  // Fill in the given digits, and go!
  F(r, 9) F(c, 9) if (a[r][c]) n = assign(n, enc(a[r][c]-1, r, c));
  m = 0, solve(n);  // Reuse m as count of 'out' array.
  return 0;
}
```
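Incidentally, the “dancing arrays” in the comments above amount to this: delete an element by swapping it to the end of the live region and shrinking the count, then restore it by growing the count again. A standalone Go sketch of just that trick (my illustration, not a translation of the solver):

```go
package main

import "fmt"

// A clause keeps its live literals in lits[:n]; deleted literals are
// parked past the boundary so they can be revived in reverse order.
type clause struct {
	lits []int
	n    int // count of live literals
}

// remove deletes literal v: swap it to the live/dead boundary and
// shrink the live region. No memory moves, no allocation.
func (c *clause) remove(v int) {
	for i := 0; i < c.n; i++ {
		if c.lits[i] == v {
			c.n--
			c.lits[i], c.lits[c.n] = c.lits[c.n], c.lits[i]
			return
		}
	}
}

// unremove revives the most recently deleted literal by growing the count.
func (c *clause) unremove() {
	c.n++
}

func main() {
	c := &clause{lits: []int{3, -7, 9}, n: 3}
	c.remove(-7)
	fmt.Println(c.lits[:c.n]) // [3 9]
	c.unremove()
	fmt.Println(c.lits[:c.n]) // [3 9 -7]
}
```

Like dancing links, undo is constant time; unlike dancing links, finding the element to delete is a linear scan, which is where the solver loses its orders of magnitude.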