Antti-Juhani pointed out to me that Haskell provides the **quot** and **rem** pair of operations in addition to **div** and **mod**. The difference is that while both pairs satisfy the identity relating division, multiplication, and the remainder, **div** and **mod** round down (towards negative infinity), while **quot** and **rem** round towards zero (truncating the fractional part).
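To make the difference concrete, here is a small sketch (my own example, not from Antti-Juhani) showing both pairs on a negative dividend, and checking that each pair satisfies the quotient-remainder identity:

```haskell
-- div/mod round toward negative infinity; quot/rem round toward zero.
-- Both pairs satisfy: n == (quotient * d) + remainder
main :: IO ()
main = do
  print ((-7) `div` 2, (-7) `mod` 2)    -- (-4, 1): floored division
  print ((-7) `quot` 2, (-7) `rem` 2)   -- (-3, -1): truncated division
  -- the identity holds for both pairs:
  print (((-7) `div` 2) * 2 + ((-7) `mod` 2) == -7)
  print (((-7) `quot` 2) * 2 + ((-7) `rem` 2) == -7)
```

Note that **mod** takes the sign of the divisor, while **rem** takes the sign of the dividend.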

That's great! (Besides the fact that I displayed my ignorance in public, that is.) It means I really can use GHCi to model my functions for testing! And here I was getting worked up to write a C version of Plauger's **div**, which rounded away from zero for negative quotients, just so I could have some guaranteed-consistent behavior. Instead I can use the C standard function **div** and Haskell's **quot** and **rem**.

So, this raises a question: is there a good reason to prefer one method of integer division rounding over another? Yes. It comes down to this: do you want your algorithms to behave *smoothly across* the origin, or *symmetrically around* the origin?

Let's briefly consider some Haskell code for mapping a range of fractions onto a range of integers again.

```haskell
let gen_nths denom = [ (numer, denom::Int) | numer <- [ (-denom::Int)..(denom::Int) ] ]
let div_map m1 m2 = map (\tup -> fst tup * ( ( (m2::Int) - (m1::Int) ) `div` 2 ) `div` snd tup)

div_map (-10) 10 (gen_nths 3)
[-10, -7, -4, 0, 3, 6, 10]
```

Notice something about those values: the distances between them follow a consistent pattern. Reading the list left to right, we have distances of 3, 3, 4, 3, 3, 4. This pattern repeats. It is *smooth across* the origin.
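Rather than reading the distances off by eye, we can compute them. Here is a small sketch (the `intervals` helper is my own, not from the post above) that takes successive differences of the result list:

```haskell
-- Successive differences expose the rounding pattern of a mapped list.
intervals :: [Int] -> [Int]
intervals xs = zipWith (-) (tail xs) xs

main :: IO ()
main = print (intervals [-10, -7, -4, 0, 3, 6, 10])  -- [3,3,4,3,3,4]
```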

Now let's change our **div_map** function to use **quot** and **rem**:

```haskell
let quot_map m1 m2 = map (\tup -> fst tup * ( ( (m2::Int) - (m1::Int) ) `quot` 2 ) `quot` snd tup)

quot_map (-10) 10 (gen_nths 3)
[-10, -6, -3, 0, 3, 6, 10]
```

Looking at the differences between values, from left to right, we find that they are 4, 3, 3, 3, 3, 4. The function is *symmetric around* the origin.

Does this matter in practice? Of course! Let's say that we have a mapping using a non-origin-crossing version of our **quot_map** function:

```haskell
let gen_non_negative_nths denom = [ (numer, denom::Int) | numer <- [ (0::Int)..(denom::Int) ] ]
let quot_map_2 m1 m2 = map (\tup -> fst tup * ( (m2::Int) - (m1::Int) ) `quot` snd tup)

quot_map_2 0 20 (gen_non_negative_nths 6)
[0, 3, 6, 10, 13, 16, 20]
```

Shiny. The intervals go 3, 3, 4, 3, 3, 4. But if we shift the values:

```haskell
let gen_non_positive_nths denom = [ (numer, denom::Int) | numer <- [ (-denom::Int)..(0::Int) ] ]

quot_map_2 0 20 (gen_non_positive_nths 6)
[-20, -16, -13, -10, -6, -3, 0]
```

The intervals go 4, 3, 3, 4, 3, 3 -- they are backwards. In other words, the effect of integer rounding on the values of the codomain changes if you shift the domain of the numerator across the origin: the ordering of the pattern is reversed.
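This symmetry around the origin can be stated precisely: with **quot**, negating the inputs negates the outputs exactly, so the non-positive results are a mirror image of the non-negative ones. A self-contained sketch (re-deriving both lists with an inline version of the mapping above):

```haskell
-- quot rounds toward zero, so the mapping is symmetric around 0:
-- the non-positive results are the negated, reversed non-negative results.
quotMap2 :: Int -> Int -> [(Int, Int)] -> [Int]
quotMap2 m1 m2 = map (\(numer, denom) -> numer * (m2 - m1) `quot` denom)

main :: IO ()
main = do
  let pos = quotMap2 0 20 [ (n, 6) | n <- [0 .. 6] ]
      neg = quotMap2 0 20 [ (n, 6) | n <- [-6 .. 0] ]
  print pos                                 -- [0,3,6,10,13,16,20]
  print neg                                 -- [-20,-16,-13,-10,-6,-3,0]
  print (neg == map negate (reverse pos))   -- True: mirror image
```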

If we do the same thing using **div**:

```haskell
let div_map_2 m1 m2 = map (\tup -> fst tup * ( (m2::Int) - (m1::Int) ) `div` snd tup)

div_map_2 0 20 (gen_non_negative_nths 6)
[0, 3, 6, 10, 13, 16, 20]

div_map_2 0 20 (gen_non_positive_nths 6)
[-20, -17, -14, -10, -7, -4, 0]
```

we get a pattern of intervals (the effect of rounding) which remains the same: 3, 3, 4, 3, 3, 4.
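Smoothness can also be stated precisely: with **div**, shifting the numerators by one whole denominator just translates every output by the same amount, so the rounding pattern crosses the origin unchanged. A self-contained sketch (again with an inline version of the mapping):

```haskell
-- div rounds toward negative infinity, so shifting the inputs by a whole
-- period (here 6, one full denominator) shifts every output by a constant.
divMap2 :: Int -> Int -> [(Int, Int)] -> [Int]
divMap2 m1 m2 = map (\(numer, denom) -> numer * (m2 - m1) `div` denom)

main :: IO ()
main = do
  let pos = divMap2 0 20 [ (n, 6) | n <- [0 .. 6] ]
      neg = divMap2 0 20 [ (n, 6) | n <- [-6 .. 0] ]
  print pos                              -- [0,3,6,10,13,16,20]
  print neg                              -- [-20,-17,-14,-10,-7,-4,0]
  print (neg == map (subtract 20) pos)   -- True: same pattern, translated
```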

Now, smoothness across the origin is what I want, at least for the kinds of functions I am working on now. But your problem domain may lend itself to writing functions which involve rotations, or mapping values across the origin: in that case, you're going to want the symmetry. The important thing is to know what strategy you are using and apply it consistently. I'm really impressed that Haskell gives me so many interesting options.
