Memoization is a technique for speeding up the evaluation of a referentially transparent function.
The first time the function is called with a particular set of arguments, the return value is stored in a lookup table.
On subsequent calls, the value in the lookup table is used rather than evaluating the function again.
In C++, memoization comes for free when writing templates.
Compare two methods of computing the Fibonacci sequence, one static and one dynamic:
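A minimal sketch of the two, using the illustrative names `fib_static` for the compile-time template and `fib` for the run-time recursion:

```cpp
#include <cstdint>

// Static version: the compiler instantiates fib_static<N> at most once per N,
// so evaluating fib_static<N>::value costs Θ(N) instantiations.
template <unsigned N>
struct fib_static {
    static const uint64_t value = fib_static<N - 1>::value + fib_static<N - 2>::value;
};

template <>
struct fib_static<1> { static const uint64_t value = 1; };

template <>
struct fib_static<0> { static const uint64_t value = 0; };

// Dynamic version: the naive recursion recomputes the same sub-problems
// over and over, so the number of calls grows exponentially in n.
uint64_t fib(unsigned n) {
    return n < 2 ? n : fib(n - 1) + fib(n - 2);
}
```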
The two methods are very similar, but the static method is Θ(n) and the dynamic method is Θ(2^n).
The key difference is that template instantiation is guaranteed to be referentially transparent, so the compiler only needs to instantiate a single struct for each value of N.
Functions lack this guarantee, so the run-time must re-evaluate fib(n) every time the value is needed.
Is there any way of combining the speed benefits of memoization with the convenience and flexibility of run-time evaluation?
We need some way to memoize functions of arbitrary type, without adding significant code complexity.
We take as our goal a function templated on the function type and the arguments to pass:
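One way to write it is to cache results in a static `std::map` keyed on the argument tuple; the details below are a sketch rather than a definitive implementation:

```cpp
#include <map>
#include <tuple>

// Memoize a free function by caching its results in a static map.
// Note: the cache is shared by all functions with the same signature,
// since there is one instantiation per (ReturnType, Args...) pair.
template <typename ReturnType, typename... Args>
ReturnType memoize(ReturnType (*fn)(Args...), Args... args) {
    static std::map<std::tuple<Args...>, ReturnType> cache;
    const auto key = std::make_tuple(args...);
    const auto it = cache.find(key);
    if (it != cache.end())
        return it->second;            // cache hit: reuse the stored value
    const ReturnType result = fn(args...);
    cache.emplace(key, result);       // cache miss: evaluate once and store
    return result;
}
```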
This is the complete memoize function. The run-time version of Fibonacci can now be written as
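a function whose recursive calls are routed through memoize, so that each value of n is evaluated only once (again a sketch):

```cpp
#include <cstdint>

// Run-time Fibonacci, memoized: each distinct n is computed only once,
// giving Θ(n) calls instead of exponential growth.
uint64_t fib(unsigned n) {
    return n < 2 ? n : memoize(fib, n - 1) + memoize(fib, n - 2);
}
```

A call such as `memoize(fib, 40u)` then populates the cache once and reuses it on subsequent calls.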
In my next post, I will be testing memoize on the less trivial Matrix Chain Multiplication problem.