# Computing Recurrences in Python
A recurrence relation is a way of defining a sequence where each term is computed from earlier terms. You have almost certainly seen them before — even if you did not know the name. The classic example is the Fibonacci sequence, where each number is the sum of the two before it. Recurrences also show up constantly in algorithm analysis, where the cost of a recursive function is naturally expressed as a recurrence.
This post builds up recurrences from scratch, implements them two ways in Python, derives their closed forms, and shows concretely why memoization matters.
## What Is a Recurrence Relation?
A recurrence relation has two parts:
1. A rule that defines each term in terms of earlier ones. For example:
$$T(n) = T(n-1) + n$$

This says: the value at \(n\) equals the value at \(n-1\), plus \(n\) itself.
2. A base case that gives the starting value, stopping the recursion:
$$T(1) = 1$$

Without a base case, the rule would recurse forever — there would be no ground to stand on.
Let's trace through the first few values by hand, so there is no mystery about where the numbers come from:
| \(n\) | Expansion | \(T(n)\) |
|---|---|---|
| 1 | Base case | 1 |
| 2 | \(T(1) + 2 = 1 + 2\) | 3 |
| 3 | \(T(2) + 3 = 1 + 2 + 3\) | 6 |
| 4 | \(T(3) + 4 = 1 + 2 + 3 + 4\) | 10 |
| 5 | \(T(4) + 5 = 1 + 2 + 3 + 4 + 5\) | 15 |
The pattern is clear: \(T(n)\) is just the sum \(1 + 2 + 3 + \cdots + n\). We will make this precise when we derive the closed form below.
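Before deriving anything, the pattern can be sanity-checked directly with Python's built-in `sum` (a quick sketch; `t_by_sum` is just an illustrative name, not one of the implementations below):

```python
def t_by_sum(n: int) -> int:
    """T(n) written as the explicit sum 1 + 2 + ... + n."""
    return sum(range(1, n + 1))

print([t_by_sum(n) for n in range(1, 6)])  # [1, 3, 6, 10, 15]
```

These match the table above exactly.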
## Iterative Implementation
The most straightforward way to compute \(T(n)\) is to follow the recurrence directly with a loop, building up the answer step by step:
```python
def recurrence(n: int) -> int:
    """Compute T(n) = T(n-1) + n, with T(1) = 1, using a loop."""
    total = 1  # start from the base case T(1) = 1
    for k in range(2, n + 1):
        total += k  # apply the rule: add k at each step
    return total
```
Let's walk through what happens when we call `recurrence(5)`:
| Step | \(k\) | total after this step |
|---|---|---|
| Start | — | 1 (base case) |
| 1 | 2 | \(1 + 2 = 3\) |
| 2 | 3 | \(3 + 3 = 6\) |
| 3 | 4 | \(6 + 4 = 10\) |
| 4 | 5 | \(10 + 5 = 15\) |
`recurrence(5)` returns 15. This matches our hand calculation above. ✓
## Recursive Implementation with Memoization
The recurrence relation is itself recursive in structure — \(T(n)\) is defined in terms of \(T(n-1)\). So it is natural to write a recursive Python function that mirrors the definition directly:
```python
from functools import cache

@cache
def recurrence_recursive(n: int) -> int:
    """Compute T(n) = T(n-1) + n, with T(1) = 1, using recursion."""
    if n == 1:
        return 1  # base case
    return recurrence_recursive(n - 1) + n  # recursive rule
```
`@cache` is a decorator from Python's standard library (`functools`, available since Python 3.9). A decorator is a modifier you place above a function definition that changes how the function behaves. `@cache` specifically adds memoization: it stores the result of every call in a table, so if the same argument is passed again, the stored result is returned instantly instead of being recomputed from scratch.
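You can watch the cache at work: functions wrapped with `@cache` expose a `cache_info()` method that reports hits and misses. A small sketch with a throwaway `square` function:

```python
from functools import cache

@cache
def square(n: int) -> int:
    return n * n

square(4)  # first call: computed and stored (a miss)
square(4)  # second call: served from the cache (a hit)
print(square.cache_info())  # CacheInfo(hits=1, misses=1, maxsize=None, currsize=1)
```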
For this particular recurrence, both versions produce identical results:
```python
print(f"{'n':>4} {'iterative':>12} {'recursive':>12} {'match':>8}")
print("-" * 42)
for n in range(1, 11):
    it = recurrence(n)
    rec = recurrence_recursive(n)
    print(f"{n:>4} {it:>12} {rec:>12} {str(it == rec):>8}")
```
```
   n    iterative    recursive    match
------------------------------------------
   1            1            1     True
   2            3            3     True
   3            6            6     True
   4           10           10     True
   5           15           15     True
   6           21           21     True
   7           28           28     True
   8           36           36     True
   9           45           45     True
  10           55           55     True
```
## Why Memoization Matters — The Fibonacci Example
For \(T(n) = T(n-1) + n\), each value only depends on the single previous value, so memoization does not make a dramatic difference. But consider the Fibonacci recurrence, where each term depends on the two previous terms:
$$F(n) = F(n-1) + F(n-2), \quad F(1) = 1, \quad F(2) = 1$$

Without memoization, computing \(F(n)\) means computing \(F(n-1)\) and \(F(n-2)\) separately. But \(F(n-1)\) itself calls \(F(n-2)\) and \(F(n-3)\) — and \(F(n-2)\) is computed all over again, even though it was already computed by the \(F(n-1)\) branch. This cascades into an explosion of repeated work:
```python
from functools import cache

# Count how many times fib is called without memoization
call_count = 0

def fib_no_cache(n: int) -> int:
    global call_count
    call_count += 1
    if n <= 2:
        return 1
    return fib_no_cache(n - 1) + fib_no_cache(n - 2)

@cache
def fib_cached(n: int) -> int:
    if n <= 2:
        return 1
    return fib_cached(n - 1) + fib_cached(n - 2)

# Compare call counts for fib(20)
result = fib_no_cache(20)
print(f"fib(20) without @cache: {call_count} function calls")
print(f"fib(20) with @cache: ~20 function calls (one per unique n)")
print(f"Same answer: {result == fib_cached(20)}")
```
```
fib(20) without @cache: 13529 function calls
fib(20) with @cache: ~20 function calls (one per unique n)
Same answer: True
```
13,529 calls versus about 20. And it gets worse fast: `fib(30)` without caching takes over 1.6 million calls, while with `@cache` it still takes about 30. This is why memoization is one of the most important tools for working with recursive algorithms.
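Memoization is not the only fix. Fibonacci can also be computed with a plain loop, which sidesteps both the repeated work and Python's recursion limit. A minimal iterative sketch (`fib_iter` is a name introduced here for illustration):

```python
def fib_iter(n: int) -> int:
    """Iterative Fibonacci: keep only the last two values at each step."""
    a, b = 1, 1  # F(1), F(2)
    for _ in range(n - 1):
        a, b = b, a + b
    return a

print(fib_iter(10))  # 55
print(fib_iter(2000) > 10**400)  # True: far deeper than recursion would allow
```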
The first ten Fibonacci numbers for reference:
```python
for n in range(1, 11):
    print(f"F({n:2}) = {fib_cached(n)}")
```
```
F( 1) = 1
F( 2) = 1
F( 3) = 2
F( 4) = 3
F( 5) = 5
F( 6) = 8
F( 7) = 13
F( 8) = 21
F( 9) = 34
F(10) = 55
```
## Finding the Closed Form
A recurrence gives you a rule for computing values one by one. A closed form gives you a direct formula — plug in \(n\) and get the answer in one step, with no looping or recursion required.
For \(T(n) = T(n-1) + n\) with \(T(1) = 1\), the table above revealed the pattern: \(T(n) = 1 + 2 + 3 + \cdots + n\). We already know the closed form for that sum:
$$T(n) = \sum_{k=1}^{n} k = \frac{n(n+1)}{2}$$

Let's verify this in Python:
```python
def closed_form(n: int) -> int:
    """Closed form for T(n): n(n+1)/2."""
    return n * (n + 1) // 2

print(f"{'n':>4} {'recurrence':>12} {'closed form':>12} {'match':>8}")
print("-" * 44)
for n in range(1, 11):
    r = recurrence(n)
    c = closed_form(n)
    print(f"{n:>4} {r:>12} {c:>12} {str(r == c):>8}")
```
```
   n   recurrence  closed form    match
--------------------------------------------
   1            1            1     True
   2            3            3     True
   3            6            6     True
   4           10           10     True
   5           15           15     True
   6           21           21     True
   7           28           28     True
   8           36           36     True
   9           45           45     True
  10           55           55     True
```
All three approaches — iterative, recursive with memoization, and closed form — give the same answer. The closed form is by far the fastest: it does the same job with one multiplication and one division regardless of how large \(n\) is, while the iterative and recursive versions both take \(n\) steps.
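To make the speed difference concrete, here is the closed form applied to an \(n\) where a billion-step loop would be noticeably slow (a sketch reusing the `closed_form` definition from above):

```python
def closed_form(n: int) -> int:
    """Closed form for T(n): n(n+1)/2."""
    return n * (n + 1) // 2

# One multiplication and one division, no matter how large n is
print(closed_form(10**9))  # 500000000500000000
```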
## Iterative vs. Recursive — When to Use Which

| Approach | Best when | Watch out for |
|---|---|---|
| Iterative | Each term only depends on the previous one — simple to follow and memory-efficient | Can get awkward when a term depends on multiple earlier terms |
| Recursive with `@cache` | The recurrence has a natural recursive structure — code reads like the math definition | Python has a default recursion limit of 1,000 — deep recursion will raise a `RecursionError` |
| Closed form | You need speed or are computing for very large \(n\) | Not always possible to derive — many recurrences have no simple closed form |
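The recursion-limit caveat is easy to demonstrate: CPython reports its limit via `sys.getrecursionlimit()` (typically 1,000 by default), and recursing past it raises `RecursionError`. A sketch using the recursive implementation from earlier:

```python
import sys
from functools import cache

@cache
def recurrence_recursive(n: int) -> int:
    if n == 1:
        return 1
    return recurrence_recursive(n - 1) + n

print(sys.getrecursionlimit())  # typically 1000 in CPython

try:
    recurrence_recursive(5000)  # needs ~5000 nested calls
except RecursionError:
    print("RecursionError: recursion depth exceeded")
```

The iterative and closed-form versions have no such ceiling.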
If you enjoy exploring Python performance techniques like memoization, you might also find this article useful: Enhancing Resilience in Your Python Requests with Exponential Back-off.