A patient, visual tour of how functions call themselves — with code in Python, Java, and Go, hand-drawn call trees, quizzes between lessons, and honest notes on performance. No prior experience assumed.
Recursion is when a function calls itself to solve a smaller version of the same problem.
Think of Russian nesting dolls. Each doll holds a smaller doll inside, which holds an even smaller doll, and so on — until you reach the tiniest one that can't be opened. That smallest doll is where the "unwrapping" stops.
In code, a function that calls itself works the same way. Each call tackles a slightly simpler problem, passing the smaller piece "inward." Eventually, we hit a case so simple it can be answered directly — and then everything unwinds back out.
Every recursive problem has two parts: "How do I handle the tiniest case?" and "How do I turn a big problem into a smaller version of itself?"
Every recursive function — in every programming language — contains exactly two ingredients. Miss either one and things go badly wrong.
The first ingredient is the base case: a condition so simple the function can return an answer without calling itself. This is the "stop" signal.
Without it: infinite recursion. The program calls itself forever until the call stack runs out of memory and crashes. (You'll see StackOverflowError in Java, RecursionError in Python, or fatal error: stack overflow in Go.)
The second ingredient is the recursive case: the part where the function calls itself with a smaller or simpler input, moving toward the base case.
Without progress: if each call doesn't actually get closer to the base case, you again get infinite recursion. "Smaller" must mean something concrete — smaller number, shorter list, fewer nodes, etc.
Print numbers from n down to 1. Look at the base case and the recursive call; the pattern is identical in every language.
```python
def countdown(n):
    if n <= 0:          # ← base case: stop here
        return
    print(n)
    countdown(n - 1)    # ← recursive call with smaller input

countdown(3)  # prints: 3, 2, 1
```
Factorial is the gentlest recursion to trace by hand. We write n! (read "n factorial") for the product of all whole numbers from 1 up to n. So 4! = 4 × 3 × 2 × 1 = 24.
```python
def factorial(n):
    if n <= 1:
        return 1                    # base case
    return n * factorial(n - 1)     # recursive case

print(factorial(4))  # 24
```
Let's trace factorial(4) one call at a time and watch what happens. Each line of the trace shows what the computer is thinking at that moment.
Notice the two phases: first the function keeps calling itself deeper and deeper (the "winding up" phase), suspending each call mid-calculation. Then, once it hits the base case, results unwind back out, each multiplication finally happening on the way back up.
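If you want to reproduce that trace yourself, a few print statements make the two phases visible (a sketch: the depth parameter exists only to indent the output and isn't part of the real factorial):

```python
def factorial(n, depth=0):
    indent = "  " * depth
    print(f"{indent}factorial({n}) called")
    if n <= 1:
        print(f"{indent}factorial({n}) returns 1 (base case)")
        return 1
    result = n * factorial(n - 1, depth + 1)
    print(f"{indent}factorial({n}) returns {result}")
    return result

factorial(4)
# factorial(4) called            <- winding up
#   factorial(3) called
#     factorial(2) called
#       factorial(1) called
#       factorial(1) returns 1 (base case)
#     factorial(2) returns 2     <- unwinding
#   factorial(3) returns 6
# factorial(4) returns 24
```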
Every time a function is called, the computer adds a little "note" to a pile called the call stack. The note remembers: which function, which inputs, and where to come back to when done.
Recursion piles these notes up. factorial(4) can't finish until factorial(3) returns, which can't finish until factorial(2) returns, and so on. When the base case finally returns a real value, the stack unwinds — each function finishes its multiplication, pops off, and hands the answer to the one below it.
The call stack has a fixed size limit. Too many nested calls → crash. Python defaults to about 1000 recursion levels. Java's limit depends on thread stack size (typically 512KB–1MB, giving thousands of frames). Go grows the stack dynamically, so it handles deeper recursion but isn't infinite. We'll return to this in Chapter 7.
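You can see Python's limit for yourself (a small sketch assuming CPython defaults; the exact crash depth varies slightly with what is already on the stack):

```python
import sys

print(sys.getrecursionlimit())   # typically 1000 on CPython

def countdown(n):
    if n <= 0:
        return
    countdown(n - 1)

countdown(100_000)   # RecursionError: maximum recursion depth exceeded
```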
Here's a new function to puzzle over, shown here in Python (the Java and Go versions have exactly the same shape):
```python
def f(n):
    if n <= 1:
        return n
    return f(n - 1) + f(n - 2)

print(f(4))  # 3
```
When n <= 1, return n. This means f(0) = 0 and f(1) = 1. No recursion happens; the function returns immediately. This is why f(1) returns 1.
For any n > 1, return f(n-1) + f(n-2). Two recursive calls. Each gets smaller, so both are guaranteed to eventually hit the base case.
Each number is the sum of the two before it: 0, 1, 1, 2, 3, 5, 8, 13, 21... — the Fibonacci sequence.
f(4) returns 3. Trace back up: the five base-case calls return 1, 0, 1, 1, 0. Adding them up the tree gives f(2) = 1, f(3) = 2, and finally f(4) = 2 + 1 = 3.
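A quick sanity check, reusing f from above, confirms both the trace and the sequence:

```python
print(f(4))                        # 3
print([f(n) for n in range(9)])    # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```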
Four questions. No pressure. If an answer surprises you, take another look; explanations appear underneath each one.
The naive Fibonacci function we just wrote is painfully slow. Try f(45): in a fast compiled language you might wait around ten seconds, and in Python far longer. Try f(50) and you could wait minutes to hours. Why?
Look at the call tree again. To compute f(4), we ended up computing f(2) twice and f(1) three times. The redundancy explodes as n grows. Computing f(30) makes more than a million calls. Computing f(50) makes over 20 billion.
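You can verify the blow-up yourself by counting calls (a sketch; the call_count counter is instrumentation added for illustration, not part of the original function):

```python
call_count = 0

def f(n):
    global call_count
    call_count += 1          # one tick per call, including repeats
    if n <= 1:
        return n
    return f(n - 1) + f(n - 2)

f(30)
print(f"{call_count:,}")     # 2,692,537 calls just to compute f(30)
```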
| Pattern | Time | Space (stack) | Example |
|---|---|---|---|
| Linear (one call) | O(n) | O(n) | factorial, countdown, sum of list |
| Linear + memo | O(n) | O(n) | memoized Fibonacci |
| Divide & conquer | O(n log n) | O(log n) | merge sort, quicksort avg. |
| Binary search | O(log n) | O(log n) | searching a sorted array |
| Naive double-branching | O(2ⁿ) | O(n) | naive Fibonacci, naive subset-sum |
| Exhaustive permutations | O(n!) | O(n) | generating all orderings |
If a recursive function makes two or more calls on itself in the recursive case, and those calls can revisit the same subproblem, you probably have an exponential runtime problem. The fix is usually memoization — coming up next.
Memoization is a fancy word for a simple idea: cache the answer to each subproblem the first time you compute it, so you never redo work. With one line of caching, Fibonacci goes from minutes to microseconds.
```python
from functools import lru_cache

# Idiomatic Python: one decorator, done.
# lru_cache remembers every (input → output) pair.
@lru_cache(maxsize=None)
def f(n):
    if n <= 1:
        return n
    return f(n - 1) + f(n - 2)

print(f(100))  # instant — 354224848179261915075
```
Without memoization: f(45) makes roughly 3.7 billion calls in total, f(50) roughly 40 billion, and f(60) about 5 trillion, which means hours to days of runtime even in a fast compiled language.
With memoization: each n is computed exactly once, so f(500) returns in a fraction of a millisecond. (The cache doesn't reduce recursion depth, though: in Python, a first call to f(1000) or beyond still trips the ~1000-frame limit.)
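If the decorator feels like magic, here is roughly the same idea written out with an explicit dictionary (a sketch of the technique, not the library's actual implementation):

```python
memo = {}   # maps n -> f(n), filled in the first time each value is computed

def f(n):
    if n in memo:
        return memo[n]                 # already computed: reuse it
    if n <= 1:
        result = n
    else:
        result = f(n - 1) + f(n - 2)
    memo[n] = result                   # remember it for next time
    return result

print(f(100))  # 354224848179261915075, same answer as the lru_cache version
```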
For Fibonacci specifically, you don't even need recursion. A simple loop uses constant memory and is the fastest option:
```python
# Python — O(n) time, O(1) space
def f(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```
Sometimes the best use of recursion is to recognize when not to use it. Learning to translate recursive thinking into iterative code is a core skill.
A recursive call is tail recursive if it's the very last thing the function does — nothing happens after it returns. Some languages can optimize this into a loop, freeing up stack space.
```python
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
    # ↑ NOT tail recursive: the multiplication happens AFTER factorial(n - 1) returns
```
```python
def factorial(n, acc=1):
    if n <= 1:
        return acc
    return factorial(n - 1, n * acc)
    # ↑ the call IS the return. Nothing after.
```
| Language | Tail-call optimization? | What to do instead |
|---|---|---|
| Python | ❌ No (by design — Guido's decision) | Use a while loop for deep recursion |
| Java | ❌ No (JVM doesn't optimize it) | Iterative loops, or trampolining as a workaround |
| Go | ❌ Not guaranteed | Loops; Go's dynamic stack helps but won't save you at extreme depth |
| Scala, Kotlin, Scheme, Haskell | ✓ Yes (Scala needs @tailrec, Kotlin tailrec) | Write tail-recursive form freely |
In Python, Java, and Go, tail recursion gives you cleaner code but no stack savings. If you need to recurse deeper than ~1000 levels (Python) or many thousands (Java/Go), rewrite as a loop instead. Don't rely on tail-call optimization that isn't there.
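As an example, the tail-recursive factorial above translates almost mechanically into a loop: the accumulator parameter becomes a local variable (a sketch of the general recipe):

```python
def factorial(n):
    acc = 1            # plays the role of the acc parameter
    while n > 1:
        acc *= n       # what each tail call passed along
        n -= 1
    return acc

print(factorial(5))    # 120, with no recursion depth to worry about
```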
Fibonacci and factorial are training wheels. In actual code, recursion shines for problems that are naturally hierarchical: file systems, JSON, HTML trees, binary search, sorting, compilers, and puzzle-solving.
```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

def tree_sum(node):
    if node is None:
        return 0   # base case: empty
    return node.val + tree_sum(node.left) + tree_sum(node.right)
```
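A quick check with a small hand-built tree:

```python
#       5
#      / \
#     3   8
#    /
#   1
root = Node(5, Node(3, Node(1)), Node(8))
print(tree_sum(root))   # 17
```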
Each call cuts the search space in half, so finding an item in a million-element sorted array takes only ~20 comparisons. This is O(log n) — wildly efficient.
```python
def binary_search(arr, target, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo > hi:                # base case: not found
        return -1
    mid = (lo + hi) // 2
    if arr[mid] == target:     # base case: found
        return mid
    if target < arr[mid]:
        return binary_search(arr, target, lo, mid - 1)
    return binary_search(arr, target, mid + 1, hi)
```
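And a quick usage check:

```python
nums = [2, 5, 9, 14, 23, 31, 47]
print(binary_search(nums, 23))   # 4
print(binary_search(nums, 8))    # -1 (not present)
```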
File systems. Walking every file in every subdirectory is naturally recursive; a small sketch follows below.
Parsers & compilers. JSON, HTML, and programming languages all have nested grammar.
Backtracking. Sudoku, maze solvers, N-queens — try, fail, undo, try again.
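To make the file-system case concrete, here is a minimal recursive directory walk using only the standard library (a sketch; print stands in for whatever per-file work you actually need):

```python
from pathlib import Path

def list_files(directory):
    for entry in Path(directory).iterdir():
        if entry.is_dir():
            list_files(entry)   # recursive case: a subdirectory is a smaller file tree
        else:
            print(entry)        # a plain file: handle it directly

list_files(".")
```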
Always write the base case first, before the recursive call. If you can't state the base case out loud, you don't yet understand the problem.
Make sure every recursive call is on a strictly smaller problem. Passing the same value, or one that's just as complex, means you'll never terminate.
If your call tree has repeats (like naive Fibonacci), you need memoization or an iterative rewrite. Don't accept exponential runtime when a cache makes it linear.
Python's default recursion limit is 1000. Walking a list of 10,000 elements recursively crashes. Use a loop, or sys.setrecursionlimit() carefully if you truly need it.
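If you truly need the extra depth, it is one call (a sketch; each extra frame still costs real stack memory, and a loop is usually the better fix):

```python
import sys
sys.setrecursionlimit(20_000)   # allow deeper recursion, at your own risk
```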
Five questions covering the whole guide. Feel free to scroll back if you want to check your thinking before answering.
You now have the mental models, the vocabulary, and the language-agnostic patterns to read and write recursive code confidently. Go break some problems into smaller ones.