There are two major ways to think about writing code: object oriented programming and functional programming.
OOP says: bundle your data and behaviour together, model the real world as objects. FP says: keep your data and functions separate, transform data through a pipeline of pure functions.
Neither is “right” or “wrong”. But understanding functional programming changes how you read code, design APIs, and think about state. Let’s build that understanding from scratch.
Pure functions — the foundation
A pure function is one that:
- Always returns the same output for the same input
- Has no side effects — doesn’t modify anything outside itself
// Impure — depends on external state, has side effects
let totalPrice = 0;
function addToTotal(price) {
totalPrice += price; // modifying external state
console.log(totalPrice); // side effect
return totalPrice;
}
// Pure — always the same output, no side effects
function addPrices(a, b) {
return a + b; // only depends on inputs, returns a value
}
Why does this matter? When a function is pure, you can reason about it in isolation. Call it a hundred times with the same input, you get the same result every time. No surprises. No bugs hiding in state mutations.
Pure functions are referentially transparent — you can replace them with their return value and the program still works:
const result = addPrices(5, 10);
// is exactly the same as
const result = 15;
Idempotence — safe to repeat
Idempotence is often mentioned alongside purity, but it is a distinct property: an operation is idempotent when running it repeatedly has the same effect as running it once. In function terms, f(f(x)) gives the same result as f(x).
// Not idempotent — every call changes the result
let counter = 0;
function increment() {
return ++counter; // 1, 2, 3, 4... different each time
}
// Idempotent — applying it again changes nothing
function absolute(x) {
return Math.abs(x); // absolute(absolute(-3)) === absolute(-3) === 3
}
Note that purity and idempotence are not the same thing: square(3) is always 9, so square is pure, but square(square(3)) is 81, so square is not idempotent.
Idempotent operations are extremely safe to retry. AWS designs many of its APIs around this principle: if a network request times out and you retry it, the operation's effect happens once, not repeatedly.
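In practice that retry safety comes from recognizing repeated requests. Here is a minimal sketch, with invented names (processPayment, requestId — not a real API), of how a server might make an operation idempotent by caching results per request id:

```javascript
// Sketch only: processPayment and requestId are invented names for illustration.
const processed = new Map();

function processPayment(requestId, amount) {
  // A retried request reuses the stored result instead of charging again.
  if (processed.has(requestId)) {
    return processed.get(requestId);
  }
  const receipt = { requestId, amount, status: "charged" };
  processed.set(requestId, receipt);
  return receipt;
}

const first = processPayment("req-42", 100);
const retry = processPayment("req-42", 100); // same receipt, no double charge
```

Calling processPayment twice with the same id charges once: the second call is a no-op that returns the original receipt.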
Imperative vs Declarative
Imperative code tells the computer how to do something:
// HOW: step by step instructions
const numbers = [1, 2, 3, 4, 5];
const doubled = [];
for (let i = 0; i < numbers.length; i++) {
doubled.push(numbers[i] * 2);
}
Declarative code tells the computer what you want:
// WHAT: describe the result
const numbers = [1, 2, 3, 4, 5];
const doubled = numbers.map(n => n * 2);
Declarative code is clearer because the reader doesn’t have to trace through loops and mutations to understand intent. They see: “map over numbers, double each one.” Done.
Functional programming is declarative — you describe transformations, not steps.
Immutability — don’t mutate, create new
Immutability means: don’t modify data, create new copies instead.
// Mutable — changes the original object
const user = { name: "Kim", cart: [{ item: "laptop", price: 200 }] };
user.cart.push({ item: "mouse", price: 20 }); // modified user.cart directly
// Immutable — creates new objects
const user = { name: "Kim", cart: [{ item: "laptop", price: 200 }] };
const newUser = {
...user, // spread copies shallow level
cart: [...user.cart, { item: "mouse", price: 20 }] // new array with new item
};
// user.cart is unchanged, newUser.cart has the new item
Why? Because when data doesn’t change, you can reason about it. You can compare versions of state to track what changed. You can undo operations. Debugging becomes predictable.
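Here is a tiny sketch of the undo idea, assuming a simple { count } state shape (invented for illustration). Because every update creates a new object, history is just an array of references:

```javascript
// Sketch of undo on top of immutable updates; the { count } shape is invented.
const states = [{ count: 0 }];

function update(changes) {
  // Build a new state from the latest one; the old object is untouched.
  const next = { ...states[states.length - 1], ...changes };
  states.push(next);
  return next;
}

function undo() {
  // Old states were never mutated, so going back is just popping.
  if (states.length > 1) states.pop();
  return states[states.length - 1];
}

update({ count: 1 });
update({ count: 2 });
undo(); // back to { count: 1 }
```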
First-class functions
JavaScript treats functions like any other value. You can:
- Assign them to variables
- Pass them as arguments
- Return them from other functions
- Store them in data structures
// Store in a variable
const greet = (name) => `Hello, ${name}`;
// Pass as argument
function callTwice(fn, x) {
fn(x);
fn(x);
}
callTwice(greet, "Kim"); // greet gets called twice
// Return from a function
function makeMultiplier(n) {
return (x) => x * n;
}
const double = makeMultiplier(2);
double(5); // 10
Because functions are first-class, you can build abstractions on top of them. This is the bedrock of functional programming.
Higher-order functions
A higher-order function is one that takes a function as an argument or returns a function.
Array methods are perfect examples:
// map — transform each element
const prices = [10, 20, 30];
const doubled = prices.map((p) => p * 2); // [20, 40, 60]
// filter — keep elements that match a condition
const expensive = prices.filter((p) => p > 15); // [20, 30]
// reduce — accumulate to a single value
const total = prices.reduce((sum, p) => sum + p, 0); // 60
Higher-order functions let you write generic code. map works on any array, with any function. You don’t repeat the loop logic — you describe what to do with each element.
function map(array, transform) {
const result = [];
for (let i = 0; i < array.length; i++) {
result.push(transform(array[i]));
}
return result;
}
map([1, 2, 3], (n) => n * 2); // [2, 4, 6]
map(["a", "b", "c"], (s) => s.toUpperCase()); // ["A", "B", "C"]
One function, infinite uses.
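The same loop pattern gives you a homemade filter as well, a sketch mirroring the map above:

```javascript
// A homemade filter: keep elements for which the predicate returns true.
function filter(array, predicate) {
  const result = [];
  for (let i = 0; i < array.length; i++) {
    if (predicate(array[i])) {
      result.push(array[i]);
    }
  }
  return result;
}

filter([1, 2, 3, 4], (n) => n % 2 === 0); // [2, 4]
filter(["apple", "fig"], (s) => s.length > 3); // ["apple"]
```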
Closures in functional programming
A closure is a function that remembers variables from its outer scope, even after that scope is gone.
function makeCounter() {
let count = 0;
return function () {
count++;
return count;
};
}
const counter = makeCounter();
counter(); // 1
counter(); // 2
counter(); // 3
The returned function “closes over” the count variable. Every time you call it, it has access to the same count.
Closures are powerful for building private state:
function makeSecureUser(name, password) {
// password is locked inside the closure
return {
getName() {
return name;
},
checkPassword(attempt) {
return attempt === password; // can check, but can't read it
}
};
}
const user = makeSecureUser("Kim", "secret123");
user.getName(); // "Kim"
user.password; // undefined — can't access it directly
Currying — transform functions
Currying transforms a function with multiple arguments into a chain of functions, each taking one argument.
// Normal function — takes two arguments at once
function multiply(a, b) {
return a * b;
}
multiply(3, 4); // 12
// Curried version — returns a function that returns a function
function curriedMultiply(a) {
return function (b) {
return a * b;
};
}
curriedMultiply(3)(4); // 12
// Or with arrow functions (cleaner; renamed so both versions can coexist)
const curriedMultiplyArrow = (a) => (b) => a * b;
curriedMultiplyArrow(3)(4); // 12
Why? Currying lets you partially apply — pre-fill some arguments and reuse:
const double = curriedMultiply(2);
double(5); // 10
double(7); // 14
const triple = curriedMultiply(3);
triple(5); // 15
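Writing the nested functions by hand gets tedious. Here is a sketch of a generic curry helper (a common utility, written from scratch here, not taken from any particular library) that works for any function with a fixed number of declared parameters:

```javascript
// Collects arguments until fn.length of them have arrived, then calls fn.
// Note: fn.length counts declared parameters, so rest/default params won't work.
function curry(fn) {
  return function collect(...args) {
    if (args.length >= fn.length) {
      return fn(...args);
    }
    return (...more) => collect(...args, ...more);
  };
}

const add3 = curry((a, b, c) => a + b + c);
add3(1)(2)(3); // 6
add3(1, 2)(3); // 6, mixed styles work too
```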
Partial application — pre-fill arguments
Partial application is when you pre-fill some arguments of a function to create a new, more specialized function.
// Generic function
function addTax(taxRate, amount) {
return amount + amount * taxRate;
}
// Partially apply with 30% tax
const applySalesTax = (amount) => addTax(0.30, amount);
applySalesTax(100); // 130
// Or with a helper
function partial(fn, ...filledArgs) {
return function (...remainingArgs) {
return fn(...filledArgs, ...remainingArgs);
};
}
const applyTax = partial(addTax, 0.30);
applyTax(100); // 130
Partial application is about creating specialized versions of functions for specific use cases. It’s a tool for code reuse and clarity.
Memoization — cache results
Memoization caches function results based on inputs. If you call the function again with the same input, return the cached result instead of recalculating.
function fibonacci(n) {
if (n <= 1) return n;
return fibonacci(n - 1) + fibonacci(n - 2);
}
// Slow — calculates the same values over and over
fibonacci(40); // takes a while
// Memoized — caches results
function memoize(fn) {
const cache = {};
return function (n) {
if (n in cache) {
return cache[n]; // return cached result
}
const result = fn(n);
cache[n] = result;
return result;
};
}
const fastFibonacci = memoize(fibonacci);
fastFibonacci(40); // first call is still slow; repeat calls with the same n are instant
One caveat: this wrapper only caches top-level results. The recursive calls inside fibonacci still hit the unmemoized version, so the first call is as slow as ever. To speed up the recursion itself, the recursive calls must go through the cache, which is exactly what the dynamic programming version in the next section does.
Memoization is useful for expensive calculations. The trade-off: memory for speed.
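The memoize above handles a single argument. Here is a sketch of a variant that keys the cache on all arguments, using JSON.stringify as a quick key (fine for simple values, not a general-purpose solution):

```javascript
// Variant that caches on all arguments. JSON.stringify is a quick key
// for numbers, strings, and plain objects; it is not fully general.
function memoizeAll(fn) {
  const cache = new Map();
  return function (...args) {
    const key = JSON.stringify(args);
    if (!cache.has(key)) {
      cache.set(key, fn(...args));
    }
    return cache.get(key);
  };
}

let calls = 0;
const slowAdd = memoizeAll((a, b) => {
  calls++;
  return a + b;
});
slowAdd(2, 3); // computed; calls is 1
slowAdd(2, 3); // cached; calls is still 1
```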
Dynamic programming — memoization in practice
Dynamic programming is, in essence, memoization applied systematically to overlapping subproblems.
Instead of recalculating the same subproblems, you store their results and reuse them.
// Without memoization — O(2^n) time, recalculates fibonacci(n-2) thousands of times
function fib(n) {
if (n <= 1) return n;
return fib(n - 1) + fib(n - 2);
}
// With memoization (dynamic programming) — O(n) time
function fibDP(n, memo = {}) {
if (n in memo) return memo[n];
if (n <= 1) return n;
memo[n] = fibDP(n - 1, memo) + fibDP(n - 2, memo);
return memo[n];
}
fib(50); // takes forever
fibDP(50); // instant
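For completeness, the same O(n) result is possible without recursion at all. A bottom-up sketch that starts from the base cases and keeps only the last two values:

```javascript
// Bottom-up dynamic programming: build from fib(0) and fib(1) upward,
// keeping only the last two values: O(n) time, O(1) space.
function fibBottomUp(n) {
  if (n <= 1) return n;
  let prev = 0;
  let curr = 1;
  for (let i = 2; i <= n; i++) {
    const next = prev + curr;
    prev = curr;
    curr = next;
  }
  return curr;
}

fibBottomUp(10); // 55
```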
Function composition — building pipelines
Function composition combines multiple functions into one. You pipe data through a series of transformations.
// Individual functions
function addTax(price) {
return price * 1.3;
}
function applyDiscount(price) {
return price * 0.9;
}
function roundPrice(price) {
return Math.round(price * 100) / 100;
}
// Without composition — nested calls (confusing, right-to-left)
const final = roundPrice(applyDiscount(addTax(100)));
// With composition — left-to-right flow
function compose(...fns) {
return (value) => fns.reduceRight((acc, fn) => fn(acc), value);
}
const calculateFinalPrice = compose(roundPrice, applyDiscount, addTax);
calculateFinalPrice(100); // 117, same result, clearer intent
Notice composition reads right-to-left: the rightmost function, addTax, runs first, then applyDiscount, then roundPrice. That matches the nested calls above, where the innermost function runs first.
Pipe — composition left-to-right
Pipe is the same idea as compose, but data flows left-to-right:
function pipe(...fns) {
return (value) => fns.reduce((acc, fn) => fn(acc), value);
}
const calculateFinalPrice = pipe(addTax, applyDiscount, roundPrice);
calculateFinalPrice(100); // 117
Most people find pipe more intuitive — you read the transformations in the order they happen.
Here’s a mental model:
compose: f(g(h(x))) ← right-to-left, h runs first
pipe: x → f → g → h ← left-to-right, f runs first
Real example: shopping cart
Let’s build an Amazon-like shopping cart using pure functions and composition.
We want to:
- Add item to cart
- Apply tax
- Buy item (move to purchases)
- Empty cart
function addItemToCart(user, item) {
const updatedCart = [...user.cart, item];
return Object.assign({}, user, { cart: updatedCart });
}
function applyTax(user) {
const TAX_RATE = 1.3;
const updatedCart = user.cart.map((item) => ({
name: item.name,
price: item.price * TAX_RATE
}));
return Object.assign({}, user, { cart: updatedCart });
}
function buyItem(user) {
return Object.assign({}, user, {
purchases: user.purchases.concat(user.cart)
});
}
function emptyCart(user) {
return Object.assign({}, user, { cart: [] });
}
Now pipe them together. The steps run left-to-right, and the first function takes two arguments, so this pipe variant forwards all initial arguments to the first function:
function pipe(firstFn, ...restFns) {
return (...args) => restFns.reduce((acc, fn) => fn(acc), firstFn(...args));
}
const purchaseItem = pipe(addItemToCart, applyTax, buyItem, emptyCart);
const kim = {
name: "Kim",
cart: [],
purchases: []
};
const laptop = { name: "laptop", price: 200 };
const result = purchaseItem(kim, laptop);
console.log(result);
// {
// name: "Kim",
// cart: [],
// purchases: [{ name: "laptop", price: 260 }] ← 200 * 1.3
// }
Each function is pure — takes input, returns new output, no mutations.
kim → addItemToCart → applyTax → buyItem → emptyCart → new kim
The beauty: each function is testable, reusable, and composable. Need to add a “give discount” step? Create a new function, add it to the pipeline.
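For example, a discount step is just one more pure user-to-user function. A sketch (the name applyCartDiscount and the 10% figure are invented for illustration):

```javascript
// Sketch: applyCartDiscount and the 10% figure are invented.
// A new pipeline step is just another pure user -> user function.
function applyCartDiscount(user) {
  const updatedCart = user.cart.map((item) => ({
    ...item,
    price: item.price * 0.9
  }));
  return { ...user, cart: updatedCart };
}

const shopper = { name: "Kim", cart: [{ name: "laptop", price: 200 }] };
applyCartDiscount(shopper).cart[0].price; // 180
// shopper.cart[0].price is still 200; the original is untouched
```

Slot it into the pipeline wherever it belongs, for instance before the tax step.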
Event sourcing — time travel debugging
Because we’re using immutable data, we can record the entire history of state changes.
const history = [];
function purchaseItemWithHistory(user, item) {
history.push(JSON.parse(JSON.stringify(user))); // defensive deep copy; a plain reference would do, since nothing mutates
const result = purchaseItem(user, item);
history.push(result);
return result;
}
purchaseItemWithHistory(kim, laptop);
console.log(history);
// [
// { name: "Kim", cart: [], purchases: [] } ← before
// { name: "Kim", cart: [], purchases: [{ name: "laptop", price: 260 }] } ← after
// ]
This version records only the before and after snapshots; to capture every intermediate state, push to history inside each pipeline step instead. Either way, you can replay the sequence. Amazon can debug: "Here's every state the user went through. What went wrong?"
This is powerful.
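A minimal sketch of that replay idea (the state shapes are invented for illustration): because states are immutable, the history is an array of still-valid snapshots you can jump between.

```javascript
// Sketch of replay on an immutable timeline; state shapes are invented.
const timeline = [];

function record(state) {
  timeline.push(state); // immutable states make storing references safe
  return state;
}

function jumpTo(index) {
  return timeline[index]; // any past state is still intact
}

record({ cart: [] });
record({ cart: ["laptop"] });
record({ cart: [], purchases: ["laptop"] });
jumpTo(1); // { cart: ["laptop"] }, replay any moment
```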
Map, filter, reduce — the trinity
These three methods handle almost everything you need to do to arrays:
Map — transform each element
const numbers = [1, 2, 3];
numbers.map((n) => n * 2); // [2, 4, 6]
Filter — keep only matching elements
const numbers = [1, 2, 3, 4, 5];
numbers.filter((n) => n > 2); // [3, 4, 5]
Reduce — combine into a single value
const numbers = [1, 2, 3];
numbers.reduce((sum, n) => sum + n, 0); // 6
You can chain them:
[1, 2, 3, 4, 5]
.filter((n) => n > 2) // [3, 4, 5]
.map((n) => n * 2) // [6, 8, 10]
.reduce((sum, n) => sum + n, 0); // 24
Master these three and you can solve most data transformation problems.
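Reduce is the most general of the three: map and filter can both be sketched on top of it.

```javascript
// map as a reduce: build a new array of transformed elements.
const mapR = (arr, fn) => arr.reduce((acc, x) => [...acc, fn(x)], []);

// filter as a reduce: keep only the elements that pass the test.
const filterR = (arr, pred) =>
  arr.reduce((acc, x) => (pred(x) ? [...acc, x] : acc), []);

mapR([1, 2, 3], (n) => n * 2); // [2, 4, 6]
filterR([1, 2, 3, 4, 5], (n) => n > 2); // [3, 4, 5]
```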
Why functional programming matters
The goal is predictable code.
When your functions are pure:
- They’re easier to test — call with input, check output
- They’re easier to debug — no hidden side effects
- They’re easier to parallelize — no shared state conflicts
- They’re easier to reason about — you can trace the flow
Look at Redux in React:
// Redux reducer — pure function
function userReducer(state, action) {
switch (action.type) {
case "ADD_ITEM":
return { ...state, cart: [...state.cart, action.payload] };
case "CLEAR_CART":
return { ...state, cart: [] };
default:
return state;
}
}
This is functional programming at scale. Redux became popular because it forces this predictable, pure pattern.
Or Ramda and Lodash — libraries built entirely on composable, pure functions.
Functional vs imperative — a spectrum
You don’t have to choose all-or-nothing. Most codebases mix both:
// Imperative — step by step
function processUsers(users) {
const result = [];
for (let user of users) {
if (user.active) {
result.push({
...user,
score: user.purchases.length * 10
});
}
}
return result;
}
// Functional — declarative pipeline
function processUsers(users) {
return users
.filter((u) => u.active)
.map((u) => ({
...u,
score: u.purchases.length * 10
}));
}
The functional version is clearer — it says what it does, not how. But either can be right depending on context.
The short version
- Pure functions have no side effects and always return the same output for the same input
- Immutability means creating new data instead of mutating existing data
- First-class functions can be assigned, passed, and returned like any value
- Higher-order functions accept or return functions — the foundation of abstractions
- Currying transforms multi-argument functions into chains of single-argument functions
- Partial application pre-fills some arguments to create specialized functions
- Memoization caches results to avoid recalculating expensive operations
- Function composition combines functions into pipelines — right-to-left with compose
- Pipe is composition left-to-right — often more readable
- Map, filter, reduce are the core array transformation tools
- Functional programming’s goal is predictable, testable, reusable code
- Real libraries like Redux, Ramda, and Lodash are built on these principles