Numerical Differentiation

Numerical differentiation via finite differences for C++20.

Overview

The derivative is defined as a limit:

f'(x) = lim_{h->0} (f(x+h) - f(x)) / h

We cannot take h = 0 on a computer, but we can choose small h. The art of numerical differentiation lies in choosing h wisely—small enough that the approximation is good, but not so small that floating-point errors dominate.

Quick Start

#include <finite-diff/finite_diff.hpp>
#include <cmath>
#include <iostream>

int main() {
    using namespace finite_diff;

    // Derivative of sin(x) at x = 1
    auto f = [](double x) { return std::sin(x); };

    double x = 1.0;
    double df = central_difference(f, x);  // Uses optimal h

    std::cout << "f'(1) = " << df << "\n";         // ~0.5403
    std::cout << "cos(1) = " << std::cos(1.0) << "\n";  // 0.5403...
}

The Two Errors

Every finite difference approximation has two sources of error:

  1. Truncation error: From the Taylor series approximation. Decreases as h -> 0.
  2. Round-off error: From floating-point arithmetic. Increases as h -> 0.

Total error = Truncation + Round-off
            = O(h^p) + O(eps/h)

where eps is approximately 2.2 x 10^-16 (machine epsilon for double).

The optimal h minimizes total error:

Method      Truncation  Optimal h   Typical accuracy
Forward     O(h)        sqrt(eps)   ~8 digits
Central     O(h^2)      eps^(1/3)   ~10 digits
Five-point  O(h^4)      eps^(1/5)   ~12 digits

API Reference

First Derivatives

// Forward difference: (f(x+h) - f(x)) / h
// O(h) accuracy
double df = forward_difference(f, x);

// Backward difference: (f(x) - f(x-h)) / h
// O(h) accuracy
double df = backward_difference(f, x);

// Central difference: (f(x+h) - f(x-h)) / (2h)
// O(h^2) accuracy - the workhorse
double df = central_difference(f, x);

// Five-point stencil: higher accuracy formula
// O(h^4) accuracy
double df = five_point_stencil(f, x);

Second Derivatives

// Second derivative: (f(x+h) - 2f(x) + f(x-h)) / h^2
double d2f = second_derivative(f, x);

// Five-point second derivative: O(h^4)
double d2f = second_derivative_five_point(f, x);

Multivariate Functions

// Gradient: grad(f)(x) for f: R^n -> R
auto grad = gradient(f, x);

// Directional derivative: D_v f(x)
double Dvf = directional_derivative(f, x, v);

// Hessian matrix: H[i][j] = d^2f/dx_i dx_j
auto H = hessian(f, x);

// Jacobian matrix for f: R^n -> R^m
auto J = jacobian(f, x);

Deriving the Formulas

Forward Difference

Taylor expand f(x+h):

f(x+h) = f(x) + h*f'(x) + (h^2/2)*f''(x) + O(h^3)

Solve for f'(x):

f'(x) = (f(x+h) - f(x))/h - (h/2)*f''(x) + O(h^2)
      = (f(x+h) - f(x))/h + O(h)

Central Difference

Taylor expand f(x+h) and f(x-h):

f(x+h) = f(x) + h*f'(x) + (h^2/2)*f''(x) + (h^3/6)*f'''(x) + O(h^4)
f(x-h) = f(x) - h*f'(x) + (h^2/2)*f''(x) - (h^3/6)*f'''(x) + O(h^4)

Subtract:

f(x+h) - f(x-h) = 2h*f'(x) + (h^3/3)*f'''(x) + O(h^5)
f'(x) = (f(x+h) - f(x-h))/(2h) + O(h^2)

The even-powered terms (those with f(x), f''(x), ...) cancel in the subtraction, boosting the accuracy from O(h) to O(h^2) for free!

Five-Point Stencil

By taking linear combinations of more points, we can cancel more error terms:

f'(x) = (-f(x+2h) + 8f(x+h) - 8f(x-h) + f(x-2h)) / (12h)

This achieves O(h^4): the antisymmetric combination cancels all even-derivative terms automatically, and the coefficients (-1, 8, -8, 1) are chosen to also cancel the h^2 (f''') term, leaving a leading error of (h^4/30)*f^(5)(x).

Optimal Step Size Derivation

For central difference with round-off:

Error = (h^2/6)|f'''(x)| + (eps/h)|f(x)|

Minimize by taking derivative with respect to h and setting to zero:

d(Error)/dh = (h/3)|f'''| - (eps/h^2)|f| = 0
h^3 = 3*eps*|f|/|f'''| ~ eps (order of magnitude)
h_opt ~ eps^(1/3)

For double precision: h_opt ~ (2.2 x 10^-16)^(1/3), approximately 6 x 10^-6.

Comparison with Automatic Differentiation

Aspect               Finite Differences           Automatic Differentiation
Accuracy             Limited by eps^(1/p)         Exact to machine precision
Code complexity      Works with any function      Requires special types
Computational cost   n+1 or 2n evaluations        2-3x single evaluation
Black-box functions  Yes                          No
Higher derivatives   Error compounds              Straightforward

Use finite differences when:

  • Function is a black box (no source code)
  • Quick approximation needed
  • Validating other methods

Use automatic differentiation when:

  • Accuracy is critical
  • Computing many derivatives
  • Source code is available