Numenor Math


Numerical solvers and math utilities for Java

AUTHOR: Piotr Jażdżyk
LINKEDIN: https://www.linkedin.com/in/pjazdzyk





Overview

Numenor Math is a Java library of numerical solvers and mathematical utilities. It provides robust, well-tested implementations of root-finding algorithms, multivariate system solvers, interpolation, and ODE integration.

Scalar Root-Finders

| Solver | Type | Requires Bracket | Requires Initial Guess |
|---|---|---|---|
| Brent-Dekker | Bracketing (Bisection + Secant + Inverse Quadratic) | Yes (two points) | No |
| Newton-Raphson | Open (derivative-based) | No | Yes (single point) |

Multivariate Solvers

| Utility | Type | Purpose |
|---|---|---|
| Multivariate Newton-Raphson | Matrix-Jacobian Newton driver | Solves F(x) = 0 for F: Rⁿ → Rⁿ systems |
| Armijo Line Search | Backtracking line search | Sufficient-decrease step-size control for any iterative method |

Interpolation

| Utility | Type | Purpose |
|---|---|---|
| Linear Interpolation | Exact two-point | Evaluate f(x) from two known points (or extrapolate) |
| Linear Inverse | Exact two-point (inverse) | Find x given f(x) and two known points |
| Inverse Quadratic | Three-point parabola | Root-finding step: find x where f(x) = 0 from three points |
| Secant Method | Two-point linear root | X-axis secant intersection between two points |
| Monotone Cubic Hermite (PCHIP) | Piecewise cubic Hermite | Monotonicity-preserving C¹ interpolant for tabulated data |

ODE Integration

| Integrator | Order | Evaluations/Step | Use Case |
|---|---|---|---|
| Euler | 1st | 1 | Simple or monotonicity-critical evolution with small steps |
| Runge-Kutta 4 | 4th | 4 | Default when accuracy matters |

All classes are immutable and thread-safe — a single instance can be shared freely between threads, with configuration adjusted through withX(...) copy-methods.


Available Solvers

Brent-Dekker Solver

A Java implementation of the improved Brent-Dekker algorithm based on modifications proposed by Zhengqiu Zhang. Uses a combination of Secant and Bisection numerical schemes with inverse quadratic interpolation. Features a two-phase automatic bracket-evaluation procedure (linear extrapolation, then exponential bracket expansion) that runs when the provided counterpart points do not span a sign change.

Best for: Functions where a bracket (two points with opposite signs) is known or can be found — robust convergence guaranteed.
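The exponential-expansion phase can be illustrated with a short sketch. This is not the library's implementation; the class name, growth factor, and iteration cap are all assumptions chosen for the example:

```java
import java.util.function.DoubleUnaryOperator;

// Illustrative sketch of exponential bracket expansion (hypothetical helper,
// not the library's actual code): grow the interval geometrically until
// f(a) and f(b) have opposite signs.
public class BracketExpansionSketch {

    static double[] expandBracket(DoubleUnaryOperator f, double a, double b) {
        double fa = f.applyAsDouble(a);
        double fb = f.applyAsDouble(b);
        final double factor = 1.6;   // assumed growth factor
        for (int i = 0; i < 50 && fa * fb > 0; i++) {
            // Push outward the endpoint with the smaller residual,
            // since it is presumably closer to the root.
            if (Math.abs(fa) < Math.abs(fb)) {
                a += factor * (a - b);
                fa = f.applyAsDouble(a);
            } else {
                b += factor * (b - a);
                fb = f.applyAsDouble(b);
            }
        }
        if (fa * fb > 0) {
            throw new IllegalStateException("No sign change found");
        }
        return new double[]{a, b};
    }
}
```

With f(x) = x − 100 and the non-bracketing pair (0, 1), the upper endpoint grows geometrically past the root after a few expansions.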

Newton-Raphson Solver

A derivative-based root-finding method. If an analytical derivative is not supplied, a central-difference approximation is used. Converges quadratically near simple roots when the initial guess is sufficiently close.

Best for: Functions where you have a good initial guess and want fast convergence.
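When no analytical derivative is supplied, the solver falls back to a central-difference approximation. The approximation itself is a one-liner (sketch only; the helper name is illustrative, not part of the library's API):

```java
import java.util.function.DoubleUnaryOperator;

public class CentralDifferenceSketch {

    // f'(x) ≈ (f(x + h) − f(x − h)) / (2h), second-order accurate in h
    static double derivative(DoubleUnaryOperator f, double x, double h) {
        return (f.applyAsDouble(x + h) - f.applyAsDouble(x - h)) / (2 * h);
    }
}
```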

Multivariate Newton-Raphson Solver

Newton-Raphson driver for multivariate systems F(x) = 0 where F: Rⁿ → Rⁿ. Solves the linear system J(xₖ) · Δxₖ = −F(xₖ) via Gaussian elimination with partial pivoting. Accepts either an analytical Jacobian or computes a forward-difference numerical Jacobian. Supports configurable damping to prevent divergence.

Best for: Small-to-medium systems (n ≤ ~100) where dense Jacobian solve is acceptable.
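One damped Newton update can be sketched end to end: evaluate F, solve J·Δx = −F by Gaussian elimination with partial pivoting, then step by α·Δx. This is a minimal illustration under assumed names, not the library's internals:

```java
import java.util.function.Function;

public class NewtonStepSketch {

    // Solve A·x = b by Gaussian elimination with partial pivoting.
    static double[] gaussSolve(double[][] A, double[] b) {
        int n = b.length;
        double[][] m = new double[n][];
        for (int i = 0; i < n; i++) m[i] = A[i].clone();
        double[] r = b.clone();
        for (int k = 0; k < n; k++) {
            int p = k;                                 // largest pivot in column k
            for (int i = k + 1; i < n; i++) {
                if (Math.abs(m[i][k]) > Math.abs(m[p][k])) p = i;
            }
            double[] rowTmp = m[k]; m[k] = m[p]; m[p] = rowTmp;
            double rTmp = r[k]; r[k] = r[p]; r[p] = rTmp;
            for (int i = k + 1; i < n; i++) {          // eliminate below the pivot
                double f = m[i][k] / m[k][k];
                for (int j = k; j < n; j++) m[i][j] -= f * m[k][j];
                r[i] -= f * r[k];
            }
        }
        double[] x = new double[n];                    // back-substitution
        for (int i = n - 1; i >= 0; i--) {
            double s = r[i];
            for (int j = i + 1; j < n; j++) s -= m[i][j] * x[j];
            x[i] = s / m[i][i];
        }
        return x;
    }

    // One damped Newton update: x + α·Δx, where J(x)·Δx = −F(x).
    static double[] newtonStep(Function<double[], double[]> F,
                               Function<double[], double[][]> J,
                               double[] x, double alpha) {
        double[] fx = F.apply(x);
        double[] negF = new double[fx.length];
        for (int i = 0; i < fx.length; i++) negF[i] = -fx[i];
        double[] dx = gaussSolve(J.apply(x), negF);
        double[] next = x.clone();
        for (int i = 0; i < x.length; i++) next[i] += alpha * dx[i];
        return next;
    }
}
```

Iterating this step with α = 1 on the x² + y² = 4, x = y system from (1, 1) converges to (√2, √2) in a handful of iterations.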

Armijo Line Search

Standalone backtracking line search satisfying the Armijo sufficient-decrease condition. Given a current iterate x, a step direction d, and merit function φ(α) = ½ ||F(x + α·d)||², finds the largest step size α of the form α₀ · ρᵏ satisfying the decrease criterion. Composable with any iterative method (Newton, quasi-Newton, gradient descent).

Best for: Preventing divergence in iterative solvers by controlling step size.
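The backtracking loop itself fits in a few lines. A minimal sketch follows; the names and the simplified decrease test are assumptions, not the library's code. For an exact Newton direction, φ'(0) = −2·φ(0), so the Armijo condition reduces to φ(α) ≤ (1 − 2·c₁·α)·φ(0):

```java
import java.util.function.Function;

public class ArmijoSketch {

    // Merit function φ(α) = ½‖F(x + α·d)‖²
    static double merit(Function<double[], double[]> F, double[] x, double[] d, double alpha) {
        double[] p = new double[x.length];
        for (int i = 0; i < x.length; i++) p[i] = x[i] + alpha * d[i];
        double s = 0;
        for (double v : F.apply(p)) s += v * v;
        return 0.5 * s;
    }

    // Largest α = α0·ρᵏ satisfying φ(α) ≤ (1 − 2·c1·α)·φ(0)
    // (this form of the Armijo test assumes d is an exact Newton direction).
    static double backtrack(Function<double[], double[]> F, double[] x, double[] d,
                            double alpha0, double rho, double c1, int maxBacktracks) {
        double phi0 = merit(F, x, d, 0);
        double alpha = alpha0;
        for (int k = 0; k < maxBacktracks; k++) {
            if (merit(F, x, d, alpha) <= (1 - 2 * c1 * alpha) * phi0) {
                return alpha;
            }
            alpha *= rho;
        }
        return alpha;  // smallest step tried; caller may treat this as failure
    }
}
```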


Interpolation

Basic Interpolation Algorithms

The InterpolationAlgorithms utility class provides four static, zero-allocation interpolation primitives used as building blocks for root-finding and data interpolation:

| Method | Signature | What it does |
|---|---|---|
| linearInterpolation | (x1, f_x1, x2, f_x2, x) → f(x) | Linear interpolation (or extrapolation) at x from two known points |
| linearInterpolationFromValue | (x1, f_x1, x2, f_x2, f_x) → x | Inverse: find x given a target f(x) value |
| inverseQuadraticInterpolation | (x2, x1, xn, f_x2, f_x1, f_xn) → x₀ | Fit a parabola through three points and find where it crosses f(x) = 0 |
| secantMethod | (x2, x1, f_x2, f_x1) → x₀ | X-axis secant intersection — one step of the classic secant method |

import static com.synerset.numenormath.interpolation.InterpolationAlgorithms.*;

// Linear interpolation: f(x) at x = 3 from points (1, 2) and (5, 10)
double fx = linearInterpolation(1, 2, 5, 10, 3);   // 6.0

// Inverse linear: find x where f(x) = 7
double x = linearInterpolationFromValue(1, 2, 5, 10, 7);  // 3.5

// Inverse quadratic interpolation (root-finding from three points)
double root = inverseQuadraticInterpolation(2, 1, 3, 1.0, -2.0, 3.0);

// Secant method step
double secantRoot = secantMethod(2, 1, 1.0, -2.0);

These are pure static utilities — no state, no allocation, no instantiation. They power the interpolation steps inside the Brent-Dekker solver and are exposed for standalone use.

Monotone Cubic Hermite (PCHIP)

Monotonicity-preserving piecewise cubic Hermite interpolation per the Fritsch–Carlson (1980) algorithm. Given knot arrays (x, y) with strictly increasing x, builds a C¹-continuous piecewise cubic that passes through every knot exactly and preserves the monotonicity of the input data — no overshoot, no oscillation. Exposes both value f(x) and derivative f'(x).

Unlike classical natural cubic splines (C² but not monotone), PCHIP is the correct choice when tabulated data represents physically monotone relationships (e.g., valve/pump curves) and overshoot would break solver convergence invariants.
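The heart of the Fritsch–Carlson construction is the tangent-limiting step: start from secant slopes, zero the tangent at local extrema, then clamp the tangent/secant ratios so no cubic piece can overshoot. A sketch of that step follows; it is illustrative, and the library's exact tangent rules may differ:

```java
public class FritschCarlsonSketch {

    // Monotonicity-preserving tangents per Fritsch–Carlson (1980):
    // 1) secant slopes, 2) averaged interior tangents (zeroed at extrema),
    // 3) clamp (m_i/d_i, m_{i+1}/d_i) inside the circle of radius 3.
    static double[] tangents(double[] x, double[] y) {
        int n = x.length;
        double[] d = new double[n - 1];
        for (int i = 0; i < n - 1; i++) {
            d[i] = (y[i + 1] - y[i]) / (x[i + 1] - x[i]);
        }
        double[] m = new double[n];
        m[0] = d[0];
        m[n - 1] = d[n - 2];
        for (int i = 1; i < n - 1; i++) {
            m[i] = (d[i - 1] * d[i] <= 0) ? 0 : (d[i - 1] + d[i]) / 2;
        }
        for (int i = 0; i < n - 1; i++) {
            if (d[i] == 0) { m[i] = 0; m[i + 1] = 0; continue; }
            double a = m[i] / d[i];
            double b = m[i + 1] / d[i];
            double s = a * a + b * b;
            if (s > 9) {                 // cubic piece would overshoot: rescale
                double t = 3 / Math.sqrt(s);
                m[i] = t * a * d[i];
                m[i + 1] = t * b * d[i];
            }
        }
        return m;
    }
}
```

For monotone increasing data, every tangent stays nonnegative, which is exactly the no-overshoot guarantee described above.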


ODE Integration

Euler Integrator

Explicit forward Euler: y(t+dt) = y(t) + dt · F(t, y). One derivative evaluation per step, first-order accurate. Stable for sufficiently small dt.

Best for: Simple state evolution where monotonicity and predictability matter more than accuracy, and where the step size can be kept small. Note that explicit Euler has a small stability region, so stiff problems generally demand very small steps.
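A single Euler step is a direct transcription of the formula above (a sketch, not the library's EulerIntegrator):

```java
import java.util.function.BiFunction;

public class EulerSketch {

    // One explicit forward Euler step: y(t + dt) = y(t) + dt · F(t, y)
    static double[] eulerStep(BiFunction<Double, double[], double[]> f,
                              double t, double[] y, double dt) {
        double[] dy = f.apply(t, y);
        double[] next = new double[y.length];
        for (int i = 0; i < y.length; i++) {
            next[i] = y[i] + dt * dy[i];
        }
        return next;
    }
}
```

With dy/dt = −y, y₀ = 1, and dt = 0.1, one step yields 1 + 0.1·(−1) = 0.9.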

Runge-Kutta 4 Integrator

Classical RK4: four derivative evaluations per step, fourth-order accurate (O(dt⁴) local error). The default choice when accuracy matters and stiffness is moderate.
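The four stages combine as a weighted average of slope samples taken across the step. A self-contained sketch of one RK4 step (illustrative, not the library's implementation):

```java
import java.util.function.BiFunction;

public class Rk4Sketch {

    // One classical RK4 step: weighted average of four slope samples.
    static double[] rk4Step(BiFunction<Double, double[], double[]> f,
                            double t, double[] y, double dt) {
        double[] k1 = f.apply(t, y);
        double[] k2 = f.apply(t + dt / 2, axpy(y, dt / 2, k1));
        double[] k3 = f.apply(t + dt / 2, axpy(y, dt / 2, k2));
        double[] k4 = f.apply(t + dt, axpy(y, dt, k3));
        double[] next = new double[y.length];
        for (int i = 0; i < y.length; i++) {
            next[i] = y[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]);
        }
        return next;
    }

    // y + a·k, element-wise
    private static double[] axpy(double[] y, double a, double[] k) {
        double[] r = new double[y.length];
        for (int i = 0; i < y.length; i++) {
            r[i] = y[i] + a * k[i];
        }
        return r;
    }
}
```

One step on dy/dt = −y from y = 1 with dt = 0.1 reproduces e^(−0.1) ≈ 0.904837 to about 1e-7, versus an error of about 5e-3 for a single Euler step.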


Installation

Add the Maven dependency to your pom.xml:

<dependency>
  <groupId>com.synerset</groupId>
  <artifactId>numenor-math</artifactId>
  <version>2.0.0</version>
</dependency>

Quick Start

Brent-Dekker Solver (bracketing method)

import com.synerset.numenormath.solver.BrentDekkerSolver;
import java.util.function.DoubleUnaryOperator;

BrentDekkerSolver solver = BrentDekkerSolver.of("MySolver");

DoubleUnaryOperator linear = x -> 2 * x + 10;      // 2x + 10 = 0
double root = solver.findRoot(linear, -50, 50);    // -5.0

Newton-Raphson Solver (open method)

import com.synerset.numenormath.solver.NewtonRaphsonSolver;
import java.util.function.DoubleUnaryOperator;

NewtonRaphsonSolver solver = NewtonRaphsonSolver.of("MySolver");

DoubleUnaryOperator quadratic = x -> x * x - 4;    // x² - 4 = 0
double root = solver.findRoot(quadratic, 3);       // 2.0

Static one-liner

double root1 = BrentDekkerSolver.findRootOf(x -> 2 * x + 10, -10, 10);
double root2 = NewtonRaphsonSolver.findRootOf(x -> x * x - 25, 6);

Multivariate Newton-Raphson (system solver)

import com.synerset.numenormath.solver.MultivariateNewtonRaphsonSolver;
import java.util.function.Function;

// Solve x² + y² = 4, x = y  →  root at (√2, √2)
Function<double[], double[]> residual = x -> new double[]{
    x[0] * x[0] + x[1] * x[1] - 4,
    x[0] - x[1]
};
double[] root = MultivariateNewtonRaphsonSolver.findRootOf(residual, new double[]{1, 1});

PCHIP Interpolation (monotone-preserving)

import com.synerset.numenormath.interpolation.MonotoneCubicHermiteInterpolator;

double[] x = {0, 1, 2, 3};
double[] y = {0, 1, 2, 5};

MonotoneCubicHermiteInterpolator interp =
        MonotoneCubicHermiteInterpolator.of(x, y);

double value = interp.applyAsDouble(1.5);   // interpolated value
double slope = interp.derivative(1.5);      // interpolated derivative

Basic Interpolation (static utilities)

import static com.synerset.numenormath.interpolation.InterpolationAlgorithms.*;

double fx  = linearInterpolation(0, 0, 10, 100, 5);        // 50.0
double xin = linearInterpolationFromValue(0, 0, 10, 100, 50); // 5.0
double root = inverseQuadraticInterpolation(3, 2, 1, 6, 3, 0); // 1.0 (f_xn = 0, so xn is already the root)

ODE Integration (RK4)

import com.synerset.numenormath.ode.RungeKutta4Integrator;
import java.util.function.BiFunction;

BiFunction<Double, double[], double[]> deriv = (t, y) -> new double[]{-y[0]};
double[] y0 = {1.0};

double[] result = RungeKutta4Integrator.of()
        .integrate(deriv, 0.0, y0, 1.0, 0.1);
// result[0] ≈ exp(-1) ≈ 0.3679

Brent-Dekker Solver Guide

Creating a solver

BrentDekkerSolver solver = BrentDekkerSolver.of();           // default
BrentDekkerSolver solver = BrentDekkerSolver.of("MySolver"); // named

Defining functions

Any single-variable equation can be provided as a DoubleUnaryOperator:

// Linear:                            2x + 10 = 0
DoubleUnaryOperator linear = x -> 2 * x + 10;

// Quadratic:                         2x² + 5x - 3 = 0
DoubleUnaryOperator quadratic = x -> 2 * x * x + 5 * x - 3;

// Nested log:
DoubleUnaryOperator logNested = x -> 93.3519196629417 -
    (-237300 * Math.log(0.001638 * x) / (1000 * Math.log(0.001638 * x) - 17269));

Finding roots

The solver needs two counterpart points a and b where f(a) and f(b) have opposite signs. If the points do not form a valid bracket, the two-phase evaluation procedure automatically searches for one.

double root = solver.findRoot(linear, -50, 50);  // explicit bounds
double root = solver.findRoot(linear);           // configured default bounds

// Use a different bracket to find another root
double otherRoot = solver.findRoot(quadratic, -1, 2);

Configuration (immutable)

Every withX(...) method returns a new solver instance — the original is left unchanged, so any instance can be cached or shared:

BrentDekkerSolver tuned = BrentDekkerSolver.of("tuned")
        .withAccuracy(1E-12)             // convergence tolerance
        .withIterationsLimit(200)        // max iterations
        .withCounterpartPoints(-1, 1)    // default a, b for findRoot(func)
        .withFailForNaN(false)           // do not throw on NaN/Inf
        .withDebugLogs(true)             // diagnostic log stream
        .withSummaryLogs(true);          // final summary log line

Newton-Raphson Solver Guide

Creating a solver

NewtonRaphsonSolver solver = NewtonRaphsonSolver.of();            // default
NewtonRaphsonSolver solver = NewtonRaphsonSolver.of("MySolver");  // named

Finding roots

Newton-Raphson only needs a single initial guess:

DoubleUnaryOperator func = x -> x * x - 4;

double root  = solver.findRoot(func, 3);                      // 2.0 (numerical derivative)
double root2 = solver.findRoot(func, -3);                     // -2.0

// Using an analytical derivative (usually faster and more accurate):
double root3 = solver.findRoot(func, x -> 2 * x, 3);          // 2.0

Configuration (immutable)

NewtonRaphsonSolver tuned = NewtonRaphsonSolver.of("tuned")
        .withAccuracy(1E-12)
        .withIterationsLimit(200)
        .withDerivativeStep(1E-8)        // h for central-difference derivative
        .withInitialGuess(5)             // default x₀ for findRoot(func)
        .withFailForNaN(false)
        .withDebugLogs(true)
        .withSummaryLogs(true);

When to use Newton-Raphson

| Advantage | Limitation |
|---|---|
| Fast convergence near simple roots | Needs a good initial guess |
| No bracket required | May diverge or cycle for poor guesses |
| Works with a single point | Fails when derivative is zero or near-zero |
| Quadratic convergence | Sensitive to local shape of the function |

Multivariate Newton-Raphson Guide

Creating a solver

MultivariateNewtonRaphsonSolver solver = MultivariateNewtonRaphsonSolver.of("my-solver");

Solving a system

Function<double[], double[]> residual = x -> new double[]{
    x[0] * x[0] + x[1] * x[1] - 4,  // x² + y² = 4
    x[0] - x[1]                      // x = y
};

// Numerical Jacobian (forward difference)
double[] root = solver.findRoot(residual, new double[]{1, 1});

// With analytical Jacobian (faster, more accurate):
Function<double[], double[][]> jacobian = x -> new double[][]{
    {2 * x[0], 2 * x[1]},
    {1, -1}
};
double[] root2 = solver.findRoot(residual, jacobian, new double[]{1, 1});

// Full result with convergence info:
MultivariateSolverResult result = solver.solve(residual, new double[]{1, 1});
boolean converged = result.converged();
int iterations = result.iterations();

Configuration

MultivariateNewtonRaphsonSolver tuned = MultivariateNewtonRaphsonSolver.of("tuned")
        .withAccuracy(1E-12)
        .withIterationsLimit(200)
        .withDerivativeStep(1E-8)   // h for forward-difference Jacobian
        .withDamping(0.5)           // α ∈ (0, 1]; 1.0 = pure Newton
        .withFailForNaN(false)
        .withDebugLogs(true)
        .withSummaryLogs(true);

Armijo Line Search Guide

ArmijoLineSearch lineSearch = ArmijoLineSearch.of("armijo");

Function<double[], double[]> residual = x -> new double[]{...};
double[] x = {...};
double[] d = {...}; // proposed Newton step from solver

double alpha = lineSearch.findStep(residual, x, d);
// Apply step: x_new = x + alpha * d

Configuration

ArmijoLineSearch tuned = ArmijoLineSearch.of("tuned")
        .withInitialStep(0.5)          // α₀ starting step
        .withBacktrackFactor(0.5)      // ρ backtracking factor
        .withArmijoConstant(1E-4)      // c₁ sufficient-decrease constant
        .withMaxBacktracks(30)
        .withDebugLogs(true);

PCHIP Interpolation Guide

Building an interpolant

double[] x = {0, 1, 2, 3, 4};
double[] y = {0, 1, 4, 9, 16};

MonotoneCubicHermiteInterpolator interp =
        MonotoneCubicHermiteInterpolator.of(x, y);

Evaluating

double value = interp.applyAsDouble(2.5);   // interpolated value
double clamped = interp.valueClamped(5.0);  // returns y[last] = 16 (no extrapolation)
double slope = interp.derivative(2.5);      // interpolated derivative

When to use PCHIP vs. cubic spline

| PCHIP (this library) | Natural cubic spline |
|---|---|
| C¹ continuous | C² continuous |
| Preserves monotonicity | May overshoot/oscillate |
| No spurious oscillations | Ringing near steps |
| Correct for monotone loss models | Risky for solver convergence |

Interpolation Algorithms Guide

All methods in InterpolationAlgorithms are public static double — no state, no allocation, no instantiation. Import statically for clean call sites:

import static com.synerset.numenormath.interpolation.InterpolationAlgorithms.*;

Linear Interpolation

Evaluate f(x) at any x from two known points. Works for both interpolation (between the points) and extrapolation (outside the range).

// Points: (0, 0) and (10, 100)
double v = linearInterpolation(0, 0, 10, 100, 5);    // 50.0  (interpolation)
double e = linearInterpolation(0, 0, 10, 100, 15);   // 150.0 (extrapolation)

Linear Inverse Interpolation

Given two points and a target f(x) value, find the corresponding x. This is the exact inverse of linearInterpolation.

// Points: (0, 0) and (10, 100) — find x where f(x) = 75
double x = linearInterpolationFromValue(0, 0, 10, 100, 75);  // 7.5

Inverse Quadratic Interpolation

Fits a parabola through three points in y-space and finds where it intersects f(x) = 0. Faster than the secant method but more sensitive to initial guesses — best when all three points are close to the root.

// Three bracketing points near the root
double root = inverseQuadraticInterpolation(
    x2, f_x2,   // most recent point
    x1, f_x1,   // previous point
    xn, f_xn    // older point (not same side as x2)
);

Secant Method

Computes the x-axis intersection of the secant line through two points. One step of the classic secant root-finding method.

double x0 = secantMethod(2.0, 1.0, 4.0, -1.0);  // secant root ≈ 1.2

When to use basic interpolation vs. PCHIP

| Use basic InterpolationAlgorithms when… | Use PCHIP when… |
|---|---|
| You need a single interpolated value from 2–3 points | You have a full data table of knots |
| You're implementing a root-finding iteration step | You need a smooth, monotone curve |
| A zero-allocation, stateless call is required | You need derivative evaluation |
| You want the building blocks Brent-Dekker uses internally | You need clamped boundary behavior |

ODE Integration Guide

RK4 integration

import com.synerset.numenormath.ode.RungeKutta4Integrator;

BiFunction<Double, double[], double[]> deriv = (t, y) -> new double[]{-y[0]};
double[] y0 = {1.0};

// Integrate from t=0 to t=1 with step dt=0.1
double[] result = RungeKutta4Integrator.of("rk4")
        .integrate(deriv, 0.0, y0, 1.0, 0.1);

Euler integration

import com.synerset.numenormath.ode.EulerIntegrator;

double[] result = EulerIntegrator.of("euler")
        .integrate(deriv, 0.0, y0, 1.0, 0.001);

Manual stepping (for trajectory recording)

OdeIntegrator integrator = RungeKutta4Integrator.of();
double t = 0.0;
double[] y = y0.clone();
while (t < tEnd) {
    y = integrator.step(deriv, t, y, dt);
    t += dt;
    // record trajectory point (t, y)...
}

SolverResult

findRoot(...) returns the computed root as a double. When you also need the iteration count or convergence status, use solve(...) — it returns a SolverResult record:

public record SolverResult(double root, int iterations, boolean converged) { }

SolverResult r = solver.solve(x -> x * x - 2, 0, 2);
double root       = r.root();         // 1.4142...
int iterations    = r.iterations();   // e.g. 7
boolean converged = r.converged();    // true

solve(...) never throws when the iteration limit is reached; it returns converged = false together with the best approximation found. findRoot(...) on the Newton-Raphson solver throws IllegalStateException in the same situation (unless withFailForNaN(false) is set).


Thread Safety

Both solvers are fully immutable and safe to call concurrently:

  • All configuration is final and set at construction time.
  • Every withX(...) returns a fresh instance — shared instances never mutate.
  • Iteration state is kept strictly on the stack of each findRoot / solve call.
BrentDekkerSolver shared = BrentDekkerSolver.of("shared");

// Safe: many threads can use the same instance simultaneously.
IntStream.range(0, 1024).parallel().forEach(i -> {
    double root = shared.findRoot(x -> x - i, -1E6, 1E6);
    assert Math.abs(root - i) < 1E-9;
});

The user-supplied function (DoubleUnaryOperator) remains the caller's responsibility — if you share a function that mutates external state, guard that state yourself.


NaN/Infinity Handling

Both solvers throw NumenorSolverException by default when NaN or Infinity values are produced by the function. Switching to soft mode lets the solver return the best approximation found so far:

// Strict — default
solver.findRoot(func, a, b);

// Soft — returns the last valid approximation instead of throwing
BrentDekkerSolver soft = solver.withFailForNaN(false);
double best = soft.findRoot(func, a, b);

For Newton-Raphson, withFailForNaN(false) also disables the IllegalStateException thrown when the iteration limit is reached; findRoot then returns the best approximation found.


What's New in 1.1.0

  • Thread-safety: Both solvers are now immutable. All iteration state is local to each call; a single instance can be shared between threads without synchronization.
  • Fluent immutable configuration: setX(...) / toggleX(...) replaced by withX(...) copy-methods that return new instances.
  • SolverResult record: solve(...) returns the root, iteration count and convergence flag together — no more mutable lastResultIterationCount field.
  • Hot-loop performance: Brent-Dekker main iteration reduced from 4 to 2 function evaluations per step by reusing already-computed f(x) values after bracket updates; log-string building is now fully guarded behind the showDiagnostics flag.
  • Concurrency test suite: New SolverConcurrencyTest runs 8,000+ solves across 16 threads on one shared instance.

Tech Stack

Core: Java 21, Maven
Testing: JUnit 5
CI/CD: GitHub Actions, SonarCloud


License

This project is licensed under the Apache License 2.0.


Reference

  • [1] BRENT-DEKKER ITERATIVE SOLVER — Modified algorithm proposed by Zhengqiu Zhang, International Journal of Experimental Algorithms (IJEA), Volume 2, Issue 1, 2011.
  • [2] Press, W.H., Teukolsky, S.A., Vetterling, W.T. & Flannery, B.P. (2007). Numerical Recipes: The Art of Scientific Computing, 3rd Edition, Cambridge University Press — Section 9.4: Newton-Raphson Method.
  • [3] Dennis, J.E., Schnabel, R.B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations, ch. 5 — Newton's method for systems.
  • [4] Nocedal, J., Wright, S. (2006). Numerical Optimization, 2nd ed., Springer — ch. 3.1: Backtracking Line Search.
  • [5] Fritsch, F.N., Carlson, R.E. (1980). Monotone Piecewise Cubic Interpolation, SIAM Journal on Numerical Analysis, Vol. 17(2), pp. 238–246.
  • [6] Hairer, E., Nørsett, S., Wanner, G. (1993). Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd ed., Springer — ch. II.1: Euler and classical RK methods.

Acknowledgments

I extend my heartfelt gratitude to the Silesian University of Technology as it is here that my professional identity was forged, transforming me into an engineer to my very core. Amicus Plato, sed magis amica veritas.
