Gowers's The Princeton Companion to Mathematics includes Trefethen's survey of numerical analysis, a survey of algorithms for solving problems of continuous mathematics. It reviews the main branches and historical accomplishments of floating-point algorithm research from a mathematical (as opposed to technological, engineering, or application) perspective. It starts from the premise that approximation theory (minimax approximation, splines, interpolation, series expansions) is the basis for solving numerical-analysis problems. It covers machine arithmetic, floating-point representation, and rounding; numerical linear algebra (Gaussian elimination, Gram-Schmidt, the SVD); numerical solution of differential equations (Clenshaw-Curtis quadrature, Adams-Bashforth, Runge-Kutta, Dahlquist's "consistency + stability = convergence", Lax-Wendroff and computational fluid dynamics); and finally numerical optimization (the simplex method, linear programming, primal-dual methods).
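To make one of the survey's named methods concrete, here is a minimal sketch (function names are my own, chosen for illustration) of the classical fourth-order Runge-Kutta method for an ordinary differential equation:

```python
import math

def rk4_step(f, t, y, h):
    """One step of the classical fourth-order Runge-Kutta method."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n equal RK4 steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
    return y

# Test problem: y' = y with y(0) = 1, so y(1) = e.
approx = integrate(lambda t, y: y, 0.0, 1.0, 1.0, 100)
```

With 100 steps the global error is on the order of h^4, so the result agrees with e to roughly eight digits, which is the "consistency + stability = convergence" story in miniature.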
I find it interesting that the vast majority of quantitative programming on the Street depends on numerical analysis results established before 1970 or even 1960 (see Trefethen's timeline, pg. 615 in Gowers). Approximation theory, yes, loads of it; numerical linear algebra, some, but typically not massive; the odd Crank-Nicolson solver; massive Monte Carlo simulators, hoo ha, absolutely; but massive finite-difference PDE solvers, not so much. Of course there are occasions where, for example, the portfolio-optimization folks have given up simplex for interior-point optimization, or an occasional exotic-derivative trader/desk spawns the need for a low-dimension differential equation solver (that runs on their PC in XL). For the folks running the half a million hours of stress tests, I guess that all the mathematical/numerical results they require were solved and taught in undergraduate textbooks before Backus finished the first Fortran specification in the mid-50s. The biggest numerical result in the last 60 years for the financial quant folks (apart from purely market-valuation results like Black-Scholes, HJM, SABR, and Hull-White, or risk-modeling results like GARCH and VaR) is probably IEEE 754, and even that is probably under discussion given the attention to floating-point computation optimization via FPGA racks. Of course, more recent numerical results, for example PCA (see Aspremont et al.), Krylov subspaces (see Druskin et al.), simulation for contingent-claim valuation (see Boyle, Broadie, and Glasserman), and low-discrepancy sequences (see Niederreiter), are used and are important contemporary numerical tools, but nevertheless the numerical-analytic framework for the Street's quantitative programming has been quite stable for decades. In a typical quant programmer's library the copy of Wilmott will have more dog-eared pages than Press et al.
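The "massive Monte Carlo simulators" point can be illustrated in a few lines: the workhorse computation is just simulating terminal prices under geometric Brownian motion and discounting the average payoff, which for a vanilla option can be checked against the Black-Scholes closed form. A toy sketch (all names and parameters are illustrative, not any desk's actual library):

```python
import math
import random

def bs_call(S, K, r, sigma, T):
    """Black-Scholes closed-form price of a European call."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

def mc_call(S, K, r, sigma, T, n, seed=0):
    """Plain Monte Carlo price: average discounted payoff over n GBM paths."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    total = 0.0
    for _ in range(n):
        ST = S * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        total += max(ST - K, 0.0)
    return math.exp(-r * T) * total / n

exact = bs_call(100, 100, 0.05, 0.2, 1.0)   # ~10.45
est = mc_call(100, 100, 0.05, 0.2, 1.0, 200000)
```

The O(1/sqrt(n)) convergence of the plain estimator is exactly why low-discrepancy sequences (Niederreiter) and the variance-reduction literature (Boyle, Broadie, and Glasserman) matter in practice.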
Chebfun is a collection of algorithms and an open-source software system in object-oriented MATLAB that extends familiar, powerful methods of numerical computation on numbers to continuous or piecewise-continuous functions. It also implements continuous analogues of linear algebra notions like the QR decomposition and the SVD, and solves ordinary differential equations. The mathematical basis of the system combines tools of Chebyshev expansions, the fast Fourier transform, barycentric interpolation, recursive zerofinding, and automatic differentiation. The project was initiated by Nick Trefethen and Zachary Battles in 2002, and the differential equations side of Chebfun was created by Toby Driscoll of the University of Delaware beginning in 2008. See http://www2.maths.ox.ac.uk/chebfun/
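One of the ingredients named above, barycentric interpolation at Chebyshev points, is simple enough to sketch directly. The following is a toy Python version (not Chebfun's actual code, and the names are my own) of the second-form barycentric formula with the explicit weights for Chebyshev points of the second kind:

```python
import math

def chebpts(n):
    """n+1 Chebyshev points of the second kind on [-1, 1]."""
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

def bary(x, xs, fs):
    """Evaluate the polynomial interpolant through (xs[j], fs[j]) at x.

    Uses the barycentric formula; for Chebyshev points of the second
    kind the weights reduce to (-1)^j, halved at the two endpoints.
    """
    n = len(xs) - 1
    num = den = 0.0
    for j, (xj, fj) in enumerate(zip(xs, fs)):
        if x == xj:                      # exactly on a node: return the data value
            return fj
        w = (-1) ** j * (0.5 if j in (0, n) else 1.0)
        t = w / (x - xj)
        num += t * fj
        den += t
    return num / den

# Interpolate exp(x) on 21 Chebyshev points; for a smooth function the
# interpolant is accurate to roughly machine precision.
xs = chebpts(20)
fs = [math.exp(x) for x in xs]
val = bary(0.3, xs, fs)
```

The point of the barycentric form is that evaluation is O(n) and numerically stable, which is what lets a system like Chebfun treat a function as "just" its samples at Chebyshev points.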