Numerical Recipes 3rd Edition

William H. Press


Do you want easy access to the latest methods in scientific computing? This greatly expanded third edition of Numerical Recipes has it, with wider coverage than ever before, many new, expanded and updated sections, and two completely new chapters. The executable C++ code, now printed in colour for easy reading, adopts an object-oriented style particularly suited to scientific applications. Co-authored by four leading scientists from academia and industry, Numerical Recipes starts with basic mathematics and computer science and proceeds to complete, working routines. The whole book is presented in the informal, easy-to-read style that made earlier editions so popular. Highlights of the new material include: a new chapter on classification and inference, Gaussian mixture models, HMMs, hierarchical clustering, and SVMs; a new chapter on computational geometry, covering KD trees, quad- and octrees, Delaunay triangulation, and algorithms for lines, polygons, triangles, and spheres; interior point methods for linear programming; MCMC; an expanded treatment of ODEs with completely new routines; and many new statistical distributions. For support, or to subscribe to an online version, please visit www.nr.com.

I am working to become a scientific programmer. I have enough background in math and statistics, but I'm rather lacking in programming background. I've found it very hard to learn how to use a language for scientific programming, because most references for scientific programming are close to trivial.

My work involves statistical/financial modelling, and none of it involves physics models. Currently, I use Python extensively with NumPy and SciPy. I've also done some R and Mathematica. I know enough C/C++ to read code, and have no experience with Fortran.

I don't know if this is a good list of languages for a scientific programmer. If it is, what is a good reading list for learning the syntax and design patterns of these languages in scientific settings?

This might be useful: The Nature of Mathematical Modeling.

Writing Scientific Software: A Guide to Good Style is a good book with overall advice for modern scientific programming.

I'm a scientific programmer who just entered the field in the past 2 years. I'm into more biology and physics modeling, but I bet what you're looking for is pretty similar. While I was applying to jobs and internships there were two things that I didn't think would be that important to know, but caused me to end up missing out on opportunities. One was MATLAB, which has already been mentioned. The other was database design -- no matter what area of SP you're in, there's probably going to be a lot of data that has to be managed somehow.

The book Database Design for Mere Mortals by Michael Hernandez was recommended to me as being a good start and helped me out a lot in my preparation. I would also make sure you at least understand some basic SQL if you don't already.

One issue scientific programmers face is maintaining a repository of code (and data) that others can use to reproduce your experiments. In my experience this is a skill not required in commercial development.

Here are some readings on this:

These are in the context of computational biology but I assume it applies to most scientific programming.

Also, look at Python Scripting for Computational Science.

Press, Teukolsky, Vetterling, Flannery: Numerical Recipes in C++ (the book is great, just beware of the license)

Modern C++ Design

and have a gander at the source code for the GNU Scientific Library.

Just curious: how are these implemented? I can't see where I would start. Do they work directly on the bits of the `float`/`double`?

Also where can I find the source code of functions from math.h? All I find are either headers with prototypes or files with functions that call other functions from somewhere else.

EDIT: Part of the message was lost after editing the title. What I meant in particular were the `ceil()` and `floor()` functions.

On many platforms (such as any modern x86-compatible), many of the maths functions are implemented directly in the floating-point hardware (see for example http://en.wikipedia.org/wiki/X86_instruction_listings#x87_floating-point_instructions). Not all of them are used, though (as I learnt through comments to other answers here). But, for instance, the `sqrt` library function is often implemented directly in terms of the SSE hardware instruction.

For some ideas on how the underlying algorithms work, you might try reading Numerical Recipes, which is available as PDFs somewhere.

It's been a while since I was in college and knew how to calculate a best-fit line, but I find myself needing to. Suppose I have a set of points, and I want to find the line that best fits those points.

What is the equation to determine a best fit line? How would I do that with PHP?

Method of Least Squares http://en.wikipedia.org/wiki/Least_squares. This book Numerical Recipes 3rd Edition: The Art of Scientific Computing (Hardcover) has all you need for algorithms to implement Least Squares and other techniques.
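For a straight line y = a + b·x, the least-squares solution reduces to a few running sums. Here is a minimal sketch (a hypothetical helper, not code from the book) that ports to PHP almost line for line:

```java
// Minimal sketch of ordinary least squares for a line y = a + b*x.
// Illustration only; ports easily to PHP since it's just loops and sums.
public class LineFit {
    // Returns {intercept, slope} minimizing the sum of squared residuals.
    static double[] fit(double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i];
            sy += y[i];
            sxx += x[i] * x[i];
            sxy += x[i] * y[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        return new double[] { intercept, slope };
    }

    public static void main(String[] args) {
        double[] x = { 0, 1, 2, 3 };
        double[] y = { 1, 3, 5, 7 };   // exactly y = 1 + 2x
        double[] ab = fit(x, y);
        System.out.printf("intercept=%.3f slope=%.3f%n", ab[0], ab[1]);
    }
}
```

Note that plain least squares is sensitive to outliers; Numerical Recipes also covers more robust alternatives.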

I am trying to accelerate some computations using OpenCL, and part of the algorithm consists of inverting a matrix. Is there any open-source library or freely available code to compute the LU factorization (LAPACK's dgetrf and dgetri) of a matrix, or general inversion, written in OpenCL or CUDA? The matrix is real and square but doesn't have any other special properties besides that. So far, I've managed to find only basic BLAS matrix-vector operation implementations on the GPU.

The matrix is rather small, only about 60-100 rows and cols, so it could be computed faster on cpu, but it's used kinda in the middle of the algorithm, so I would have to transfer it to host, calculate the inverse, and then transfer the result back on the device where it's then used in much larger computations.

I don't have an implementation in OpenCL, but both "Numerical Recipes" and Gilbert Strang's "Introduction to Applied Mathematics" have wonderful explanations that would be easy to code. "NR" has C code that you could adapt.

calculate the inverse

This is incorrect. You aren't calculating an inverse with LU decomposition; you're decomposing the matrix. If you wanted the inverse, you'd have to do forward/back substitution with a series of unit vectors. It's a small but important difference.
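To make that concrete, here is a rough sketch of the approach: decompose once, then do one forward/back substitution per unit vector to build the inverse column by column. It uses Doolittle LU without pivoting, so it assumes nonzero pivots along the way; illustration only, not production code:

```java
// Sketch: invert a small matrix by LU decomposition plus forward/back
// substitution against unit vectors (one solve per column of the inverse).
// No pivoting, so it assumes nonzero pivots -- illustration only.
public class LuInverse {
    static double[][] invert(double[][] a0) {
        int n = a0.length;
        double[][] lu = new double[n][];
        for (int i = 0; i < n; i++) lu[i] = a0[i].clone();
        // Doolittle LU: unit-lower L and upper U stored in one matrix.
        for (int k = 0; k < n; k++)
            for (int i = k + 1; i < n; i++) {
                lu[i][k] /= lu[k][k];
                for (int j = k + 1; j < n; j++)
                    lu[i][j] -= lu[i][k] * lu[k][j];
            }
        double[][] inv = new double[n][n];
        for (int c = 0; c < n; c++) {            // one unit vector per column
            double[] x = new double[n];
            x[c] = 1.0;
            for (int i = 0; i < n; i++)          // forward: solve L y = e_c
                for (int j = 0; j < i; j++) x[i] -= lu[i][j] * x[j];
            for (int i = n - 1; i >= 0; i--) {   // back: solve U x = y
                for (int j = i + 1; j < n; j++) x[i] -= lu[i][j] * x[j];
                x[i] /= lu[i][i];
            }
            for (int i = 0; i < n; i++) inv[i][c] = x[i];
        }
        return inv;
    }
}
```

The payoff is that the O(n³) decomposition happens once, and each extra column only costs O(n²) substitution work.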

I read a lot of blogs and forum posts about mathematics in programming and concluded that basic mathematics is needed in programming. I'm not a good mathematician. But is it somehow possible to improve my logical and algorithmic thinking without going deep into the science of mathematics? Are there any exercises or books that could help me improve these skills, so that I could become a good architect?

Work through Project Euler.

The beginning of CLRS Algorithms has a bit on number theory, discrete math, combinatorics, probability, graph theory, and other really useful stuff. It's teaching exactly what is applicable to algorithms, and skipping everything else.
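As a taste of that material, one of the first number-theoretic algorithms covered is Euclid's algorithm for the greatest common divisor; a minimal version:

```java
// Euclid's algorithm: an example of the kind of number theory
// that shows up early in algorithms texts like CLRS.
public class Euclid {
    static long gcd(long a, long b) {
        while (b != 0) {        // invariant: gcd(a, b) == gcd(b, a mod b)
            long t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(gcd(252, 105)); // 21
    }
}
```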

The emphasis on what constitutes "Discrete Mathematics" in Computer Science undergraduate curriculums has shifted in the past forty years. It used to be that such courses would cover material such as abstract algebra and go into concepts such as 'sorts' and 'kinds', which is useful for algebraic specification of programs. If that is the sort of math you think you will enjoy, then buy an old book on Discrete Maths, like this one: Discrete Mathematics in Computer Science (1977) \$5 shipped!

I don't believe the ridiculously expensive Susanna Epp book contains similar material, and I should know, since that hideously overpriced book is what I had to use in my freshman Discrete Maths class (2003) -- I can't believe the price has almost doubled since its outrageous price even then!

I'd say that the math you need depends on the problems you're asked to solve.

The problems you're asked to solve depend on the math skills you have.

Anyone who says you only need 4th grade math is also telling you that you cannot reasonably expect to have a chance to solve more challenging problems.

I would point out that computers have changed mathematics and applications. My engineering education consisted of a lot of calculus and closed-form solutions using pencil and paper.

My first career meant applying discrete, numerical analogs to those techniques on computers.

If you want to do that kind of work, it's best to learn a lot about numerical methods and linear algebra.

Google's Page Rank was a \$25B eigenvalue problem when this paper was published. Google's market cap is \$144B today.

Computer graphics is very mathematically intensive. If you like graphics, you'd better learn matrices well.

Statistics are terribly important, especially when you have oceans of data available to you on the web. Learn R and basic statistics cold.

Read something like "Programming Collective Intelligence" to see what kinds of novel problems will require some sophisticated math.

If you want to solve those kinds of problems, best to get busy.

It seems that cmath for Visual Studio 2005 does not have erf(x). I am using the NIST Statistical Test Suite for Random and Pseudorandom Number Generators. In cephes.c's method, double cephes_normal(double x), it uses the C99 math function erf(x), which I don't believe is supported by Visual Studio 2005.

How can I overcome this problem? I saw a C++ solution here: http://social.msdn.microsoft.com/Forums/en-US/vcgeneral/thread/9f5f4bf4-c0ae-4620-8039-4dc36e98d718/

Someone used the Boost C++ math library. But I don't think I can include a C++ header in a C source file.

Numerical Recipes contains an implementation of the error function (Chapter 6 in my edition).

There's a Python implementation in John D. Cook's blog, which looks trivial to translate.
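If you just need a drop-in erf(x), the Abramowitz & Stegun 7.1.26 rational approximation (maximum absolute error around 1.5e-7) is tiny and self-contained. Here is a sketch that ports to plain C almost line for line; note this is not the Numerical Recipes implementation, just a common alternative:

```java
// erf(x) via the Abramowitz & Stegun 7.1.26 approximation
// (max absolute error ~1.5e-7). Not the Numerical Recipes code;
// a small self-contained alternative that ports easily to C.
public class Erf {
    static double erf(double x) {
        double sign = x < 0 ? -1.0 : 1.0;
        x = Math.abs(x);
        double t = 1.0 / (1.0 + 0.3275911 * x);
        // Horner evaluation of the degree-5 polynomial in t.
        double poly = ((((1.061405429 * t - 1.453152027) * t
                + 1.421413741) * t - 0.284496736) * t + 0.254829592) * t;
        return sign * (1.0 - poly * Math.exp(-x * x));
    }

    public static void main(String[] args) {
        System.out.println(erf(1.0)); // about 0.8427
    }
}
```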

I've been searching here and Google for a couple of days, as well as asking my programming friends. Unfortunately, I still don't understand how to change my code...

My program calculates the factorial of a given number. It then reports how many digits the factorial's answer includes. Then it sums the values of those digits together to give a total.

My program works for any number between 1! and 31!. If you put in anything over 31! (for example 50! or 100!) it doesn't work and just gives back negative numbers and no total.

I was hoping you guys could point me in the right direction or give me some advice. I understand BigInteger may be a solution, however I don't understand it personally, hence coming here.

Any help would be much appreciated. Thanks.

```java
package java20;

/**
 * Program to calculate the factorial of a given number.
 * Once implemented, it will calculate how many digits the answer includes.
 * It will then sum these digits together to provide a total.
 * @author shardy
 * date: 30/09/2012
 */

//import java.math.BigInteger;
public class Java20 {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {

        //Using given number stored in factorialNo, calculates factorial
        //currently only works for numbers between 1! and 31! :(
        int fact = 1;
        int factorialNo = 10;

        for (int i = 1; i <= factorialNo; i++) {
            fact = fact * i;
        }

        System.out.println("The factorial of " + factorialNo +
                " (or " + factorialNo + "!) is: " + fact);

        //Using answer stored in fact, calculates how many digits the answer has
        final int digits = 1 + (int) Math.floor(Math.log10(fact));

        System.out.println("The number of digits in the factorial's "
                + "answer is: " + digits);

        //Using remainders, calculates each digit's value and sums them together
        int number = fact;
        int remainder;
        int sum = 0;

        while (number >= 1) {
            remainder = number % 10;
            sum = sum + remainder;
            number = number / 10;
        }

        System.out.println("The total sum of all the " + digits
                + " individual digits from the answer of the factorial of "
                + factorialNo + " is: " + sum);
    }
}
```
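For what it's worth, here is one way the program could look reworked with BigInteger, which has arbitrary precision and so keeps working past 31!, where int overflows (the 50! input is just an example):

```java
import java.math.BigInteger;

// Same idea as the original program, but with BigInteger so the
// factorial no longer overflows past 31!.
public class FactorialBig {
    static BigInteger factorial(int n) {
        BigInteger fact = BigInteger.ONE;
        for (int i = 1; i <= n; i++) {
            fact = fact.multiply(BigInteger.valueOf(i));
        }
        return fact;
    }

    static int digitSum(BigInteger v) {
        int sum = 0;
        for (char c : v.toString().toCharArray()) {
            sum += c - '0';          // each character is a decimal digit
        }
        return sum;
    }

    public static void main(String[] args) {
        int factorialNo = 50;
        BigInteger fact = factorial(factorialNo);
        System.out.println(factorialNo + "! is: " + fact);
        System.out.println("Number of digits: " + fact.toString().length());
        System.out.println("Digit sum: " + digitSum(fact));
    }
}
```

Converting the value to a string also replaces the log10 digit count and the remainder loop in one go.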

If you're calculating things like (m!/n!), you are better off using logarithms of factorials.

You should also consider these two improvements:

1. Memoize, don't recalculate. Store those hard-earned values once you calculate them.
2. You should be using the gamma function to calculate factorials. The way you're doing it with a loop is the naive thing that is presented to students. There's a nice implementation of `ln(gamma(x))` in Numerical Recipes.

You can always use BigDecimal if you must, but it's a last resort. You should still consider these points.

There's a gamma function implementation available from Apache Commons. The source code might not be as familiar as the naive approach, but if you see how it's done it'll be obvious that it's more efficient even for moderate arguments.

I just wanted to know if anyone had any pointers for a library or libraries that support Markov modelling and graphical graph representation, as for a project I must simulate a transport model and be able to develop an interface for it too. I am relatively new to C++.

Have a look at Boost Graph, as it will simplify all your graph work. I am unsure if there is a Markov chain algorithm (I am looking for one too), but it should be easy to write once you have the graph -- a concurrent Monte Carlo approach, maybe?
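Until you find a library, a discrete-time Markov chain is just a transition matrix plus inverse-CDF sampling of the next state. A minimal sketch (the 3-state matrix here is made up for illustration, and the language is Java rather than C++, but the structure carries over directly):

```java
import java.util.Random;

// Minimal sketch: simulating a discrete-time Markov chain from a
// row-stochastic transition matrix. The 3-state matrix in main()
// is a made-up example.
public class MarkovChain {
    final double[][] p;    // p[i][j] = P(next = j | current = i)
    final Random rng;

    MarkovChain(double[][] p, long seed) {
        this.p = p;
        this.rng = new Random(seed);
    }

    int step(int state) {
        double u = rng.nextDouble(), cum = 0.0;
        for (int j = 0; j < p[state].length; j++) {
            cum += p[state][j];
            if (u < cum) return j;     // inverse-CDF sampling of next state
        }
        return p[state].length - 1;    // guard against rounding at cum ~ 1
    }

    public static void main(String[] args) {
        double[][] p = {
            { 0.9, 0.1, 0.0 },
            { 0.2, 0.7, 0.1 },
            { 0.0, 0.3, 0.7 },
        };
        MarkovChain mc = new MarkovChain(p, 42);
        int state = 0;
        for (int t = 0; t < 10; t++) state = mc.step(state);
        System.out.println("state after 10 steps: " + state);
    }
}
```

From here, running many independent walks in parallel is exactly the concurrent Monte Carlo approach mentioned above.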

Numerical Recipes has many algorithms with code in both C and C++.

The graphviz tools are all you need to draw graphs.

MATLAB has a magnificent `robustfit` function that solves the problem of excluding outliers with linear regression fitting. Is there anything similar written in Java or C (or in language X that could be adopted)?

Numerical Recipes has an implementation of robust fit. It's written in C/C++, so you should be able to port it over to Java without too much trouble.

Is Numerical Recipes a header-only library?

I have only the header files. I am not sure whether I need to buy a license?

For C++, the answer is yes. This library is not free. You can buy the DVD that has the source files from Amazon.com. On Windows, you basically need to include the path to the library. I usually do:

```
cl /EHsc main.cpp /Fetest.exe /I D:\CPP_Libraries\NR\code
```

What is the fastest algorithm and the code implementation to calculate the value of the following expression?

n! / (q!)^r

My code

```java
public static double timesbyf(int n, int q, int qt, int qp1, int qp1t) {
    int totaltimes = qt + qp1t;
    double ans = 1.0d;
    for (int i = 1; i <= totaltimes; i++) {
        if (i <= qt) {
            for (int j = q; j > 0; j--) {
                ans = ans * ((double) n / (double) j);
                n--;
            }
        } else {
            for (int j = qp1; j > 0; j--) {
                ans = ans * ((double) n / (double) j);
                n--;
            }
        }
    }
    while (n > 0) {
        ans = (ans * n) % 3046201;
        n--;
    }
    return ans;
}
```

That is, `n!` divided by `q!`, `r` times.

I'm given that n ≤ 3 × 10^6 and that q < n, and it is guaranteed that (q!)^r will cleanly divide n!.

I think it's a very bad idea to calculate two very large numbers and hope that the quotient comes out as something sensible.

Start by taking the natural log:

```
ln(n!/(q!)^r) = ln(n!) - r*ln(q!)
```

You can use `gammln()` for the two function values, simplify, then take `exp()` to get the result you want:

```
value = exp(gammln(n+1) - r*gammln(q+1))
```

Numerical Recipes has a nice chapter on how to implement functions like `gammln()`.
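For reference, a sketch of that approach in Java: a standard Lanczos approximation to ln Γ(x) (the same idea behind the book's `gammln()`; Java's Math class has no built-in log-gamma), then exponentiate the difference of logs:

```java
// Sketch of the gammln() approach: a standard Lanczos approximation
// to ln(Gamma(x)), then n!/(q!)^r = exp(gammln(n+1) - r*gammln(q+1)).
public class LogFactorial {
    static double gammln(double xx) {
        double[] cof = { 76.18009172947146, -86.50532032941677,
                24.01409824083091, -1.231739572450155,
                0.1208650973866179e-2, -0.5395239384953e-5 };
        double x = xx, y = xx;
        double tmp = x + 5.5;
        tmp -= (x + 0.5) * Math.log(tmp);
        double ser = 1.000000000190015;
        for (double c : cof) ser += c / ++y;   // Lanczos series
        return -tmp + Math.log(2.5066282746310005 * ser / x);
    }

    // n! / (q!)^r via logs, avoiding the huge intermediate factorials.
    static double ratio(int n, int q, int r) {
        return Math.exp(gammln(n + 1.0) - r * gammln(q + 1.0));
    }

    public static void main(String[] args) {
        System.out.println(ratio(10, 3, 2)); // 10!/(3!)^2 = 100800
    }
}
```

Note that for large n the result itself can still overflow a double even though the logs stay small; in that case keep the answer in log form, or work modulo a prime as the `% 3046201` in the question's code suggests.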