Thursday, October 3, 2013

on Project Euler & BigInts

About a year ago, I discovered Project Euler and started blowing through problems. The first two dozen or so are actually pretty simple and shouldn't take much more than 50 lines of code to do (most require less). I even wrote a module containing a fair number of repeated functions (like a prime finder and a digit sum) to make life simpler.

But some of the posted solutions for Problem 16 bother me. The question asks,
2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26. What is the sum of the digits of the number 2^1000?

The digit sum is a fairly simple thing to do:
digit = 0
do while(n /= 0)
    digit = digit + mod(n,10)
    n = n/10
end do

But the way I solved this, this part isn't even necessary (though it's still useful to know).
The problem you'll run into is that 2^1000 contains 302 digits, and even the largest standard integer kind (the 64-bit INT64 from the ISO_FORTRAN_ENV module, assuming your compiler supports it) maxes out at a mere 19-digit number: 9223372036854775807. Obviously way too small.
So some people, namely C/C++ and Java programmers, utilized a BigInt package that extends integers to arbitrary precision.

I personally think this is cheating. This problem is meant to be solved without resorting to external libraries that extend precision to arbitrary limits, and it can be done in any language without them.

All you have to remember is this: arrays. If you think of any n-digit number as an n-celled array, then a 302-digit number is a 302-celled array.
2 → (2)
22 → (2,2)
222 → (2,2,2)
and so on. If you add 22+222, you get 244:
222+22 → (2,2,2) + (2,2) = (2,4,4)
Adding 66, you have to carry your 10 over:
244+66 → (2,4,4) + (6,6) = (2,10,10) = (2,11,0) = (3,1,0)
With multiplication, you need to multiply each cell by the value & carry the tens multiple times:
222*22 → (2,2,2) * 22 = (2*22,2*22,2*22) = (44,44,44) = (4,8,8,4)
If you understand this, then 2^1000 is painlessly easy.
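To make the idea concrete, here is a minimal sketch of the array method (the program and variable names are my own) applied to the 2^15 example from the problem statement; swap 15 for 1000 and enlarge the array to roughly 310 cells to solve the full problem:

```fortran
! A minimal sketch of the array method (names are my own choices).
! Doubling 1 fifteen times gives 2**15 = 32768; its digit sum should be 26.
program arrayPower
   implicit none
   integer, parameter :: nCells = 10   ! plenty of cells for a 5-digit number
   integer :: digits(0:nCells-1)       ! cell 0 holds the units digit
   integer :: i, j, carry

   digits = 0
   digits(0) = 1
   do i = 1, 15                        ! use 1000 (and ~310 cells) for the real problem
      carry = 0
      do j = 0, nCells-1
         digits(j) = 2*digits(j) + carry
         carry     = digits(j)/10        ! carry the tens...
         digits(j) = mod(digits(j), 10)  ! ...and keep the ones
      end do
   end do

   print *, sum(digits)                ! prints 26, matching the problem statement
end program arrayPower
```

Since the number is stored least-significant-digit first, the carry simply propagates up the array, exactly as in the hand-worked examples above.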

I clocked this method with the Fortran intrinsic subroutine cpu_time, which is accurate to the microsecond on my machine, and kept getting zeros across the board (i.e., 0.000000E+00).

Tuesday, September 24, 2013

on Traffic

So I have apparently completely forgotten about this blog. Between research and teaching (Intro to Computing in the spring), posting kept getting pushed further and further back until it slipped my mind entirely.

Then this morning, I got an email letting me know that I had a comment! I then checked this and saw that I've had quite a number of visitors over the last several weeks.

It took me a while, but I think that I figured out why people were visiting this site: my Stack Overflow account. I guess this makes sense, since I have been pretty active on SO lately.

Given the increase in traffic here, I suppose I can try blogging again.

Monday, December 24, 2012

On Floors and Integers

A couple projects I have worked on in the past have involved Monte Carlo techniques for placing point particles (representing stars in a galaxy) in such a way that we have a constant surface density. In order to test that we do indeed have a constant surface density, we simply count the number of points from r=0 to r=rMax in steps of dr and display the total number per bin over the total radius.

Conceptually, this is fairly simple, and it is also a fairly easy program to write. If we have a radius of 15 units (for the galaxy I was modeling, this would be 15 kiloparsecs), the easiest thing to do would be to make an integer array with 15 cells so that the bin size is 1 unit. Then you can just take the floor of the radial position and get the distribution quickly:
do i=1,iMax
   b = floor(radius(i)) + 1   ! +1 since hist is 1-indexed
   hist(b) = hist(b) + 1
end do
where b and hist are appropriately defined as integers.

Alternatively, since Fortran truncates a real result when storing it in an integer, dividing by 1 and assigning to an integer can accomplish the same thing:
do i=1,iMax
   b = radius(i)/1 + 1   ! the real result is truncated when stored in the integer b
   hist(b) = hist(b) + 1
end do
Upon testing these methods on the same set of 10 million real values, floor takes about 3.9 seconds compared to 3.8 for dividing by 1. Similar results are found for using ceiling versus dividing by 1 and adding 1.

If we want smaller bins, say dr=0.5, we might think we would run into the problem of determining which half-unit a point belongs to in order to add it to the correct bin--this would basically involve computing the fractional part radius(i)-real(floor(radius(i))), seeing whether it is greater or less than 0.5, and then putting the point into the appropriate bin.

However, there is a far simpler method of doing this that requires us to first do two things:
  (1) Set the histogram lower index to 0 and the upper index to the number of bins minus 1 (i.e., allocate(hist(0:31)))
  (2) Multiply the radius by 2:
allocate(hist(0:31))
hist = 0
do i=1,iMax
   b = floor(2.0*radius(i))
   hist(b) = hist(b) + 1
end do
If radius(i)=0.75, multiplying by 2 and flooring gives 1, which is the correct bin. If radius(i)=4.25, then multiplying by 2 and flooring gives 8, which is also the correct bin. Note that we could have left the indices as they were and just added 1 after the multiplication by 2, but that adds a needless (and potentially costly) extra mathematical operation.

So, the next time you have to create a histogram of your data, recall this easy-binning method, which can easily be extended to larger or smaller bin sizes.
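As a sketch of that extension (the program name and sample radii here are my own inventions), the multiply-by-2 trick is just the dr=0.5 case of dividing by the bin width before taking the floor:

```fortran
! Sketch of floor-binning with an arbitrary bin width (sample data is mine).
program binning
   implicit none
   real,    parameter :: dr = 0.5
   integer, parameter :: nBins = 30            ! rMax/dr for rMax = 15.0
   integer :: hist(0:nBins-1)                  ! bins 0 .. nBins-1
   real    :: radius(4) = (/ 0.75, 4.25, 0.10, 4.30 /)
   integer :: i, b

   hist = 0
   do i = 1, size(radius)
      b = floor(radius(i)/dr)                  ! dr=0.5 reproduces the multiply-by-2 trick
      hist(b) = hist(b) + 1
   end do

   print *, hist(1), hist(8)                   ! 0.75 lands in bin 1; 4.25 and 4.30 in bin 8
end program binning
```

Changing dr (and nBins accordingly) gives any bin size without touching the loop body.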

Comments always welcome!

Friday, December 14, 2012

On Shaping Arrays

The best thing about Fortran is its handling of arrays, in particular the multi-dimensional ones. As someone who models astrophysical fluid flows for a living, I tend to use 4-dimensional arrays (3 for space and 1 for variables) in most of my relevant codes.

In defining arrays, there are three methods we can use to define them:
   (1) nested do loops
   (2) forall assignment
   (3) nested implied-do loops

I presume every Fortran user knows the first of these methods very well, as it is the most used and most versatile method of defining an array. I think most people have heard of the forall assignment, which was once part of the now-defunct High Performance Fortran but was added in the Fortran 95 update. It is used as follows,
forall(i=1:iMax, j=1:jMax, k=1:kMax) A(i,j,k)=i*k+j/jMax
I can pass the i,j,k indices to a function that returns a value, so it can work much like the nested do loop. The original point of the forall assignment was optimization in defining arrays, but since most compilers are pretty good at optimizing loops for memory purposes, the use of this command is declining. For an array of size (750,750,750), it takes about half a second to define the array with the above definition using either of the two methods.

The last method, the nested implied do loop, is probably about as useful as the forall assignment, but it introduces the command reshape, which has some useful applications for clusters and file I/O. Suppose we have a 2D array defined in the nested do loop,
do j=1,jMax
   do i=1,iMax
      B(i,j) = i*j
   end do
end do
we can collapse the inner loop as,
do j=1,jMax
   B(1:iMax,j) = (/ (i*j, i=1,iMax) /)
end do
But in collapsing the outer do loop as well,
B =(/ ((/ (i*j, i=1,iMax) /), j=1,jMax) /)
we would get an "incompatible ranks" error. This is because B is defined as a 2-dimensional array, B(iMax,jMax), and the above doubly-collapsed loop is a 1-dimensional array with dimensions of (iMax*jMax). To get into the 2D array from the 1D array, we can use the reshape command. For ease, I have written the (iMax*jMax) array as a separate line:
tempArray=(/ ((/ (i*j, i=1,iMax) /), j=1,jMax) /)
B =reshape( tempArray, (/iMax,jMax/) )
Extending this to a 3D array should be straight-forward.
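For instance, a minimal sketch of the 3D case (the dimensions here are arbitrary picks of mine):

```fortran
! Sketch of reshape into 3D (dimensions are arbitrary picks of mine).
program reshape3d
   implicit none
   integer, parameter :: iMax=2, jMax=3, kMax=4
   integer :: tempArray(iMax*jMax*kMax), C(iMax,jMax,kMax)
   integer :: i, j, k

   ! the innermost index varies fastest, matching Fortran's column-major order
   tempArray = (/ (((i*j*k, i=1,iMax), j=1,jMax), k=1,kMax) /)
   C = reshape( tempArray, (/ iMax, jMax, kMax /) )

   print *, C(2,3,4)    ! 2*3*4 = 24
end program reshape3d
```

Because the implied-do fills tempArray in the same column-major order that reshape consumes it, C(i,j,k) comes out as i*j*k with no index gymnastics.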

Nested do loops should be the primary method for defining your arrays, as they are the clearest method (you just have to remember that Fortran is column-major). The forall assignment, while sometimes convenient, is just not as useful as it was thought to be; it probably should be dropped from codes that currently employ it and marked as obsolescent in future updates to Fortran. The nested implied do loop probably should not be used for defining arrays, but it is useful for understanding how reshape works for use in other areas of codes.

Comments are always welcome!

Wednesday, December 12, 2012

On Numerical Recipes

While doing some investigating on employing LAPACK routines for a computational course I am teaching next semester, I found an article written by Dr Benjamin Weiner titled "Boycott Numerical Recipes". Given that I am an avid user and advocate of NR, I opted to take a look.

Dr Weiner has two arguments on that page:
   (1) the routines in NR are old and outdated (I imagine most people have the 1992/1996 versions, though the books were updated in 2007)
   (2) the NR routines given in the book are copyrighted which means you cannot distribute the source of a code that has NR routines in them to people who do not have the NR license (though you can distribute the binary of the code)

I have never really paid attention to copyright pages, so I took a look at my copy of Numerical Recipes to find the following:
...These [diskettes] provide demonstration programs that illustrate the use of each subroutine and procedure in this book. They too can be ordered in the above manner.
Unlicensed transfer of Numerical Recipes programs from the abovementioned IBM PC or Apple Macintosh diskettes to any other format, or to any computer except a single IBM PC or Apple Macintosh or compatible for each diskette purchased, is strictly prohibited.
Now, I am no copyright law professional, but it seems like I can write a code with some functions/subroutines based on NR without trouble.

In any event, there are alternatives to Numerical Recipes codes that are more up to date and free. I already mentioned LAPACK at the top of the page, but there are also:
  SLATEC -- a large collection of routines such as ODE solvers, numerical integrators, and so on (it was partially re-written in F90 by John Burkardt, see here)
  BLAS -- Basic Linear Algebra Subprograms, this is included in the SLATEC package, but is available separately
  FGSL -- Fortran interface to the GNU Scientific Library (GSL)
  MINPACK -- collection of non-linear equation solvers

I know that some of my colleagues use external libraries (GSL in particular), but I have yet to use any of them. I am somewhat curious about using LAPACK and SLATEC, but I will have to see if I can employ them beyond teaching a lecture or two on them for my class.

I can understand Dr Weiner's argument: why use routines that you have to pay for and that are very restrictive to share when you can get them free from someone else without any restrictions? And I completely agree. However, I will not be boycotting Numerical Recipes, because it is still a fantastic source of information for beginning programmers.

Comments are welcome!

Thursday, December 6, 2012

On Functions

Like every other programming language, Fortran allows you to write your own functions to be evaluated in your codes. There are two types of functions in Fortran: statement functions (also called inline functions) and subprogram functions (also called external functions). In this post, we will discuss the use and possible advantages of these different types of functions, starting with the statement function.

In order to use the statement function, you must
  (1) declare the function name and its argument in the variable declarations at the beginning of the program
  (2) define the function after the declarations but before any executable statements, even if it depends on variables not yet assigned values
A simple example is given below
program fcnTest
   implicit none
   integer :: i
   real :: f, x, pi, dx

   f(x) = x**5                       ! the statement function
   pi = acos(-1.0); dx = 0.0000012; x = 1.0
   do i=1,1000000
      print *,x,(4.5**x)*sin(pi*f(x))
      x = x + dx
   end do
end program

Note also that multiple statement functions can be employed in Fortran. We could have chosen to define g such that g(a)=4.5**a and used this in the above code (which, though I do not get into it here, does not change the results at the bottom of the page).

The function f is actually from a gnuplot tutorial I have seen online; I plotted it with QtiPlot, a freeware plotting program that I have grown to like.

I also chose to iterate the loop 1,000,000 times because it gives a long enough runtime to make a difference. If I used 1,000 iterations, the millisecond runtimes would not be enough to show a difference between the two.

I compiled the code without any optimization flags (i.e., just ifort -o fcnTest fcnTest.f90) and ran it twice, getting 60.013 s and 59.901 s, which is pretty slow to do 5 operations a million times. If I changed the code to output the data to a formatted file,
program fcnTest
   implicit none
   integer :: i
   real :: f, x, pi, dx

   f(x) = x**5
   pi = acos(-1.0); dx = 0.0000012; x = 1.0
   open(10, file='fcnTest.dat')
   do i=1,1000000
      write(10,*) x,(4.5**x)*sin(pi*f(x))
      x = x + dx
   end do
   close(10)
end program

which dropped the runtime to a fairly consistent 3.95 s with deviations of about 0.04 s (I ran it 5 times). Fortran also enables putting data in binary files simply by adding the option form='unformatted' in the open command and replacing write(10,*) with write(10). Doing so, the runtime drops to a slightly-less-fairly consistent 2.78 s with deviations of about 0.09 s (also ran 5 times).

The reason for the significant difference is, hopefully, obvious: Fortran was not designed for putting a large amount of data on the screen; it was designed for dumping it to data files for analyzing later.

A problem that exists with the binary output (form='unformatted') is that the binary output is machine-dependent, so dumping data in that format and then transferring it to another machine can cause issues with reading the file on the second machine. I have my own desktop (running Ubuntu Linux with an AMD Phenom X4 processor), my own laptop (running Arch Linux with an Intel i5 processor), and access to a super computer cluster (running Scientific Linux with Intel Xeon and Itanium processors) and I can tell you now that I have not experienced any file-reading issues between the three of them, but that is not to say this problem does not exist.

If you are a one-computer type of person, this problem should not be an issue. If you are working with multiple computers and are dumping data into binary format, it might be worth your time to write a second program to read in the binary code and output it as a formatted file to prevent these issues. Or, if you are working with multiple computers and large data sets, you might just want to use something like HDF5.
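A minimal sketch of such a converter (the file names and the record layout are my own assumptions; yours must match whatever your simulation actually writes). For a self-contained demonstration, the program first creates a small unformatted file and then converts it:

```fortran
! Sketch: convert an unformatted dump to a formatted file (names are assumptions).
program bin2txt
   implicit none
   integer :: i
   real :: x, y

   ! Create a small unformatted file to stand in for a simulation dump...
   open(10, file='dump.bin', form='unformatted')
   do i = 1, 5
      write(10) real(i), real(i)**2
   end do
   close(10)

   ! ...then read it back record-by-record and write a formatted copy.
   open(10, file='dump.bin', form='unformatted')
   open(20, file='dump.txt')
   do i = 1, 5
      read(10) x, y
      write(20,*) x, y
   end do
   close(10); close(20)
end program bin2txt
```

The conversion loop (the second half) is the part you would run on the machine that wrote the binary file, before shipping the formatted copy elsewhere.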

Anyway, now that we have some data on the statement (inline) function, how does it compare to the external function? This requires a significant change to the code, but it is not difficult:
program fcnTest
   implicit none
   integer :: i
   real :: x, pi, dx
   real :: f                         ! the external function below

   pi=acos(-1.0); dx=0.0000012; x=1.0
   do i=1,1000000
      print *,x,4.5**x*sin(pi*f(x))
      x = x + dx
   end do
end program fcnTest

real function f(x) result(a)
   implicit none
   real, intent(in) :: x

   a = x**5
end function f

Compiling and running this in the same manner, it took 59.985 s and 60.010 s to run the display to screen. The formatted output took an average of 4.03 s with a deviation of 0.07 s; the unformatted output took an average of 2.84 s with a small deviation of about 0.06 s.

As a scientist, I like pretty pictures and tables of the data. Taking the data from above:

output           | inline runtime (s) | external runtime (s)
screen           | 59.96±0.08         | 60.00±0.18
formatted file   | 3.95±0.04          | 4.03±0.07
unformatted file | 2.78±0.09          | 2.84±0.06

The data indicates the following:
  (1) There is no performance advantage of inline over external functions, at least for simple functions
  (2) Unformatted output is faster than formatted output
  (3) You should never put 1,000,000 data points on the screen (probably any output more than 10 lines long should go to a file)
If multiple subroutines require the use of the same function, then clearly the choice should be to use the external function. If the function is rather complex, the external function should also be the clear choice.
Still, it is good to know that, for simple functions in a simple program, writing it once at the beginning of your code can make it more tidy without sacrificing any performance.

Comments always welcome.

Monday, December 3, 2012

Ternary Operator ?

Suppose you had a variable x whose value depends on the variables a and b. The most obvious choice is to write an if-else statement to determine what x is:
if(a > b) then
   x = a
else
   x = b
end if

But this seems a bit cumbersome, particularly when you look at a more compressed form using the ternary operator ? as in C (among other languages):
x = a > b ? a : b;

I used to be jealous of C because it had the inline-if and I believed that Fortran did not. Fortunately for us, Fortran added this inline-if in the Fortran 90 standard. Unfortunately, they chose one of the most random intrinsic function names I have ever heard: merge. It is used quite like the above:
x = merge(a,b,a>b)

More explicitly,
variable = merge(value if true, value if false, condition)
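A minimal, compilable demonstration (the sample values are mine):

```fortran
! Minimal merge demo (sample values are mine).
program mergeDemo
   implicit none
   real :: a = 3.0, b = 5.0, x

   x = merge(a, b, a > b)   ! the Fortran analogue of C's  x = a > b ? a : b;
   print *, x               ! prints 5.0, since a > b is false
end program mergeDemo
```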

The following is a snippet from a simulation I wrote that involved a Monte Carlo sampling of positions in cylindrical coordinates. R,P,Z are the rho-phi-zed coordinate-arrays.
do i=1,numPoints
   call random_number(rTemp)
   call random_number(pTemp)
   call random_number(zTemp)
   call random_number(cTemp)
   R(i) = sqrt(rTemp)*rScale
   P(i) = 2.*pi*pTemp
   Z(i) = merge(zTemp*zScale,-zTemp*zScale, cTemp > 0.5)
end do

A couple notes on the above:
(1) Fortran uses 1-indexing, rather than C's 0-indexing--which leads me to an aside: quickly count to 10. Did you start at 0? Fortran can actually start at 0 if you want; you would have to define the array to start there, e.g., real :: A(0:9).
(2) Fortran uses parentheses for array indices, whereas C uses square brackets for the same thing.
(3) The variable pi is not an intrinsic (why not!?) and needs to be declared beforehand; I find the easiest option is using pi=acos(-1.0), though others may suggest pi=4*atan(1.0) (which is fine, but it seems to me that one operation is better/faster than two).
(4) The intrinsic subroutine random_number(variable) was used because it is superior to the commonly available (but non-standard) function rand(). The former has a period of 2^123 (~10^37) while the latter has a period of (I think) 2^32 (~10^9).

Comments always welcome.