Continuing my Python extension project, I’ve now successfully built a toy Python 2.7 extension in C++ that uses the Epetra package from Trilinos 10.8. I’ve built it on Ubuntu 11.10 (with gcc) and on Windows XP (with Visual Studio 2008) using distutils. It’s a bit of a hack, but it works and gives me a path forward.
I started with Linux, because I figured it would be easier. I learned a few things along the way. First, you have to have built Trilinos with shared libraries enabled. The extension module itself has to be a shared object, and the static Trilinos libraries aren’t built as position-independent code, so they can’t be linked into one (see the -fPIC error below); this is made clear under the heading “Shared Libraries” in this file. The PyTrilinos package (which is effectively a professional set of Python-Trilinos extensions, by the way; I needed a custom one!) needs them as well. So run your cmake-gui and check the shared-libraries option (BUILD_SHARED_LIBS); an equivalent command-line configure is sketched below. Besides the libepetra.a in the build folder, you’ll now also find a libepetra.so. On Windows, you’ll get over 500 errors if you compile all of Trilinos with shared libraries, but Epetra at least worked.
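If you configure from the command line rather than cmake-gui, the equivalent is something like this (a sketch; the source path is a placeholder for your own tree):
[python]
# Run from an empty build directory. BUILD_SHARED_LIBS is the standard
# CMake switch; Trilinos_ENABLE_Epetra turns on just the package we need.
cmake -D BUILD_SHARED_LIBS:BOOL=ON \
      -D Trilinos_ENABLE_Epetra:BOOL=ON \
      /home/user/Desktop/trilinos-10.8.3
make
[/python]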
Next I needed an example Python extension file to work with. I just took my skeletal Python/C API test code and combined it with one of the simple Epetra examples from the Trilinos Wiki.
In my setup.py file for distutils, I studied the Makefile.export.Epetra to figure out which directories and libraries were needed. I then translated it over to the distutils format and got the resulting file.
[python]
from distutils.core import setup, Extension

module1 = Extension('cdifnt',
    sources=['src/powerIterationExample.cpp'],
    include_dirs=['/home/user/Desktop/trilinos-10.8.3/packages/epetra/src',
                  '/home/user/Desktop/trilinos-10.8.3/build',
                  '/home/user/Desktop/trilinos-10.8.3/build/packages/epetra/src'],
    library_dirs=['/home/user/Desktop/trilinos-10.8.3/build/packages/epetra/src'],
    libraries=['lapack', 'blas', 'epetra'])

setup(name='Diffusion for Neutron Transport',
      version='1.0',
      description='This does diffusion and burnup.',
      ext_modules=[module1])
[/python]
Then just compile with the standard
[python]
python setup.py build
[/python]
The build directory (./build/lib.linux-x86_64-2.7 or similar) will now contain the compiled *.so extension module. To get it to import properly, you’ll have to tell the dynamic linker where the Trilinos libraries are. On Linux, this is done by setting the LD_LIBRARY_PATH environment variable. You can do it the easy way: copy libepetra.so into the build directory and execute:
[python]
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:.
[/python]
This adds the current working directory to the search path. There are better, more permanent ways of doing this; one is sketched below.
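One of those more permanent ways is to bake the library path into the extension itself with an rpath (the gcc/ld -Wl,-rpath flag), so LD_LIBRARY_PATH isn’t needed at all. A sketch against the same setup.py as above; the path is whatever your Epetra build directory is:
[python]
from distutils.core import setup, Extension

# Same Extension as before, plus an rpath so the dynamic linker can find
# libepetra.so at import time without LD_LIBRARY_PATH.
epetra_dir = '/home/user/Desktop/trilinos-10.8.3/build/packages/epetra/src'
module1 = Extension('cdifnt',
    sources=['src/powerIterationExample.cpp'],
    include_dirs=['/home/user/Desktop/trilinos-10.8.3/packages/epetra/src',
                  '/home/user/Desktop/trilinos-10.8.3/build',
                  epetra_dir],
    library_dirs=[epetra_dir],
    libraries=['lapack', 'blas', 'epetra'],
    extra_link_args=['-Wl,-rpath,' + epetra_dir])

setup(name='Diffusion for Neutron Transport',
      version='1.0',
      description='This does diffusion and burnup.',
      ext_modules=[module1])
[/python]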
Windows Specific:
To get it working under Windows, first try building with shared libraries. You’ll then find .lib and .dll files for each successfully built package (including Epetra) in their build/src/Release/…etc… folders. I just changed all the paths in the setup.py file to point to the corresponding locations on the Windows machine, using raw strings such as r’C:\Programs\trilinos’ so the backslashes come through literally. I have MinGW installed, but I wanted it to compile with MSVC, so I used the command
[python]
python setup.py build --compiler=msvc
[/python]
Then I copied the libepetra.dll file into the extension build directory, went to import it, and bam! It worked. Values matched the Linux values. (A way to avoid copying the DLL is sketched after the output below.)
[python]
In [1]: import cdifnt

In [2]: cdifnt.buildMesh(1,2,3,4)
(1, 1, 1)
<nil>
Epetra in Trilinos 10.8.3

Epetra::Comm::Processor 0 of 1 total processors.

Iter = 0  Lambda = 1.73973  Residual of A*q - lambda*q = 1.32293
Iter = 100  Lambda = 3.98831  Residual of A*q - lambda*q = 0.0152304
Iter = 200  Lambda = 3.9955  Residual of A*q - lambda*q = 0.00873993

Total Mflop/s for first solve = 281.802

Increasing magnitude of first diagonal term, solving again

Iter = 0  Lambda = 1.93001  Residual of A*q - lambda*q = 1.57278
Iter = 100  Lambda = 20.0556  Residual of A*q - lambda*q = 1.69407e-21

Total Mflop/s for second solve = 324.116
[/python]
Boom. Easy. Although it took like, all day.
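One refinement on the Windows side: instead of copying libepetra.dll next to the extension, you can prepend its directory to PATH at runtime, which is where Windows (at least for Python 2.7) looks for a module’s DLL dependencies. A sketch, with a hypothetical build path standing in for your own:
[python]
import os

# Hypothetical location of libepetra.dll; point this at your own build output.
epetra_dll_dir = r'C:\Trilinos\build\packages\epetra\src\Release'
os.environ['PATH'] = epetra_dll_dir + os.pathsep + os.environ['PATH']

import cdifnt  # the loader can now resolve libepetra.dll
[/python]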
Errors I encountered along the way:
[python]
relocation R_X86_64_32S against `vtable for Epetra_SerialComm' can not be used when making a shared object; recompile with -fPIC
/home/user/Desktop/trilinos-10.8.3/build/packages/epetra/src/libepetra.a: could not read symbols: Bad value
[/python]
This means you haven’t compiled Trilinos with shared libraries. Go back through the Trilinos build process, enable the shared-libraries option (BUILD_SHARED_LIBS), and rebuild.
[python]
ImportError: ./cdifnt.so: undefined symbol: _ZNK17Epetra_SerialComm7ScanSumEPdS0_i
[/python]
This means that while your module compiled fine, it did not link properly. Check your setup.py and make sure all the required libraries are listed and on the library path. The mangled symbol in the error message helps you figure out which library is missing; in this case, libepetra.so was not found.
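Two standard binutils tools make that diagnosis quick: c++filt demangles the symbol so you can see which class it belongs to, and nm confirms whether a given shared library actually exports it.
[python]
# Demangle the missing symbol:
c++filt _ZNK17Epetra_SerialComm7ScanSumEPdS0_i
# -> Epetra_SerialComm::ScanSum(double*, double*, int) const

# Check that libepetra.so exports it (run in the Epetra build directory):
nm -D libepetra.so | grep ScanSum
[/python]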
[python]
Exception TypeError: 'bad argument type for built-in operation' in <module 'threading' from 'C:\Python27\lib\threading.pyc'> ignored
[/python]
This error would come after my program (not the one listed below, but a more involved one) had finished executing. I narrowed it down by commenting out various parts of the code until it stopped appearing. The culprit was calling PyFloat_AsDouble() on a Python object that was actually an int. Solution: check the object’s type first and use the matching conversion, as sketched below.
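A minimal pattern for handling either type safely, using only standard Python 2 C API calls (a sketch; the helper name is mine):
[python]
// Convert a PyObject that may be a float or an int to a C double.
// Returns 0 on success, -1 (with a Python exception set) on failure.
static int asDouble(PyObject *obj, double *out)
{
    if (PyFloat_Check(obj)) {
        *out = PyFloat_AsDouble(obj);
        return 0;
    }
    if (PyInt_Check(obj)) {
        *out = (double) PyInt_AsLong(obj);
        return 0;
    }
    PyErr_SetString(PyExc_TypeError, "expected an int or a float");
    return -1;
}
[/python]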
Here’s all the code of my mini toy Python extension package, based on the Trilinos example:
[python]
#include <cassert>
#include <cstdlib>
#include <iostream>
#include <vector>
#include "Python.h"
#include "Epetra_CrsMatrix.h"
#include "Epetra_MultiVector.h"
#include "Epetra_Time.h"
#include "Epetra_Vector.h"
//#include "Epetra_BlockMap.h"
#include "Epetra_Map.h"
#include "Epetra_Version.h"
#include "Epetra_SerialComm.h"
//
// Compute the eigenvalue of maximum magnitude of the given matrix A,
// using the power method.
//
// Input arguments:
//
// A: The matrix to which to apply the power method. It's not const
// because this method sets its flop counter.
// niters: Number of iterations of the power method.
// tolerance: Iterate until the (absolute) residual is strictly less
// than tolerance.
// verbose: Whether or not to print status output to stdout during
// iterations.
//
// Output argument:
//
// lambda: The eigenvalue of maximum magnitude of the matrix A.
//
// Return value: An integer error code. Zero if no error occurred,
// else nonzero.
//
int
powerMethod (double & lambda,
Epetra_CrsMatrix& A,
const int niters,
const double tolerance,
const bool verbose);
int
runPow (void)
{
// These "using" statements make the code a bit more concise.
using std::cout;
using std::endl;
int ierr = 0;
// If Trilinos was built with MPI, initialize MPI, otherwise
// initialize the serial "communicator" that stands in for MPI.
Epetra_SerialComm Comm;
const int MyPID = Comm.MyPID();
const int NumProc = Comm.NumProc();
// We only allow (MPI) Process 0 to write to stdout.
const bool verbose = (MyPID == 0);
const int NumGlobalElements = 100;
if (verbose)
cout << Epetra_Version() << endl << endl;
// Asking the Epetra_Comm to print itself is a good test for whether
// you are running in an MPI environment. However, it will print
// something on all MPI processes, so you should remove it for a
// large-scale parallel run.
cout << Comm << endl;
if (NumGlobalElements < NumProc)
{
if (verbose)
cout << "numGlobalBlocks = " << NumGlobalElements
<< " cannot be < number of processors = " << NumProc << endl;
std::exit (EXIT_FAILURE);
}
// Construct a Map that puts approximately the same number of rows
// of the matrix A on each processor.
Epetra_Map Map (NumGlobalElements, 0, Comm);
// Get update list and number of local equations from newly created Map.
int NumMyElements = Map.NumMyElements();
std::vector<int> MyGlobalElements(NumMyElements);
Map.MyGlobalElements(&MyGlobalElements[0]);
// NumNz[i] is the number of nonzero elements in row i of the sparse
// matrix on this MPI process. Epetra_CrsMatrix uses this to figure
// out how much space to allocate.
std::vector<int> NumNz (NumMyElements);
// We are building a tridiagonal matrix where each row contains the
// nonzero elements (-1 2 -1). Thus, we need 2 off-diagonal terms,
// except for the first and last row of the matrix.
for (int i = 0; i < NumMyElements; ++i)
if (MyGlobalElements[i] == 0 || MyGlobalElements[i] == NumGlobalElements-1)
NumNz[i] = 2; // First or last row
else
NumNz[i] = 3; // Not the (first or last row)
// Create the Epetra_CrsMatrix.
Epetra_CrsMatrix A (Copy, Map, &NumNz[0]);
//
// Add rows to the sparse matrix one at a time.
//
std::vector<double> Values(2);
Values[0] = -1.0; Values[1] = -1.0;
std::vector<int> Indices(2);
const double two = 2.0;
int NumEntries;
for (int i = 0; i < NumMyElements; ++i)
{
if (MyGlobalElements[i] == 0)
{ // The first row of the matrix.
Indices[0] = 1;
NumEntries = 1;
}
else if (MyGlobalElements[i] == NumGlobalElements - 1)
{ // The last row of the matrix.
Indices[0] = NumGlobalElements-2;
NumEntries = 1;
}
else
{ // Any row of the matrix other than the first or last.
Indices[0] = MyGlobalElements[i]-1;
Indices[1] = MyGlobalElements[i]+1;
NumEntries = 2;
}
ierr = A.InsertGlobalValues(MyGlobalElements[i], NumEntries, &Values[0], &Indices[0]);
assert (ierr==0);
// Insert the diagonal entry.
ierr = A.InsertGlobalValues(MyGlobalElements[i], 1, &two, &MyGlobalElements[i]);
assert(ierr==0);
}
// Finish up. We can call FillComplete() with no arguments, because
// the matrix is square.
ierr = A.FillComplete ();
assert (ierr==0);
// Parameters for the power method.
const int niters = NumGlobalElements*10;
const double tolerance = 1.0e-2;
//
// Run the power method. Keep track of the flop count and the total
// elapsed time.
//
Epetra_Flops counter;
A.SetFlopCounter(counter);
Epetra_Time timer(Comm);
double lambda = 0.0;
ierr += powerMethod (lambda, A, niters, tolerance, verbose);
double elapsedTime = timer.ElapsedTime ();
double totalFlops =counter.Flops ();
// Mflop/s: Million floating-point arithmetic operations per second.
double Mflop_per_s = totalFlops / elapsedTime / 1000000.0;
if (verbose)
cout << endl << endl << "Total Mflop/s for first solve = "
<< Mflop_per_s << endl<< endl;
// Increase the first (0,0) diagonal entry of the matrix.
if (verbose)
cout << endl << "Increasing magnitude of first diagonal term, solving again"
<< endl << endl << endl;
if (A.MyGlobalRow (0)) {
int numvals = A.NumGlobalEntries (0);
std::vector<double> Rowvals (numvals);
std::vector<int> Rowinds (numvals);
A.ExtractGlobalRowCopy (0, numvals, numvals, &Rowvals[0], &Rowinds[0]); // Get A(0,0)
for (int i = 0; i < numvals; ++i)
if (Rowinds[i] == 0)
Rowvals[i] *= 10.0;
A.ReplaceGlobalValues (0, numvals, &Rowvals[0], &Rowinds[0]);
}
//
// Run the power method again. Keep track of the flop count and the
// total elapsed time.
//
lambda = 0.0;
timer.ResetStartTime();
counter.ResetFlops();
ierr += powerMethod (lambda, A, niters, tolerance, verbose);
elapsedTime = timer.ElapsedTime();
totalFlops = counter.Flops();
Mflop_per_s = totalFlops / elapsedTime / 1000000.0;
if (verbose)
cout << endl << endl << "Total Mflop/s for second solve = "
<< Mflop_per_s << endl << endl;
#ifdef EPETRA_MPI
MPI_Finalize() ;
#endif
return ierr;
}
int
powerMethod (double & lambda,
Epetra_CrsMatrix& A,
const int niters,
const double tolerance,
const bool verbose)
{
// In the power iteration, z = A*q. Thus, q must be in the domain
// of A, and z must be in the range of A. The residual vector is of
// course in the range of A.
Epetra_Vector q (A.OperatorDomainMap ());
Epetra_Vector z (A.OperatorRangeMap ());
Epetra_Vector resid (A.OperatorRangeMap ());
Epetra_Flops* counter = A.GetFlopCounter();
if (counter != 0) {
q.SetFlopCounter(A);
z.SetFlopCounter(A);
resid.SetFlopCounter(A);
}
// Initialize the starting vector z with random data.
z.Random();
double normz, residual;
int ierr = 1;
for (int iter = 0; iter < niters; ++iter)
{
z.Norm2 (&normz); // normz := ||z||_2
q.Scale (1.0/normz, z); // q := z / normz
A.Multiply(false, q, z); // z := A * q
q.Dot(z, &lambda); // lambda := dot (q, z)
// Compute the residual vector and display status output every
// 100 iterations, or if we have reached the maximum number of
// iterations.
if (iter % 100 == 0 || iter + 1 == niters)
{
resid.Update (1.0, z, -lambda, q, 0.0); // resid := A*q - lambda*q
resid.Norm2 (&residual); // residual := ||resid||_2
if (verbose)
cout << "Iter = " << iter << " Lambda = " << lambda
<< " Residual of A*q - lambda*q = " << residual << endl;
}
if (residual < tolerance) { // We've converged!
ierr = 0;
break;
}
}
return ierr;
}
static PyObject * buildMesh(PyObject *difntInterface, PyObject *args)
{
PyObject *lookupTable;
// These must be plain ints, not pointers: the "i" format code writes an
// int into the address you pass.
int iMax;
int jMax;
int kMax;
PyObject *key;
PyObject *block = Py_None;
if (!PyArg_ParseTuple(args, "Oiii", &lookupTable, &iMax, &jMax, &kMax))
return NULL;
// Demonstrate that the lookupTable is accessible.
key = Py_BuildValue("iii", 1, 1, 1);
//Py_INCREF(key);
PyObject_Print(key,stdout,Py_PRINT_RAW);
printf("n");
// PyDict_GetItem returns a borrowed reference, or NULL if the key is
// missing or lookupTable is not a dict (PyObject_Print then shows <nil>).
block = PyDict_GetItem(lookupTable,key);
//Py_INCREF(block);
PyObject_Print(block,stdout,Py_PRINT_RAW);
printf("\n");
runPow();
Py_RETURN_NONE;
}
static PyMethodDef DifntSolverMethods[] =
{
{"buildMesh", buildMesh, METH_VARARGS, "Builds a mesh."},
{NULL, NULL, 0, NULL}
};
PyMODINIT_FUNC
initcdifnt(void)
{
(void) Py_InitModule("cdifnt", DifntSolverMethods);
}
[/python]
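To drive it the way the session above does, but with a real lookup table, the call looks something like this (a hypothetical session; the dict contents are made up):
[python]
import cdifnt

# Keys are (i, j, k) tuples, matching the Py_BuildValue("iii", ...) key
# constructed inside buildMesh; values can be any Python objects.
lookupTable = {(1, 1, 1): 'fuel block'}

# Prints the key, the looked-up value, then runs the Epetra power iteration.
cdifnt.buildMesh(lookupTable, 2, 3, 4)
[/python]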
Hope this helps someone!