I have just released jblas 1.1 (release notes). The main additions are Singular Value Decomposition (both “full” and “sparse”) and a fix for a nasty bug with complex return values. Basically, g77 and gfortran treat complex return values in drastically different ways, leading to some very odd bugs caused by a confused stack frame.
Unfortunately, I also had to remove support for 32-bit Mac OS X from the precompiled jar file. The reason is that I lost access to the 32-bit machine I originally used to compile ATLAS for Mac, and every other machine I could get my hands on is apparently running 64 bits. I also didn’t quite get the 32-bit version of gcc working. But in case you have a 32-bit machine and need jblas, please contact me and I’ll help you compile the beast.
To make up for that, here are two examples of how to use jblas. These are from my poster at this year’s MLOSS workshop at ICML.
You can download the slides here.
I’ve also put the source up on github. Note that these sources are not meant to be a good example for coding style or project layout, but just an example of how to use jblas!
The first example is taken from machine learning. Let’s implement the following: learning a noisy sinc data set with Kernel Ridge Regression (KRR) using a Gaussian kernel.
First, we define the sinc function, which is usually defined as follows (up to scaling of the $x$): \(\text{sinc}(x) = \sin(x) / x \text{ if } x \neq 0 \text{ and } 1 \text{ if } x = 0.\)
The first try to code this in Java using jblas is to just focus on the part where $x \neq 0$:
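A minimal sketch of that first version (the exact code is in the github sources linked above):

```java
import org.jblas.DoubleMatrix;
import static org.jblas.MatrixFunctions.sin;

// sinc for x != 0: take sin elementwise and divide elementwise by x.
DoubleMatrix sinc(DoubleMatrix x) {
    return sin(x).div(x);
}
```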
Note how sin and div “vectorize” over a number of values given as a matrix.
Now let’s add the $x = 0$ case. The code then looks like this:
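One way to do it (a sketch using DoubleMatrix’s eq(), which returns 1.0 where the comparison holds and 0.0 elsewhere):

```java
import org.jblas.DoubleMatrix;
import static org.jblas.MatrixFunctions.sin;

// Safe sinc: replace zeros by ones before dividing, then add 1 back at the zero positions.
DoubleMatrix safeSinc(DoubleMatrix x) {
    DoubleMatrix zeros = x.eq(0);               // 1.0 where x == 0, 0.0 elsewhere
    return sin(x).div(x.add(zeros)).add(zeros); // avoid 0/0, then patch the value at x == 0
}
```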
Note how we first “patch” for the case where x is zero, and then again add those points to get the required output.
Next, we draw some data from the sinc function and add noise. Our data model looks like this: \(X \sim \text{uniform on } [-4, 4],\) \(Y = \text{sinc}(X) + \sigma_\varepsilon^2 \varepsilon,\) \(\varepsilon \sim \mathcal{N}(0, 1).\)
The following function generates a data set by returning an array of two DoubleMatrices representing $X$ and $Y$.
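For example (the method name and signature here are my own; the original code is in the github sources):

```java
import org.jblas.DoubleMatrix;

// Returns {X, Y}: n inputs drawn uniformly from [-4, 4], outputs sinc(X) plus Gaussian noise.
DoubleMatrix[] sincExample(int n, double sigma) {
    DoubleMatrix X = DoubleMatrix.rand(n).mul(8).sub(4);                // uniform on [-4, 4]
    DoubleMatrix Y = safeSinc(X).add(DoubleMatrix.randn(n).mul(sigma)); // sinc(X) + noise
    return new DoubleMatrix[] { X, Y };
}
```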
For KRR, we need to compute the whole kernel matrix. The kernel function is defined as \(k(x,z) = \exp\left( - \frac{||x - z||^2}{w} \right)\)
You can easily compute the kernel matrix using Geometry.pairwiseSquaredDistances().
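A sketch of how this could look (gaussianKernel is my own name; if I recall correctly, pairwiseSquaredDistances() works on columns, so the row-wise data is transposed first):

```java
import org.jblas.DoubleMatrix;
import org.jblas.Geometry;
import static org.jblas.MatrixFunctions.exp;

// Gaussian kernel matrix k(x, z) = exp(-||x - z||^2 / w) between the rows of X and the rows of Z.
DoubleMatrix gaussianKernel(double w, DoubleMatrix X, DoubleMatrix Z) {
    DoubleMatrix d = Geometry.pairwiseSquaredDistances(X.transpose(), Z.transpose());
    return exp(d.div(-w));
}
```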
KRR learns a “normal” kernel model of the form \(f(x) = \sum_{i=1}^n k(x, X_i)\alpha_i\) with \(\alpha = (K + \lambda I)^{-1} Y,\) where $K$ is the kernel matrix \(K_{ij} = k(X_i, X_j).\)
With jblas, you would compute the $\alpha$ as follows:
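For instance (a sketch building on the gaussianKernel above; learnKRR is my own name, and Solve.solveSymmetric() is used rather than forming the inverse explicitly):

```java
import org.jblas.DoubleMatrix;
import org.jblas.Solve;

// alpha = (K + lambda * I)^-1 Y, computed by solving the symmetric linear system.
DoubleMatrix learnKRR(DoubleMatrix X, DoubleMatrix Y, double w, double lambda) {
    DoubleMatrix K = gaussianKernel(w, X, X);
    return Solve.solveSymmetric(K.add(DoubleMatrix.eye(X.rows).mul(lambda)), Y);
}
```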
The function predictKRR computes predictions on new points stored as rows of the matrix XE.
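It could look roughly like this (again a sketch with my own signature):

```java
import org.jblas.DoubleMatrix;

// Predictions f(x) = sum_i k(x, X_i) alpha_i, computed for all rows of XE at once.
DoubleMatrix predictKRR(DoubleMatrix XE, DoubleMatrix X, double w, DoubleMatrix alpha) {
    return gaussianKernel(w, XE, X).mmul(alpha);
}
```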
Finally, in order to compute the error of our fit, we would like to compute the mean-squared error. \(\frac 1n\sum_{i=1}^n (Y_i - Y'_i)^2.\)
In jblas:
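Something along these lines (the function name is illustrative):

```java
import org.jblas.DoubleMatrix;
import static org.jblas.MatrixFunctions.pow;

// Mean squared error between the true outputs Y and the predictions YE.
double mse(DoubleMatrix Y, DoubleMatrix YE) {
    return pow(Y.sub(YE), 2).mean();
}
```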
As a second example, let’s implement the conjugate gradients algorithm. This is an iterative algorithm for solving linear equations of the form $Ax = b$, where $A$ is symmetric and positive definite.
The pseudo-code looks as follows:
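This is the standard conjugate gradient iteration (the step numbering here is mine):

1. $r \gets b - Ax$
2. $p \gets r$
3. repeat while $\|r\|$ is above a threshold:
4. $\quad \alpha \gets r^\top r / p^\top A p$
5. $\quad x \gets x + \alpha p$
6. $\quad r' \gets r - \alpha A p$
7. $\quad \beta \gets r'^\top r' / r^\top r$
8. $\quad p \gets r' + \beta p$
9. $\quad r \gets r'$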
In jblas, the algorithm looks as follows (numbers in comments indicate steps in the algorithm above).
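A sketch of the implementation (the exact code is in the github sources; jblas’s in-place variants end in “i” and can take the result matrix as an extra argument):

```java
import org.jblas.DoubleMatrix;

// Conjugate gradients for Ax = b with A symmetric and positive definite.
DoubleMatrix cg(DoubleMatrix A, DoubleMatrix b, DoubleMatrix x, double thresh) {
    DoubleMatrix r = b.sub(A.mmul(x));              // 1. r <- b - Ax
    DoubleMatrix p = r.dup();                       // 2. p <- r
    DoubleMatrix r2 = DoubleMatrix.zeros(x.length); // holds r'
    DoubleMatrix Ap = DoubleMatrix.zeros(x.length); // holds A*p
    while (true) {
        A.mmuli(p, Ap);                             //    Ap <- A*p (in place)
        double alpha = r.dot(r) / p.dot(Ap);        // 4. alpha <- (r . r) / (p . Ap)
        x.addi(p.mul(alpha));                       // 5. x <- x + alpha*p
        r.subi(Ap.mul(alpha), r2);                  // 6. r' <- r - alpha*Ap (stored in r2)
        if (r2.norm2() < thresh)                    // 3. stop once the residual is small
            break;
        double beta = r2.dot(r2) / r.dot(r);        // 7. beta <- (r' . r') / (r . r)
        r2.addi(p.mul(beta), p);                    // 8. p <- r' + beta*p
        DoubleMatrix t = r; r = r2; r2 = t;         // 9. r <- r' (swap r and r2)
    }
    return x;
}
```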
For better speed, I have tried to reduce the number of matrix allocations and used the in-place variants of the arithmetic operations where possible. These kinds of tweaks can often give some performance boost. For example, Ap is used to hold the result of $Ap$, and so on.

As a side-effect, the assignment $r \gets r'$ actually becomes a swap between the matrices stored in r and r2.
Posted by Mikio L. Braun at 2010-08-16 16:55:00 +0000