The routine may be called by the names e02daf or nagf_fit_dim2_spline_panel.
3 Description
e02daf determines a bicubic spline fit $s(x,y)$ to the set of data points $({x}_{r},{y}_{r},{f}_{r})$ with weights ${w}_{r}$, for $\mathit{r}=1,2,\dots ,m$. The two sets of internal knots of the spline, $\left\{\lambda \right\}$ and $\left\{\mu \right\}$, associated with the variables $x$ and $y$ respectively, are prescribed by you. These knots can be thought of as dividing the data region of the $(x,y)$ plane into panels (see Figure 1 in Section 5). A bicubic spline consists of a separate bicubic polynomial in each panel, the polynomials joining together with continuity up to the second derivative across the panel boundaries.
$s(x,y)$ has the property that $\Sigma $, the sum of squares of its weighted residuals ${\rho}_{r}$, for $\mathit{r}=1,2,\dots ,m$, where
$${\rho}_{r}={w}_{r}\left(s({x}_{r},{y}_{r})-{f}_{r}\right),\tag{1}$$
is as small as possible for a bicubic spline with the given knot sets. The routine produces this minimized value of $\Sigma $ and the coefficients ${c}_{ij}$ in the B-spline representation of $s(x,y)$ – see Section 9. e02def, e02dff and e02dhf are available to compute values and derivatives of the fitted spline from the coefficients ${c}_{ij}$.
The least squares criterion is not always sufficient to determine the bicubic spline uniquely: there may be a whole family of splines which have the same minimum sum of squares. In these cases, the routine selects from this family the spline for which the sum of squares of the coefficients ${c}_{ij}$ is smallest: in other words, the minimal least squares solution. This choice, although arbitrary, reduces the risk of unwanted fluctuations in the spline fit. The method employed involves forming a system of $m$ linear equations in the coefficients ${c}_{ij}$ and then computing its least squares solution, which will be the minimal least squares solution when appropriate. The basis of the method is described in Hayes and Halliday (1974). The matrix of the equation is formed using a recurrence relation for B-splines which is numerically stable (see Cox (1972) and de Boor (1972) – the former contains the more elementary derivation but, unlike de Boor (1972), does not cover the case of coincident knots). The least squares solution is also obtained in a stable manner by using orthogonal transformations, viz. a variant of Givens rotation (see Gentleman (1973)). This requires only one row of the matrix to be stored at a time. Advantage is taken of the stepped-band structure which the matrix possesses when the data points are suitably ordered, there being at most sixteen nonzero elements in any row because of the definition of B-splines. First the matrix is reduced to upper triangular form and then the diagonal elements of this triangle are examined in turn. When an element is encountered whose square, divided by the mean squared weight, is less than a threshold $\epsilon $, it is replaced by zero and the rest of the elements in its row are reduced to zero by rotations with the remaining rows. 
The rank of the system is taken to be the number of nonzero diagonal elements in the final triangle, and the nonzero rows of this triangle are used to compute the minimal least squares solution. If all the diagonal elements are nonzero, the rank is equal to the number of coefficients ${c}_{ij}$ and the solution obtained is the ordinary least squares solution, which is unique in this case.
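As an illustration only (this is not the NAG routine), SciPy's LSQBivariateSpline solves the same kind of problem: a weighted least squares bicubic spline, with user-prescribed interior knots, fitted to scattered data. The data, weights and knot positions below are invented for the sketch.

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Invented scattered data on [0, 4] x [0, 4]
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 4.0, 200)
y = rng.uniform(0.0, 4.0, 200)
f = np.sin(x) * np.cos(y)          # underlying surface
w = np.ones_like(f)                # equal weights: data of equal accuracy

tx = [1.5, 2.5]                    # interior knots in x (the lambda set)
ty = [2.0]                         # interior knots in y (the mu set)

# Weighted least squares bicubic (kx = ky = 3) spline fit
spl = LSQBivariateSpline(x, y, f, tx, ty, w=w, kx=3, ky=3)

sigma = spl.get_residual()         # weighted sum of squared residuals
coeffs = spl.get_coeffs()          # the B-spline coefficients c_ij
print(sigma, coeffs.size)
```

With two interior knots in $x$ and one in $y$ there are $6\times 5=30$ coefficients, mirroring the $({\mathbf{px}}-4)\times ({\mathbf{py}}-4)$ count used by e02daf.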
4 References
Cox M G (1972) The numerical evaluation of B-splines J. Inst. Math. Appl. 10 134–149
de Boor C (1972) On calculating with B-splines J. Approx. Theory 6 50–62
Gentleman W M (1973) Least squares computations by Givens transformations without square roots J. Inst. Math. Appl. 12 329–336
Hayes J G and Halliday J (1974) The least squares fitting of cubic spline surfaces to general data sets J. Inst. Math. Appl. 14 89–103
5 Arguments
1: $\mathbf{m}$ – Integer Input
On entry: $m$, the number of data points.
Constraint: ${\mathbf{m}}>1$.
2: $\mathbf{px}$ – Integer Input
3: $\mathbf{py}$ – Integer Input
On entry: the total number of knots $\lambda $ and $\mu $ associated with the variables $x$ and $y$, respectively.
Constraint: ${\mathbf{px}}\ge 8$ and ${\mathbf{py}}\ge 8$. (They are such that ${\mathbf{px}}-8$ and ${\mathbf{py}}-8$ are the corresponding numbers of interior knots.) The running time and storage required by the routine are both minimized if the axes are labelled so that py is the smaller of px and py.
4: $\mathbf{x}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Input
5: $\mathbf{y}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Input
6: $\mathbf{f}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Input
On entry: the coordinates of the data points $({x}_{\mathit{r}},{y}_{\mathit{r}},{f}_{\mathit{r}})$, for $\mathit{r}=1,2,\dots ,m$. The order of the data points is immaterial, but see the array point.
7: $\mathbf{w}\left({\mathbf{m}}\right)$ – Real (Kind=nag_wp) array Input
On entry: the weight ${w}_{r}$ of the $r$th data point. It is important to note the definition of weight implied by the equation (1) in Section 3, since it is also common usage to define weight as the square of this weight. In this routine, each ${w}_{r}$ should be chosen inversely proportional to the (absolute) accuracy of the corresponding ${f}_{r}$, as expressed, for example, by the standard deviation or probable error of the ${f}_{r}$. When the ${f}_{r}$ are all of the same accuracy, all the ${w}_{r}$ may be set equal to $1.0$.
8: $\mathbf{lamda}\left({\mathbf{px}}\right)$ – Real (Kind=nag_wp) array Input/Output
On entry: ${\mathbf{lamda}}\left(\mathit{i}+4\right)$ must contain the $\mathit{i}$th interior knot ${\lambda}_{\mathit{i}+4}$ associated with the variable $x$, for $\mathit{i}=1,2,\dots ,{\mathbf{px}}-8$. The knots must be in nondecreasing order and lie strictly within the range covered by the data values of $x$. A knot is a value of $x$ at which the spline is allowed to be discontinuous in the third derivative with respect to $x$, though continuous up to the second derivative. This degree of continuity can be reduced, if you require, by the use of coincident knots, provided that no more than four knots are chosen to coincide at any point. Two, or three, coincident knots allow loss of continuity in, respectively, the second and first derivative with respect to $x$ at the value of $x$ at which they coincide. Four coincident knots split the spline surface into two independent parts. For choice of knots see Section 9.
On exit: the interior knots ${\mathbf{lamda}}\left(5\right)$ to ${\mathbf{lamda}}\left({\mathbf{px}}-4\right)$ are unchanged, and the segments ${\mathbf{lamda}}\left(1:4\right)$ and ${\mathbf{lamda}}\left({\mathbf{px}}-3:{\mathbf{px}}\right)$ contain additional (exterior) knots introduced by the routine in order to define the full set of B-splines required. The four knots in the first segment are all set equal to the lowest data value of $x$ and the other four additional knots are all set equal to the highest value: there is experimental evidence that coincident end-knots are best for numerical accuracy. The complete array must be left undisturbed if e02def or e02dff is to be used subsequently.
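The exterior-knot padding described above can be mimicked in a few lines; full_knot_vector is a hypothetical helper written for this sketch, not part of any library:

```python
import numpy as np

def full_knot_vector(interior, data):
    """Pad the interior knots with four coincident exterior knots at the
    extreme data values, as e02daf does on exit."""
    lo, hi = float(np.min(data)), float(np.max(data))
    return np.concatenate(([lo] * 4, interior, [hi] * 4))

x = np.array([0.0, 1.0, 2.5, 4.0])
lamda = full_knot_vector([1.5, 2.0], x)
print(lamda)   # four copies of min(x), the interior knots, four copies of max(x)
```

Here px = 10 and the number of interior knots is px − 8 = 2, matching the convention in Section 5.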
9: $\mathbf{mu}\left({\mathbf{py}}\right)$ – Real (Kind=nag_wp) array Input/Output
On entry: ${\mathbf{mu}}\left(\mathit{i}+4\right)$ must contain the $\mathit{i}$th interior knot ${\mu}_{\mathit{i}+4}$ associated with the variable $y$, for $\mathit{i}=1,2,\dots ,{\mathbf{py}}-8$.
On exit: the same remarks apply to mu as to lamda above, with y replacing x, and $y$ replacing $x$.
10: $\mathbf{point}\left({\mathbf{npoint}}\right)$ – Integer array Input
On entry: indexing information, usually provided by e02zaf, which enables the data points to be accessed in the order which produces the advantageous matrix structure mentioned in Section 3. This order is such that, if the $(x,y)$ plane is thought of as being divided into rectangular panels by the two sets of knots, all data in a panel occur before data in succeeding panels, where the panels are numbered from bottom to top and then left to right with the usual arrangement of axes, as indicated in Figure 1.
Figure 1
A data point lying exactly on one or more panel sides is considered to be in the highest numbered panel adjacent to the point. e02zaf should be called to obtain the array point, unless it is provided by other means.
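A minimal sketch of this panel numbering (an analogue of what e02zaf computes; panel_index is a hypothetical helper and the knot values are invented). Using searchsorted with side="right" places a point lying exactly on a panel side into the higher-numbered adjacent panel, as required:

```python
import numpy as np

def panel_index(xi, yi, int_lamda, int_mu):
    """0-based panel number: panels run bottom to top, then left to right."""
    col = np.searchsorted(int_lamda, xi, side="right")
    row = np.searchsorted(int_mu, yi, side="right")
    nrows = len(int_mu) + 1           # panels per column
    return col * nrows + row

int_lamda, int_mu = [2.0], [1.0, 3.0]   # interior knots only
pts = [(0.5, 0.5), (3.0, 2.0), (2.0, 0.0)]

# sort data-point indices into panel order, as e02zaf's point array allows
order = sorted(range(len(pts)),
               key=lambda r: panel_index(*pts[r], int_lamda, int_mu))
print(order)   # [0, 2, 1]
```

The point $(2.0, 0.0)$ lies on a knot line and is assigned to the panel to its right, so it precedes $(3.0, 2.0)$ but follows $(0.5, 0.5)$.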
11: $\mathbf{npoint}$ – Integer Input
On entry: the dimension of the array point as declared in the (sub)program from which e02daf is called.
12: $\mathbf{dl}\left({\mathbf{nc}}\right)$ – Real (Kind=nag_wp) array Output
On exit: gives the squares of the diagonal elements of the reduced triangular matrix, divided by the mean squared weight. It includes those elements, less than $\epsilon $, which are treated as zero (see Section 3).
13: $\mathbf{c}\left({\mathbf{nc}}\right)$ – Real (Kind=nag_wp) array Output
On exit: gives the coefficients of the fit.
${\mathbf{c}}\left(({\mathbf{py}}-4)\times (\mathit{i}-1)+\mathit{j}\right)$ is the coefficient ${c}_{\mathit{i}\mathit{j}}$ of Sections 3 and 9, for $\mathit{i}=1,2,\dots ,{\mathbf{px}}-4$ and $\mathit{j}=1,2,\dots ,{\mathbf{py}}-4$. These coefficients are used by e02def or e02dff to calculate values of the fitted function.
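The flattening rule above is easy to get wrong; this tiny helper (hypothetical, for illustration) converts the 1-based pair $(i,j)$ into the 0-based position in the array c:

```python
def c_index(i, j, py):
    """0-based position in the array c of coefficient c_ij (i, j 1-based),
    following c((py-4)*(i-1) + j)."""
    return (py - 4) * (i - 1) + (j - 1)

py = 10                       # j runs 1 .. py-4 = 1 .. 6
print(c_index(1, 1, py))      # 0   (first coefficient)
print(c_index(2, 3, py))      # 8
```

With px = 10 as well, the last coefficient $c_{6,6}$ lands at position 35, i.e. nc − 1 for nc = 36.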
14: $\mathbf{nc}$ – Integer Input
On entry: the value $({\mathbf{px}}-4)\times ({\mathbf{py}}-4)$.
15: $\mathbf{ws}\left({\mathbf{nws}}\right)$ – Real (Kind=nag_wp) array Workspace
16: $\mathbf{nws}$ – Integer Input
On entry: the dimension of the array ws as declared in the (sub)program from which e02daf is called.
17: $\mathbf{eps}$ – Real (Kind=nag_wp) Input
On entry: a threshold $\epsilon $ for determining the effective rank of the system of linear equations. The rank is determined as the number of elements of the array dl which are nonzero. An element of dl is regarded as zero if it is less than $\epsilon $. Machine precision is a suitable value for $\epsilon $ in most practical applications, which typically have only $2$ or $3$ accurate decimals in the data. If some coefficients of the fit prove to be very large compared with the data ordinates, this suggests that $\epsilon $ should be increased so as to decrease the rank. The array dl will give a guide to appropriate values of $\epsilon $ to achieve this, as well as to the choice of $\epsilon $ in other cases where some experimentation may be needed to determine a value which leads to a satisfactory fit.
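The rank rule reduces to a simple count; this sketch (hypothetical helper, invented dl values) shows how the contents of dl and the threshold $\epsilon $ interact:

```python
def effective_rank(dl, eps):
    """Count diagonal entries of dl (squares divided by the mean squared
    weight) that are at least the threshold eps; smaller ones are
    treated as zero, reducing the rank."""
    return sum(1 for d in dl if d >= eps)

# Invented dl values: two entries are far below a threshold of 1.0e-6
dl = [4.1, 1.3e-7, 0.9, 2.5e-9, 6.0]
print(effective_rank(dl, 1.0e-6))   # 3
```

With $\epsilon = 10^{-6}$ two entries are suppressed and the minimal least squares solution would be computed; with $\epsilon = 0$ the rank equals nc and the ordinary least squares solution results.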
18: $\mathbf{sigma}$ – Real (Kind=nag_wp) Output
On exit: $\Sigma $, the weighted sum of squares of residuals. This is not computed from the individual residuals but from the right-hand sides of the orthogonally-transformed linear equations. For further details see page 97 of Hayes and Halliday (1974). The two methods of computation are theoretically equivalent, but the results may differ because of rounding error.
19: $\mathbf{rank}$ – Integer Output
On exit: the rank of the system as determined by the value of the threshold $\epsilon $.
${\mathbf{rank}}={\mathbf{nc}}$
The least squares solution is unique.
${\mathbf{rank}}\ne {\mathbf{nc}}$
The minimal least squares solution is computed.
20: $\mathbf{ifail}$ – Integer Input/Output
On entry: ifail must be set to $0$, $\mathrm{-1}$ or $1$ to set behaviour on detection of an error; these values have no effect when no error is detected.
A value of $0$ causes an error message to be printed and program execution to be halted; otherwise program execution continues. A value of $\mathrm{-1}$ means that an error message is printed, while a value of $1$ means that it is not.
If halting is not appropriate, the value $\mathrm{-1}$ or $1$ is recommended. If message printing is undesirable, then the value $1$ is recommended. Otherwise, the value $0$ is recommended. When the value $-\mathbf{1}$ or $\mathbf{1}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}={\mathbf{0}}$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ${\mathbf{ifail}}=0$ or $\mathrm{-1}$, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
${\mathbf{ifail}}=1$
At least one set of knots is not in nondecreasing order.
${\mathbf{ifail}}=2$
More than four knots coincide at a single point.
${\mathbf{ifail}}=3$
Array point does not indicate the data points in panel order.
${\mathbf{ifail}}=4$
On entry, ${\mathbf{m}}=\langle\mathit{\text{value}}\rangle$.
Constraint: ${\mathbf{m}}>1$.
On entry, ${\mathbf{nc}}=\langle\mathit{\text{value}}\rangle$, ${\mathbf{px}}=\langle\mathit{\text{value}}\rangle$ and ${\mathbf{py}}=\langle\mathit{\text{value}}\rangle$.
Constraint: ${\mathbf{nc}}=({\mathbf{px}}-4)\times ({\mathbf{py}}-4)$.
On entry, ${\mathbf{npoint}}=\langle\mathit{\text{value}}\rangle$, ${\mathbf{m}}=\langle\mathit{\text{value}}\rangle$, ${\mathbf{px}}=\langle\mathit{\text{value}}\rangle$ and ${\mathbf{py}}=\langle\mathit{\text{value}}\rangle$.
Constraint: ${\mathbf{npoint}}\ge {\mathbf{m}}+({\mathbf{px}}-7)\times ({\mathbf{py}}-7)$.
On entry, nws is too small. ${\mathbf{nws}}=\langle\mathit{\text{value}}\rangle$. Minimum possible dimension: $\langle\mathit{\text{value}}\rangle$.
On entry, ${\mathbf{px}}=\langle\mathit{\text{value}}\rangle$.
Constraint: ${\mathbf{px}}\ge 8$.
On entry, ${\mathbf{py}}=\langle\mathit{\text{value}}\rangle$.
Constraint: ${\mathbf{py}}\ge 8$.
${\mathbf{ifail}}=5$
All the weights are zero, or the rank has been determined as zero.
${\mathbf{ifail}}=-99$
An unexpected error has been triggered by this routine. Please contact NAG.
See Section 7 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-399$
Your licence key may have expired or may not have been installed correctly.
See Section 8 in the Introduction to the NAG Library FL Interface for further information.
${\mathbf{ifail}}=-999$
Dynamic memory allocation failed.
See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
The computation of the B-splines and reduction of the observation matrix to triangular form are both numerically stable.
8 Parallelism and Performance
e02daf is not threaded in any implementation.
9 Further Comments
The time taken is approximately proportional to the number of data points, $m$, and to ${(3\times ({\mathbf{py}}-4)+4)}^{2}$.
The B-spline representation of the bicubic spline is
$$s(x,y)=\sum _{i}\sum _{j}{c}_{ij}{M}_{i}\left(x\right){N}_{j}\left(y\right)$$
summed over $i=1,2,\dots ,{\mathbf{px}}-4$ and over $j=1,2,\dots ,{\mathbf{py}}-4$. Here ${M}_{i}\left(x\right)$ and ${N}_{j}\left(y\right)$ denote normalized cubic B-splines, the former defined on the knots ${\lambda}_{i},{\lambda}_{i+1},\dots ,{\lambda}_{i+4}$ and the latter on the knots ${\mu}_{j},{\mu}_{j+1},\dots ,{\mu}_{j+4}$. For further details, see Hayes and Halliday (1974) for bicubic splines and de Boor (1972) for normalized B-splines.
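A numerical sketch of this tensor-product form (knot values invented; SciPy's BSpline stands in for the normalized B-splines, this is not NAG code). Setting every coefficient to $1$ exploits the partition-of-unity property of normalized B-splines, so $s$ evaluates to $1$ everywhere inside the rectangle:

```python
import numpy as np
from scipy.interpolate import BSpline

# Invented knot sets with the coincident end knots e02daf would supply
lamda = np.array([0., 0., 0., 0., 0.5, 1., 1., 1., 1.])   # px = 9 -> 5 M_i
mu    = np.array([0., 0., 0., 0., 1., 1., 1., 1.])        # py = 8 -> 4 N_j
nx, ny = len(lamda) - 4, len(mu) - 4

def basis_values(t, n, x):
    """Value at x of each of the n normalized cubic B-splines on knots t."""
    return np.array([BSpline(t, np.eye(n)[i], 3)(x) for i in range(n)])

c = np.ones((nx, ny))             # all c_ij = 1
Mx = basis_values(lamda, nx, 0.3)
Ny = basis_values(mu, ny, 0.6)
s = float(Mx @ c @ Ny)            # s(0.3, 0.6)
print(s)
```

Each ${M}_{i}$ uses knots ${\lambda}_{i},\dots ,{\lambda}_{i+4}$ and each ${N}_{j}$ uses ${\mu}_{j},\dots ,{\mu}_{j+4}$, exactly as in the formula above.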
The choice of the interior knots, which help to determine the spline's shape, must largely be a matter of trial and error. It is usually best to start with a small number of knots and, examining the fit at each stage, add a few knots at a time in places where the fit is particularly poor. In intervals of $x$ or $y$ where the surface represented by the data changes rapidly, in function value or derivatives, more knots will be needed than elsewhere. In some cases guidance can be obtained by analogy with the case of coincident knots: for example, just as three coincident knots can produce a discontinuity in slope, three close knots can produce rapid change in slope. Of course, such rapid changes in behaviour must be adequately represented by the data points, as indeed must the behaviour of the surface generally, if a satisfactory fit is to be achieved. When there is no rapid change in behaviour, equally-spaced knots will often suffice.
In all cases the fit should be examined graphically before it is accepted as satisfactory.
The fit obtained is not defined outside the rectangle
$${\lambda}_{4}\le x\le {\lambda}_{{\mathbf{px}}-3},\quad {\mu}_{4}\le y\le {\mu}_{{\mathbf{py}}-3},$$
that is, the rectangle whose sides lie at the extreme data values of $x$ and $y$.
The reason for taking the extreme data values of $x$ and $y$ for these four knots is that, as is usual in data fitting, the fit cannot be expected to give satisfactory values outside the data region. If, nevertheless, you require values over a larger rectangle, this can be achieved by augmenting the data with two artificial data points $(a,c,0)$ and $(b,d,0)$ with zero weight, where $a\le x\le b$, $c\le y\le d$ defines the enlarged rectangle. In the case when the data are adequate to make the least squares solution unique (${\mathbf{rank}}={\mathbf{nc}}$), this enlargement will not affect the fit over the original rectangle, except for possibly enlarged rounding errors, and will simply continue the bicubic polynomials in the panels bordering the rectangle out to the new boundaries: in other cases the fit will be affected. Even using the original rectangle there may be regions within it, particularly at its corners, which lie outside the data region and where, therefore, the fit will be unreliable. For example, if there is no data point in panel $1$ of Figure 1 in Section 5, the least squares criterion leaves the spline indeterminate in this panel: the minimal spline determined by the subroutine in this case passes through the value zero at the point $({\lambda}_{4},{\mu}_{4})$.
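In SciPy's analogous LSQBivariateSpline (used here purely as an illustration, not as the NAG interface) the same enlargement is exposed directly through the bbox argument, which plays the role of the two zero-weight artificial points $(a,c,0)$ and $(b,d,0)$. The data below are an exactly representable plane, so the bordering polynomials simply continue outwards:

```python
import numpy as np
from scipy.interpolate import LSQBivariateSpline

# Invented data: a plane sampled on [1, 3] x [1, 3]
rng = np.random.default_rng(1)
x = rng.uniform(1.0, 3.0, 150)
y = rng.uniform(1.0, 3.0, 150)
f = x + y

# Enlarge the fit rectangle to [0, 4] x [0, 4] via bbox; since the plane
# is exactly representable, the fit over the data region is unaffected
spl = LSQBivariateSpline(x, y, f, [2.0], [2.0],
                         bbox=[0.0, 4.0, 0.0, 4.0])
print(float(spl(3.5, 3.5)))   # continues the plane: close to 3.5 + 3.5
```

Had the data not determined the solution uniquely, the enlargement would have changed the fit, mirroring the rank caveat in the text.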
10 Example
This example reads a value for $\epsilon $, and a set of data points, weights and knot positions. If there are more $y$ knots than $x$ knots, it interchanges the $x$ and $y$ axes. It calls e02zaf to sort the data points into panel order, e02daf to fit a bicubic spline to them, and e02def to evaluate the spline at the data points.
Finally it prints:
–the weighted sum of squares of residuals computed from the linear equations;
–the rank determined by e02daf;
–data points, fitted values and residuals in panel order;
–the weighted sum of squares of the residuals; and
–the coefficients of the spline fit.
The program is written to handle any number of datasets.
Note: the data supplied in this example is not typical of a realistic problem: the number of data points would normally be much larger (in which case the array dimensions and the value of nws in the program would have to be increased); and the value of $\epsilon $ would normally be much smaller on most machines (see Section 5; the relatively large value of ${10}^{\mathrm{-6}}$ has been chosen in order to illustrate a minimal least squares solution when ${\mathbf{rank}}<{\mathbf{nc}}$; in this example ${\mathbf{nc}}=24$).