Recursive Gauss-Helmert model with equality constraints applied to the efficient system calibration of a 3D laser scanner

Sensors for environmental perception are nowadays applied in numerous vehicles and are expected to be used in even higher quantities for future autonomous driving. This leads to an increasing amount of observation data that must be processed reliably, accurately and quickly. For this purpose, recursive approaches are particularly suitable in terms of efficiency when powerful CPUs and GPUs are uneconomical, too large or too heavy for certain applications. If explicit functional relationships between the available observations and the requested parameters are used to process and adjust the observation data, complementary approaches exist. The situation is different for implicit relationships, which for a long time could not be treated recursively but only in the context of batch adjustments. In this contribution, a recursive Gauss-Helmert model is presented that can handle explicit and implicit equations and thus allows high flexibility. This recursive estimator is based on a Kalman filter for implicit measurement equations, which has already been used for georeferencing kinematic multi-sensor systems (MSS) in urban environments. Furthermore, different methods for introducing additional information using constraints, and the resulting added value, are shown. A practical application of the methodology is given by an example of the calibration of a laser scanner for an MSS.


Introduction
The usage of high-resolution sensor technologies (such as laser scanners (LS) or cameras) has steadily increased in recent years as part of applications such as mobile mapping or autonomous driving [65]. This trend is particularly favoured by the availability of precise sensors with a steadily improving price/performance ratio. In particular, several automotive LS already exist that are relatively inexpensive in relation to their remarkable uncertainty level [27,45,60]. At the same time, the resolution, range and density of these three-dimensional (3D) sensors are also increasing. As a result, individual sensors, such as an LS with a measurement rate of up to 500,000 scan points per second, generate a multitude of observational information to be taken into account within a very short time. Furthermore, in so-called multi-sensor systems (MSS), several different sensor types are often used simultaneously [37,49], which has a corresponding additional effect on the available data volumes [12,40,48]. For this reason, the terminology of 'big data' is applicable in this context. However, such a combination of multiple sensors results in further advantages that are helpful for an improved and more reliable environmental perception or localization.
Increasingly, the acquired sensor data is processed directly within the framework of adjustment or filtering approaches. The purpose of this can be the identification and segmentation of different objects in the environment, or also the contribution to the localisation task [64]. In the context of the huge amounts of data, efficient algorithms are becoming essential for applying appropriate methods. It also matters whether the data are temporally independent or whether they arrive in several individual epochs. If all observations to be taken into account are available at once, a so-called batch approach is generally recommended [39,66]. However, if all acquired data is used within one big adjustment, this batch processing quickly reaches its limits with such vast amounts of data. Even though such batch methods deliver reliable results, in general, they need to be performed on powerful computers in post-processing [3,31]. This contradicts the requirements for online approaches (e. g., in the case of autonomous vehicles). Such applications require recursive approaches that process only a certain subset of the data when it is available. Therefore, for applications where the sensors or an entire MSS moves kinematically within a certain environment over time, a recursive approach is advisable in contrast to an overall adjustment [24,61]. In addition, a recursive approach can also be useful when all data is available at the same time. This enables shorter computing times to be realised and larger amounts of data to be taken into account in the first place [26,50]. Alternative techniques, such as data reduction by down-sampling [8,57] or octrees [12], can thus be avoided.
In general, the theory of parameter estimation provides a comprehensive framework to solve overdetermined systems of equations for both batch processing [28,32,35,39,66] and recursive approaches [30,39,41,66]. However, the applicability of these two categories depends on the mathematical relationship, called the functional model, between the observation vector l and the unknown parameter vector x. Here the observations are random variables. In the case of an explicit relationship, which is also referred to as the Gauss-Markov model (GMM), there is a solution for the functional model

E(l) = h(x) (1)

using both batch processing and recursive approaches.
Here, h(⋅) are any suitable real-valued and non-linear functions. Furthermore, E(⋅) denotes the unknown expected value since, strictly speaking, the functional relations are only valid for the true observations l and parameters x. In terms of an implicit relationship, which is also referred to as the Gauss-Helmert model (GHM), the solution for the functional model

h(E(l), x) = 0 (2)

previously existed only for batch approaches. The first recursive estimation approach for efficient epoch-wise estimation was initially mentioned in [64] and has also been applied to the georeferencing of different kinematic MSSs in urban environments. For this reason, the realisation of such a recursive GHM approach, using an iterated extended Kalman filter (IEKF), is presented in this paper in detail and analysed more thoroughly. The versatility of the new procedure should be emphasised, as this recursive GHM is also directly applicable to explicit equations, since every explicit relationship (cf. Equation (1)) can be transformed into an implicit relationship according to

h̃(E(l), x) = E(l) − h(x) = 0. (3)

As an additional novelty, different methods for the consideration of equality constraints and the related gain from this additional information are described.
The paper is structured as follows. In the next section, a standard batch processing method and an extension for consideration of equality constraints are presented. The recursive GHM is introduced in section 3 and also extended by different constraint methods. The methodology is applied to a system calibration task of an LS-based MSS using simulated and real data in section 4. Results and their discussion are presented in section 5. The paper ends with a conclusion in section 6.

Batch processing

Gauss-Helmert model
In general, the use of a suitable estimator is required to perform parameter estimation. In the context of batch processing, least-squares can be applied. For this estimator the optimisation criterion is to minimize the sum of the squared residuals v between real observations l and their corresponding expected values (l), which are functions of the unknown parameters x [32]. This method is referred to as least-squares adjustment and is widely used in the geodetic community where it is well described by [28,32,35,39,66].
The residuals v are assumed to be Gaussian with expected value E(v) = 0:

v ∼ N(0, Σ_ll), Σ_ll = σ₀² ⋅ Q_ll = σ₀² ⋅ P⁻¹, (4)

where Σ_ll is the known variance-covariance matrix (VCM) of the observations, Q_ll the related cofactor matrix, σ₀² the a priori variance factor and P the weight matrix. Together, these parameters describe the weighting of the observations among themselves and are referred to as the stochastic model. In the context of this contribution, a simplified stochastic model with independent and identically distributed random variables is assumed for the application in section 4.
Since the adjustment approach requires linear functional models, a Taylor series expansion is performed to linearise h(⋅). In the context of linearisation and subsequent optimisation, this is an iterative process in which the individual variables must be updated in each iteration step. Suitable approximate values l₀ and x₀ for both the observations E(l) and the unknown parameters x are needed for the first iteration [35]. Their reasonable selection has a direct impact on the number of required repetitions as well as on the successful convergence to the global optimum. For the sake of clarity, detailed information on the individual iterations is not given here; readers are referred to the detailed descriptions in previous works (e. g. [32,33,35]). Specifying the theoretical quantities, the following applies:

B v + A Δx + w = 0, (5)

where B is the condition matrix with the partial derivatives with respect to the observations, A is the design matrix with the partial derivatives with respect to the parameters, Δx is the reduced parameter vector and w refers to the contradictions according to

w = h(l₀, x₀) + B (l − l₀).

By using Lagrangian multipliers λ, the objective function L_GHM(v, Δx, λ) with respect to least-squares estimation can be established and minimised according to

L_GHM(v, Δx, λ) = vᵀ P v + 2 λᵀ (B v + A Δx + w) → min. (10)

Using the utility matrix Q_bb = B Q_ll Bᵀ, the solution of the objective function L_GHM(v, Δx, λ) leads to the linear normal equation system in block structure, when setting the related partial derivatives equal to zero:

N_GHM ⋅ [λ̂; Δx̂] = [−w; 0], with N_GHM = [[−Q_bb, A], [Aᵀ, 0]]. (12)

Here N_GHM is referred to as the normal equation matrix, which must be regular to be invertible.
The estimate of the requested parameter vector x̂ can be obtained via the inverse of the normal equation matrix N_GHM, considering the relationship to the reduced parameter vector Δx̂ as follows:

x̂ = x₀ + Δx̂, Δx̂ = −(Aᵀ Q_bb⁻¹ A)⁻¹ Aᵀ Q_bb⁻¹ w.

The residuals v̂ and the estimated observations l̂ result from the Lagrangian multipliers λ̂ as follows:

λ̂ = Q_bb⁻¹ (A Δx̂ + w), v̂ = −Q_ll Bᵀ λ̂, l̂ = l + v̂.

The (co-)variances of the estimated parameters can be derived from the cofactor matrix Q_x̂x̂, which can be obtained according to

Q_x̂x̂ = (Aᵀ Q_bb⁻¹ A)⁻¹.

As this is an iterative process, the approximate values l₀ and x₀ are subsequently updated by the current estimated values.
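As an illustration, the following sketch runs the GHM iterations above for a classic implicit problem: fitting a circle to 2-D points, where each point contributes one condition equation involving two observations. The function name `ghm_circle_fit`, the identity stochastic model and the fixed iteration count are illustrative assumptions, not part of the original method.

```python
import numpy as np

def ghm_circle_fit(pts, x0, n_iter=10):
    """Gauss-Helmert adjustment sketch: fit a circle (xc, yc, r) to 2-D points.
    Condition per point i: h = (xi - xc)^2 + (yi - yc)^2 - r^2 = 0 (implicit)."""
    l = pts.ravel().astype(float)          # observation vector (x1, y1, x2, y2, ...)
    n = pts.shape[0]
    Qll = np.eye(2 * n)                    # simplified stochastic model: i.i.d.
    x = x0.astype(float).copy()
    l0 = l.copy()                          # approximate observations
    for _ in range(n_iter):
        p = l0.reshape(-1, 2)
        dx, dy = p[:, 0] - x[0], p[:, 1] - x[1]
        # A: partial derivatives w.r.t. parameters; B: w.r.t. observations
        A = np.column_stack([-2 * dx, -2 * dy, -2 * x[2] * np.ones(n)])
        B = np.zeros((n, 2 * n))
        B[np.arange(n), 2 * np.arange(n)] = 2 * dx
        B[np.arange(n), 2 * np.arange(n) + 1] = 2 * dy
        w = dx**2 + dy**2 - x[2]**2 + B @ (l - l0)     # contradictions
        Qbb = B @ Qll @ B.T                            # utility matrix
        N = A.T @ np.linalg.solve(Qbb, A)
        dx_hat = -np.linalg.solve(N, A.T @ np.linalg.solve(Qbb, w))
        lam = np.linalg.solve(Qbb, A @ dx_hat + w)     # Lagrange multipliers
        v = -Qll @ B.T @ lam                           # residuals
        x += dx_hat
        l0 = l + v                                     # update linearisation point
    return x, np.linalg.inv(N)                         # parameters and Qxx
```

With noise-free points on a circle, the routine recovers the generating parameters to machine precision within a few iterations.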

Gauss-Helmert model with equality constraints
If reliable and suitable prior knowledge about the parameters is available, it is always advisable to integrate this additional information into the estimation process [4,47,51]. Any mathematical definitions, physical laws, geometric constraints or other practical or logical specifications can be taken into account. An extension of the GHM by equality constraints (referred to as constrained GHM (C-GHM)) is therefore introduced in [22,36,44,46,55] according to

D x = d, (18)

where D is the matrix of equality constraints and d is the related constant vector. In general, however, such constraints are described by non-linear relationships, which must be linearised by a first-order Taylor expansion to obtain Equation (18). With regard to the reduced parameter vector, the constraint then reads D Δx = d̃ with the reduced constant vector d̃ = d − D x₀. Taking the constraints into account requires an extension of the objective function L_GHM(v, Δx, λ) (cf. Equation (10)), from which follows

L_C-GHM(v, Δx, λ₁, λ₂) = vᵀ P v + 2 λ₁ᵀ (B v + A Δx + w) + 2 λ₂ᵀ (D Δx − d̃) → min.

In order to find the minimum of the objective function L_C-GHM(v, Δx, λ₁, λ₂), the partial derivatives must be set to zero. Thus the normal equation matrix N_C-GHM for the constrained minimisation problem can be set up in block structure:

N_C-GHM ⋅ [λ̂₁; Δx̂; λ̂₂] = [−w; 0; d̃], with N_C-GHM = [[−Q_bb, A, 0], [Aᵀ, 0, Dᵀ], [0, D, 0]]. (28)

The cofactor matrix Q_x̂x̂ can be derived from the respective block of the inverse of N_C-GHM, and the estimated residuals v̂ follow, as in the unconstrained case, from v̂ = −Q_ll Bᵀ λ̂₁. According to [22], the normal equation system can be reduced by eliminating the Lagrange multiplier λ̂₁ = Q_bb⁻¹ (A Δx̂ + w), which leads to the simplified normal equation system

[[Aᵀ Q_bb⁻¹ A, Dᵀ], [D, 0]] ⋅ [Δx̂; λ̂₂] = [−Aᵀ Q_bb⁻¹ w; d̃]. (34)

Even though the normal equation system in Equation (28) already contains the solution for the requested parameters, the advantage of Equation (34) is its lower dimension, which can have a positive effect on the run time with regard to big data.
Furthermore, the application to the data set presented in section 4 has indicated that in individual cases the solution via Equation (34) is numerically more stable and therefore more reliable, depending on the geometric configuration. This relates directly to the number of constraints to be considered, which affects the condition of the normal equation matrix due to its multiple zero elements. Again, several iterations with repeated linearisation are required to obtain the requested parameters via the inverse of the normal equation matrix N*_C-GHM.
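A single linearisation step of the reduced, bordered system of Equation (34) can be sketched as follows. The helper name `solve_cghm_step` and its interface are illustrative assumptions; `d_vec` denotes the constraint misclosure already reduced to the parameter increments (d − D x₀).

```python
import numpy as np

def solve_cghm_step(A, B, Qll, w, D, d_vec):
    """One linearisation step of the constrained GHM (C-GHM):
    minimise v^T Qll^-1 v  s.t.  B v + A dx + w = 0  and  D dx = d_vec."""
    Qbb = B @ Qll @ B.T
    N = A.T @ np.linalg.solve(Qbb, A)
    n_vec = A.T @ np.linalg.solve(Qbb, w)
    u, c = N.shape[0], D.shape[0]
    # bordered normal-equation system in block structure (cf. Equation (34))
    M = np.block([[N, D.T], [D, np.zeros((c, c))]])
    rhs = np.concatenate([-n_vec, d_vec])
    sol = np.linalg.solve(M, rhs)
    dx, lam2 = sol[:u], sol[u:]
    lam1 = np.linalg.solve(Qbb, A @ dx + w)   # multipliers of the conditions
    v = -Qll @ B.T @ lam1                     # residuals honouring the constraint
    return dx, v, lam2
```

Fed with the explicit special case h(l, x) = A x − l (so B = −I), the step reproduces a constrained least-squares solution, which makes it easy to validate against hand computation.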

Recursive estimation
The batch processing methods presented in section 2 are suitable if all observations to be considered are available as a whole and have no temporal relationship. However, if new observations become available epoch by epoch, recursive estimation of the requested parameters should be used instead [30,41]. Such an approach is particularly recommended when evaluating mass data. Otherwise, the normal equation systems in Equations (12), (28) and (34) can become so large that numerical instabilities occur or no solution is possible without extensive computing power. For this reason, a recursive estimation can be worthwhile with regard to the necessary computing time, even if the observations are not available epoch by epoch but as a whole without a temporal reference. In the following, an approach is presented that enables recursive estimation also for functional relationships with reference to a GHM. The basic idea for this recursive GHM was already presented in [64] and is a special case of an IEKF for implicit measurement equations, originally introduced in [5,6]. In general, there are only a few contributions on such implicit filter approaches. While [53,54] also use an IEKF for implicit relations, [14] consider linear implicit relations through a KF. The consideration of non-linear relationships, but without an iteratively improved linearisation, is done in [15,52]. Except for [64], none of these approaches considers state constraints at all. We first briefly introduce this filtering approach of the IEKF with an implicit measurement equation.

Recursive state-space filtering
In contrast to parameter estimation, recursive state-space filtering uses additional system information besides the measurement information. This makes it possible to consider suitable physical models that mathematically describe the temporal and spatial dynamics of the requested state parameters. Thus, both measurements and physical properties are considered. In general, an IEKF is an advancement of the linear Kalman filter (KF) and enables the possibility to consider non-linear relationships [7]. Both are applied by means of a recursive two-step procedure for state estimation over a theoretically unlimited amount of epochs. Arbitrary types of state parameters x_k (e. g., position, orientation, velocity) are forecasted by a suitable process model f(⋅) during the prediction step and subsequently corrected by the latest sensor measurements within each epoch k. Selected physical relationships (e. g. motion models) are applied to the previous state parameters x_{k−1} from the last epoch. Further influencing factors, such as the zero-mean process noise w_k with VCM Σ_ww,k and the controls u_k, are also applied. According to [50], the non-linear process model is given by

x_k = f(x_{k−1}, u_k) + w_k. (35)

During the update step, the predicted state parameters are corrected using the latest sensor observations l_k via the non-linear measurement equations h(⋅). In most approaches, only explicit relationships (cf. Equation (1)) can be applied. However, the IEKF from [5,6] enables the application of implicit relations between observations and states, with analogies to the implicit functional model in section 2.1 (cf. Equation (2)). The measurement noise is described by the residuals v_k with the VCM Σ_ll,k, identical to Equation (4), within each epoch k.
To distinguish between the predicted and updated state parameters, x_k⁻ refers to the a priori estimate and x_k⁺ to the a posteriori estimate of the state vector. The corresponding VCM for the predicted state is given by Σ_xx,k⁻ and for the updated state by Σ_xx,k⁺. Consideration of the implicit measurement equation during the update step is done by the objective function L_IEKF according to [5,6] as follows:

L_IEKF(x_k, v_k, λ_k) = (x_k − x_k⁻)ᵀ (Σ_xx,k⁻)⁻¹ (x_k − x_k⁻) + v_kᵀ Σ_ll,k⁻¹ v_k + 2 λ_kᵀ (h(x̌_k, ľ_k) + A_k (x_k − x̌_k) + B_k (l_k + v_k − ľ_k)) → min,

where l_k⁺ = l_k + v_k are the updated observations and A_k and B_k are the Jacobian matrices with respect to the states and the observations. Here, x̌_k and ľ_k are the corresponding development points of the first-order linearisation. Furthermore, the Lagrangian multiplier is given by λ_k. Minimisation of L_IEKF leads to a linear normal equation system in block structure, analogous to the batch GHM. The filtered observations and states can be obtained by the inverse of the normal equation matrix N_IEKF, similar to the solution of the GHM in the batch processing approach. In contrast to an extended Kalman filter (EKF), which can also handle non-linear relations, the IEKF comprises repeated iteration runs m = 0 . . . M − 1 during the update step within each epoch k. This reduces the impact of the non-linearities and is comparable to the iterations of the GHM in batch processing (cf. section 2). In each epoch k, the development points x̌_k,m and ľ_k,m are initialised by

x̌_k,0 = x_k⁻, ľ_k,0 = l_k.

They are iteratively replaced by the currently filtered state estimates x̂_k,m⁺ and filtered observations l̂_k,m⁺ until a predefined number of iterations M or a corresponding abort criterion is reached. For all subsequent epochs, the recursive procedure alternating between prediction and update step is then performed. After building the inverse of N_IEKF and multiplying the respective equations, the whole algorithm can be summarised (cf. Algorithm 1 as well as the schematic flowchart in Figure 1). Further details are given in [64], whereas the full derivation is given in [5,6].
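The iterated update step with an implicit measurement equation can be condensed into the following sketch. The callables `h`, `jac_A`, `jac_B` and the fixed iteration count are illustrative assumptions; in practice a convergence-based abort criterion would replace `n_iter`.

```python
import numpy as np

def iekf_implicit_update(x_pred, P_pred, l, R, h, jac_A, jac_B, n_iter=5):
    """Sketch of the IEKF update for an implicit measurement equation
    h(x, l) = 0, with A = dh/dx and B = dh/dl at the development point."""
    x_dev, l_dev = x_pred.copy(), l.copy()        # development points (m = 0)
    for _ in range(n_iter):
        A, B = jac_A(x_dev, l_dev), jac_B(x_dev, l_dev)
        # misclosure of the linearised implicit condition
        w = h(x_dev, l_dev) + A @ (x_pred - x_dev) + B @ (l - l_dev)
        S = A @ P_pred @ A.T + B @ R @ B.T        # innovation-type matrix
        lam = np.linalg.solve(S, w)               # Lagrange multipliers
        x_upd = x_pred - P_pred @ A.T @ lam       # a posteriori state
        l_upd = l - R @ B.T @ lam                 # filtered observations
        x_dev, l_dev = x_upd, l_upd               # new development points
    K = P_pred @ A.T @ np.linalg.inv(S)           # gain of the last iteration
    P_upd = (np.eye(len(x_pred)) - K @ A) @ P_pred
    return x_upd, P_upd, l_upd
```

In the degenerate scalar case h(x, l) = x − l with equal prior and measurement variance, the update averages prior and observation, which serves as a quick sanity check.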
Algorithm 1: Versatile recursive state-space filter based on an IEKF for non-linear implicit and explicit measurement equations. In this context, it should also be mentioned that corresponding uncertainty information for the filtered observations is available in the form of the VCM Σ_ll⁺.

Recursive Gauss-Helmert model
The IEKF, described in Algorithm 1, is open towards different physical motion and measurement models. In contrast to the vast majority of existing IEKF realisations, both implicit and explicit relations can be applied (cf. Equation (3)). This is a crucial advantage and allows more flexibility regarding new applications. Furthermore, a recursive GHM can be introduced on the basis of this IEKF, which did not exist before. This new method was first introduced in [64] and makes it possible to obtain state estimates on an epoch-wise basis. As already motivated in section 1, such a recursive approach is particularly suitable for more efficient processing of big data. The main difference between recursive parameter estimation and recursive state estimation is the consideration of a dynamic model within the prediction step (cf. Equation (35)) if a filter approach is used. Therefore, to realise such a recursive GHM, the prediction step and the related system model of the IEKF are disregarded. In this case, the predicted state x_k⁻ (cf. line 7 in Algorithm 1) is identical to the filtered state x_{k−1}⁺ from the previous epoch. The same applies for the related VCM Σ_xx,k⁻ (cf. line 8 in Algorithm 1), whereby a suitable process noise Σ_ww,k can still be considered. Suitable in this context means that numerical instabilities are avoided, but no excessive influence results from its choice. In theory, the process noise should be zero for a recursive GHM. Since the realisation here is based on an IEKF, which requires a sufficiently accurate initialisation, the consideration of a non-zero process noise has a regularising effect. This behaviour could be identified within the framework of numerical investigations but will be analysed in more detail in the future. It has been shown that totally disregarding the system noise has a particularly negative effect on the estimation results of the first epochs, as these are given too much weight.
Thus, the approximate values for the subsequent epochs were also influenced, and the adjustment diverged in some cases. The respective order of magnitude and physical interpretation should be considered depending on the specific application. In each case, it is also important to investigate the extent to which the additional information in the stochastic model could lead to a possible bias in the estimate. Since taking the system noise into account influences the weighting in the update step between the predicted and observed quantities, too strong an input can lead to inaccurate results. In particular, comprehensive sensitivity analyses are recommended for this purpose. Further investigations on this can be found in [64]. For the prediction step of the recursive GHM, the following applies:

x_k⁻ = x_{k−1}⁺, Σ_xx,k⁻ = Σ_xx,k−1⁺ + Σ_ww,k,

while the complete update step remains unaffected by this modification. Neglecting the prediction within a KF thus provides the basis for applying a recursive state estimator as a recursive parameter estimator. In the case of KFs with explicit relationships, this procedure is also referred to as a random-walk model [38]. Since there is no longer any impact from a system model, the state parameters are only affected by the availability of new observations in each epoch. However, there are some minor differences compared to the estimation of a GHM in batch processing (cf. section 2.1). The overall adjustment iterates and linearises several times, but always directly for the entire data set. In the recursive approach, linearisation happens much more frequently because of the partition into epochs: each epoch starts a new linearisation with a new subset of observations, so that linearisation is performed at different development points, which affects the result. This is also evident in the application example in section 5.
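One epoch of the recursive GHM described above can be sketched as a degenerate filter step: the prediction reduces to the identity, optionally regularised by a small process noise, followed by the iterated implicit update. The function name `recursive_ghm_epoch`, the callables `h`, `jac_A`, `jac_B` and the scalar `Q_proc` default are illustrative assumptions.

```python
import numpy as np

def recursive_ghm_epoch(x_prev, P_prev, l_k, R_k, h, jac_A, jac_B,
                        Q_proc=0.0, n_iter=5):
    """One epoch of the recursive GHM: prediction degenerates to the
    identity (x_k^- = x_{k-1}^+); Q_proc acts as the regularising process
    noise discussed in the text; update step as in the implicit IEKF."""
    x_pred = np.asarray(x_prev, dtype=float).copy()
    P_pred = P_prev + Q_proc                       # Sigma_xx^- = Sigma_xx^+ + Q
    x_dev, l_dev = x_pred.copy(), np.asarray(l_k, dtype=float).copy()
    for _ in range(n_iter):                        # iterated update
        A, B = jac_A(x_dev, l_dev), jac_B(x_dev, l_dev)
        w = h(x_dev, l_dev) + A @ (x_pred - x_dev) + B @ (l_k - l_dev)
        S = A @ P_pred @ A.T + B @ R_k @ B.T
        lam = np.linalg.solve(S, w)
        x_dev = x_pred - P_pred @ A.T @ lam        # filtered state
        l_dev = l_k - R_k @ B.T @ lam              # filtered observations
    K = P_pred @ A.T @ np.linalg.inv(S)
    return x_dev, (np.eye(len(x_dev)) - K @ A) @ P_pred
```

Feeding scalar "measurements" through the implicit condition h(x, l) = x − l epoch by epoch reproduces a running mean, which illustrates the random-walk behaviour without a system model.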

Recursive Gauss-Helmert model with equality constraints
Consideration of suitable state constraints for arbitrary linear and non-linear KF approaches with explicit measurement equations is well established [18,50,51]. However, the combination of Kalman filtering with implicit measurement equations and state constraints is almost non-existent. A first approach is given by [62], where non-linear equality constraints are considered by means of a projection method. An extension regarding non-linear inequality state constraints using probability density function (PDF) truncation is given in [63]. However, both projection and PDF truncation will, in the case of implicit relations, lead to unfulfilled contradictions. This issue is stated in [63] and occurs due to the subsequent constraint step between the update and prediction step: only the states are affected during the constraint step of the projection or PDF truncation method, while the observations remain unaffected. For this reason, we present two methods for the consideration of equality constraints, which overcome this issue and were initially introduced in [64]. As these state constraints can be applied within the IEKF for implicit measurement equations, introduced at the beginning of section 3, they are also applicable to the recursive GHM (cf. section 3.2). This will be referred to as the recursive C-GHM with reference to the respective constraint method.

Constraint objective function (COF)
As for the GHM with equality constraints in batch processing (cf. section 2.2), the objective function of the recursive GHM (cf. Equation (39)) can be extended by an equality constraint according to Equation (18), leading to

L_C-IEKF = (x_k − x_k⁻)ᵀ (Σ_xx,k⁻)⁻¹ (x_k − x_k⁻) + v_kᵀ Σ_ll,k⁻¹ v_k + 2 λ₁ᵀ (h(x̌_k, ľ_k) + A_k (x_k − x̌_k) + B_k (l_k + v_k − ľ_k)) + 2 λ₂ᵀ (D x_k − d) → min.

Here, λ₁ and λ₂ are the Lagrangian multipliers of the constrained objective function (COF) L_C-IEKF. Setting the partial derivatives to zero results in a linear normal equation system in block structure. By considering the constraints, the dimension of the normal equation system has increased accordingly compared to Equation (42). The basic calculation procedure of the IEKF, as well as the calculation of A_k and B_k, remains unchanged. Again, this requires that N_C-IEKF be regular, which is given as long as linearly independent restrictions are applied. The associated VCM Σᶜ_xx,k with respect to the constrained states follows accordingly from the inverse of this extended normal equation matrix.
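One linearisation step of the COF update can be sketched by solving the bordered system for both multipliers jointly. The helper name `cof_update` and its interface are illustrative assumptions; `w` is the misclosure of the linearised implicit condition at the current development point.

```python
import numpy as np

def cof_update(x_pred, P, l, R, A, B, w, D, d):
    """Constrained-objective-function (COF) update sketch: implicit
    condition B v + A dx + w = 0 plus equality constraint D x = d,
    both enforced via Lagrange multipliers lam1, lam2."""
    S = A @ P @ A.T + B @ R @ B.T
    c = d - D @ x_pred                         # constraint misclosure
    APD = A @ P @ D.T
    M = np.block([[S, APD], [APD.T, D @ P @ D.T]])
    rhs = np.concatenate([w, -c])
    sol = np.linalg.solve(M, rhs)
    m1 = S.shape[0]
    lam1, lam2 = sol[:m1], sol[m1:]
    dx = -P @ (A.T @ lam1 + D.T @ lam2)        # constrained state increment
    v = -R @ B.T @ lam1                        # residuals
    return x_pred + dx, l + v                  # constrained state, filtered obs
```

Note that the observations are updated as well, so the implicit contradiction is fulfilled after the constraint is applied; this is exactly the property that projection and PDF truncation lack.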

Perfect measurements (PM)
In terms of explicit measurement equations, the consideration of equality constraints by means of so-called perfect measurements (PM) is commonly used: each equality constraint is treated as an additional measurement with zero measurement noise [42,50,51]. This idea can also be adapted for implicit relations. The available number of measurement equations is extended by artificial equations for the consideration of pseudo-observations

D x_k − l_d,k = 0, l_d,k = d,

where the measurement noise v_d,k of the constraint is always equal to zero. This results in an extension of the linearised measurement equations (cf. Equation (5)) with additional rows in the extended design matrix A* and condition matrix B*. In addition, the number of columns of B* also increases due to the additional perfect observations in the extended observation vector l*_k. Apart from that, the IEKF's calculation process does not change. However, due to the insertion of numerous zero elements, this method can lead to singularity problems [16,51]. For this reason, the use of the Moore-Penrose inverse according to [32] is recommended. Further possibilities for dealing with such possible numerical instabilities are described in [64].
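The augmentation described above might be sketched as follows; the function name `pm_update` and its interface are illustrative assumptions. The zero-noise rows of the extended VCM are what can render the system singular, hence the pseudoinverse.

```python
import numpy as np

def pm_update(x_pred, P, R, A, B, w, D, d):
    """Perfect-measurements (PM) sketch: each equality constraint is
    appended as an extra implicit equation observed with zero noise.
    The Moore-Penrose pseudoinverse guards against the singularities
    that the zero-noise rows can introduce."""
    m, n_l = B.shape
    c = D.shape[0]
    A_ext = np.vstack([A, D])                            # extra rows: constraints
    B_ext = np.block([[B, np.zeros((m, c))],
                      [np.zeros((c, n_l)), np.eye(c)]])  # perfect-obs columns
    R_ext = np.block([[R, np.zeros((n_l, c))],
                      [np.zeros((c, n_l)), np.zeros((c, c))]])  # zero noise
    w_ext = np.concatenate([w, D @ x_pred - d])          # constraint misclosure
    S = A_ext @ P @ A_ext.T + B_ext @ R_ext @ B_ext.T
    lam = np.linalg.pinv(S) @ w_ext                      # pseudoinverse, cf. text
    dx = -P @ (A_ext.T @ lam)
    return x_pred + dx
```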

Efficient calibration of laser scanner-based multi-sensor systems
Multi-sensor systems (MSS) are characterised by the fact that they consist of several individual sensors, which support each other optimally and ensure redundancy to fulfil the respective task. The realisation of such an MSS requires the assurance of precise synchronisation and knowledge of the spatial relationships between the individual sensors. In general, this process is referred to as calibration and connects the respective sensor's own local coordinate system (SOCS) with a fixed superordinate platform coordinate system (PCS). Mathematically this is described by a 3D Helmert transformation with 6 degrees of freedom (DoF) for each sensor. High precision and accuracy of these parameters are crucial, since all subsequent calculation and processing tasks are based on the correctness of these values [11,20,23,58]. For example, an angular deviation of about 0.1° already leads to a deviation of 1.7 cm on the object at a distance of 10 m. In general, it is recommended to determine the corresponding calibration parameters in advance in a laboratory under controllable conditions and with a high-precision reference sensor. Furthermore, it is also possible to determine the necessary DoF on-the-fly during the actual measurement process. However, this places higher demands on the necessary computing times, since real-time applications (such as autonomous driving) are becoming increasingly important. Especially when using 3D LS, this can become a challenge due to the continuously increasing amount of measurement data. For this reason, recursive approaches are indispensable in this context. Nevertheless, even if there are no real-time requirements and a calibration can be performed in advance, recursive approaches can be helpful as well. Depending on the available observations, the normal equation systems of common batch adjustments can quickly reach high dimensions, which leads to long computation times or higher demands on the computation power.
More often, available observations have to be omitted through homogeneous or random subsampling so that a calculation with the available computing capacities can be performed at all [2,43]. Therefore, a transfer to recursive approaches is advisable, which makes it possible to consider significantly more information. However, it is important to keep in mind that even if the integration of more observational data can positively impact the redundancy and uncertainty of the estimation results, this is not automatically advantageous or even always necessary. More observations do not necessarily contribute additional information content. For example, additional point observations of a plane do not directly provide any significant added value if a multitude of observations already covers the plane homogeneously. Furthermore, recursive estimation can be beneficial when a suitable data reduction is not straightforward or does not exist.
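The rule of thumb quoted above, that an angular deviation of 0.1° produces roughly 1.7 cm of lateral offset at 10 m, is quickly verified:

```python
import math

# lateral offset on the object = distance * tan(angular deviation)
offset_m = 10.0 * math.tan(math.radians(0.1))  # ~0.0175 m, i.e. about 1.7 cm
```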
In the following, we present a state-of-the-art approach for the calibration of an LS-based MSS using geometric object space information. The recursive estimation (recursive C-GHM) is compared, on the basis of simulated and real data, with the results from the classical batch solution (C-GHM) and discussed.

Experimental setup
For the sake of clarity, only a simplified representation of an MSS is given below. Nevertheless, the basic procedure can be transferred directly to any MSS with an LS to be calibrated. In particular, the SOCS of a Velodyne Puck VLP-16 is to be calibrated with respect to a specific PCS. The platform for mounting the LS has drilling holes that define the PCS (cf. Figure 2(a)). By means of these drilling holes, the joint adaptation can be applied universally on any other platform; the spatial relationship to the SOCS of the LS can then be established via the drilling holes and the calibration parameters. The platform with the LS is shown in Figure 2 together with both PCS and SOCS. The 6-DoF calibration parameters between these two coordinate systems are thus to be determined. The LS measures with 16 individual scan lines, which are almost perpendicular to its vertical axis and have an angular resolution of around 2°. Combining these single scan lines with an internal rotation results in a panoramic 3D point cloud (PCL). Altogether, the LS has a field of view of 30° × 360°. The manufacturer specifies a range accuracy of typically up to 3 cm [60], without indicating the associated quality parameter.
To determine the calibration parameters between the SOCS of the LS and the PCS, multiple reference planes are arranged in the field of view of the LS within the 3D-Laboratory of the Geodetic Institute of the Leibniz University Hannover. There are 12 planes in the field of view of the LS, which differ in their distance to the MSS and their respective size. A detailed description of the reference planes used, regarding their spatial distribution and orientation, is given in [21]. Attention must be paid to a versatile alignment of the individual planes in order to ensure high sensitivity and quality for the individual coordinate axes [21,23]. In this context, it should be noted that only a few (but at least two) of all 16 scan lines will capture each reference plane at all. This results from the distance and size of the reference planes. In order to obtain known nominal values for the reference planes, they must be determined and mathematically described with a sensor of superordinate accuracy. This can be done with a terrestrial LS or a total station, for example. For the investigations shown here, a highly accurate Leica Absolute Tracker AT960 was used which, according to [25], has a maximum permissible error (MPE) of ±15 µm + 6 µm/m for the 3D position. With its specific SOCS, the laser tracker defines a superordinate world coordinate system (WCS). Since this referencing sensor can also measure the drilling holes on the platform, an unambiguous relationship between all the coordinate systems mentioned can be realised. A part of the experimental setup, the relevant observations and the related coordinate systems are shown in Figure 3.
A rotation speed of 10 Hz is selected for the acquisition by the LS, with a total measuring time of 5 seconds. After cropping unnecessary points of the laboratory environment from the PCL and assigning the points to the reference planes, between 500 and 6000 points per plane are available for the calibration in total. To avoid outliers in the later analysis, the assignment is done manually. The whole data set was collected in the course of the work of [13].

State-of-the-art calibration approach
As the state-of-the-art approach, the calibration procedure for LS-based MSS using reference geometries (RFG) from [56] is used, which is inspired by the work of [44] and [17]. This methodical approach states that the distances between the observed reference planes and the measured 3D point observations of the LS to be calibrated should be minimal. To describe this condition in a functional model, the LS points are first transformed into the PCS,

P MSS,e,j = R SOCS-PCS · P LS,e,j + t SOCS-PCS , (58)

where P MSS,e,j are the transformed LS 3D points with respect to the PCS. To establish a connection to the WCS, a further transformation is necessary,

P LT,e,j = R PCS-WCS · P MSS,e,j + t PCS-WCS , (59)

where P LT,e,j are the transformed LS 3D points with respect to the WCS. In both Equations (58) and (59), R stands for the respective 3D rotation matrix resulting from the corresponding three Euler angles. Together with the respective translations t SOCS-PCS and t PCS-WCS, this results in the requested 6-DoF transformation parameters (Equations (60) and (61)). In order to estimate these parameters, the approach of [56] solves the calibration task by constraining the distances d j e within a GHM (cf. Equation (2)). The functional relationship between the observations and parameters reads

d j e = 0 = n T RFG,e P LT,e,j − d RFG,e , (62)

where the distances d j e are to be interpreted as contradictions of the GHM. Special attention must be paid to the plane parameters a RFG,e as well as the transformation parameters t PCS-WCS. These are quantities derived from observations of the laser tracker, but they are also used as parameters to be estimated. The reason for this is that corresponding approximate values are required for the initialisation, which can theoretically even be measured and taken into account for each epoch in the recursive estimate. In addition to the sequential laser scanner observations over time, the referencing observations can thus generally also be introduced multiple times. Although these observations from the laser tracker dominate due to their low measurement uncertainty, both have an influence on the plane parameters. Therefore, it is advisable to estimate these quantities as additional parameters accordingly. This also opens up the possibility of imposing constraints.
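To make the functional model concrete, the two concatenated rigid-body transformations (Equations (58) and (59)) and the resulting plane contradiction (Equation (62)) can be sketched in a few lines. The following snippet is an illustrative sketch, not the authors' implementation; the function names and the X-Y-Z Euler angle convention are assumptions.

```python
import numpy as np

def rot_xyz(omega, phi, kappa):
    """3D rotation matrix from three Euler angles (X-Y-Z convention assumed)."""
    cx, sx = np.cos(omega), np.sin(omega)
    cy, sy = np.cos(phi), np.sin(phi)
    cz, sz = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def plane_distance(p_ls, euler_socs_pcs, t_socs_pcs, euler_pcs_wcs, t_pcs_wcs, n, d):
    """Contradiction d_j^e: signed distance of a transformed LS point to a plane."""
    p_mss = rot_xyz(*euler_socs_pcs) @ p_ls + t_socs_pcs   # SOCS -> PCS (Eq. 58)
    p_lt = rot_xyz(*euler_pcs_wcs) @ p_mss + t_pcs_wcs     # PCS -> WCS (Eq. 59)
    return n @ p_lt - d                                    # implicit condition (Eq. 62)

# a point lying exactly on the plane z = 2 yields a zero contradiction
p = np.array([1.0, -0.5, 2.0])
zero = np.zeros(3)
print(plane_distance(p, zero, zero, zero, zero, np.array([0.0, 0.0, 1.0]), 2.0))  # 0.0
```

In the adjustment, these contradictions are driven to zero for all points and planes simultaneously.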
The first three elements of a RFG,e describe the 3 × 1 unit normal vector n RFG,e and the fourth element, d RFG,e, the distance to the origin. The observed quantities in this implicit relationship are arranged in the observation vector l (Equation (64)), with e = 1, . . . , E. In [56], both the plane parameters a RFG,e as well as the transformation parameters t PCS-WCS are considered to be original observations. This leads to a reduced adjustment problem and is recommended due to the high information density that already occurs. The stochastic model chosen for the adjustment sets all LS observations as uncorrelated and with the same standard deviation of σ LS,0 = 0.01 m. Thus, with respect to the observation vector from Equation (64), the VCM Σ LS,ll for the 3D LS points P LS,e,j is given as

Σ LS,ll = σ² LS,0 · I LS , (65)

where I LS is the identity matrix with the appropriate dimension in terms of the number of all LS observations. For the entire VCM Σ ll, uncertainty information of the 6-DoF calibration parameters t PCS-WCS as well as the plane parameters a RFG,e needs to be added. This variance-covariance information can be obtained in the context of a simplified pre-adjustment and is considered as stochastic prior information by means of a fully populated VCM. Thus, comparability with the recursive approach presented in section 4.3 can be realised.
In [19,56], the application of this method is based on the assumption that the transformation parameters between WCS and PCS and the plane parameters themselves are error-free quantities (deterministic) that are known and constant. This is justified by using a reference sensor with superordinate accuracy in relation to the uncertainty level of the LS. However, in the context of this paper, these parameters shall be considered as stochastic quantities. This is reasonable since using a 3D Velodyne VLP-16 with multiple scan lines allows estimation of the individual reference planes. In [19,56], this was not possible due to the respective use of a 2D profile LS. To enable a more general approach, the parameter vector here results as given in Equation (66). Since the plane parameters a RFG,e are part of the parameter vector, the consideration of constraints is necessary, so that their normalized unit length can be guaranteed and geometrical or numerical instabilities do not occur. This leads to a non-linear equality constraint (cf. Equation (19)) for all e reference planes regarding the respective normal vector n RFG,e as follows:

g e (x) = n² RFG,e,x + n² RFG,e,y + n² RFG,e,z = 1 . (67)
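The unit-length condition on each plane normal can be evaluated, and linearised, with a trivial helper. This is a minimal sketch with hypothetical function names; the gradient is what a constraint-handling technique such as COF or PM would use when linearising the constraint.

```python
import numpy as np

def g_unit_norm(n):
    """Equality constraint g_e(x): squared norm of the plane normal, required to be 1."""
    return float(n @ n)

def g_jacobian(n):
    """Gradient of the constraint w.r.t. the normal vector, used for linearisation."""
    return 2.0 * n

n = np.array([0.6, 0.0, 0.8])          # already unit length
print(g_unit_norm(n))                  # 1.0
print(g_jacobian(n))                   # [1.2 0.  1.6]
```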
For the consideration of this additional information in the batch approach, the C-GHM from section 2.2 has to be applied. The concatenated transformations, with the rotations they include, pose challenges for the adjustment. As already mentioned in section 2.1, suitable approximate values are required for the solution of the C-GHM to converge. These can be based on rough dimensions and can be updated after a successful adjustment, if necessary.
As a further extension to [19,56], it can be beneficial for the quality of the calibration parameters to perform the acquisition of the static reference geometries by the MSS from multiple different positions. In [13], it was shown that this allows the individual transformation parameters to be decorrelated. As long as the planes remain constant over time between the individual measurements, a better distribution of the measuring points on the planes can be realised. The use of multiple viewpoints may also allow certain planes that do not have geometrically sufficient coverage from a single viewpoint to be considered at all. For the adjustment approach, this means that one set of 6-DoF transformation parameters between PCS and WCS (cf. Equation (61)) must be estimated per position and, accordingly, further measurement equations must be included. The plane parameters a RFG,e remain unchanged, both numerically and in number. Even though the number of individual configurations is theoretically unlimited and advantageous, the amount of data to be taken into account increases enormously. For the batch approach in particular, this is not practical. For this reason, this article only deals with the combined use of two standpoints. This allows a comparison between the batch and the subsequent recursive approach, even if the latter has fewer difficulties in considering multiple viewpoints. The parameter vector from Equation (66) thus changes accordingly (cf. Equation (69)), now containing one set of transformation parameters t PCS-WCS,s per standpoint s.

Novel recursive and constrained calibration approach
In order to obtain the 6-DoF calibration parameters given in Equations (60) and (61), a recursive estimation can be performed in addition to the batch approach from section 4.2, thus achieving a higher efficiency. However, this requires a transition to discrete epochs. For this purpose, the available LS observations of all reference planes are subdivided into k = 1, . . . , K individual epochs. With regard to the experimental setup described in section 4.1, a complete 360° rotation of the LS is used for the definition of one single epoch k. It must be ensured that in each epoch every plane is represented by at least three to five point observations so that the recursive estimation can converge. For redundancy reasons, more observations are generally more appropriate, but this is the minimum configuration.
In the specific case here, this minimum configuration is mainly due to the fact that only a few point observations cover the smallest reference planes, while an approximately equal number of points per plane should be realised overall.
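The subdivision of the LS observation stream into epochs, one per full rotation, can be sketched as follows. The helper name and the use of per-point timestamps are assumptions for illustration.

```python
import numpy as np

def split_into_epochs(timestamps, rotation_rate_hz=10.0):
    """Assign each scan point to an epoch k, one epoch per full 360° rotation."""
    period = 1.0 / rotation_rate_hz          # 0.1 s per rotation at 10 Hz
    return np.floor(timestamps / period).astype(int)

t = np.array([0.01, 0.05, 0.12, 0.19, 0.25])  # seconds since start of acquisition
print(split_into_epochs(t))                   # [0 0 1 1 2]
```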
The basic workflow for the PCL processing is shown schematically in Figure 4. In this context, the allocation of the entire PCL (for each position s) with regard to the individual reference planes as well as the individual epochs is described. Compared with the parameter vector in Equation (69), the estimation of the state vector x k is now performed recursively for each epoch k. As already mentioned in section 4.2, parts of the observation vector are also considered in the recursive parameter vector as quantities to be estimated. In the specific experiment here, this comprises in detail 66 states (Equation (72)): the six calibration parameters between SOCS and PCS, two sets of six transformation parameters between PCS and WCS, and four plane parameters for each of the twelve reference planes. Furthermore, the non-linear constraint regarding the plane parameters according to Equation (67) has to be considered, which is also done epoch by epoch (Equation (73)). The techniques (namely COF and PM) described in section 3.3 are applied to account for these constraints within the recursive C-GHM. A comparison of the results among each other and in relation to the batch C-GHM follows in section 5.
The results include a comparison to another special method for considering the constraints, which was initially shown in [13]. This approach is based on a modified functional model and is not directly transferable to other application examples. Here, the idea is to supplement the functional relationship according to Equation (62) by a normalisation (norm) of the plane normal n RFG,k,e as follows:

d j e = 0 = (n T RFG,k,e P LT,k,e,1:N e − d RFG,k,e) / √(n² RFG,k,e,x + n² RFG,k,e,y + n² RFG,k,e,z) . (74)

However, in general, the techniques from section 3.3 offer greater flexibility towards other applications.
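The key property of the normalisation in Equation (74) is that the contradiction becomes invariant to a common scaling of the plane parameters, which is why no separate unit-length constraint is needed. A minimal sketch with a hypothetical helper name:

```python
import numpy as np

def contradiction_normalised(n, d, p_lt):
    """Eq. (74): plane contradiction with the normal normalised inside the
    functional model, so no separate unit-length constraint is required."""
    return (n @ p_lt - d) / np.sqrt(n @ n)

# scaling the plane parameters leaves the contradiction unchanged
n, d, p = np.array([0.0, 0.0, 2.0]), 4.0, np.array([1.0, 1.0, 2.5])
print(contradiction_normalised(n, d, p))          # 0.5
print(contradiction_normalised(2 * n, 2 * d, p))  # 0.5
```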
Just like the C-GHM in batch processing, the recursive C-GHM also requires sufficiently accurate approximate values for the state vector x 0 to initialize. Since the recursive C-GHM is based on an IEKF, it is necessary to specify an initial VCM Σ xx,0 for the state vector and to consider additional process noise Σ ww,k in general. The uncertainty information regarding the initial values can be provided through a simplified pre-adjustment, as already mentioned in section 4.2. This information is already considered in the GHM as stochastic prior information to ensure comparability between batch and recursive adjustment. The initial values for the recursive approach thus also result from such a pre-adjustment. Their specific numerical values depend on a random selection of the point observations used. The termination criterion for the iteration is the same in both approaches: the iteration process terminates when all contradictions are smaller than 1 · 10⁻¹⁰.
As already mentioned in section 3.2, the process noise σ w should theoretically be zero. However, investigations in [64] also indicate a regularising influence when using a suitable order of magnitude for the noise. This cannot be stated universally and depends on the respective application scenario and the corresponding influencing factors. For the recursive C-GHM here, a process noise Σ ww,k = I · σ w, with σ w = 1 · 10⁻⁶, is introduced to overcome numerical instabilities and to reduce the impact of observations in earlier epochs compared to later ones. As the dimension of the VCM Σ ww,k is identical to the dimension of the VCM Σ xx,k, the units of the individual elements σ w are analogous too.
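With constant calibration states, the filter time update reduces to adding the process noise to the state covariance. The following sketch, assuming an identity transition matrix and a hypothetical function name, illustrates the regularising role of σ w:

```python
import numpy as np

def time_update(x, Sigma_xx, sigma_w=1e-6):
    """IEKF time update with an identity transition: the calibration states are
    constant over time, so only the process noise inflates the covariance."""
    Sigma_ww = np.eye(len(x)) * sigma_w   # Sigma_ww,k = I * sigma_w
    return x.copy(), Sigma_xx + Sigma_ww

x = np.zeros(66)                # 66 states as in the calibration example
Sigma = np.eye(66) * 1e-4       # illustrative initial VCM
x_pred, Sigma_pred = time_update(x, Sigma)
```

Each epoch, the slightly inflated covariance down-weights old information relative to the newest observations.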

Simulated calibration environment
Since the state-of-the-art calibration procedure using the batch approach from section 4.2 cannot necessarily be regarded as the superior reference, another reliable validation option for the new calibration approach based on the recursive C-GHM with equality constraints (cf. section 4.3) is required. This enables an independent comparison between the two approaches. For this reason, the experimental setup from section 4.1 is additionally realised as a virtual simulation. The arrangement of the reference planes is reproduced faithfully with respect to their respective size, position, and orientation in relation to each other and the SOCS of the LS. All this information can be derived from the original measurements and generated based on the manufacturer's specifications of the Velodyne VLP-16. A significant weak point of the simulation is the implementation of a realistic stochastic model of the LS. This model is unknown and can be very complex. As already mentioned, the manufacturer only refers to a range accuracy of typically up to 3 cm [60], without indicating the associated quality parameter. There are no statements about the angle measurements, so they are not represented in the simulation. Due to the controlled environment and additionally available real empirical values, Gaussian noise with σ LS = 1 cm is selected for the LS observations along all coordinate axes. This is the same value as in the stochastic model in Equation (65). The main advantage of this simulated calibration environment is that the true values are known, and thus both precision and accuracy can be investigated. The reconstruction of the reference planes together with the SOCS of the LS is shown in Figure 5.

Results
The results shown in the following refer in section 5.1 to the measurement data of the simulated calibration environment (already introduced in section 4.4) and in section 5.2 to the real measurement data (already introduced in section 4.1). The applied methodological approach is identical in both cases and refers to the batch approach from section 4.2 and the novel recursive approach from section 4.3. Depending on the respective data source, the results are presented in different ways, which are briefly introduced and explained at the beginning of each subsection.
For practical reasons, all the results are based on 15 rotations of the LS only, which corresponds to a total measuring time of 1.5 seconds. One full rotation corresponds to one epoch k. In addition, to ensure comparability, only five LS point observations P LS,k,e are used per epoch k on each of the e = 1, . . . , 12 reference planes. The selection of this subset is made randomly, ensuring a geometrically stable distribution for the estimation of the plane parameters. For the batch data set, the observations of all 15 epochs are merged accordingly. This subsampling to only 5 points per plane is not compulsory but allows, besides improved comparability, the efficient execution of additional statistical Monte Carlo (MC) investigations. Thus, a meaningful comparison of the C-GHM in the classical batch approach for the different recursive methods (recursive C-GHM with COF, PM and norm) can be performed.

Simulated calibration data
The realisation and usage of simulated observations for the calibration of the LS with respect to the PCS allow statistical comparisons against the true nominal values. No large outliers were simulated within the model, but rather controlled conditions were established that correspond to the character of a well-controllable laboratory calibration. In order to make statistically verifiable statements, a closed-loop MC simulation (cf. [34]) is applied. Thus, a variety of slightly different realisations can be achieved. For this purpose, after simulating perfect 3D PCLs for both positions of the LS and assigning them to the individual e reference planes per epoch k, random subsampling is performed. As an additional variation, random normally distributed observation noise with a fixed value of 1 cm standard deviation for all axes is added within each of the S independent realisations. The following results refer to S = 1000 independently simulated data sets, each of which is evaluated using the different methodological calibration approaches.
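One realisation of the closed-loop MC simulation, random subsampling per plane plus 1 cm Gaussian noise on all axes, could look as follows. The data layout (a dict of per-plane point arrays) and the function name are assumptions for illustration:

```python
import numpy as np

def mc_realisation(points_true, sigma_ls=0.01, n_per_plane=5, rng=None):
    """One closed-loop MC realisation: random subsampling of n_per_plane points
    per reference plane plus Gaussian noise on all three coordinate axes."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = {}
    for plane_id, pts in points_true.items():
        idx = rng.choice(len(pts), size=n_per_plane, replace=False)
        noisy[plane_id] = pts[idx] + rng.normal(0.0, sigma_ls, size=(n_per_plane, 3))
    return noisy

# 12 planes with 200 perfect points each, as an illustrative stand-in
planes = {e: np.random.default_rng(e).normal(size=(200, 3)) for e in range(12)}
sample = mc_realisation(planes, rng=np.random.default_rng(42))
```

Running this S = 1000 times with fresh random generators yields the independently perturbed data sets evaluated below.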
At first, the required run time should be discussed, which is shown in Table 1 using the mean value over all S = 1000 realisations. The calculations were all performed in parallel on equivalent nodes of a computing cluster, allowing a relative comparison between the different methods. The fastest mean run time, around 3.9 s, is achieved by the recursive C-GHM (norm). The main driver of the run time is the size of the normal equation system that has to be inverted, which depends directly on the number of 3D point observations used. Therefore, processing all observations at once in the batch C-GHM shows a significantly higher run time than all recursive methods. The run-time differences among the recursive methods arise from the varying number of iterations until the termination criterion is satisfied within each epoch k. With regard to the standard deviations, it can also be seen that these increase slightly as a function of the mean run time.
Here, the largest values are again clearly apparent for the batch approach.

Table 1: Run times of the different methods with respect to the mean and the standard deviation (std) based on S = 1000 independent realisations.

  Method                   mean [s]   std [s]
  batch C-GHM                 184.9      27.5
  recursive C-GHM (PM)          5.2       0.9
  recursive C-GHM (COF)         4.5       0.8
  recursive GHM (norm)          3.9       0.7

In addition to the necessary run time, the actual calibration parameters are of interest. By knowing the true nominal values within the simulation, the root mean squared error (RMSE) serves as a statistical measure for comparing the different methods. The RMSE can be estimated within every S realisations for each epoch k according to

RMSE k = √( 1 / (S · n x) · Σ_{s=1..S} (x̃ − x̂ k,s)ᵀ (x̃ − x̂ k,s) )

using the differences between the nominal values x̃ and the estimated values x̂ k,s. Here, the parameter n x indicates the number of state parameters used. The RMSE value can be specified with respect to individual elements of the state vector (e. g. translation in X direction), to aggregations (e. g. 3D translation) or to the whole set of state parameters. In each case, attention must be paid to the consideration of the corresponding units. Two different types of calculations are presented for the following results: one with respect to only the first six parameters (the 6-DoF calibration parameters between the SOCS of the LS and the PCS) and one with respect to the entire state vector with 66 elements, including all the plane parameters.
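Pooling the deviations over all S realisations, the RMSE per epoch can be computed in a few lines. This is an illustrative sketch; the flat (S, n x) array layout is an assumption:

```python
import numpy as np

def rmse(x_true, x_hat):
    """Pooled RMSE over S MC realisations for one epoch.
    x_true has shape (n_x,); x_hat has shape (S, n_x)."""
    diff = x_hat - x_true
    return float(np.sqrt(np.mean(diff ** 2)))

x_true = np.array([1.0, 2.0])
x_hat = np.array([[1.0, 2.0],
                  [1.2, 1.8]])
print(rmse(x_true, x_hat))  # sqrt(0.08 / 4) ≈ 0.1414
```

Restricting `x_true`/`x_hat` to the first six columns gives the 6-DoF-only variant used in Figure 6.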
The results of the RMSE are presented in box plots grouped by the individual epochs. This is useful to show the whole set of results for all S realisations while reducing the impact of single failed runs or outliers. In a box plot, the circle indicates the median value, and the box describes all values between the 25th and 75th percentile. The whiskers span across all values in the range of 1.5 times the box interval. All other values are marked as outliers (red dots) but may only represent the tail end of the distribution. Due to the spread of resulting values, the vertical axis is scaled logarithmically. A comparison with the batch approach can be made by also calculating the RMSE for its estimated parameters and presenting it independently of the epochs. Figure 6 shows the results when the RMSE is based on the 6-DoF calibration parameters only. All recursive methods show comparable results since the median is almost identical. Therefore, there are no major improvements over the epochs in the recursive procedures. This indicates that additional observations have no significant influence on the parameters. It is noticeable that the batch solution (last column) has a slightly smaller RMSE value, which is due to the entirety of observations taken into account at the same time.
If the RMSE is determined on the basis of the entire state parameters, box plots as shown in Figure 7 are obtained. There is a slight increase in the RMSE values across all methods compared to Figure 6. In addition, all the investigated approaches behave identically and there is no change over the individual epochs. The previously occurring advantage of the batch approach is no longer present when the entire parameter vector is taken into account.
Furthermore, it is advisable to check the consistency of the recursive filter approach, which forms the basis for the recursive GHM. A state estimator is consistent if the estimates converge to the true values. Possible reasons why this is not the case are modelling, numerical or programming errors. A statistical test to verify the filter consistency is introduced in [1]. Since the true nominal values are known within the framework of this simulation, this verification is recommended. For this, the normalized (state) estimation error squared (NEES) e k is defined as

e k = (x̃ k − x̂ k)ᵀ Σ⁻¹ x̂x̂,k (x̃ k − x̂ k) ,

where the differences between the true state x̃ k and the estimated state x̂ k are considered for each epoch k. In addition, a weighting is applied via the VCM Σ x̂x̂,k of the estimated states. Since the random variable e k is a quadratic quantity, it is assumed to follow a χ² distribution with u x degrees of freedom, where u x indicates the number of states. Here, similar to the RMSE comparison, two different scenarios for u x are chosen: once with respect to only the first six parameters and once with respect to the entire state vector with 66 elements. This χ²-test is accepted if

r 1 ≤ e k ≤ r 2

is fulfilled, with the critical values r 1 = χ²(α/2, u x) and r 2 = χ²(1 − α/2, u x); a significance level of α = 5 % and u x = 6 leads to the corresponding critical values. The NEES is thus determined individually for each epoch k using all above-mentioned methods. Even though the batch approach is not based on a recursive filter, this static test is also applied to its estimated parameters. Finally, the results of all independent S = 1000 realisations can be summarised using box plots. The borders of the acceptance intervals for the NEES are shown as horizontal lines. Figure 8 shows the NEES for the first six parameters. Generally, the behaviour is similar to that shown in Figure 6 when using the RMSE. The only difference is that the NEES value for the batch solution deviates slightly from the recursive values, but apart from the outliers it is always within the limits.
The same applies to the recursive approaches. Also here, no changes can be detected over the individual epochs.
Significantly different results are shown when the entire state vector is considered for the NEES. The results in this case are shown in Figure 9. Considering the complete state vector for the NEES leads to a significant increase in all the random variables, whereas for the RMSE, the change is only about one order of magnitude. Therefore, all test values are larger than the χ² consistency interval. A reason for this is overly optimistic (co-)variances for the plane parameters. Over the individual epochs, all recursive methods show an improvement from the first to the second epoch, whereby this only improves further for the normalised method. A clear reason for this could not be found, even though this also indicates an overoptimistic VCM for the plane parameters in relation to the representation of the RMSE. Since the mentioned jump is not as strong in the approach with the normalised functional relationship, the consideration of constraints in the other two recursive methods may have an impact on this behaviour. Overall, the significantly more frequent occurrence of outliers in the recursive C-GHM with perfect measurements (PM) is striking. It becomes evident that the numerical instabilities due to a partially ill-conditioned normal equation system only affect the additional parameters and not the actual 6-DoF calibration parameters, as can be seen in comparison with Figure 8.
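The NEES and the two-sided χ² acceptance interval described above can be computed directly, for example with SciPy. This is an illustrative sketch with hypothetical helper names:

```python
import numpy as np
from scipy.stats import chi2

def nees(x_true, x_hat, Sigma_xx):
    """Normalized estimation error squared for one epoch."""
    diff = x_true - x_hat
    return float(diff @ np.linalg.solve(Sigma_xx, diff))

def chi2_interval(u_x, alpha=0.05):
    """Two-sided acceptance interval [r1, r2] of the chi-square consistency test."""
    return chi2.ppf(alpha / 2, df=u_x), chi2.ppf(1 - alpha / 2, df=u_x)

r1, r2 = chi2_interval(u_x=6)
print(round(r1, 2), round(r2, 2))  # approx. 1.24 14.45
```

An epoch passes the consistency test if its NEES value lies inside [r1, r2].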

Real calibration data
In contrast to the simulated observation data in section 5.1, the nominal 6-DoF calibration parameters for the real observations are unknown. Furthermore, no reference values can be realised on the basis of a sensor of superior accuracy and precision, since high-precision sensor technology and state-of-the-art methods are already in use. As already mentioned above, the use of an MC simulation is therefore advisable. Thereby, many different data sets are realised, which differ exclusively based on the random subsampling of the LS observations. This enables an analysis of the scatter of the estimated parameters and a comparison between the individual methods.

Table 2: Results of real data processing compared (mean and standard deviation) for the values between the 0.5 % and the 99.5 % quantile.

  Method                   t SOCS-PCS,x [mm]   ω SOCS-PCS [°]
  batch C-GHM                  15.45 ± 0.29      0.151 ± 0.027
  recursive C-GHM (PM)         13.76 ± 0.31      0.153 ± 0.053
  recursive C-GHM (COF)        13.77 ± 0.31      0.155 ± 0.053
  recursive GHM (norm)         13.78 ± 0.31      0.153 ± 0.055

The actual comparison at the level of the parameters is done exclusively for the translation t SOCS-PCS,x and the angle around this axis, ω SOCS-PCS. For the other 6-DoF transformation parameters, the results are similar and are therefore omitted here. For the recursive methods, the estimated value of the last epoch k = 15 is used. As discussed in section 5.1, the usage of the recursive C-GHM (PM) may lead to numerical issues with strongly deviating parameters in some of the 1000 independent realisations. For the data set used, the number of deviating realisations is less than 10. To compare the distributions of the resulting parameters, all values below the 0.5 % and above the 99.5 % quantiles were removed from the results of all methods. The remaining values are used to compute the statistics in Table 2. It can be noted that the batch solution reaches a minimum that differs by 1.7 mm for the translation and 0.03° for the rotation compared to the three recursive approaches. This deviation in the transformation parameters was already apparent on the basis of the RMSE in Figure 6. In addition, the slightly lower standard deviations for the batch method are noticeable, which is clearly evident for the rotation.
The comparison of the different adjustment techniques in Table 2 is based on a variety of different subsampled realisations. However, this provides insufficient insight into which approach best fits the true values; only conclusions regarding the precision of the individual approaches can be stated. Another possibility for a statistically verifiable analysis of the different methods without known nominal values is the application of a (K-fold) cross-validation. According to [9,10,29], this method is based on the strategy of randomly assigning the available measurement data into K exclusive, equally sized subsets (also referred to as folds) within a multitude of individual runs. Thus, all available observations are used, and no subsampling is carried out as before. K − 1 subsets are selected and summarised as the training data set. The one remaining subset is referred to as the test data set. Using the larger training set, the unknown 6-DoF calibration parameters are estimated as before. Afterwards, the smaller test set is used to verify the results (by means of the contradictions d j e according to Equation (62)) based on the training set. This approach thus allows a relative comparison between the different approaches based on the distribution of the contradictions that occur. Subsequently, the assignment of the individual subsets to the two types of data sets changes K − 1 times, and the contradictions are calculated again in each case. Overall, this procedure is referred to as one single run of the cross-validation. The usage of K folds thus ensures that each observation is used once for testing. For statistically reliable statements, a multitude of such runs have to be performed, whereby the assignment of the original observation data to the K subsets is always random. All individual contradictions over all runs can then be represented in a histogram and the statistical central moments determined.
For the investigations presented here, K = 10 is chosen, and a total of 100 runs are carried out. This allows a comprehensive distribution of the points and a variety of independent results within a feasible period of time.
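The random assignment into K exclusive, equally sized folds and the rotation of the test fold can be sketched as follows. The helper name is hypothetical, and the estimation and contradiction evaluation are only indicated by comments:

```python
import numpy as np

def kfold_indices(n_obs, k=10, rng=None):
    """Randomly assign n_obs observations to K exclusive, (nearly) equal folds."""
    rng = np.random.default_rng() if rng is None else rng
    idx = rng.permutation(n_obs)
    return np.array_split(idx, k)

folds = kfold_indices(100, k=10, rng=np.random.default_rng(0))
# each observation appears in exactly one fold
assert sorted(np.concatenate(folds).tolist()) == list(range(100))

# one run: each fold serves once as the test set
for i, test in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # estimate the calibration parameters on `train`,
    # then evaluate the contradictions d_j^e (Eq. (62)) on `test`
```

Repeating this run with fresh random fold assignments yields the pooled contradiction histograms discussed next.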
The resulting large amount of observations prohibits timely processing with the batch C-GHM, so it is omitted and the investigations are carried out only among the recursive approaches. Instead, a recursive GHM without constraints is applied, in which the referencing observations from the laser tracker are treated as deterministic variables. This can again indicate the added value of using geometric constraints. The histograms of all contradictions for the corresponding four recursive methods are shown in Figure 10. The contradictions of the recursive GHM without constraints show the most extensive spread in the results. In addition, a multimodal distribution and a slight bias can be identified for this approach. This is assumed to be caused by the LT observations, which were only considered as deterministic quantities in this specific approach. This behaviour thus illustrates the significance of estimating the additional parameters within the other methods, although this needs to be analysed in more detail in further investigations. Also, some of the contradictions are beyond the value range of ±4 cm given here. This indicates that considering the laser tracker observations as stochastic variables leads to improved precision and accuracy of the adjustment. When constraints are taken into account, there are only minor differences in the results regarding the spread.

Discussion
Based on the simulated and the real calibrations, the following findings can be made. In general, the batch and the different recursive approaches do not differ significantly in terms of the 6-DoF calibration parameters. However, the required computation time is significantly lower with the recursive approaches, which is a significant advantage, especially for larger data sets. In addition, it is shown that for the recursive calculation, one or two epochs can already be sufficient for accurate calibration parameters. This can positively affect the required run time, since additional observations do not necessarily lead to a steady improvement, at least in the application presented here. In this context, however, future investigations should also examine for the batch approach to what extent the number of observations affects the quality of the parameters and the required computation times.
For the consideration of the constraints in the recursive approaches, it is noticeable that numerical instabilities in the PM method may cause wrong parameters conspicuously often. Consideration using the COF method or the normalized functional relationship is more reliable in this respect. The latter is characterized by the lowest overall run times but leads to less accurate estimates of the plane parameters. It has to be noted that a comparison of the plane parameters to the nominal values is technically not possible when using the normalized functional relationship: because of the missing constraint, the parameters are only determined up to a scale factor in comparison to the true values. However, the parameters of the reference geometries are consistently inaccurate with all applied methods. A reason for this behaviour could not be found. This is also reflected in the fact that the calculated (co-)variances of the plane parameters are too optimistic and lead to an inconsistent estimation. However, if only the 6-DoF calibration parameters are considered, the consistency of the recursive estimator based on the χ²-test can be confirmed.
The general advantage of using constraints when calibrating with reference geometries is shown in the context of cross-validation. If the plane parameters are introduced as deterministic quantities, this leads to a significantly wider dispersion of the contradictions. The specific method by which the constraints are taken into account is of secondary importance. The accuracy and precision with respect to the pure 6-DoF calibration parameters are similar overall, although there are relevant differences with respect to the numerical stability.
The major advantage of the recursive C-GHM is the significant run-time improvement compared with a batch approach. This can be of particular relevance for potential real-time applications or steadily increasing data sets. With respect to the calibration of an LS, for example, this may be the case if the sensor has multiple scan lines (e. g. Velodyne HDL-64 [59]), if the calibration setup uses more individual reference geometries (for higher redundancy), or if multiple individual positions are used for a more reliable estimation (for decorrelation).

Conclusion
The steadily increasing application of high-resolution sensors for environment acquisition and the simultaneous combination of multiple individual sensors as MSSs results in increasingly large amounts of data. In this paper, a recursive GHM was presented, allowing more efficient processing of mass data than existing batch methods with unchanged uncertainty characteristics. The basis for this is an IEKF, which can take implicit observation functions into account. Depending on the respective application of this new methodology, it is necessary to select suitable approximation values and a suitable process noise to avoid numerical instabilities. To consider additional information, various possibilities for introducing appropriate constraints and their corresponding added value have also been pointed out. This additional information is currently limited to equality constraints, although the transition to inequality constraints opens up further possibilities and should therefore be aimed for. In contrast, the consideration of soft constraints is possible without further effort, although numerical issues may arise and were therefore not investigated here.
As a practical application example for the methodology, the determination of calibration parameters of a 3D LS concerning a fixed PCS was presented. This is a mandatory procedure for the precise operation of any MSS. Especially for mobile mapping applications or autonomous driving, the novel recursive calculation offers new possibilities (e. g. in-situ calibration). However, this requires robustification of the approach against the influence of outliers and the automatic detection and assignment of suitable geometric primitives in object space. With respect to the actual 6-DoF calibration parameters, there were no significant differences, but the recursive computations were significantly faster than the batch approach. However, methodologically induced numerical instabilities may appear when the constraints were considered as PM. For this particular application example, further investigations into the joint estimation of more than two positions are foreseen in the future to achieve an improved decorrelation of the calibration parameters.
In addition, the recursive C-GHM methodology is also transferable to any other application where efficient adjustment of epoch-wise observations or mass data is required. To further validate the methodology, it is recommended to use applications that enable an independent and higher-level reference solution.