An overview of lab-based micro computed tomography aided finite element modelling of wood and its current bottlenecks

Abstract: Microscopic lab-based X-ray computed tomography (XµCT) aided finite element (FE) modelling is an increasingly popular method within material science to predict local material properties of heterogeneous materials, e.g. elastic, hygroexpansion and diffusion properties. This method is relatively new to wood and lacks a clear methodology. Research intended to optimise the XµCT aided FE process often focuses on specific aspects within this process, such as the XµCT scanning, segmentation or meshing, but not the entirety of the process. The compatibility and data transfer between aspects have not been investigated to the same extent, which creates errors that propagate and negatively impact the end results. In the current study, a methodology for the XµCT aided FE process of wood is suggested and its bottlenecks are identified based on a thorough literature review. Although the complexity of wood as a material makes it difficult to automate the XµCT aided FE process, the proposed methodology can assist in a more considered design and execution of this process. The main challenges that were identified include an automatic procedure to reconstruct the fibre orientation and to perform segmentation and meshing. A combination of deep-learning segmentation and geometry-based meshing is suggested.


Introduction to CT
Ionising radiation CT, such as X-ray CT and γ-ray CT, can be used to determine the heterogeneous structure of materials in a non-destructive and non-invasive manner, providing three-dimensional (3D) and four-dimensional (4D, 3D + time) information (Bucur 2003). The technique gives detailed tomograms reconstructed from two-dimensional (2D) radiographs, with greyscale information at each voxel (volume element). A radiograph is an image formed by X-rays transmitted through an object and collected digitally, whereas greyscale indicates the range of voxel values within a tomographic data set (Withers et al. 2021). Radiographs of an object at a given angle are called projections, which combined allow for a 3D reconstruction of the object.
The most critical resolutions in CT scanning are spatial, contrast and temporal resolution. Spatial resolution refers to the size of the smallest possible feature that can be detected inside a tomogram. As a rule of thumb, three times the pixel size can be assumed (Lindgren 1992). The contrast resolution quantifies the ability to accurately measure slight differences in density between neighbouring regions within a tomogram, whereas temporal resolution is defined as the amount of time needed to capture two consecutive tomograms (Lindgren 1992; Withers et al. 2021). For high (spatial) resolution microscopic (micro) CT, the voxel size is generally >0.1 µm (micron) and for low resolution macroscopic (macro) CT, the voxel size is generally >100 µm. A smaller voxel size inevitably necessitates a smaller specimen, hence the difference in specimen size between industrial and lab-based scanners.
X-ray µCT is becoming increasingly popular in material science (e.g. Auenhammer et al. 2022; Buljac et al. 2018; Salvo et al. 2003; Stock 2013; Zauner 2014), since it allows for the structural characterisation of heterogeneous materials at the micro material level. This ability makes it highly suitable for wood. At the micro material level, a direct correlation can be seen between the length, diameter and shape of the wood tracheids and its density and hygromechanical behaviour (Persson 2000). The most commonly used techniques for X-ray µCT are laboratory-based X-ray tube µCT (XµCT) and synchrotron radiation-based X-ray µCT (SRµCT). As the names suggest, the main difference between XµCT and SRµCT is the radiation source. XµCT uses an X-ray tube, whereas SRµCT uses a cyclic particle accelerator. Their beams differ in terms of X-ray flux, source size and X-ray energy spectrum. The tube sources used in XµCT emit a wide range of X-ray energies, often in a cone-shaped beam, referred to as polychromatic. The scan times range from minutes to hours depending on the required resolutions. SRµCT can generate more flux than tube sources using a monochromatic beam (one X-ray energy). This creates a sensitivity to small differences in X-ray absorption and limits certain artefacts (Withers et al. 2021). The scan time ranges from subseconds to minutes. Although XµCT is becoming rapidly faster (Zwanenburg et al. 2022), SRµCT is more suited when high temporal resolution is needed.

XµCT aided FE modelling
XµCT aided FE modelling uses XµCT data for fast and accurate prediction of critical parameters for modelling. Adopting suitable image-processing techniques, the method uses information from tomograms to create e.g. the geometry of the model, the orthotropic material orientation, and boundary conditions, and to perform needed model calibrations and validations (Auenhammer et al. 2021, 2022; Florisson et al. 2022; Huber et al. 2022). Here, calibration indicates the iterative adjustment of model material properties until good agreement is found between modelled and expected behaviour. Validation is the procedure where the ability of the model to predict the material behaviour is assessed using a set of chosen material properties.
The research conducted on XµCT aided FE modelling of wood is limited. Hachem et al. (2018) used the method to determine the thermal conductivity and diffusion coefficient of Norway spruce, whereas Badel and Perré (2002) used XµCT aided FE modelling to predict the elastic and hygroexpansion properties of oak. In Kamke et al. (2014) and Hammerquist and Nairn (2018), the method was used to study the mechanical behaviour of a phenol-formaldehyde adhesive bond line in wood through a combination of FE analysis and the material point method. XµCT aided FE modelling was also found suitable to study wood composites. For example, in Miettinen et al. (2016), and later in Fortino et al. (2017), the hygroexpansion coefficients of polylactic acid (PLA) reinforced birch pulp were estimated using this method. Verho et al. (2022) used the method to obtain the elastic moduli of wood composites in combination with a homogenisation modelling strategy. XµCT aided FE modelling was also employed to study the mechanical behaviour of nanocellulose foams (Srinivasa 2017).
Currently, no clear integrated methodology for XµCT aided FE modelling of wood exists. Most research focuses on specific aspects of the XµCT aided FE procedure separately, such as image resolution, image segmentation or model meshing. This discontinuous approach undermines the compatibility between the steps that make up the XµCT aided FE process and complicates an efficient transfer of information between steps (Auenhammer et al. 2021; Keyak et al. 1990). Non-automated, i.e. manual, procedures can lead to realistic and specimen-specific estimation of material behaviour (Keyak et al. 1990), but can also create labour-intensive processes and human-induced errors within and between each step (Auenhammer et al. 2021).

Research focus
The current paper proposes a methodology for XµCT aided FE modelling of wood and identifies the bottlenecks associated with this process. The methodology focuses on fast, accurate and repeatable data transfer with a high degree of automation by adopting commercial software. The proposed methodology is based on an extensive literature review. The literature review also attempts to present the state-of-the-art for XµCT aided FE modelling of wood. Studies with a focus on materials whose methods are equally applicable to wood, e.g. bone, foams and fibre composites, have also been included. A methodology is suggested in Section 2 and tested in detail in Florisson et al. (2023). A supporting background of the methodology is provided in Section 3 and the bottlenecks are discussed in Section 4.

Methodology
The suggested XµCT aided FE methodology for wood is presented in Figure 1. The methodology was developed for heterogeneous microstructures such as wood, with a temperature and moisture dependent mechanical material behaviour. Based on the literature review, five different steps were identified, labelled source, experiment, image processing, model development, and product. Each step consists of several relevant sub-steps, which are presented in Figure 1 and supported by Table 1. A reference to the relevant sub-sections can be found in Table 1. Each step will be discussed in detail in Section 3 and their bottlenecks in Section 4. Illustrations from literature are used throughout each section to support the text. In addition, results from a static state XµCT scan of compression wood obtained from a Norway spruce tree branch are used to clarify the specimen preparation, XµCT scanning, image reconstruction, segmentation and mesh steps and some of their bottlenecks. The main aim of the scan was to obtain a spatial and contrast resolution high enough to identify the microstructure in the proposed steps.
The specimen preparation step is based, amongst other references, on the research conducted by Zauner (2014), who expresses the importance of specimen geometry and conditioning on the quality of tomograms. The inclusion of an XµCT scanning and an image-reconstruction step is straightforward, and their importance for image quality has been discussed in great detail since the arrival of the first lab-based scanner in the eighties. The material characterisation step is based on the extensive work by Lindgren (1988, 1992), who created the foundation for the use of X-ray computed tomography in the Northern Swedish sawmills today. The segmentation step is identified as a standard procedure when geometry (complex geometry, geometry-based meshing) is an important aspect in the development of the FE model (Auenhammer et al. 2021). Since wood is a fibrous material and the orientation of the fibres largely contributes to the hygromechanical behaviour of wood, the fibre orientation step is of essence. The importance is well illustrated by, for example, Huber et al. (2022). The meshing, mapping and modelling steps were identified in the work by Florisson (2022). Review articles, such as Roux et al. (2012), Maire and Withers (2014), Buljac et al. (2018), and Withers et al. (2021), show the compatibility between methods such as X-ray computed tomography, finite element modelling, DIC and DVC.
Literature review

Specimen preparation
The step specimen preparation focuses on the proper preparation of specimens to facilitate the subsequent steps of the XµCT aided FE process. The step necessitates decisions on specimen size, geometry and preparatory measures (conditioning, contrast agents) to minimise image artefacts and optimise the spatial, contrast and temporal resolution of the tomogram. Size and geometry should accommodate the confined testing space associated with XµCT and prevent artefacts. Zauner (2014) shows that specimens used in compression tests require horizontally symmetric surfaces for mounting and loading in the vertical direction, and that a cylindrical specimen shape can prevent tomographic artefacts and lead to more accurate predictions of compression stress. Since wood is a hygroscopic material, it will always try to establish an equilibrium with the outside environment. Correct environmental conditioning is therefore important to prevent artefacts (Florisson et al. 2022). In Figure 2, a rendered tomogram of the 1 mm³ cube of Norway spruce compression wood is presented. The tomogram was obtained with a Zeiss Xradia 510 Versa XµCT scanner. The cube was conditioned at room climate and scanned in a small sealed enclosure to prevent motion artefacts caused by differences in relative humidity between room and scanner.
In addition, the dimensions of the specimens were chosen to benefit the desired spatial resolution. Materials that have large differences in electron density will experience a relevant attenuation contrast (Withers et al. 2021). The attenuation coefficient increases with increasing atomic number due to the scattering of electrons and decreases with increasing X-ray energy (voltage, keV). In the case of low contrast materials, the attenuation coefficient, and therewith the contrast resolution, can be promoted by manipulating the chemical composition of materials by adding a contrast agent (e.g. Ching et al. 2018; Kamke et al. 2014; Kibleur et al. 2022a; Li et al. 2013; Paris et al. 2014, 2015; Withers et al. 2021).

XµCT scanning
The step XµCT scanning is designed to select the type of XµCT scanner, experiment and scanning parameters to optimise the quality of the images and aid the overall XµCT aided FE process.

Computed tomography methods
The most popular categories of XµCT are attenuation contrast and phase contrast imaging. The attenuation contrast, or absorption mode, is the most conventional mode to perform XµCT (Derome et al. 2011; Salvo et al. 2003). This mode is founded on the Beer-Lambert law,

$$I = I_0 \exp\left(-\sum_{i=1}^{n} \mu_i z_i\right) \quad (1)$$

where, for an object comprising multiple materials (i = 1 … n) in the beam path, I is the intensity of the transmitted X-ray beam, I_0 is the intensity of the original X-ray beam, μ_i is the linear attenuation coefficient of material i, and z_i is the path length through material i. The law states that as the radiation moves through the material, it attenuates, or in other words, gradually loses flux intensity. Phase contrast imaging can be used for materials that attenuate similarly (Endrizzi 2018), making use of the phase shift that defines the refractive index. Since the detectors cannot measure a phase shift, this must be retrieved from the recorded patterns of the intensity, called phase retrieval. Patera et al. (2018b) used phase contrast imaging to study sorption and swelling behaviour of spruce. A deeper understanding of attenuation contrast and phase contrast imaging can be obtained by reading Maire and Withers (2014), Endrizzi (2018), and Withers et al. (2021).
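As an illustration of Eq. (1), the following sketch computes the transmitted fraction I/I_0 for a hypothetical beam path through cell-wall material and air-filled lumen; the attenuation coefficients and path lengths are purely illustrative assumptions, not measured values.

```python
import numpy as np

def transmitted_fraction(mu, z):
    """Beer-Lambert law (Eq. 1): fraction I/I0 transmitted through a beam
    path composed of n materials with linear attenuation coefficients
    mu_i (1/mm) and path lengths z_i (mm)."""
    mu = np.asarray(mu, dtype=float)
    z = np.asarray(z, dtype=float)
    return float(np.exp(-np.sum(mu * z)))

# Illustrative values only: 0.4 mm of cell-wall material and 0.6 mm of
# air-filled lumen along a 1 mm path.
print(transmitted_fraction(mu=[1.2, 0.0], z=[0.4, 0.6]))  # ~0.62
```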

Experimental methods
Although the scanning times of XµCT are becoming shorter, the technique is most often used to image static state problems (Garcea et al. 2018; Zwanenburg et al. 2022). The method is also suitable to image dynamic processes that occur over a longer period of time. For dynamic processes that are not easily controlled, a post-mortem imaging approach can be maintained (Cordes et al. 2015). In such a case, tomograms will be acquired before and after the dynamic process has occurred. An interrupted in-situ approach can be taken for processes that can be easily controlled. The specimen will then be incrementally exposed to the dynamic process, with tomograms acquired between each interrupted step to create a time-lapse sequence of images during deformation and damage development (Cordes et al. 2015; Sisodia et al. 2019). Specimens can be tested using ex-situ and in-situ XµCT testing techniques (Buljac et al. 2018). Ex-situ testing means that the test rig is positioned outside of the scanner. In such a situation, the specimen is scanned prior to, after and, in the case of steady state, during testing. In-situ testing indicates that the test rig is mounted inside the scanner (Buffiere et al. 2010). This requires a suitable specimen and test-rig size, and a test setup that does not affect the X-ray propagation (Zauner 2014). Wood is most often imaged using an ex-situ post-mortem approach, since environmental differences and time-dependent behaviour easily affect the material. Nevertheless, more recent studies have used an interrupted in-situ approach to investigate the swelling interactions of earlywood and latewood of spruce (Patera et al. 2018b) and water transport in medium-density fibreboards and oriented strand boards (Li et al. 2016). Uninterrupted in-situ testing of dynamic processes, such as moisture flow and elastic behaviour of wood, can be done using SRµCT (Couceiro et al. 2020; Forsberg 2008; Forsberg et al. 2008, 2010; Zauner 2014; Zauner et al. 2016). Fast imaging (around 0.3 s) in an uninterrupted in-situ setting can also be achieved using lab-based CT scanners, but usually at the cost of spatial resolution (Florisson et al. 2022; Garcea et al. 2018).

Scanning parameters
Identifying the appropriate scanning parameters begins with the needed spatial, contrast and temporal resolutions, which can be linked to the required object size, features of interest, and allowable dose of radiation (du Plessis et al. 2020). These decisions determine the field of view (FOV) and the type of scanning: region of interest (ROI) or image stitching (Withers et al. 2021). The material type and the path length through the specimen determine the maximum X-ray energy of the beam. The maximum energy of the polychromatic spectrum is set by the acceleration voltage (kV), whereas the current (mA) is the amount of charge generated by the X-ray tube per unit time. Voltage and current together amount to the power (Watt) produced by the scanner. A lower energy gives a higher attenuation contrast between different phases, but leads to lower transmission. A rule of thumb to determine the lowest acceptable energy is a required transmission of >10–20 % in all projections (Withers et al. 2021). The last step is the choice of optical scanning parameters. In Section 4.2, a thorough overview of such parameters is given together with their influence on image quality. The wood cube shown in Figure 2 was scanned using attenuation contrast imaging. The scanning involved a static state problem, since the purpose of the scanning was the investigation of the microstructure. The specimen geometry and scanning settings were chosen to obtain a spatial and contrast resolution that can capture the difference between lumen and cell wall, assuming a low-density material. Before scanning, an estimation of the size of these detectable objects was made and several test runs were performed to determine the ideal scanning parameters. The scanning settings that were used were a voltage of 50 kV, a power of 4.5 W, a source distance of 4.81 mm, an objective of 20×, a detector distance of 5.78 mm, a camera binning of 2, an exposure time of 0.7 s, and 2801 projections. This resulted in a spatial resolution of 0.61 µm and a scan time of 1.5 h. No X-ray filters were needed to account for, for example, beam hardening. However, to optimise the quality of the tomograms and account for potential ring artefacts, a low energy filter (LE1, Zeiss Xradia 510 Versa XµCT scanner) was used.
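The rules of thumb quoted above (detectable feature size of about three times the voxel size, and exposure time per projection times the number of projections as a lower bound on scan time) can be checked against the reported settings. The sketch below uses the stated values for the compression wood cube; the difference to the reported 1.5 h total is assumed to be detector readout and positioning overhead.

```python
voxel_size_um = 0.61      # reported spatial resolution
exposure_s = 0.7          # reported exposure time per projection
projections = 2801        # reported number of projections

smallest_feature_um = 3 * voxel_size_um          # rule of thumb (Lindgren 1992)
acquisition_h = exposure_s * projections / 3600  # exposure only, no overhead

print(f"smallest reliably detectable feature: {smallest_feature_um:.2f} um")    # ~1.83 um
print(f"pure exposure time: {acquisition_h:.2f} h (reported scan time: 1.5 h)") # ~0.54 h
```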

Image reconstruction
In the image reconstruction step, a mathematical procedure is selected to generate tomograms from the X-ray projections acquired at different angles around the scanned object. During image reconstruction, the attenuation coefficients (or phase decrements for phase contrast) are computed for different X-ray absorption paths obtained as a set of projections (Endrizzi 2018; Murphy and Haouimi 2022). The relationship between the projections is described by the Radon transform, which is an integral transform used to determine the one-dimensional profile of many projections: a sinogram (Withers et al. 2021). Reconstruction algorithms can be subdivided into analytic and iterative methods. The most common techniques available are the iterative algorithm without statistical modelling, the iterative algorithm with statistical modelling, the back projection, and the filtered back projection (Murphy and Haouimi 2022). The tomogram of the compression wood cube displayed in Figure 2 was created using a filtered back projection provided by the supplier of the scanner. The most recent development in image reconstruction is the adoption of machine learning methods. Machine learning methods can produce much better reconstructions than the conventional methods mentioned above. In addition, this category of reconstruction methods allows for a higher temporal resolution, due to shorter detector integration times and the possibility of fewer projections (Withers et al. 2021; Zwanenburg et al. 2022).
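As a minimal illustration of filtered back projection, the sketch below forward-projects a synthetic 2D slice into a sinogram and reconstructs it with scikit-image (assuming a recent version where the `filter_name` argument exists); commercial reconstruction software, as used for Figure 2, implements the same principle in cone-beam geometry with many refinements.

```python
import numpy as np
from skimage.transform import radon, iradon

slice_2d = np.zeros((256, 256))
slice_2d[96:160, 96:160] = 1.0                   # toy attenuation map

angles = np.linspace(0.0, 180.0, 400, endpoint=False)
sinogram = radon(slice_2d, theta=angles)         # simulated projections (Radon transform)
reconstruction = iradon(sinogram, theta=angles,  # filtered back projection
                        filter_name="ramp")

print(np.abs(reconstruction - slice_2d).mean())  # small mean reconstruction error
```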

CT number and density
The material characterisation step focuses on retrieving information from tomograms using a direct or indirect relation between CT number and material property (de Ridder et al. 2011; Maire and Withers 2014). This step is most often seen for attenuation contrast computed tomography. Each voxel in a CT scan can be labelled with a CT number (CT#) expressed in Hounsfield units (HU). The HU is a dimensionless unit used to express the CT number in a standardised and convenient way (Greenway and Gaillard 2022), although other methods exist that do not rely on HU (de Ridder et al. 2011; Stubbs et al. 2020). The CT number can be obtained through a linear transformation of the linear attenuation coefficient, $\mathrm{CT\#} = 1000\,(\mu - \mu_w)/(\mu_w - \mu_a)$, where μ_w is the attenuation coefficient of water (0 HU) and μ_a is the attenuation coefficient of air (−1000 HU). For wood, a proportionality exists between CT number and density, as well as density and moisture content (Hattori and Kanagawa 1985; Kanagawa and Hattori 1985; Lindgren 1985, 1991a, 1992). Lindgren (1992) showed that the relation between CT number and density is linear. Furthermore, Hansson and Cherepanova (2012) and Watanabe et al. (2012) used this relation to determine moisture content in wood using a tomogram of a moist wood piece and a tomogram of the same wood piece after oven-drying (dry-density image).

Density and material properties
The relationship between CT number and density can also be adopted to formulate other material properties (Keyak et al. 1997). Based on experiments, a linear correlation was obtained between the dry density of wood and the elastic modulus (Kollmann and Côté 1968), the hygroexpansion coefficient (Boutelje 1972), and the diffusion coefficient (Sehlstedt-Persson 2001). Such correlations are not generic, though, and need to be validated for each situation. The justified relationships can then be used to obtain 3D information on material properties from tomograms (Florisson et al. 2022; Hartig et al. 2021; Huber et al. 2022). Such 3D profiles can be employed for situation-specific simulations and a more realistic simulation of material behaviour (Keyak et al. 1997). For example, in Florisson et al. (2022), the dry-density profiles obtained with a medical CT scanner were used to quantify the diffusion and surface emission coefficients for Norway spruce for simulations of material behaviour during kiln drying. In Keyak et al. (1990), an empirical equation describing the relationship between elastic modulus and apparent density obtained from the grey levels in the reconstructed tomograms was used to simulate the stress fields in a human femur.
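A sketch of this indirect use of tomographic data is given below: a linear relation converts a voxel-wise dry-density field into an elastic modulus field that can later be mapped onto the FE mesh. The coefficients `a` and `b` are placeholders for a calibration against experiments; as stated above, such correlations are not generic and must be validated for each situation.

```python
import numpy as np

def modulus_from_density(density_kg_m3, a=0.02, b=-1.0):
    """Voxel-wise linear density-to-modulus relation E = a*rho + b (GPa).
    The coefficients are hypothetical placeholders for a calibrated
    relation of the kind reported by Kollmann and Côté (1968)."""
    return a * np.asarray(density_kg_m3, dtype=float) + b

# Synthetic dry-density tomogram (kg/m3) standing in for real data.
dry_density = np.random.default_rng(0).uniform(300.0, 600.0, size=(64, 64, 64))
E_field = modulus_from_density(dry_density)      # GPa, same shape as the tomogram
print(E_field.min(), E_field.max())
```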

Segmentation
Image segmentation is the partitioning of digital images into multiple non-overlapping image segments (objects or phases) based on their greyscale level (Maire and Withers 2014; Withers et al. 2021). The voxels in the digital image are assigned a label, such that voxels with the same label share certain characteristics. The outcome of the image segmentation is a set of contours. Segmentation methods can be categorised as automatic, interactive (semi-automatic) or manual, depending on the degree of user involvement in performing the segmentation (Amrehn et al. 2019; Wang et al. 2016).
The objectives of the segmentation should be defined at the beginning of the XµCT aided FE process to decide on a correct segmentation method (Auenhammer et al. 2021). Manual, threshold-based, boundary-based and region-growing approaches are most prominently used (Withers et al. 2021). Manual segmentation can be advantageous or even necessary for complex materials (Viceconti et al. 1998). Some popular automatic methods are the thresholding method (Otsu 1979), the watershed method (Beucher and Meyer 1993), and the deep-learning method (Akkus et al. 2017; Minaee et al. 2022; Seo et al. 2020). Figure 2 shows the segmented microstructure of a Norway spruce branch using manual (lasso, magic wand, brush) and thresholding techniques available in Avizo™. Image-processing techniques to fill holes, discard small objects and remove unwanted voxel islands were also used to improve the segmentation. The thresholding method is a standard segmentation tool (Auenhammer et al. 2021). It uses the different greyscale levels of objects of interest within the tomogram to separate them from each other. The watershed method interprets the greyscale values of each voxel as altitudes. The morphological gradient of the original greyscale image can be regarded as a topographic surface. The idea behind watershed algorithms is to compute watershed lines from this topographic image. The resulting catchment basins are the image partitions. A rapidly developing alternative to conventional methods is machine learning/deep learning for segmentation, which is most developed in medical CT image analysis (Litjens et al. 2017). However, Kibleur et al. (2022b) successfully implemented deep-learning segmentation to obtain fibre bundles in medium-density fibreboards. For a more thorough introduction to available segmentation methods, literature such as Russ and Neal (2016) can be consulted.
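A minimal sketch of an automatic threshold-based segmentation of cell wall versus lumen is given below, using Otsu's method followed by the kind of clean-up operations mentioned above (removing small objects, filling small holes). It assumes a recent scikit-image version with n-dimensional support for these functions, and the size thresholds are assumptions that would need tuning for real tomograms; as discussed in the segmentation bottleneck section, a global threshold is often not sufficient for wood.

```python
from skimage.filters import threshold_otsu, median
from skimage.morphology import remove_small_objects, remove_small_holes

def segment_cell_wall(volume):
    """Separate cell wall (True) from lumen/air (False) in a greyscale
    tomogram by global Otsu thresholding plus simple clean-up."""
    denoised = median(volume)                                      # suppress voxel noise
    cell_wall = denoised > threshold_otsu(denoised)                # single global threshold
    cell_wall = remove_small_objects(cell_wall, min_size=64)       # discard voxel islands
    cell_wall = remove_small_holes(cell_wall, area_threshold=64)   # fill small holes
    return cell_wall
```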

Fibre orientation
An important step in the XµCT aided FE process for wood is defining the fibre orientation, and on a larger scale also the annual ring curvature, spiral grain and conical shape. Two methods can be distinguished to reconstruct the fibre orientation based on tomograms. The first method relies on greyscale variation within an image to estimate the orthotropic material orientation. Within this method, techniques such as the Hough transformation for circles are commonly used to detect the location of the pith (Hansson and Cherepanova 2012; Huber et al. 2022) and the gradient structure tensor to reconstruct the annual ring pattern and fibre deviation around knots (Hansson et al. 2016; Huber et al. 2022); see Figure 3. In addition to these methods, Ekevad (2004) used a moment of inertia tensor for spherical bodies to detect spiral grain and conical shape. The second method requires high-resolution tomograms and uses techniques, such as Avizo™ fibre tracking, to find the centre line of individual fibres to provide insight into the exact fibre orientation, length and diameter. The method can reconstruct the fibre orientation around knots, as was seen in Hu et al. (2022).
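A sketch of the first, greyscale-based method is given below: the gradient structure tensor is computed per voxel and the eigenvector belonging to its smallest eigenvalue (the direction of least greyscale variation) is taken as the local fibre direction. This assumes a scikit-image version with n-dimensional structure tensor support; the smoothing scale is an assumption, and the cited studies use their own implementations and post-processing.

```python
import numpy as np
from skimage.feature import structure_tensor

def fibre_directions(volume, sigma=2.0):
    """Estimate a unit fibre-direction vector per voxel from a greyscale
    tomogram using the gradient structure tensor (a simplified sketch)."""
    # Six unique elements of the symmetric 3x3 tensor: A00, A01, A02, A11, A12, A22
    A = structure_tensor(volume, sigma=sigma, order="rc")
    T = np.empty(volume.shape + (3, 3))
    T[..., 0, 0], T[..., 0, 1], T[..., 0, 2] = A[0], A[1], A[2]
    T[..., 1, 0], T[..., 1, 1], T[..., 1, 2] = A[1], A[3], A[4]
    T[..., 2, 0], T[..., 2, 1], T[..., 2, 2] = A[2], A[4], A[5]
    # The eigenvector of the smallest eigenvalue points along the direction
    # of least greyscale variation, i.e. the local fibre axis.
    eigvals, eigvecs = np.linalg.eigh(T)     # eigenvalues in ascending order
    return eigvecs[..., :, 0]                # shape (..., 3), unit vectors
```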

Meshing
The two most popular meshing approaches in XµCT aided FE modelling are voxel-based and geometry-based (Auenhammer et al. 2021; Keyak et al. 1990; Lengsfeld et al. 1998; Viceconti et al. 1998). A representation of these approaches is given in Figure 4, together with a manually generated mesh. The principles are equally applicable to bone, as illustrated, and wood. The choice of meshing approach is often made based on how accurately the surface and boundary of a modelled object need to be represented. In the voxel-based approach, each element represents a voxel (Hartig et al. 2021). This results in a structured cubic lattice, which follows the main directions set by the Cartesian coordinate system linked to the tomogram. Often, an eight-node isoparametric brick element is used to construct the mesh, resulting in a stacked representation of the geometry. This can be computationally expensive. The most commonly used meshing approach is geometry-based meshing. Different techniques are reported in literature, such as the Delaunay triangulation, the marching cubes and the advancing front technique (Lobos et al. 2010). Compared to voxel-based, the geometry-based approach is more time-consuming, but produces smooth surfaces. The approach can be subdivided into three steps: surface mesh reconstruction, mesh adaptation and volume mesh generation (Pagѐs et al. 2005; Lobos et al. 2010). The surface mesh is generated on the inner and outer contours of objects obtained with segmentation. Generally, a tetrahedral element is used, which is stiffer than a brick element but complies better with generating a shape. Commercial image-processing software commonly allows the creation of a surface and volume mesh after segmentation (Auenhammer et al. 2021; Pyrkosz et al. 2010). In Figure 5, the segmented microstructure of Norway spruce is meshed using the mesh tool provided by the commercial software Avizo™. After construction, the surface mesh was coarsened to reduce computational time. The shown surface and volume mesh were adapted and improved (element reduction, element shape, intersecting elements) to suffice as a computational mesh and to increase numerical stability (Lobos et al. 2010; Pagѐs et al. 2005). This was done using a mesh quality check provided by Avizo™. A quality control of the volume mesh before simulation is highly recommended using adequate shape quality parameters (Auenhammer et al. 2021; Keyak et al. 1990; Pyrkosz et al. 2010; Viceconti et al. 1998). In the medical field, machine learning is making an entrance to aid in the generation of FE meshes from tomographic images (Pak et al. 2021). This method minimises computational complexity, improves mesh quality and speeds up the mesh generation (Pan et al. 2023).
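To make the voxel-based approach concrete, the sketch below converts a binary segmentation directly into an eight-node brick mesh, one element per solid voxel with corner nodes shared between neighbours. It is a minimal illustration of the principle, not the meshing pipeline of any of the cited tools.

```python
import numpy as np

def voxel_mesh(mask, voxel_size=1.0):
    """Voxel-based FE mesh from a binary segmentation: one 8-node brick
    element per True voxel; shared corners become shared nodes."""
    corner_offsets = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
                      (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    nodes = {}                                 # grid corner -> node index
    elements = []
    for i, j, k in zip(*np.nonzero(mask)):
        connectivity = []
        for di, dj, dk in corner_offsets:
            key = (i + di, j + dj, k + dk)
            if key not in nodes:
                nodes[key] = len(nodes)
            connectivity.append(nodes[key])
        elements.append(connectivity)
    coords = np.array(list(nodes.keys()), dtype=float) * voxel_size
    return coords, np.array(elements, dtype=int)
```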

Mapping
During mapping, information from tomograms (e.g. fibre orientation, density, moisture content) is assigned to the FE mesh. A distinction can be made between an integration-point-wise, a node-wise or an element-wise approach (Auenhammer et al. 2021; Florisson et al. 2022). For example, mapping of density and moisture content data is done using a node-wise approach, which can be semi-automatic or automatic (Florisson et al. 2022; Huber et al. 2022). In the semi-automatic approach, different programs are used to pre-process the data and to perform the mapping. Figure 6 shows the semi-automatic mapping of moisture content and dry density using a node-wise approach. During such mapping, the information at the voxel level is transferred to the FE node level using a form of interpolation. Here, the mapping was performed in the FE software Abaqus FEA®. The fibre or material orientation of wood can be mapped using an integration-point-wise or an element-wise approach (Auenhammer et al. 2021; Huber et al. 2022). In Huber et al. (2022), the material orientation of wood retrieved from tomograms was automatically mapped in a commercial FE software using the integration-point-wise approach. Auenhammer et al. (2021) discuss the mapping of fibre direction of composite materials using the element-wise approach. This approach requires the estimation of the centre of gravity for each element, after which the estimated fibre orientation at the voxel position closest to the centre of gravity is extracted and assigned as the local element orientation.
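A sketch of the node-wise mapping described above is given below: a voxel field (e.g. dry density or moisture content) is linearly interpolated at the FE node coordinates. Function and argument names are illustrative; real pipelines additionally handle units and the alignment of the tomogram and mesh coordinate systems.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def map_voxel_field_to_nodes(field, voxel_size, node_coords):
    """Node-wise mapping: linearly interpolate a 3D voxel field at FE node
    coordinates (node_coords in the same units as voxel_size)."""
    axes = [np.arange(n) * voxel_size for n in field.shape]
    interpolator = RegularGridInterpolator(axes, field, method="linear",
                                           bounds_error=False, fill_value=None)
    return interpolator(node_coords)           # one value per FE node
```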

Material model
With the prediction of material properties as the goal of the XµCT aided FE process, the choice of material model dictates the amount and type of properties to be determined. It also influences the design of the experiment and the image-processing step. The majority of material models available for wood focus on the macro material level and upwards. In brief, the state-of-the-art mechanical models treat the hygromechanical (Huč et al. 2018), long-term (Bengtsson et al. 2022; Florisson et al. 2021b; Huč et al. 2020) and plastic behaviour (Oudjene and Khelifa 2009; Pech et al. 2021; Yu et al. 2022). Since wood is hygroscopic, a strong interest also lies with mass and heat transfer (Autengruber et al. 2020; Eitelberger and Hofstetter 2011; Florisson et al. 2020; Frandsen et al. 2007a, 2007b) and moisture-induced fracture in wood (Autengruber et al. 2021; Brandstätter et al. 2023). Some efforts have also been made on the micro material level, such as the hygroelastic behaviour of fibres (Persson 2000) and the fracture behaviour of fibre clusters (Carlsson and Isaksson 2020). However, a thorough investigation into mechanosorption on the micro material level using FEM is still lacking. Research has also been focussed on bridging the different hierarchical levels of mass transfer, elastic behaviour and large deformations using homogenisation methods (Eitelberger et al. 2011; Hofstetter et al. 2005; Holmberg et al. 1999; Zhong et al. 2021).
The material model step is also used to determine the level of detail of the FE model. For example, the model depends on the material level (cell wall, cell cluster, annual ring), the definition of properties (moisture, temperature and density-dependent), boundary conditions and initial simulation states. Therefore, a precise aim and objective should be set at the beginning of the process, and the model should be designed accordingly. Florisson et al. (2022) gives a good example where information from tomograms is incorporated into the details of an FE model to simulate moisture flow in boards of Norway spruce. This technique was used to define the geometry, initial simulation states and diffusion and surface emission coefficients; see Figure 6. Another example is from Hartig et al. (2021), who showed detailed stress distributions in a moulded wood tube loaded in compression using a geometry and a correlation between density and elastic moduli from tomographic data; see Figure 7. The results showed the clear effect of a density-dependent elastic modulus on stress patterns.
Within the XµCT aided FE process, a direct or indirect approach can be adopted towards the prediction of material properties. The direct approach relies on other methods to aid in the prediction, such as digital volume correlation (DVC) (see Section 3.10) or other image-processing algorithms. For example, in Stubbs et al. (2020), the properties were calibrated using data from compression tests. The indirect approach builds on the assumption that a relationship exists between the attenuation coefficient, the density and the material property of interest (Boutelje 1972; Hartig et al. 2021; Lindgren 1992; Taddei et al. 2004). For wood, experimental investigations have shown that a linear correlation exists between density and the elastic modulus (Kollmann and Côté 1968), the hygroexpansion coefficient (Boutelje 1972), and the diffusion coefficient (Sehlstedt-Persson 2001). The indirect approach can use pre-existing assumptions of such relationships or require a calibration as suggested for the direct approach. However, the indirect approach adequately takes material variation into account and therefore leads to situation-specific predictions of material behaviour, such as the stress distributions during static loading (Hartig et al. 2021; Huber et al. 2022; Keyak et al. 1990) or the moisture distribution within a timber board during kiln drying (Florisson et al. 2022).

DIC and DVC
Digital image correlation (DIC) and DVC are popular image-processing techniques that are often used in conjunction with XµCT (Buljac et al. 2018; Maire and Withers 2014; Roux et al. 2012). DIC is a contactless spatial measurement technique to obtain the deformation or strain field of a surface of an object (Zink et al. 1995), which can be used to validate the numerical results from XµCT aided FE modelling (Hartig et al. 2021; Keunecke et al. 2012). DVC is an extension of the more conventional DIC and can estimate the three-dimensional deformation or strain field of an object based on a reference image (fixed image) and its deformed state (moving image) (Bay et al. 1999). The method can be used to calibrate XµCT aided FE models (Buljac et al. 2018; Forsberg et al. 2010; Hild et al. 2016) or to supply Dirichlet boundary conditions (Leclerc et al. 2010).
The most popular DVC approaches can be categorised as local or global (Buljac et al. 2018; Madi et al. 2013; van Dijk et al. 2019). The ROI in local DVC is subdivided into small volumes that are independently registered (Bay et al. 1999), where image registration is the process of transforming an image of an object into the same coordinate system. Local DVC has been successfully applied to analyse the elastic behaviour of wood (Forsberg et al. 2008, 2010; Tran et al. 2013); see Figure 8. Global DVC uses global image registration and relies on a linear inversion problem (Madi et al. 2013). The analysis results in a continuous displacement field. Global image registration uses a single equation to map the entire image. Examples of global registration methods are affine registration and non-rigid (or elastic) registration. Affine registration includes translation, rotation and scaling, whereas non-rigid registration can locally warp the image by using e.g. B-splines (Madi et al. 2013; Patera et al. 2018a). Global DVC has been successfully applied to analyse the swelling behaviour of spruce (Patera et al. 2018a) and medium-density fibreboards (Kibleur et al. 2022b).
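A much simplified sketch of the local DVC idea follows: the ROI is cut into independent subvolumes and each one is registered to the deformed tomogram by subvoxel cross-correlation, giving a coarse displacement field. The box size and upsampling factor are assumptions; dedicated DVC codes such as those cited above add interpolation, regularisation and uncertainty estimates on top of this principle.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def local_dvc(reference, deformed, box=64, step=64):
    """Local DVC sketch: per-subvolume rigid shifts between a reference
    and a deformed tomogram, estimated by phase cross-correlation."""
    centres, shifts = [], []
    nz, ny, nx = reference.shape
    for i in range(0, nz - box + 1, step):
        for j in range(0, ny - box + 1, step):
            for k in range(0, nx - box + 1, step):
                ref = reference[i:i + box, j:j + box, k:k + box]
                mov = deformed[i:i + box, j:j + box, k:k + box]
                shift, _, _ = phase_cross_correlation(ref, mov, upsample_factor=10)
                centres.append((i + box / 2, j + box / 2, k + box / 2))
                shifts.append(shift)
    return np.array(centres), np.array(shifts)   # subvolume centres and displacements
```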

Current bottlenecks
In the following subsections, a summary of expected bottlenecks associated with the XµCT aided FE process of wood is outlined. A focus is put on bottlenecks that influence the automation of the process and errors within certain steps that can directly or indirectly affect the quality of later steps.
If these impediments could be alleviated, a momentous increase in efficiency is expected in solving scientific and engineering problems with CT and FEM. Throughout the section, mention is made of different relevant specimen-based, physics-based, and hardware-based artefacts (Cuete and Murphy 2022). A proper mitigation of artefacts within the XµCT aided FE process is important, since they complicate material characterisation, segmentation, image processing and model development. A full overview of possible XµCT image artefacts can be found e.g. in Hsieh (2015).

Artefacts
The importance of specimen preparation to reduce image artefacts was primarily addressed by Zauner (2014). The research stated that fewer disturbances were achieved during scanning by using a rotationally symmetric design for wood specimens. This shape led to a more accurate prediction of stress caused by compression. In this section, the focus is on specimen-based artefacts and hardware-based artefacts that can arise due to specimen design. A mentionable hardware-based artefact, known as out-of-field, is caused by the specimen being too large for the field of view (FOV) (Zauner 2014). This artefact leads to increased or decreased density values and can lead to streaking. Therefore, the specimen should fill a high percentage of the FOV, without parts being outside of the FOV. Withers et al. (2021) mention that the consequences of this artefact are minor (slight shifts in contrast) and that the diameter of the specimen can be ten times larger than the FOV. Motion artefacts can arise when the specimen experiences dimensional changes during scanning larger than the voxel size due to creep or changes in temperature or relative humidity. The artefact presents itself as blurs, streaks or shades (Kamke et al. 2014). In ex-situ testing, this artefact arises due to environmental differences between storage and scanner and can be mitigated by proper climatisation of the specimens and coverage of the specimens with plastic during scanning. With in-situ testing, motion artefacts can be prevented by using a built-in climate chamber inside the scanner. This testing method also requires fewer specimens and adds to the validity of observed features by enabling continuous measurements (Buffiere et al. 2010; Garcea et al. 2018). Additionally, de Schryver et al. (2018) added to this solution a motion-compensated reconstruction method for the in-situ analysis of dynamic processes using DVC.

Contrast enhancement
Another challenge in specimen preparation is contrast sensitivity, which indicates to what extent small nuances in the attenuation coefficient can be detected. Materials with a similar atomic number tend to produce limited absorption contrast (Withers et al. 2021). Low-contrast materials can make it difficult to distinguish small and closely spaced features, which can lead to challenges during segmentation and DVC (Roux et al. 2012). Unfortunately, the microstructure of natural materials is difficult to modify. In the case of engineered materials, this challenge can be overcome by enhancing the attenuation coefficient of the material that overlaps with other materials. Such contrast agents are highly attenuating particles, gasses or stains (Withers et al. 2021). In Li et al. (2013), the attenuation difference between wood and water was increased by using water doped with caesium chloride as a contrast agent. In addition, commercial wood adhesives can be tagged with iodine to improve the contrast between glue and wood (Ching et al. 2018; Kamke et al. 2014; Paris et al. 2014, 2015). In Kibleur et al. (2022a), the resin in wood fibreboards was doped with potassium bromide to enhance the attenuation coefficient, whereas in Paris et al. (2015), iodinated phenol formaldehyde resin was used to enhance the contrast between bond line and wood; see Figure 9.

Image quality
Image quality evaluation and achieving a sufficient image quality are essential for an adequate image analysis (du Plessis et al. 2020). However, a standardised image quality metric to stimulate reproducible results is still lacking, which introduces reliability issues caused by instrument type, hardware and software used for image analysis, scanning environment, and the skill and experience level of operators (du Plessis et al. 2020; Withers et al. 2021; Zwanenburg et al. 2022). In Table 2, a summary is given of variables influencing the quality of scanning.
A general challenge with XµCT scanning is the wide choice of power (current, voltage), exposure time and filter to enhance the spatial, contrast and temporal resolution (Zwanenburg et al. 2022). The selected spatial resolution must be significantly smaller than the size of the expected features or their separation (Withers et al. 2021). Despite such an assumption, small features (cracks, defects) can still be difficult to detect. Power is a limiting factor when it comes to spatial resolution, since a larger spot size increases the penumbra effect (see next subsection) (Zwanenburg et al. 2022). The focal spot size is the area of the X-ray tube where the X-ray radiation is emitted towards the specimen. A higher power increases the intensity of the electron beam, the heat in the focal point and the focal spot size. A smaller spot size can be accomplished with a longer exposure time, which is proportional to the number of photons detected per projection. A longer exposure time leads to a lower temporal resolution, but brighter images with lower noise.
The contrast resolution of a tomogram is affected by both voltage and current. High voltages are not suitable for a low-density material such as wood, since the X-rays will move through the material without much attenuation, producing a low contrast image. High currents will result in a brighter image and lower noise, i.e. random variations in voxel values, but a too high current gives saturated images. Image noise affects image reconstruction and needs to be managed during scanning or removed before segmentation using a median or other filter (Withers et al. 2021). Because the polychromatic beam is far from uniform, and the detectors show pixel-to-pixel variation in sensitivity, a projection must be acquired without the specimen in the FOV to compensate for these variations during reconstruction. This is called a flat field correction (Seibert et al. 1998). Filters can be used to increase the mean energy of the spectrum by absorbing the lower energy X-rays, which will result in an improved penetration of the filtered spectrum.
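A sketch of the flat-field correction described above is given below: each projection is normalised by a beam image acquired without the specimen. The additional dark-frame subtraction is a common companion step included here as an assumption; scanner software typically applies this correction automatically.

```python
import numpy as np

def flat_field_correct(projection, flat, dark):
    """Normalise a radiograph by the open-beam (flat) image, after
    subtracting the detector dark frame, giving per-pixel transmission."""
    transmission = (projection - dark) / np.clip(flat - dark, 1e-9, None)
    return np.clip(transmission, 0.0, None)   # ideally in the range 0-1
```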

Artefacts
A general challenge in the XµCT aided FE process is the presence of artefacts in tomograms. In this section, hardware-based artefacts are discussed, which arise as part of the XµCT scanning procedure. The focal spot size, and in particular an increasing spot size, can contribute to the penumbra effect (Brunke et al. 2008; Kueh et al. 2016). This effect is observed as light (cell-wall side) and dark (lumen side) shadows in the region where air and cell wall intersect, and results in poorly defined edges (Kamke et al. 2014). Thermal drift of the X-ray emission point can lead to artificial motion and magnification changes in projections (Wang et al. 2017). This is caused by a change in source-object distance and object-detector distance during scanning due to generated heat in the X-ray tube (Limodin et al. 2011; Wang et al. 2017). A proper system warm-up or correction using a reference scan taken at the start can prevent this phenomenon from occurring. The detector can be prone to defective or uneven pixel response, which can result in ring artefacts (du Plessis et al. 2020). This artefact presents itself as circular rings around the rotational axis, which can be mistaken for pores (du Plessis et al. 2020). The long scanning time of XµCT compared to SRµCT can lead to motion artefacts in the reconstructed tomograms (Roux et al. 2012). This makes it difficult to use XµCT to conduct real-time monitoring of phenomena such as crack initiation and propagation, abrupt material failure, viscoelastic behaviour, hygromechanical behaviour and diffusion processes (Auenhammer et al. 2021; Cordes et al. 2015; Forsberg et al. 2008; Pyrkosz et al. 2010). Mechanical instability of setup and specimen due to the fixture or mounting can also lead to blurring and can be induced by an offset of the rotation centre (du Plessis et al. 2020; Withers et al. 2021). The polychromatic beam used in XµCT can lead to beam hardening, which increases the apparent density towards the edges of a tomogram (Bryant et al. 2012). Maire and Withers (2014) showed that this impairs global thresholding of bone based on a single greyscale value.

Table 2: Examples of variables influencing the quality of scanning, per category.
Computed tomography system: X-ray source, detector, axes, hardware filtering, scan mode, system type and other components of the scanner.
Application: object materials and geometry, fixturing, scanning parameters, reconstruction parameters, other settings.
Analysis: algorithms and software for reconstruction, segmentation and data analysis.
Environment: temperature, humidity, vibrations, other ambient conditions.
Operator: choices made by the operator on measurement procedure and implementation, such as scan time, voltage change, mounting errors, binning, averaging, number of projections.

Data amount
A general challenge with XµCT is the enormous amount of data generated during scanning to obtain high temporal and spatial resolutions. This requires adequate transportation of data and storage space (Withers et al. 2021).

Image reconstruction

Projections
Image reconstruction is often performed using software provided by the machine supplier (Zwanenburg et al. 2022), as was seen in Figure 2, which can lead to unilateral and possibly inconsiderate choices in reconstruction approach. In XµCT, the reconstruction time is a limiting factor in fast scanning (Zwanenburg et al. 2022), where machine learning methods can reduce detector integration times and decrease the number of projections (Withers et al. 2021). In a traditional setting, the minimum number of projections that is recommended for a quality image is qπ/2, where q is the number of pixels across the diameter of the object (Withers et al. 2021; Zwanenburg et al. 2022). Going below the recommended number of projections or applying non-uniform projections leads to noisy images with artefacts, although a slight reduction in projections is possible.
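The qπ/2 rule can be checked against the scan of the compression wood cube described above; the pixel count below is a rough estimate from the 1 mm specimen size and the 0.61 µm voxel size.

```python
import math

def minimum_projections(pixels_across_object):
    """Rule-of-thumb minimum number of projections (q*pi/2) for an object
    spanning q pixels across its diameter."""
    return math.ceil(pixels_across_object * math.pi / 2)

# Roughly 1 mm / 0.61 µm ≈ 1640 pixels across the cube, suggesting about
# 2600 projections, of the same order as the 2801 actually acquired.
print(minimum_projections(1640))
```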

Artefacts
Beam hardening correction is often provided by the reconstruction software of the machine supplier. However, an increase in correction factor often reduces the contrast in different parts of the tomogram (du Plessis et al. 2020; Wang et al. 2017).

Material characterisation
XµCT is not the most suitable method for the quantification of the linear attenuation coefficient and therewith the CT number. Such quantification would require a well-defined source, a monochromatic beam and a simple attenuation application (Maire and Withers 2014; Stubbs et al. 2020). In contrast, lab-based scanners produce a polychromatic beam (white radiation) and scattered photons, while the detector is prone to defective or uneven pixel response (ring artefacts) and to charge bleeding (full pixels) (Maire and Withers 2014). The CT number is known to be energy-dependent, which is problematic when using a polychromatic beam (Bryant et al. 2012). Therefore, when using XµCT to determine density, the calibration sample set needs to be scanned under the same conditions as the unknown specimen. Consequently, the established relationship is only valid for the specific scanner and setting (de Ridder et al. 2011; Lindgren 1991b). When evaluating the density value, it is important to determine whether the observed greyscale fluctuation is due to a change in density, composition of material or imaging artefacts (Maire and Withers 2014).

Segmentation
Segmentation can be a bottleneck in an automated XµCT aided FE modelling process (Auenhammer et al. 2021; Keyak et al. 1990). According to Auenhammer et al. (2021), after p and h mesh refinement (a higher element order and a finer mesh, respectively), the remaining error in the chain of segmentation, meshing, modelling and calibration is due to the segmentation process.

Image quality
The success of the segmentation process is closely linked to the spatial and contrast resolution of the tomogram and existing artefacts. Limited spatial and contrast resolution causes contours of closely positioned objects to merge. This was for example seen in tomograms of the acetabulum and the femoral head when scanning bone (Keyak et al. 1990). Small adjustments made in segmentation thresholding can highlight features that are actually noise or can join objects that are actually gaps (Withers et al. 2021). Therefore, decisions made in previous steps of the XµCT aided FE process directly influence the segmentation step. Pyrkosz et al. (2010) mention that most standard segmentation methods that rely on the density variation within tomograms are unsuitable for wood, since the density variations of sapwood, heartwood, latewood, earlywood and transition wood largely overlap. Manual segmentation methods can be more suitable in such situations, but are time-consuming, non-repeatable, a source of human error, and compromise the automation of the XµCT aided FE process. Figure 10 visualises this challenge for the 1 mm³ cube of Norway spruce branch wood, where the lumen is segmented from the cell wall. The figure shows that automated segmentation methods do not hold due to the overlap of density values, and manual tools are needed for a precise segmentation.

Artefacts
A general concern of the segmentation step is the presence of image artefacts. An important physics-based artefact is beam hardening. Maire and Withers (2014) showed that for bone this artefact prohibits the use of global thresholding based on a single greyscale value. An artefact particularly interesting for wood is the partial volume averaging effect, which occurs when two or more phases with different densities are encompassed in the same voxel (Cuete and Murphy 2022). This produces an average attenuation coefficient of those phases, which challenges the segmentation process (Hartig et al. 2021; Pyrkosz et al. 2010). For wood, this artefact can occur at the surface of a specimen, where a voxel can contain both air and wood (Huber et al. 2022), or when scanning thin adhesive coatings in wood fibre materials (Kibleur et al. 2022a). The latter can be overcome with lab-based dual-energy CT.

Fibre orientation and mapping
A mapping accuracy check should be performed for both material orientation and material properties (Florisson et al. 2022; Stubbs et al. 2020). Mapping deviations can influence the FE results and therefore the determined material properties. For example, misalignment of fibres during mapping can lead to stiffness reduction (Auenhammer et al. 2021; Huber et al. 2022). This can affect the FE analysis convergence rate and the stress and strain output. Mapping information onto a new set of coordinates can also lead to errors depending on the interpolation method. Florisson et al. (2022) showed a small deviation between the original and mapped moisture content data. A linear interpolation was performed in the commercial FE software Abaqus FEA®.

Meshing
The quality of the FE mesh can be assessed based on computational weight (total operator time and total CPU time) and computational accuracy (in comparison with an analytical solution or experimental results) (Lobos et al. 2010). For instance, a manually created mesh requires a long operator time compared to a voxel-based mesh or a geometry-based mesh, but can result in efficient and accurate models. The adequacy of a mesh generated from tomograms is often not verified (Keyak et al. 1990; Viceconti et al. 1998), since it is a very time-consuming process. A mesh quality check is especially important when handling complex geometries, and is needed particularly to omit elements with severe shape distortions and large aspect ratios (Lobos et al. 2010; Song et al. 2017). These issues are visualised in Figure 11, together with an indication of the bounded surface deviation error. This is the gap between the mesh and the analytical surface, and needs to be considered when an accurate representation of the surface is required (Pagѐs et al. 2005).

Voxel-based mesh
Voxel-based meshing is an easy and fast way to generate a 3D mesh from tomograms (Viceconti et al. 1998). The method is insensitive to complex geometries and low image resolution, but cannot give a precise representation of the geometry and the boundaries between phases. This mesh type should be avoided when analysing surface phenomena, such as fracture initiation and stress-strain concentrations. A voxel-based mesh needs a high level of mesh refinement for accurate results, which leads to large computational efforts and long operator times (Song et al. 2017; Viceconti et al. 1998). It was mentioned by Marks and Gardner (1993) that the jagged pattern of the inner and outer surfaces can cause numerical problems, since the unsmoothed surface can result in convergence issues for elements with sharp corners.

Geometry-based mesh
The general challenges that affect the segmentation step also influence geometry-based meshing (Auenhammer et al. 2021), such as complex geometries in combination with low image quality (resolution and artefacts). The complexity of the generated surface meshes also forms a bottleneck, especially in an automated XµCT aided FE process (Auenhammer et al. 2021), since it requires a mesh simplification to prevent mesh quality issues and mesh penetration. In Figure 12, mesh clusters are displayed, which were obtained for the example of the compression wood specimen, and occur when thin segmented objects lie beneath the surface of a ROI. These clusters are difficult to remove and require adjustments of the segmentation. In Figure 12, it can be seen how the segmentation is levelled to the surface, which removes the unwanted mesh cluster. A geometry-based mesh is often constructed from tetrahedral elements, which are computationally costly.

Material model
A general challenge in the material model step is an insufficient correlation between experimental design, image processing and material model, leading to an inaccurate prediction of the material parameters.A proper inventory is needed of required variables, such as material level, level of modelling detail, selection of material properties, physical phenomena influencing the observed system (temperature, relative humidity) and studied physical phenomena.Some physical phenomena are difficult to study experimentally, such as mechanosorption, which always occurs in combination with an elastic, creep and hygroexpansion component, and requires a well-designed experimental methodology (Florisson et al. 2021a).It should also be stated that a description of the hygromechanical behaviour of wood needs a large set of material properties, which necessitates many experiments (Huč et al. 2018(Huč et al. , 2020)).Other phenomena need a different level of modelling detail dependent on the material level.In Florisson et al. (2022), the diffusion and surface emission coefficient were determined using macroscopic tomograms of the kiln-drying process of wood.A nonlinear transient moisture flow analysis was made using a single Fickian equation, and moisture, temperature and density dependent material properties.However, in Frandsen et al. (2007b), a similar analysis was made, but using a multi-Fickian approach.This approach makes a distinction between diffusion in the cell wall and cell lumen, allowing for the incorporation of sorption hysteresis into the simulation.This is not possible using a single Fickian approach.In Section 3.8, it was also noted that many material models have been developed and validated on the macro material level, which poses a question whether such models still hold when describing the behaviour of the cell wall or an individual fibre.For example, one of the well-known material models to describe mechanosorption has only been applied to simulate wood behaviour on meso to timber scale (Salin 1992).4.9 DIC and DVC 4.9.1 Image quality DVC The accuracy of a 3D imaging technique based on DVC largely depends on image resolution (spatial and contrast) and algorithm quality (Buljac et al. 2018;Leclerc et al. 2012;Wang et al. 2017).This section will focus on the first component.For a thorough review on DVC and its challenges, Buljac et al. (2018) can be consulted.DVC is largely dependent on spatial contrast, where insufficient voxel size can make it difficult to detect important features, such as crack openings (Roux et al. 2012).In addition, the images need sufficient contrast to represent the microstructure, since the algorithm needs a recognisable random pattern to converge (Roux et al. 2012).DVC can pose challenges when applied on materials without significant density gradients and with noticeable artefacts.Softwoods e.g. have a rather uniform longitudinal material direction, which can form a risk for DVC decorrelation (Forsberg et al. 2010).This can be solved by using tomograms with sufficient spatial and contrast resolution to emphasize the bordered piths.This approach led to good DVC results for Norway spruce beams tested in bending, without the need for contrast-enhancing particles (Forsberg 2008;Forsberg et al. 2008Forsberg et al. , 2010)).A general limitation of DVC is the handling of large data files (Roux et al. 

DIC and DVC

Image quality DVC
The accuracy of a 3D imaging technique based on DVC largely depends on the image resolution (spatial and contrast) and the quality of the algorithm (Buljac et al. 2018; Leclerc et al. 2012; Wang et al. 2017). This section focuses on the first component; for a thorough review of DVC and its challenges, Buljac et al. (2018) can be consulted. DVC is largely dependent on spatial contrast, and an insufficient voxel size can make it difficult to detect important features, such as crack openings (Roux et al. 2012). In addition, the images need sufficient contrast to represent the microstructure, since the algorithm needs a recognisable random pattern to converge (Roux et al. 2012). DVC can therefore be challenging for materials without significant density gradients and for tomograms with noticeable artefacts. Softwoods, for example, have a rather uniform longitudinal material direction, which poses a risk of DVC decorrelation (Forsberg et al. 2010). This can be solved by using tomograms with sufficient spatial and contrast resolution to emphasise the bordered pits. This approach led to good DVC results for Norway spruce beams tested in bending, without the need for contrast-enhancing particles (Forsberg 2008; Forsberg et al. 2008, 2010). A general limitation of DVC is the handling of large data files (Roux et al. 2012). Here, commercial software, such as Avizo™ (Thermo Fisher Scientific 2022), can aid in the management of such large datasets by providing efficient data-processing tools and GPU implementations.
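The need for a recognisable greyscale pattern can be illustrated with a minimal local DVC step: a subvolume from the reference tomogram is matched against a search window in the deformed tomogram using normalised cross-correlation, and the displacement is taken at the correlation peak. The Python sketch below, using scikit-image, is a simplified illustration and not a complete DVC implementation; the subvolume and search sizes are assumptions. A weak or repetitive pattern yields a flat correlation map and an unreliable peak, i.e. decorrelation.

import numpy as np
from skimage.feature import match_template

def local_displacement(ref, deformed, centre, half=16, search=8):
    # Subvolume (template) from the reference tomogram, centred at `centre`.
    z, y, x = centre
    template = ref[z-half:z+half, y-half:y+half, x-half:x+half]
    # Larger search window from the deformed tomogram.
    window = deformed[z-half-search:z+half+search,
                      y-half-search:y+half+search,
                      x-half-search:x+half+search]
    # Normalised cross-correlation between window and template (3D).
    ncc = match_template(window, template)
    # The offset of the correlation peak gives the integer-voxel displacement;
    # the peak value indicates how reliable the match is.
    peak = np.array(np.unravel_index(np.argmax(ncc), ncc.shape))
    return peak - search, float(ncc.max())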

Artefacts DVC
DVC decorrelation also arises from noise and artefacts in tomograms, such as ring artefacts, beam hardening and motion artefacts (Buljac et al. 2018; Limodin et al. 2011; Roux et al. 2012). Artefacts can lead to spurious DVC results, especially when tomograms lack contrast (Buljac et al. 2018).
Modern XµCT scanners come with tools to numerically filter artefacts from tomograms, such as beam hardening reduction (Wang et al. 2017). As previously mentioned, such filters can also reduce density gradients. Thermal drift of the X-ray emission point can also negatively affect the DVC analysis (Limodin et al. 2011; Wang et al. 2017), leading to displacement and strain errors. This error increases with scan duration, but can be minimised with a warm-up scan (Wang et al. 2017). Rotational and translational misalignment of specimens due to ex-situ testing or loading procedures is a general source of error in DVC (Forsberg et al. 2010). Such misalignment can often be corrected manually during reconstruction. Another challenge with DVC and ex-situ testing is intensity differences in the background between tomograms, which can lead to large correlation errors (Buljac et al. 2018).
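A translational misalignment between ex-situ tomograms can, for example, be corrected with phase correlation before the DVC analysis, as sketched below with scikit-image and SciPy. This is an assumed pre-processing step rather than a documented scanner tool; a rotational misalignment would require an additional rigid registration on top of the translation estimated here.

import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def realign(reference, moving):
    # Estimate the sub-voxel translation between the two tomograms.
    offset, error, _ = phase_cross_correlation(reference, moving, upsample_factor=10)
    # Shift the moving tomogram back onto the grid of the reference tomogram.
    aligned = shift(moving, offset, order=1, mode="nearest")
    return aligned, offset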

DIC
Image resolution is less challenging for a 2D imaging technique such as DIC. The homogeneity, pattern correlation length and image contrast can be enhanced by adjusting the applied speckle pattern (Buljac et al. 2018). A general difficulty with surface imaging methods, such as DIC, is the production of plane surfaces without introducing artefacts from machining tools (Forsberg et al. 2008). The surface should also remain plane after deformation of the specimen. Most modern DIC systems come with two cameras to detect out-of-plane deformations, which can be accounted for during post-processing.
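As a simple pre-check of the pattern before DIC, the mean grey-level gradient of a speckle image can be used as a rough indicator of contrast and pattern quality. The sketch below is a minimal illustration; the threshold is an arbitrary assumption rather than an established criterion.

import numpy as np

def mean_intensity_gradient(image):
    # Mean magnitude of the grey-level gradient over the speckle image;
    # low values indicate weak contrast or an overly coarse pattern.
    gy, gx = np.gradient(image.astype(float))
    return float(np.mean(np.hypot(gx, gy)))

# Example usage on a captured speckle image (threshold is hypothetical):
# if mean_intensity_gradient(speckle) < 5.0:
#     print("Pattern contrast may be too low for reliable correlation.")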

Concluding remarks
The proposed XµCT aided FE methodology for wood covers the process from specimen preparation to material property estimation. The literature review provides the underlying information for each step of this process, and the main bottlenecks that can occur between and within each step have been identified. The review also covers research on other materials (e.g. bone, concrete, fibre composites, foams and plastics) whose results are applicable to wood. Conversely, the XµCT aided FE methodology is equally applicable to these materials, provided that the wood-specific aspects are disregarded.
A general conclusion that can be drawn from the literature review is that the methodology can assist in a considered design and execution of the XµCT aided FE process for wood. This means that the best modelling results are obtained when a clear aim and objective are defined at the beginning of the XµCT aided FE process. These should steer the experimental step so that it is designed and executed to benefit the image processing and model development steps. The tomograms should be of sufficient quality (see spatial, contrast and temporal resolution, and the absence of artefacts) to allow the material characterisation, segmentation, fibre reconstruction, DVC and meshing steps to be performed.
There are a couple of current bottlenecks for wood applications that are pressing to resolve in the short term. The absence of a standardised method for in-situ testing of wood prevents extensive testing of the hygromechanical behaviour using XµCT. Furthermore, the prevalent traditional segmentation and meshing techniques stand in the way of an automated XµCT aided FE process for wood. Therefore, a combination of deep-learning segmentation methods with geometry-based meshing is suggested to stimulate this development. Finally, simple and effective algorithms to generate the orthotropic material orientation of wood from tomograms, including spiral grain, conical shape, annual ring curvature and grain deviation around knots, would be very useful.

Figure 1 :
Figure 1: Suggested methodology for the microscopic lab-based computed tomography (XµCT) aided finite element modelling of wood, where DIC indicates digital image correlation (2D) and DVC indicates digital volume correlation (3D). The numbers indicate the subsections dealing with each step, where the underlying references of this review can be found.

Figure 2 :
Figure 2: Example of segmented and meshed Norway spruce microstructure obtained with commercial software Avizo™: (a) render of microstructure, (b) segmented structure (green is lumen and purple is cell wall), (c) segmented lumen, (d) meshed geometry, (e) cross-section of meshed geometry in xz-plane, (f) cross-section of meshed geometry in yz-plane, and (g) cross-section of meshed geometry in xy-plane.

Figure 3 :
Figure 3: Example of reconstructed fibre orientation around knots based on tomograms from macro computed tomography: (a) diving vector field and (b) flow vector field (Huber et al. 2022, https://creativecommons.org/licenses/by/4.0/, changes were made to this illustration).

Figure 5 :
Figure 5: Example of mesh optimisation in three steps in commercial software Avizo™: (1) original mesh, (2) mesh after optimised segmentation and (3) mesh after quality check and automatic optimisation of bad tetras, where (a) is the meshed microstructure, (b) is a closeup of the microstructure, (c) are the identified bad tetras, and (d) is a closeup of the bad tetras.

Figure 6 :
Figure 6: Example of mapped material characteristics on model geometry obtained from computed tomography data: (a) moisture content at the start of the simulation and (b) dry density profile used by the finite element program to describe diffusion and surface emission coefficient (Florisson et al. 2022, https://creativecommons.org/licenses/by/4.0/, changes were made to this illustration).

Figure 7 :
Figure 7: Example of simulated stress differences by integrating the variation in density into a finite element model: (a, c) longitudinal and transverse stress distributions for finite element simulations of a wood tube with elastic properties varying according to computed tomography data and (b, d) with constant elastic properties (Hartig et al. 2021).

Figure 8 :
Figure 8: Example of digital volume correlation used to obtain displacement fields during bending of a piece of wood: (a) 3D representation of the transverse displacement and (b) front view of the same displacement field (Forsberg et al. 2008).

Figure 9 :
Figure 9: Example of contrast enhancement in XµCT of wood-based materials: iodinated phenol formaldehyde resin bond lines, (a) glued Douglas fir, (b) segmented bond line Douglas fir, (c) glued loblolly pine, and (d) segmented bond line loblolly pine (Paris et al. 2015).

Figure 10 :
Figure 10: Example of segmentation of the lumen that make up the microstructure of Norway spruce using commercial software Avizo™: (a) segmentation based on only thresholding and (b) segmentation based on multiple segmentation methods such as thresholding and manual (lasso, magic wand, brush) and image-processing techniques to fill holes, discard small objects and remove voxel islands.

Figure 11 :
Figure 11: Example of mesh adequacy issues: (a) the mesh domain, (b) the meshed domain with a visible bounded surface deviation error (light grey areas), (c) mesh refinement to minimise the bounded surface deviation error, and (d) a mesh shape check of the added elements during mesh refinement (light is bad shape, dark is good shape) (Lobos et al. 2010).

Figure 12 :
Figure 12: Example of surface mesh issues caused by thin objects produced during segmentation: (a) surface mesh for a Norway spruce microstructure using geometry-based meshing approach in commercial software Avizo™, (b) close-up, (c) a thin object underneath the surface and a thin object at the surface, and (d) issues resolved by increasing the size of the objects.

Table  :
Short description of steps and sub-steps that make up the XµCT aided FE methodology for wood, including the main bottleneck and a reference to corresponding subsections.