Published by De Gruyter, June 9, 2018

The deal.II library, Version 9.0

Giovanni Alzetta, Daniel Arndt, Wolfgang Bangerth, Vishal Boddu, Benjamin Brands, Denis Davydov, Rene Gassmöller, Timo Heister, Luca Heltai, Katharina Kormann, Martin Kronbichler, Matthias Maier, Jean-Paul Pelteret, Bruno Turcksin and David Wells

Abstract

This paper provides an overview of the new features of the finite element library deal.II version 9.0.

Classification: 65M60; 65N30; 65Y05

4 Acknowledgments

deal.II is a world-wide project with dozens of contributors around the globe. In addition to the authors of this paper, the following people contributed code to this release:

Julian Andrej, Rajat Arora, Lucas Campos, Praveen Chandrashekar, Jie Cheng, Emma Cinatl, Conrad Clevenger, Ester Comellas, Sambit Das, Giovanni Di Ilio, Nivesh Dommaraju, Marc Fehling, Niklas Fehn, Menno Fraters, Anian Fuchs, Daniel Garcia-Sanchez, Nicola Giuliani, Anne Glerum, Christoph Goering, Alexander Grayver, Samuel Imfeld, Daniel Jodlbauer, Guido Kanschat, Vishal Kenchan, Andreas Kergassner, Eldar Khattatov, Ingo Kligge, Uwe Köcher, Joachim Kopp, Ross Kynch, Konstantin Ladutenko, Tulio Ligneul, Karl Ljungkvist, Santiago Ospina, Alexey Ozeritsky, Dirk Peschka, Simon Puchert, E. G. Puckett, Lei Qiao, Ce Qin, Jonathan Robey, Alberto Sartori, Daniel Shapero, Ben Shields, Simon Sticko, Oliver Sutton, Zhuoran Wang, Xiaoyu Wei, Michał Wichrowski, Julius Witte, Feimi Yu, Weixiong Zheng.

Their contributions are much appreciated!

Funding: deal.II and its developers are financially supported through a variety of funding sources:

    D. Arndt, K. Kormann and M. Kronbichler were partially supported by the German Research Foundation (DFG) under the project “High-order discontinuous Galerkin for the exa-scale” (ExaDG) within the priority program “Software for Exascale Computing” (SPPEXA).

    W. Bangerth and R. Gassmöller were partially supported by the National Science Foundation under award OCI-1148116 as part of the Software Infrastructure for Sustained Innovation (SI2) program; and by the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Awards No. EAR-0949446 and EAR-1550901 and The University of California – Davis.

    V. Boddu was supported by the German Research Foundation (DFG) under the research group project FOR 1509.

    B. Brands was partially supported by the Indo-German exchange programme “Multiscale Modeling, Simulation and Optimization for Energy, Advanced Materials and Manufacturing” (MMSO) funded by DAAD (Germany) and UGC (India).

    D. Davydov was supported by the German Research Foundation (DFG), grant DA 1664/2-1.

    T. Heister was partially supported by NSF Award DMS-1522191; by the Computational Infrastructure in Geodynamics initiative (CIG), through the National Science Foundation under Awards No. EAR-0949446 and EAR-1550901 and The University of California – Davis; and by Technical Data Analysis, Inc. through US Navy SBIR N16A-T003.

    M. Maier was partially supported by ARO MURI Award No. W911NF-14-0247.

    J-P. Pelteret was supported by the European Research Council (ERC) through the Advanced Grant 289049 MOCOPOLY.

    B. Turcksin: This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract number DE-AC05-00OR22725. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

    D. Wells was supported by the National Science Foundation (NSF) through Grant DMS-1344962.

    The Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University has provided hosting services for the deal.II web page.

Received: 2018-05-22
Accepted: 2018-06-03
Published Online: 2018-06-09
Published in Print: 2018-12-19

© 2018 Walter de Gruyter GmbH, Berlin/Boston