
Journal of Numerical Mathematics

The deal.II Library, Version 9.1

Daniel Arndt
  • Computational Engineering and Energy Sciences Group, Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831, USA
/ Wolfgang Bangerth / Thomas C. Clevenger / Denis Davydov
  • Chair of Applied Mechanics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Egerlandstr. 5, 91058 Erlangen, Germany
/ Marc Fehling / Daniel Garcia-Sanchez
  • Sorbonne Universités, UPMC Univ Paris 06, CNRS-UMR 7588, Institut des NanoSciences de Paris, F-75005 Paris, France
/ Graham Harper / Timo Heister
  • School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC 29634, USA
  • Scientific Computing and Imaging Institute, 72 S Central Campus Drive, Room 3750, Salt Lake City, UT 84112, USA
/ Luca Heltai / Martin Kronbichler / Ross Maguire Kynch
  • Zienkiewicz Centre for Computational Engineering, College of Engineering, Swansea University, Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK
/ Matthias Maier / Jean-Paul Pelteret
  • Chair of Applied Mechanics, Friedrich-Alexander-Universität Erlangen-Nürnberg, Egerlandstr. 5, 91058 Erlangen, Germany
/ Bruno Turcksin
  • Computational Engineering and Energy Sciences Group, Computational Sciences and Engineering Division, Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831, USA
/ David Wells
Published Online: 2019-06-30 | DOI: https://doi.org/10.1515/jnma-2019-0064

Abstract

This paper provides an overview of the new features of the finite element library deal.II, version 9.1.


About the article

This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, worldwide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).

Contact: heister@sci.utah.edu


Citation Information: Journal of Numerical Mathematics, ISSN (Online) 1569-3953, ISSN (Print) 1570-2820, DOI: https://doi.org/10.1515/jnma-2019-0064.

© 2019 Walter de Gruyter GmbH, Berlin/Boston.
