GitHub Repository: ElmerCSC/elmerfem
Path: blob/devel/ReleaseNotes/release_8.4.txt

Elmer Release Notes for version 8.4
===================================

Previous release: 8.3
Period covered: 18 May 2017 - 18 Dec 2018
Number of commits: ~750 (excluding merges)

These are just the most essential changes.
You can get a complete listing of commit messages, for example, with:
git log --since="2017-05-18" > log.txt

New Solver Modules
==================

StatCurrentSolveVec
-------------------
- Modernized version of StatCurrentSolve (not a totally similar feature set)
- Uses ListGetElement keyword fetches and vectorized assembly routines
- New way to compute the resistivity matrix utilizing constrained modes.
- Farfield conditions were added around the origin.
- Elemental result fields enabled.

EmWaveSolver
------------
- Module for electromagnetic waves in the time domain
- Utilizes 1st and 2nd order H(curl)-conforming elements
- Transient counterpart to the VectorHelmholtz module
- Undocumented solver.

WaveSolver
----------
- Solver for scalar waves in the time domain
- Upgraded from a test case to a real module
- Some changes to the variational formulation
- Use of ValueHandles for faster evaluation of keywords
- Documentation available in the Elmer Models Manual

Mesh2MeshSolver
---------------
- Basically a wrapper for GetVariable that can control the parameters used for Mesh2Mesh interpolation.
- The routine can be given a number of parameters to be interpolated.
- Works in parallel, at least with the same number of partitions.
- Undocumented solver.

ModelPDEevol
49
------------
50
- Module only applied in the solver of the same name
51
- Uses keyword handles and multilthreaded assembly
52
- Ideal solver as a basis for own developments
53
54
55
OpenFOAM2ElmerIO
56
----------------
57
- A file based coupler for importing fields from OpenFOAM to Elmer.
58
- The interpolation is carried out in Elmer using Galerkin method with diffusion for regularization.
59
- Elmer reads the data from files in OpenFOAM format.
60
- For optimal performance study the EOF library
61
62
63
Elmer2OpenFOAMIO
64
----------------
65
- A file based coupler for exporting fields from Elmer to OpenFOAM
66
- Interpolation is carried out in Elmer on cell centerpoints written by OpenFOAM.
67
- Elmer writes a file in OpenFOAM format.
68
- For optimal performance study the EOF library.
69
- Joined documentation with the previous routine in Models Manual
70
71
72
Enhanced Solver Modules
=======================

ElasticSolver
-------------
o A somewhat limited support for giving the material model in the Abaqus
  UMAT format has been added
o For simple examples, see the test cases UMAT_*; a template for the UMAT
  subroutine is currently contained in the solver code
o See also the updated version of the solver documentation (Elmer Models
  Manual, Ch 6)

ShellSolver
-----------
o A major revision of the shell solver has been done; it can now handle
  geometrically nonlinear problems
o See also the updated version of the solver documentation (Elmer Models
  Manual, Ch 7)

MagnetoDynamics
---------------
- Added simple regularization to steady and transient cases
- Enable tree gauging in parallel runs. The gauge tree is still constructed sequentially,
  but only once in a single run.
- Option to apply "Gauge Tree" only to the non-conducting region.
- Lorentz velocity term for the 3D WhitneyAVHarmonicSolver.

MagnetoDynamics2D
-----------------
- Zirka-Moroz hysteresis model for MagnetoDynamics2D
  o Zirka, S., Moroz, Y.I., Harrison, R.G. & Chiesa, N. (2014).
    Inverse Hysteresis Models for Transient Simulation. IEEE Transactions on
    Power Delivery, 29, 552-559. doi:10.1109/TPWRD.2013.2274530
  o Test cases:
    * circuits2D_with_hysteresis: 2D test with circuit
    * circuits2D_with_hysteresis_axi: 2D axisymmetric test with circuit
    * Zirka: unit test that tries to recover the hysteretic BH curve from an FE simulation.

VectorHelmholtz
---------------
- Enabled the solver to use quadratic edge elements
- Some streamlining of the code
- New BC: the keyword `TEM potential` defines a quantity whose gradient is used as a Neumann load.

CoilSolver
----------
- For closed coils there is a narrower band where the jump BCs are set

ParticleDynamics
----------------
- Enable the module to use different types of particles with different properties

ParticleAdvector
----------------
- Fixes for parallel operation
- Enable elemental and DG result fields to eliminate problems related to interfaces.

SaveLine
--------
- Enable use of SaveLine for edge element fields

SaveScalars
-----------
- New operators: 'rms' (root-mean-square), 'int square', 'int square mean'
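
  For example, a hedged sketch of a SaveScalars solver section using the new
  'rms' operator (the variable name Temperature, the solver number, and the
  file name are illustrative only, not from an actual test case):
  ```
  Solver 2
    Equation = SaveScalars
    Procedure = "SaveData" "SaveScalars"
    Filename = "scalars.dat"
    ! Compute the root-mean-square of the (assumed) Temperature field
    Variable 1 = Temperature
    Operator 1 = rms
  End
  ```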


VtuOutputSolver
---------------
- Enable saving of elemental and ip fields
- If DG-type output is requested, ip fields are fitted on the fly by solving a small linear system


ElmerSolver library functionality
=================================


Lua support for ElmerSolver sif files
-------------------------------------
- Includes the Lua 5.1.5 interpreter inside the Elmer codebase under
  contrib/lua-5.1.5
- Enabled in compilation with the cmake variable `WITH_LUA`.
  o Setting `USE_SYSTEM_LUA=TRUE` makes cmake look for a system Lua.
  o Setting the CMake variables `LUA_LIBRARIES` and `LUA_INCLUDE_DIR` disables
    the cmake search for Lua.
- Enables Lua expressions inside the sif file in the following cases
  o Inline syntax with `#<lua expression>#` or `#<lua expression>`,
    similarly to matc
  o Commented sections in the main sif file as follows
    ```
    !---LUA BEGIN
    ! <first line of lua code>
    ! <second line of lua code>
    ! ...
    ! <last line of lua code>
    !---LUA END
    ```
    Such code blocks are executed prior to reading the rest of
    the sif file. Thus, such code blocks are not executed in included
    files.
  o Using variable dependent keyword evaluations:
    ```
    keyword = variable var_a, var_b, var_c, ..., var_n
      real lua "expr"
    tensor keyword (i,j) = variable var_a, var_b, var_c, ..., var_n
      real lua "expr 1, expr 2, expr 3, ..., expr i*j"
    ```
    Here the entries of the `tensor keyword` are given in row-wise
    order.
- Should work in threaded mode too.
- Includes 2 tests: `Lua` and `KeywordUnitTestLua`
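
  As a hedged illustration of combining the two Lua mechanisms, a minimal sif
  fragment might look as follows (the variable names `rho0` and `scale` and the
  Material section are hypothetical, not taken from an actual test case):
  ```
  !---LUA BEGIN
  ! rho0 = 1000.0
  ! scale = 2.0
  !---LUA END

  Material 1
    ! Inline Lua expression evaluated when the keyword is read
    Density = Real #rho0*scale#
  End
  ```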


Multithreading and vectorization
--------------------------------
- Multithreading added to many parts of the library code
  o HUTI CG, BiCGStab, HUTI GMRES and BiCGStabL (double precision versions)
  o SparMatrixVector in SParIterSolver
  o Norm computation & matrix scaling
  o Matrix creation and bandwidth computation
  o Modifications to enable experimental multithreaded startup:
    Multithreaded Startup = Logical <boolean>
  o The boundary mesh is now also colored when MultiColour Solver is set to True.
    Colour index lists for the boundary mesh are available in
    Mesh % BoundaryColourIndexList, similarly to the regular colour index lists.
  o Added a partial implementation of the ListMatrixArray type as a thread-safe
    replacement for ListMatrix
  o Completely removed locking from the critical path in FE assembly
  o Improved NUMA locality by initializing the system matrix in parallel with threads
  o Improved NUMA locality by making the Gauss points structure private to each thread.
  o ElementInfoVec and the LinearForms module now use stack storage for work space.
  o GetCurrentElement and SetCurrentElement modified to give out correct values
    when called from within parallel regions.
  o Modified CheckElementEquation to be thread safe.
  o Test cases: added multithreaded and mesh-colored versions of ModelPDE.

- SIMD improvements:
  o Added linear forms for (grad u, v), (u,v) in H^1 to the LinearForms module.
  o Added testing of the (GradU,V) and (U,V) linear forms to the LinearFormsAssembly
    test (only constant coefficients).
  o SIMD efficiency improvements to GetElementNOFDOFs.
  o Improved SIMD performance of ElementInfoVec, ElementMetricVec and
    LinearForms for a small number of Gauss points.
  o Significantly improved the performance of CRS_GlueLocalMatrixVec by switching
    to an alternative algorithm and introducing software prefetching.
  o H1Basis has been refactored to avoid register spilling arising from
    accessing unaligned multidimensional arrays.


Block preconditioning
---------------------
- Block treatment has two main uses
  o Split up monolithic equations into subproblems that are easier to solve
  o Combine linear multiphysical coupled problems into a block matrix
  o These are usually best solved with an outer Krylov iteration using GCR

- Implemented new experimental ways to split an existing linear system
  o Into Re and Im blocks for complex solvers
  o Into horizontal and vertical (hor-ver) degrees of freedom for lowest order edge elements
  o Into Cartesian directions for fully Cartesian edge element meshes
  o These might not be fully operational, particularly in parallel
  o Constraints are dealt with as additional blocks

- FSI: Implemented ways to combine fluid and structure solvers to form a block matrix
  o Library routines used to create the coupling matrices
  o Limited to nodal degrees of freedom
  o Currently linear solvers are assumed for the different fields
  o Test cases exist for a number of combinations of structure and fluid solvers
  o The structure solver can be a linear plate solver, shell solver, or stress solver
  o The fluid solver can be a Helmholtz solver, for example


Linear solver features
----------------------
- Pseudocomplex GCR implemented, to be used mainly in conjunction with block preconditioners
- Enable any transient solver to be solved as harmonic when the > Harmonic Mode = True < keyword is given.
  o The harmonic field is also named for visualization
  o Can be used in conjunction with block preconditioning
- The Hypre interface can adapt the tolerance during the solution procedure even though
  a previously constructed Hypre solver is utilized. This feature depends on
  giving the command Linear System Adaptive Tolerance = True.
- Added the > Linear System Min Iterations < parameter to the HutIter structures and applied it
  to the built-in GCR iterative method. Sometimes block preconditioning needs more than one iteration.
- Linear solver strategy that uses namespaces to try out different strategies until convergence is reached.
  Currently only the built-in Krylov methods and direct methods are supported.
  Activated by >Linear System Trialing = Logical True< together with the linear solver strategies given in namespaces.
- When using >filename numbering< in SaveScalars, the file number is updated only on the first visit to the subroutine.
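
  A minimal sketch of the harmonic option for a transient solver. The keyword
  placement in the Solver section and the use of a Frequency keyword are
  assumptions based on typical Elmer harmonic analyses, not verified against
  this release:
  ```
  Solver 1
    ! Solve the otherwise transient equation as a harmonic one
    Harmonic Mode = Logical True
    ! Assumed: driving frequency given as in other harmonic analyses
    Frequency = Real 50.0
  End
  ```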

Dirichlet conditions
--------------------
- Dirichlet conditions were totally reformulated
  o Separates the detection and assigning of the conditions
  o Makes many follow-up steps easier, e.g. optimal scaling of the linear system
- Some fixes for nontrivial boundary conditions for p-elements


Non-nodal field types
---------------------
- Better support for different field types
  o nodal, elemental, DG, ip
  o New types supported for exported variables
  o ListGetElementalReal operations may depend on these variable types
  o Ip fields may depend on adaptive Gaussian quadratures
  o Initialization improved for non-nodal fields
  o A mask may be used to make an exported variable active only in the subsection where the primary solver is active.
  o The mask is a logical keyword in a Body Force section, given by the keyword >Exported Variable i Mask<

Derived fields
--------------
- Restructured the computation of derived fields
- Enable exported variables to be transient and their velocities to be saved


Adaptive quadratures
--------------------
- GaussPointsAdapt implemented to allow higher integration rules within a band


Reduced basis DG
----------------
- Enable solution of PDEs using "reduced basis DG" where discontinuities are present only between bodies.
  This allows for a more economical treatment of discontinuities than a complete discontinuous
  Galerkin method.
- It is also more flexible than having the discontinuity created in the mesh.
  There are multiple ways in which the bodies can be grouped when creating discontinuities between them.


Zoltan interface
----------------
- Added a preliminary Zoltan interface to allow parallel mesh repartitioning for purposes of load
  balancing/initial mesh distribution.
- Allows serial meshes to be parallelised within Elmer and distributed to processors.
- Also allows load rebalancing during simulations following mesh modification.
- Still work in progress.


MMG interface
-------------
- Added a library interface to MMG3D for performing remeshing and mesh adaptation via its API.
- 3D meshes can be adapted based on user-defined metrics, and domain geometry can be chopped (e.g. fracture events)
  using a level set method. MMG functions are serial only, but routines have been added to isolate subregions of
  the mesh requiring adaptation, reducing the overall computational cost.
- Still under development.


Internal partitioning
---------------------
- Routines allow skipping the separate partitioning step with ElmerGrid
- Does not deal with halos yet.
- Either a geometric routine, or any routine provided by Zoltan.
- Keywords are passed to Zoltan via the namespace 'zoltan:'
- The built-in strategies have limited support for hybrid partitioning


EOF library
-----------
- Separate library developed by Juris Vencels, see https://eof-library.com/
- Some of the general developments of Elmer were motivated by streamlined operation with the EOF library,
  e.g. the improved support for different field types.


Local assembly vectorization
----------------------------
- Added a (hopefully temporary) unvectorized p-pyramid to `ElementInfoVec`.
- Added `LinearForms_UdotV` in `LinearForms.F90`


Miscellaneous
-------------
- Enable a _post solver slot for single solvers to allow cleaner routines.
- Enable the > Output Directory < keyword also in the Simulation section such that SaveScalars,
  SaveLine, and VtuOutputSolver can use a common target directory more easily.
- Enable using a solver-specific mesh with a number of partitions different from the number of MPI tasks.
- Read command line arguments also when there is more than one MPI task.
  This should make the ELMERSOLVER_STARTINFO file superfluous.
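
  A sketch of the shared target directory, assuming the keyword takes a plain
  directory string (the directory name "results" is illustrative):
  ```
  Simulation
    ! Common target directory for SaveScalars, SaveLine and VtuOutputSolver
    Output Directory = "results"
  End
  ```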



ElmerGrid
=========
- Support for Gmsh version 4 import

- Fixes and enhancements to mesh formats
  o Abaqus format (.inp) in case mesh parts are used
  o Prism added to Ansys import
  o Fixed ordering of the quadratic triangle in the universal format (.unv)

- Modifications in partitioning routines
  o Updated to use the fresh Metis v. 5.1.0 (after it was released with a suitable license)
  o Enable contiguous Metis partitioning on request
  o Added tests for partitioning
  o Removed writing of parallel orphan nodes after ElmerSolver communicates Dirichlet nodes

ElmerGUI
========
- Add Paraview icon to ElmerGUI (with permission from Kitware).
- Add stress computation to ElmerGUI for nonlinear solvers
- Add xml file for the harmonic AV solver.
- ElmerGUI uses a new version of OCC, without vtk-post
- Ensure that the ElmerGUI locale follows that of the C language.
- Modify defaults of linear solvers in ElmerGUI


MATC
====
- Added internal function "env" such that in sif files one can use
    str = env("name")
  o The routine makes it possible to pass environment variables to ElmerSolver
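
  A hedged example of passing an environment variable into a sif file via
  MATC (the variable names RHO0 and rho0 are illustrative; env returns a
  string that MATC is assumed to coerce to a number when used in a Real
  keyword):
  ```
  ! Read the (hypothetical) RHO0 environment variable at parse time
  $rho0 = env("RHO0")

  Material 1
    Density = Real MATC "rho0"
  End
  ```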


Configuration & Compilation
===========================

- cmake improvements:
  o Create the pkg-config file `elmer.pc` if the cmake variable
    `CREATE_PKGCONFIG_FILE` is set to true
  o The `elmer.pc` file is installed under
    `${CMAKE_INSTALL_PREFIX}/share/pkgconfig` by default. The install path
    can be changed with the cmake variable `PKGCONFIG_PC_PATH`
  o Improved the way CMake detects and uses BLAS/LAPACK from the Intel Math
    Kernel Library. BLAS and LAPACK routines from Intel MKL are now used by default if
    detected by FindMKL, unless the BLAS_LIBRARIES and LAPACK_LIBRARIES CMake
    variables have been set.
  o Added detection of OpenMP SIMD features within the build
    system. Added routines for checking the existence and the
    functionality of the used OMP SIMD features in the code.
  o Make the suitesparse solvers cholmod & spqr usable also when "USE_ISO_C_BINDINGS"
    is true at compile time.
  o Added global preprocessor macros to allow OpenMP SIMD functionality
    to be disabled if needed.
  o Included the Elmer/Ice library in the elmerf90 command when compiled with

- ctest improvements:
  o Output stdout/stderr if the CTEST_OUTPUT_ON_FAILURE=1 environment variable is set

- Dockerfile added to promote easy cross-platform builds of Elmer/Ice (by nwrichmond)
  o The Dockerfile shows the recipe for a Docker image which runs Elmer/Ice in
    a lightweight Ubuntu Linux environment. This way, anyone can use Elmer/Ice
    whether they are on a Windows, Mac, or Linux platform - all they need to do
    is install Docker, then follow the instructions laid out in the Docker Hub
    description for the Elmer/Ice Docker image:

Elmer/Ice
=========
New features in Elmer/Ice are documented in elmerfem/elmerice/ReleaseNotes/release_elmerice_8.4.txt



Acknowledgements
================
Apart from the core Elmer team at CSC (Juhani K., Mika M., Juha R., Peter R., Thomas Z.),
the git log shows contributions from Mikko B., Eelis T., Fabien G.-C., Olivier G., Janne K.,
Joe T., Nick R., Juris V., Pavel P., and Sami I.

Additionally, there are many ongoing developments in several branches
that have not been merged to this release, and are therefore not covered here.
Sometimes code has also been passed on by the original author by means other than
git, and in such cases the names may have been accidentally omitted.

The contribution of all developers is gratefully acknowledged.