The code assumes that some temperature is applied evenly across the leftmost boundary and then calculates the resulting temperature gradient at equilibrium. For small node counts the gradient can be viewed in real time, though this incurs considerable slowdown (see the animate parameter of the Heat2d constructor). To run a simple case with default values and a single solver, use:
```
cd heatSolverPy/src
python3 main.py
```
To customise, use the arguments below (the options section shows the default values):
```
python3 main.py [-h] [--height HEIGHT] [--length LENGTH] [--thickness THICKNESS] [--alpha ALPHA] [--kappa KAPPA] [--nodes NODES]

options:
  -h, --help   show this help message and exit
  --height     1
  --length     0.025
  --thickness  0.01
  --alpha      1.27e-4
  --kappa      318
  --nodes      20
```
To run multiple solvers, use mpirun as below:
```
# Two solvers with exactly the same arguments
mpirun -np 2 python3 main.py --length 0.025

# Two solvers with differing arguments
mpirun -np 1 python3 main.py --length 0.025 : -np 1 python3 main.py --length 0.05 --kappa 636 --alpha 2.55e-4
```
This project uses numpy, matplotlib, argparse, mpi4py, and pytest. To install these use spack:
```
spack install py-<insert package name>
```
The heat equation in 2 dimensions is given by the PDE below, where $T$ is temperature, $t$ is time, and $\alpha$ is the thermal diffusivity:

$$\frac{\partial T}{\partial t} = \alpha \left( \frac{\partial^2 T}{\partial x^2} + \frac{\partial^2 T}{\partial y^2} \right)$$
We can solve this numerically using the Finite Difference method, which relies on the approximation:

$$\frac{\partial f}{\partial t} \approx \frac{f(t + \Delta t) - f(t)}{\Delta t}$$
This comes from the first-principles definition of the derivative but can also be derived from the Taylor series expansion of a function. See here or here for a full walkthrough of the discretisation in time and space.
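As a quick numerical illustration of this approximation (a standalone sketch, not part of the solver):

```python
import math

def forward_diff(f, x, h):
    """Forward-difference quotient: f'(x) ≈ (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

# The error of this first-order approximation shrinks roughly linearly with h.
for h in (1e-1, 1e-2, 1e-3):
    approx = forward_diff(math.sin, 1.0, h)  # exact derivative is cos(1) ≈ 0.5403
    print(f"h={h:g}  approx={approx:.6f}  error={abs(approx - math.cos(1.0)):.2e}")
```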
Applying the discretisation we get the following discretised expressions for the temperature gradient:

$$\frac{\partial T}{\partial t} \approx \frac{T_{i,j}^{n+1} - T_{i,j}^{n}}{\Delta t}, \qquad \frac{\partial^2 T}{\partial x^2} \approx \frac{T_{i+1,j}^{n} - 2T_{i,j}^{n} + T_{i-1,j}^{n}}{\Delta x^2}, \qquad \frac{\partial^2 T}{\partial y^2} \approx \frac{T_{i,j+1}^{n} - 2T_{i,j}^{n} + T_{i,j-1}^{n}}{\Delta y^2}$$

Where $T_{i,j}^{n}$ is the temperature at node $(i, j)$ at timestep $n$.

Subbing our approximations into the heat equation we get:

$$\frac{T_{i,j}^{n+1} - T_{i,j}^{n}}{\Delta t} = \alpha \left( \frac{T_{i+1,j}^{n} - 2T_{i,j}^{n} + T_{i-1,j}^{n}}{\Delta x^2} + \frac{T_{i,j+1}^{n} - 2T_{i,j}^{n} + T_{i,j-1}^{n}}{\Delta y^2} \right)$$

Rearranging:

$$T_{i,j}^{n+1} = T_{i,j}^{n} + \frac{\alpha \Delta t}{\Delta x^2} \left( T_{i+1,j}^{n} - 2T_{i,j}^{n} + T_{i-1,j}^{n} \right) + \frac{\alpha \Delta t}{\Delta y^2} \left( T_{i,j+1}^{n} - 2T_{i,j}^{n} + T_{i,j-1}^{n} \right)$$

Let $\gamma_x = \frac{\alpha \Delta t}{\Delta x^2}$ and $\gamma_y = \frac{\alpha \Delta t}{\Delta y^2}$, so that:

$$T_{i,j}^{n+1} = T_{i,j}^{n} + \gamma_x \left( T_{i+1,j}^{n} - 2T_{i,j}^{n} + T_{i-1,j}^{n} \right) + \gamma_y \left( T_{i,j+1}^{n} - 2T_{i,j}^{n} + T_{i,j-1}^{n} \right)$$
This is an expression for the temperature at the next timestep entirely in terms of known variables. We can compute it iteratively to solve for the temperature at a given time (given we know some initial state).
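This explicit (FTCS) update can be sketched in NumPy as below. This is a minimal illustration of the scheme, not the project's actual Heat2d implementation, and the variable names are assumptions:

```python
import numpy as np

def ftcs_step(T, alpha, dt, dx, dy):
    """One explicit (FTCS) update of the interior nodes; boundaries stay fixed."""
    gx = alpha * dt / dx**2   # gamma_x
    gy = alpha * dt / dy**2   # gamma_y
    Tn = T.copy()
    # T^{n+1} = T^n + gx * (second difference in x) + gy * (second difference in y)
    T[1:-1, 1:-1] = (Tn[1:-1, 1:-1]
                     + gx * (Tn[2:, 1:-1] - 2 * Tn[1:-1, 1:-1] + Tn[:-2, 1:-1])
                     + gy * (Tn[1:-1, 2:] - 2 * Tn[1:-1, 1:-1] + Tn[1:-1, :-2]))
    return T

# Example: hold the left edge hot and iterate from a uniform initial state.
T = np.full((20, 20), 273.15)
T[0, :] = 373.15  # Dirichlet condition on the leftmost boundary
for _ in range(1000):
    ftcs_step(T, alpha=1.27e-4, dt=1e-4, dx=0.01, dy=0.01)
```

Note that the explicit scheme is only stable when $\gamma_x + \gamma_y \le \tfrac{1}{2}$, which constrains the timestep for a given node spacing.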
This project uses MUI to couple multiple solvers together. There are two possible coupling conditions that can be enforced with similar performance.
Let $T_1$ and $T_2$ be the temperatures fetched from either side of the interface, and let $c$ be the convergenceCoefficient.

This is the simplest interface condition: we enforce the same Dirichlet boundary condition on both sides of the interface. Namely:

$$T_\Gamma = c\,T_1 + (1 - c)\,T_2$$

This is a weighted average of $T_1$ and $T_2$, the temperatures fetched from the two solvers.
Here we impose a Dirichlet Condition (fixedValue) on one side and a Neumann Condition (fixedGradient) on the other. Supposing side 1 imposes the Dirichlet condition and side 2 the Neumann condition, each side's boundary value at the next iteration is built from the other side's value at the current iteration:

$$T_1^{k+1} = c\,T_2^{k} + (1 - c)\,T_1^{k}$$

Here the superscript denotes the iteration number the given temperature is from.
Each side decides which kind of condition it will impose based on the convergenceCoefficient variable. If the Neumann side has a convergenceCoefficient greater than 1 we get blow-up, since we recursively multiply by the convergenceCoefficient. Since one side must have a convergenceCoefficient smaller than 1, we always set that side to impose the Neumann Condition.
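The rule above can be sketched as follows (a hypothetical helper, not the project's actual code; the function name is an assumption):

```python
def pick_interface_condition(convergence_coefficient):
    """The side whose coefficient is below 1 imposes the Neumann
    (fixedGradient) condition: repeatedly multiplying by a factor < 1
    contracts the error, while a factor > 1 blows it up."""
    if convergence_coefficient < 1.0:
        return "fixedGradient"  # Neumann side
    return "fixedValue"         # Dirichlet side
```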
The validation cases are based on this site. Testing is implemented with Pytest and can be run using:
```
# set X to the number of solvers you wish to test with
mpirun -np X pytest tests.py
```
To run use:
```
mpirun -np 1 python3 main.py --length 0.025 --nodes 100 : -np 1 python3 main.py --length 0.025 --alpha 2.5e-4 --kappa 636 --nodes 100
```
- Layer length: $x_1 = 0.025$ m, $x_2 = 0.025$ m
- Layer height: 1 m (this is irrelevant in practice since flux at the top and bottom is always 0)
- Initial temperature: $T_1 = 373.15$ K
- Final temperature: $T_3 = 273.15$ K
- Thermal diffusivity: $\alpha_1 = 1.27 \times 10^{-4}$ m²/s (pure gold), $\alpha_2 = 2\alpha_1 = 2.54 \times 10^{-4}$ m²/s
- Thermal conductivity: $k_1 = 318$ W/(m·K), $k_2 = 2k_1 = 636$ W/(m·K)
- Simulation time: 20 s
- Number of nodes: 20,000 (100×100×2)
Theoretically, the gradients should satisfy flux continuity at the interface:

$$k_1 \nabla T_1 = k_2 \nabla T_2$$

We have:

$$\nabla T_1 \, x_1 + \nabla T_2 \, x_2 = T_1 - T_3 = 100 \text{ K}, \qquad \nabla T_1 = \frac{k_2}{k_1} \nabla T_2 = 2 \nabla T_2$$

Thus we expect gradients of $\nabla T_1 \approx 2666.7$ K/m and $\nabla T_2 \approx 1333.3$ K/m.

Furthermore, the measured value of the interface temperature should be $T_2 = T_1 - \nabla T_1 \, x_1 \approx 306.48$ K.
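The expected steady-state values can be reproduced with a few lines of Python (a sketch of the series-conduction arithmetic, using the parameters listed above):

```python
# Two layers in series at steady state: the heat flux k * grad(T) is
# continuous across the interface, and the temperature drops over the
# two layers sum to T1 - T3.
k1, k2 = 318.0, 636.0           # thermal conductivities, W/(m·K)
x1, x2 = 0.025, 0.025           # layer lengths, m
T_hot, T_cold = 373.15, 273.15  # boundary temperatures, K

# k1 * grad1 = k2 * grad2  =>  grad1 = (k2 / k1) * grad2
grad2 = (T_hot - T_cold) / (x1 * k2 / k1 + x2)
grad1 = grad2 * k2 / k1
T_interface = T_hot - grad1 * x1
print(grad1, grad2, T_interface)  # ≈ 2666.7, 1333.3, 306.48
```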




