Tutorial: Solution of the heat equation with Neumann boundary conditions
Similar to the tutorial on linear advection, we demonstrate how to solve a conservative production-destruction system (PDS) resulting from a PDE discretization, and how to improve the performance of the implementation.
Definition of the conservative production-destruction system
Consider the heat equation
\[\partial_t u(t,x) = \mu \partial_x^2 u(t,x),\quad u(0,x)=u_0(x),\]
with $μ ≥ 0$, $t≥ 0$, $x\in[0,1]$, and homogeneous Neumann boundary conditions. We use a finite volume discretization, i.e., we split the domain $[0, 1]$ into $N$ uniform cells of width $\Delta x = 1 / N$. As degrees of freedom, we use the mean values of $u(t)$ in each cell approximated by the point value $u_i(t)$ in the center of cell $i$. Finally, we use the classical central finite difference discretization of the Laplacian with homogeneous Neumann boundary conditions, resulting in the ODE
\[\partial_t u(t) = L u(t), \quad L = \frac{\mu}{\Delta x^2} \begin{pmatrix} -1 & 1 \\ 1 & -2 & 1 \\ & \ddots & \ddots & \ddots \\ && 1 & -2 & 1 \\ &&& 1 & -1 \end{pmatrix}.\]
The system can be written as a conservative PDS with production terms
\[\begin{aligned} &p_{i,i-1}(t,\mathbf u(t)) = \frac{\mu}{\Delta x^2} u_{i-1}(t),\quad i=2,\dots,N, \\ &p_{i,i+1}(t,\mathbf u(t)) = \frac{\mu}{\Delta x^2} u_{i+1}(t),\quad i=1,\dots,N-1, \end{aligned}\]
and destruction terms $d_{i,j} = p_{j,i}$. In addition, all production and destruction terms not listed are zero.
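To see that this PDS indeed reproduces the linear ODE above, we can check for a small $N$ that $\sum_j \bigl(p_{i,j}(t, \mathbf u) - d_{i,j}(t, \mathbf u)\bigr) = (L \mathbf u)_i$. The following is a minimal sketch using only the standard library; the names `N`, `u`, and `P` are illustrative and not part of the tutorial code.

```julia
using LinearAlgebra

# illustrative check for a small system; N, μ, u are arbitrary here
N = 5
μ = 1.0e-2
Δx = 1 / N
μ_Δx2 = μ / Δx^2
u = rand(N)

# Laplacian with homogeneous Neumann boundary conditions
L = Tridiagonal(fill(μ_Δx2, N - 1),
                [-μ_Δx2; fill(-2 * μ_Δx2, N - 2); -μ_Δx2],
                fill(μ_Δx2, N - 1))

# production matrix with entries P[i, j] = p_{i,j}
P = zeros(N, N)
for i in 2:N
    P[i, i - 1] = μ_Δx2 * u[i - 1]
end
for i in 1:(N - 1)
    P[i, i + 1] = μ_Δx2 * u[i + 1]
end

# du_i = ∑_j (p_{i,j} - d_{i,j}) with d_{i,j} = p_{j,i}
du = vec(sum(P; dims = 2)) - vec(sum(P; dims = 1))
du ≈ L * u # true
```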
Solution of the conservative production-destruction system
Now we are ready to define a ConservativePDSProblem
and to solve this problem with a method of PositiveIntegrators.jl or OrdinaryDiffEq.jl. In the following, we use $N = 100$ cells and the time domain $t \in [0,1]$. Moreover, we choose the initial condition
\[u_0(x) = \cos(\pi x)^2.\]
x_boundaries = range(0, 1, length = 101)
x = x_boundaries[1:end-1] .+ step(x_boundaries) / 2
u0 = @. cospi(x)^2 # initial solution
tspan = (0.0, 1.0) # time domain
We will choose three different matrix types for the production terms and the resulting linear systems:
- standard dense matrices (default)
- sparse matrices (from SparseArrays.jl)
- tridiagonal matrices (from LinearAlgebra.jl)
Standard dense matrices
using PositiveIntegrators # load ConservativePDSProblem
function heat_eq_P!(P, u, μ, t)
    fill!(P, 0)
    N = length(u)
    Δx = 1 / N
    μ_Δx2 = μ / Δx^2

    let i = 1
        # Neumann boundary condition
        P[i, i + 1] = u[i + 1] * μ_Δx2
    end

    for i in 2:(length(u) - 1)
        # interior stencil
        P[i, i - 1] = u[i - 1] * μ_Δx2
        P[i, i + 1] = u[i + 1] * μ_Δx2
    end

    let i = length(u)
        # Neumann boundary condition
        P[i, i - 1] = u[i - 1] * μ_Δx2
    end

    return nothing
end
μ = 1.0e-2
prob = ConservativePDSProblem(heat_eq_P!, u0, tspan, μ) # create the PDS
sol = solve(prob, MPRK22(1.0); save_everystep = false)
using Plots
plot(x, u0; label = "u0", xguide = "x", yguide = "u")
plot!(x, last(sol.u); label = "u")
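Since the PDS is conservative, the total discrete mass $\Delta x \sum_i u_i$ should be preserved by the time integration up to floating-point roundoff. Continuing from the code above (this check reuses `x_boundaries`, `u0`, and `sol`), a quick sanity check might look as follows.

```julia
# total discrete mass before and after the time integration
Δx = step(x_boundaries)
mass_initial = Δx * sum(u0)
mass_final = Δx * sum(last(sol.u))
mass_initial ≈ mass_final # should be true up to roundoff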
Sparse matrices
To use different matrix types for the production terms and linear systems, you can use the keyword argument p_prototype of ConservativePDSProblem and PDSProblem.
using SparseArrays
p_prototype = spdiagm(-1 => ones(eltype(u0), length(u0) - 1),
                      +1 => ones(eltype(u0), length(u0) - 1))
prob_sparse = ConservativePDSProblem(heat_eq_P!, u0, tspan, μ;
                                     p_prototype = p_prototype)
sol_sparse = solve(prob_sparse, MPRK22(1.0); save_everystep = false)
plot(x, u0; label = "u0", xguide = "x", yguide = "u")
plot!(x, last(sol_sparse.u); label = "u")
Tridiagonal matrices
The sparse matrices used in this case have a very special structure since they are in fact tridiagonal matrices. Thus, we can also use the special matrix type Tridiagonal from the standard library LinearAlgebra.
using LinearAlgebra
p_prototype = Tridiagonal(ones(eltype(u0), length(u0) - 1),
                          ones(eltype(u0), length(u0)),
                          ones(eltype(u0), length(u0) - 1))
prob_tridiagonal = ConservativePDSProblem(heat_eq_P!, u0, tspan, μ;
                                          p_prototype = p_prototype)
sol_tridiagonal = solve(prob_tridiagonal, MPRK22(1.0); save_everystep = false)
plot(x, u0; label = "u0", xguide = "x", yguide = "u")
plot!(x, last(sol_tridiagonal.u); label = "u")
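All three problem formulations describe the same ODE; only the storage of the production matrix differs. Hence the computed solutions should typically agree up to small floating-point differences introduced by the different linear solvers. Assuming `sol`, `sol_sparse`, and `sol_tridiagonal` from above, a quick comparison might look as follows.

```julia
# the final solutions should agree up to floating-point differences
last(sol.u) ≈ last(sol_sparse.u) && last(sol.u) ≈ last(sol_tridiagonal.u)
```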
Performance comparison
Finally, we use BenchmarkTools.jl to compare the performance of the different implementations.
using BenchmarkTools
@benchmark solve(prob, MPRK22(1.0); save_everystep = false)
BenchmarkTools.Trial: 1085 samples with 1 evaluation per sample.
Range (min … max): 4.228 ms … 28.333 ms ┊ GC (min … max): 0.00% … 0.00%
Time (median): 4.405 ms ┊ GC (median): 0.00%
Time (mean ± σ): 4.605 ms ± 994.576 μs ┊ GC (mean ± σ): 3.36% ± 5.10%
█▆▃▄▄▃ ▁▆▄▁
███████▅▁█████▆▅▇▅▁▄▄▁▁▁▅▁▁▁▄▁▄▁▄▁▁▁▁▁▁▁▁▁▁▁▁▅█▇▆▅▅▄▁▁▄▁▁▁▄ █
4.23 ms Histogram: log(frequency) by time 7.11 ms <
Memory estimate: 5.11 MiB, allocs estimate: 375.
@benchmark solve(prob_sparse, MPRK22(1.0); save_everystep = false)
BenchmarkTools.Trial: 1509 samples with 1 evaluation per sample.
Range (min … max): 3.043 ms … 16.109 ms ┊ GC (min … max): 0.00% … 3.51%
Time (median): 3.102 ms ┊ GC (median): 0.00%
Time (mean ± σ): 3.311 ms ± 633.636 μs ┊ GC (mean ± σ): 3.72% ± 5.41%
▅█▇▅▅▄▁▁ ▄▅▄▂▁
████████▆▆▄▁▁▁▁▁▁▅▁▁▁▁▁▁▁▁▁▁▁▄▅▅██████▆▅▅▅▄▅▄▄▅▅▁▁▁▁▁▄▁▁▁▁▄ █
3.04 ms Histogram: log(frequency) by time 4.27 ms <
Memory estimate: 5.06 MiB, allocs estimate: 2760.
By default, we use an LU factorization for the linear systems. At the time of writing, Julia uses SparseArrays.jl in this case, which defaults to UMFPACK from SuiteSparse. However, the linear systems here do not necessarily have the structure for which UMFPACK is optimized. Thus, it is often possible to gain performance by switching to KLU instead.
using LinearSolve
@benchmark solve(prob_sparse, MPRK22(1.0; linsolve = KLUFactorization()); save_everystep = false)
BenchmarkTools.Trial: 7289 samples with 1 evaluation per sample.
Range (min … max): 610.650 μs … 15.676 ms ┊ GC (min … max): 0.00% … 72.18%
Time (median): 642.239 μs ┊ GC (median): 0.00%
Time (mean ± σ): 683.426 μs ± 413.279 μs ┊ GC (mean ± σ): 3.74% ± 6.18%
▇█▄▂ ▃▁ ▁
███████▆▅▄▁▃▁▄▃▁▁▃▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▃▁▃▃▁▁▁▁▁▃▃ █
611 μs Histogram: log(frequency) by time 2.19 ms <
Memory estimate: 317.51 KiB, allocs estimate: 612.
@benchmark solve(prob_tridiagonal, MPRK22(1.0); save_everystep = false)
BenchmarkTools.Trial: 10000 samples with 1 evaluation per sample.
Range (min … max): 215.532 μs … 30.000 ms ┊ GC (min … max): 0.00% … 99.13%
Time (median): 228.988 μs ┊ GC (median): 0.00%
Time (mean ± σ): 270.944 μs ± 505.605 μs ┊ GC (mean ± σ): 6.36% ± 7.37%
██
██▇▃▂▂▂▂▂▁▁▁▂▂▆▆▃▂▂▂▂▂▂▂▂▂▂▂▁▁▁▁▁▂▁▁▁▁▁▂▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▂▂ ▂
216 μs Histogram: frequency by time 738 μs <
Memory estimate: 300.09 KiB, allocs estimate: 834.
Package versions
These results were obtained using the following versions.
using InteractiveUtils
versioninfo()
println()
using Pkg
Pkg.status(["PositiveIntegrators", "SparseArrays", "KLU", "LinearSolve", "OrdinaryDiffEq"],
           mode = PKGMODE_MANIFEST)
Julia Version 1.11.3
Commit d63adeda50d (2025-01-21 19:42 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Linux (x86_64-linux-gnu)
CPU: 4 × AMD EPYC 7763 64-Core Processor
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, znver3)
Threads: 1 default, 0 interactive, 1 GC (on 4 virtual cores)
Environment:
JULIA_PKG_SERVER_REGISTRY_PREFERENCE = eager
Status `~/work/PositiveIntegrators.jl/PositiveIntegrators.jl/docs/Manifest.toml`
[ef3ab10e] KLU v0.6.0
[7ed4a6bd] LinearSolve v2.38.0
[1dea7af3] OrdinaryDiffEq v6.90.1
[d1b20bf0] PositiveIntegrators v0.2.6 `~/work/PositiveIntegrators.jl/PositiveIntegrators.jl`
[2f01184e] SparseArrays v1.11.0