2 changes: 1 addition & 1 deletion src/03_Preliminaries.jl
@@ -227,7 +227,7 @@ This indeed can be shown to be the case.

- In comparing with (5) we notice the **monomials** $1 = x^0, x = x^1, x^2, x^3, \ldots, x^n$ to be a possible choice for a basis.
- Similar to Euclidean vector spaces this is not the only choice of basis and in fact many families of polynomials are known, which are frequently employed as basis functions (e.g. [Lagrange polynomials](https://en.wikipedia.org/wiki/Lagrange_polynomial), [Chebyshev polynomials](https://en.wikipedia.org/wiki/Chebyshev_polynomials), [Hermite polynomials](https://en.wikipedia.org/wiki/Hermite_polynomials), ...)
- One basis we will discuss in the context of [polynomial interpolation](https://teaching.matmat.org/numerical-analysis/05_Interpolation.html) are Lagrange polynomials, which have the form
- One basis we will discuss in the context of [polynomial interpolation](https://teaching.matmat.org/numerical-analysis/07_Interpolation.html) are Lagrange polynomials, which have the form
```math
\begin{aligned}
L_{\textcolor{red}{i}}(x) &= \prod_{\stackrel{j=1}{\textcolor{red}{j\neq i}}}^{n+1} \frac{x-x_j}{\textcolor{red}{x_i} - x_j} \\
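As an aside, the product formula above translates almost directly into Julia. A minimal sketch, assuming the nodes are stored 1-based in a vector (the name `lagrange_basis` is ours, not from the course material):

```julia
# Evaluate the i-th Lagrange basis polynomial at x for the given nodes,
# i.e. L_i(x) = ∏_{j ≠ i} (x - x_j) / (x_i - x_j)
lagrange_basis(nodes, i, x) =
    prod((x - nodes[j]) / (nodes[i] - nodes[j]) for j in eachindex(nodes) if j != i)

lagrange_basis([0.0, 0.5, 1.0], 2, 0.5)  # returns 1.0, since L_i(x_i) = 1
```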
4 changes: 2 additions & 2 deletions src/04_Nonlinear_equations.jl
@@ -1393,7 +1393,7 @@ md"""
*any iterative procedure*.

We will consider this aspect further,
for example in [Iterative methods for linear systems](https://teaching.matmat.org/numerical-analysis/07_Iterative_methods.html).
for example in [Iterative methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Iterative_methods.html).
"""

# ╔═╡ bdff9554-58b6-466e-9c93-6b1367262b50
@@ -1616,7 +1616,7 @@ end
md"""
Note that the linear system $\textbf{A}^{(k)} \textbf{r}^{(k)} = - \textbf{y}^{(k)}$ is solved in Julia using the backslash operator `\`, which employs a numerically more stable algorithm than explicitly computing the inverse `inv(A)` and then applying it to `y`.
We will discuss these methods in
[Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html).
[Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html).
"""

# ╔═╡ 702ffb33-7fbe-4673-aed7-d985a76b455a
6 changes: 5 additions & 1 deletion src/06_Direct_methods.jl → src/05_Direct_methods.jl
@@ -28,7 +28,7 @@ end
# ╔═╡ ca2c949f-a6a0-485f-bd52-5dae3b050612
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.pdf)
"""

# ╔═╡ 21c9a859-f976-4a93-bae4-616122712a24
@@ -50,6 +50,9 @@ as well as a right-hand side $\mathbf{b} \in \mathbb{R}^n$.
As the solution we seek the unknown $\mathbf{x} \in \mathbb{R}^n$.
"""

# ╔═╡ adb09dc3-a074-4b5f-9757-85c05d22ee83
TODO("polynomial interpolation now comes later")

# ╔═╡ 419d11bf-2561-49ca-a6e7-40c8d8b88b24
md"""
- `nmax = ` $(@bind nmax Slider([5, 10, 12, 15]; default=10, show_value=true))
@@ -2328,6 +2331,7 @@ version = "17.4.0+2"
# ╠═3295f30c-c1f4-11ee-3901-4fb291e0e4cb
# ╟─21c9a859-f976-4a93-bae4-616122712a24
# ╟─b3cb31aa-c982-4454-8882-5b840c68df9b
# ╠═adb09dc3-a074-4b5f-9757-85c05d22ee83
# ╟─be5d3f98-4c96-4e69-af91-fa2ae5f74af5
# ╟─419d11bf-2561-49ca-a6e7-40c8d8b88b24
# ╠═011c25d5-0d60-4729-b200-cdaf3dc89faf
10 changes: 5 additions & 5 deletions src/07_Iterative_methods.jl → src/06_Iterative_methods.jl
@@ -17,7 +17,7 @@ end
# ╔═╡ 63bb7fe9-750f-4d2f-9d18-8374b113373e
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/07_Iterative_methods.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/06_Iterative_methods.pdf)
"""

# ╔═╡ 7d9c9392-3aec-4efd-a9ba-d8965687b163
@@ -357,7 +357,7 @@ md"""
From Theorem 1 we take away that the norm of the iteration matrix
$\|\mathbf{B}\|$ is the crucial quantity to determine not only *if*
Richardson iterations converge, but also *at which rate*.
Recall in Lemma 4 of [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html)
Recall in Lemma 4 of [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html)
we had the result that for any matrix $\mathbf{B} \in \mathbb{R}^{m \times n}$
```math
\tag{5}
@@ -469,7 +469,7 @@ obtained by solving the system $\mathbf{A} \mathbf{x}_\ast = \mathbf{b}$ employi

We are thus in exactly the same setting as our
final section on *Numerical stability* in our discussion
on [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html)
on [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html)
where instead of solving $\mathbf{A} \mathbf{x}_\ast = \mathbf{b}$
we are only able to solve the perturbed system
$\mathbf{A} \widetilde{\textbf{x}} = \widetilde{\mathbf{b}}$.
@@ -478,7 +478,7 @@ $\mathbf{A} \widetilde{\textbf{x}} = \widetilde{\mathbf{b}}$.
# ╔═╡ 55a69e52-002f-40dc-8830-7fa16b7af081
md"""
We can thus directly apply Theorem 2
from [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html), which states that
from [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html), which states that
```math
\frac{\|\mathbf{x}_\ast - \widetilde{\mathbf{x}} \|}{\| \mathbf{x}_\ast \|}
≤ κ(\mathbf{A})
@@ -839,7 +839,7 @@ Importantly there is thus a **relation between optimisation problems** and **sol

# ╔═╡ bf9a171a-8aa4-4f21-bde3-56ccef40de24
md"""
SPD matrices are not unusual. For example, recall that in polynomial regression problems (see least-squares problems in [Interpolation](https://teaching.matmat.org/numerical-analysis/05_Interpolation.html)),
SPD matrices are not unusual. For example, recall that in polynomial regression problems (see least-squares problems in [Interpolation](https://teaching.matmat.org/numerical-analysis/07_Interpolation.html)),
where we wanted to find the best polynomial through the points
$(x_i, y_i)$ for $i=1, \ldots, n$ by minimising the least-squares error,
we had to solve the *normal equations*
14 changes: 11 additions & 3 deletions src/05_Interpolation.jl → src/07_Interpolation.jl
@@ -30,7 +30,7 @@ end
# ╔═╡ 46b46b8e-b388-44e1-b2d8-8d7cfdc3b475
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/05_Interpolation.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/07_Interpolation.pdf)
"""

# ╔═╡ 61e5ef66-a213-4b23-9406-9cc63a58104c
@@ -692,6 +692,9 @@ This is an example of **exponential convergence**: The error of the approximatio
The **graphical characterisation** is similar to the iterative schemes we discussed in the previous chapter: We employ a **semilog plot** (using a linear scale for $n$ and a logarithmic scale for the error), where exponential convergence is characterised by a straight line:
"""

# ╔═╡ 21c98bd4-b3eb-4406-bcd2-0abfbeb9bb93
TODO("'previous chapter' remark likely outdated after pushing interpolation back")

# ╔═╡ d4cf71ef-576d-4900-9608-475dbd4d933a
let
fine = range(-1.0, 1.0; length=3000)
@@ -730,6 +733,9 @@ is one of the **desired properties**.
* If the error scales as $α C^{n}$ where $n$ is some accuracy parameter (with larger $n$ giving more accurate results), then we say the scheme has **exponential convergence**.
"""

# ╔═╡ 647f96ee-c0ad-4bd8-9de1-f24a7dcf6b24
TODO("'Last chapter' reference is likely outdated after pushing interpolation back")

# ╔═╡ a15750a3-3507-4ee1-8b9a-b7d6a3dcea46
md"""
### Stability of polynomial interpolation
@@ -827,7 +833,7 @@ md"""

Since for Chebyshev nodes $\Lambda_n$ stays relatively small, we would call Chebyshev interpolation **well-conditioned**. In contrast, interpolation using equally spaced nodes is **ill-conditioned**: the condition number $\Lambda_n$ can get very large, so **even small input errors can be amplified** and **drastically reduce the accuracy** of the obtained polynomial.

We will meet other condition numbers later in the lecture, e.g. in [Iterative methods for linear systems](https://teaching.matmat.org/numerical-analysis/07_Iterative_methods.html).
We will meet other condition numbers later in the lecture, e.g. in [Iterative methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Iterative_methods.html).
"""

# ╔═╡ 5e19f1a7-985e-4fb7-87c4-5113b5615521
@@ -1954,7 +1960,7 @@ md"""
* The typical approach is to use **Chebyshev nodes**
* These lead to **exponential convergence**

Notice that all of these problems lead to linear systems $\textbf A \textbf x = \textbf b$ that we need to solve. How this can be done numerically we will see in [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html).
Notice that all of these problems lead to linear systems $\textbf A \textbf x = \textbf b$ that we need to solve. How this can be done numerically we will see in [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html).
"""

# ╔═╡ 2240f8bc-5c0b-450a-b56f-2b53ca66bb03
@@ -3307,8 +3313,10 @@ version = "1.4.1+2"
# ╟─25b82572-b27d-4f0b-9be9-323cd4e3ce7a
# ╟─c38b9e48-98bb-4b9c-acc4-7375bbd39ade
# ╟─479a234e-1ce6-456d-903a-048bbb3de65a
# ╠═21c98bd4-b3eb-4406-bcd2-0abfbeb9bb93
# ╟─d4cf71ef-576d-4900-9608-475dbd4d933a
# ╟─56685887-7866-446c-acdb-2c20bd11d4cd
# ╠═647f96ee-c0ad-4bd8-9de1-f24a7dcf6b24
# ╟─a15750a3-3507-4ee1-8b9a-b7d6a3dcea46
# ╟─7f855423-72ac-4e6f-92bc-73c12e5007eb
# ╟─eaaf2227-1a19-4fbc-a5b4-45503e832280
@@ -18,7 +18,7 @@ end
# ╔═╡ d34833b7-f375-40f7-a7a6-ab925d736320
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/09_Numerical_integration.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/08_Numerical_integration.pdf)
"""

# ╔═╡ 47114a9b-0e74-4e48-bb39-b49f526f1e9b
@@ -107,7 +107,7 @@ and **then integrate that** instead of $f$ itself.
Since the integration of the polynomial is essentially exact,
the error of such a scheme is **dominated by the error of the polynomial
interpolation**.
Recall the [chapter on Interpolation](https://teaching.matmat.org/numerical-analysis/05_Interpolation.html), where we noted polynomials
Recall the [chapter on Interpolation](https://teaching.matmat.org/numerical-analysis/07_Interpolation.html), where we noted polynomials
through equispaced nodes to become numerically unstable and
possibly inaccurate for large $n$ due to Runge's phenomenon.

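For instance, integrating the piecewise-linear interpolant of $f$ on equispaced nodes gives the composite trapezoidal rule; a minimal sketch (the function name is ours):

```julia
# Composite trapezoidal rule: integrate the piecewise-linear interpolant of f
function trapezoid(f, a, b, n)
    x = range(a, b; length=n + 1)
    h = step(x)
    h * (sum(f, x) - (f(a) + f(b)) / 2)
end

trapezoid(sin, 0, π, 100)  # ≈ 2.0, the exact value of ∫₀^π sin(x) dx
```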
@@ -211,7 +211,7 @@ end

# ╔═╡ a1d83cb2-6e0d-4a53-a11f-60dc020249d4
md"""
Recall that in Theorem 4 of [chapter 05 (Interpolation)](https://teaching.matmat.org/numerical-analysis/05_Interpolation.html) we found that
Recall that in Theorem 4 of [chapter 07 (Interpolation)](https://teaching.matmat.org/numerical-analysis/07_Interpolation.html) we found that
the piecewise polynomial interpolation shows quadratic convergence
```math
\|f - p_{1,h}\|_\infty \leq α h^2 \| f'' \|_\infty,
@@ -31,7 +31,7 @@ end
# ╔═╡ 4103c9d2-ef89-4c65-be3f-3dab59d1cc47
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/10_Numerical_differentiation.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/09_Numerical_differentiation.pdf)
"""

# ╔═╡ e9151d3f-8d28-4e9b-add8-43c713f6f068
@@ -29,7 +29,7 @@ end
# ╔═╡ b72b45ad-6191-40cb-9e9f-950bf1bfe212
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/12_Boundary_value_problems.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/10_Boundary_value_problems.pdf)
"""

# ╔═╡ 206ae56c-fcfa-4d6f-93e4-30f03dee8f90
@@ -176,7 +176,7 @@ u(0) &= b_0, \quad u(L) = b_L,
\right.
```
where $b_0, b_L \in \mathbb{R}$.
Similar to our approach when [solving initial value problems (chapter 11)](https://teaching.matmat.org/numerical-analysis/11_Initial_value_problems.html)
Similar to our approach when [solving initial value problems (chapter 12)](https://teaching.matmat.org/numerical-analysis/12_Initial_value_problems.html)
we **divide the full interval $[0, L]$ into $N+1$ subintervals** $[x_j, x_{j+1}]$
of uniform size $h$, i.e.
```math
@@ -185,6 +185,9 @@ x_j = j\, h \quad j = 0, \ldots, N+1, \qquad h = \frac{L}{N+1}.
Our goal is thus to find approximate values $u_j$ such that $u_j ≈ u(x_j)$ at the nodes $x_j$.
"""

# ╔═╡ 782dff7d-76f5-4977-98cb-81881a05331a
TODO("IVP is now after BCP, adjust reference accordingly")

# ╔═╡ 82788dfd-3462-4f8e-b0c8-9e196dac23a9
md"""
Due to the Dirichlet boundary conditions $u(0) = b_0$ and $u(L) = b_L$.
@@ -201,7 +204,7 @@ These internal nodes $u(x_j)$ need to satisfy
- \frac{\partial^2 u}{\partial x^2}(x_j) = f(x_j) \qquad \forall\, 1 ≤ j ≤ N.
```
As the derivatives of $u$ are unknown to us we employ a
**[central finite-difference formula](https://teaching.matmat.org/numerical-analysis/10_Numerical_differentiation.html)**
**[central finite-difference formula](https://teaching.matmat.org/numerical-analysis/09_Numerical_differentiation.html)**
to replace this derivative by the approximation
```math
\tag{3}
@@ -288,7 +291,7 @@ which is to be solved for the unknowns $\mathbf{u}$.
# ╔═╡ c21502ce-777f-491a-a536-ff499fc172fc
md"""
We notice that $\mathbf{A}$ is **symmetric and tridiagonal**. Additionally one can show $\mathbf{A}$ to be **positive definite**.
Problem (8) can therefore be **efficiently solved** using [direct methods based on (sparse) LU factorisation (chapter 6)](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html) or [iterative approaches (chapter 7)](https://teaching.matmat.org/numerical-analysis/07_Iterative_methods.html), e.g. the conjugate gradient method.
Problem (8) can therefore be **efficiently solved** using [direct methods based on (sparse) LU factorisation (chapter 5)](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html) or [iterative approaches (chapter 6)](https://teaching.matmat.org/numerical-analysis/06_Iterative_methods.html), e.g. the conjugate gradient method.
"""

# ╔═╡ c2bb42b3-4fee-4ad4-84c0-06f58c7f7665
@@ -505,7 +508,7 @@ md"""
While initially the convergence thus nicely follows the expected convergence curve, **for larger $N$ the convergence degrades and the error starts increasing again**.

Similar to our discussion on numerical stability
in the [chapter on numerical differentiation](https://teaching.matmat.org/numerical-analysis/10_Numerical_differentiation.html)
in the [chapter on numerical differentiation](https://teaching.matmat.org/numerical-analysis/09_Numerical_differentiation.html)
this error plot is the result of a balance between two error contributions:
- The **discretisation error** due to the choice of $N$, where as $N$ gets larger
this error **decreases** as $O(N^{-2})$.
@@ -1122,7 +1125,7 @@ md"""
A widely employed set of basis functions for Galerkin approximations
are the hat functions $φ_i = H_i$,
which we already discussed in the chapter
on [Interpolation (chapter 5)](https://teaching.matmat.org/numerical-analysis/05_Interpolation.html).
on [Interpolation (chapter 7)](https://teaching.matmat.org/numerical-analysis/07_Interpolation.html).
Recall that, given a set of nodes $x_0 < x_1 < \cdots < x_{n}$,
the hat functions are defined as
```math
@@ -2656,6 +2659,7 @@ version = "1.4.1+2"
# ╟─3e10cf8e-d5aa-4b3e-a7be-12ccdc2f3cf7
# ╟─7fd851e6-3180-4008-a4c0-0e08edae9954
# ╟─52c7ce42-152d-40fd-a910-78f755fcae47
# ╠═782dff7d-76f5-4977-98cb-81881a05331a
# ╟─82788dfd-3462-4f8e-b0c8-9e196dac23a9
# ╟─d43ecff3-89a3-4edd-95c2-7262e317ce29
# ╟─1fb53091-89c8-4f70-ab4b-ca2371b830b2
@@ -30,7 +30,7 @@ end
# ╔═╡ 34beda8f-7e5f-42eb-b32c-73cfc724062e
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/08_Eigenvalue_problems.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/11_Eigenvalue_problems.pdf)
"""

# ╔═╡ 13298dc4-9800-476d-9474-182359a7671b
@@ -59,7 +59,7 @@ then we find
\sqrt{λ_\text{max}(\mathbf A^T \mathbf A)} \, \| \mathbf x \|
= λ_\text{max}(\mathbf A) \, \| \mathbf x \|
```
where we used the inequalities introduced at the end of [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html).
where we used the inequalities introduced at the end of [Direct methods for linear systems](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html).
We note that the **largest eigenvalue of $\mathbf A$**
provides a **bound to the action of $\mathbf{A}$**.
"""
@@ -849,7 +849,7 @@ the iterative loop.
Since for dense matrices
computing the factorisation scales $O(n^3)$,
but solving linear systems based on the factorisation only scales $O(n^2)$
([recall chapter 6](https://teaching.matmat.org/numerical-analysis/06_Direct_methods.html)), this reduces the cost per iteration.
([recall chapter 5](https://teaching.matmat.org/numerical-analysis/05_Direct_methods.html)), this reduces the cost per iteration.
"""

# ╔═╡ 8e01a98d-c49f-43b3-9681-07d8e4b7f12a
@@ -31,7 +31,7 @@ end
# ╔═╡ ba9b6172-0234-442c-baaa-876b12f689bd
md"""
!!! info ""
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/11_Initial_value_problems.pdf)
[Click here to view the PDF version.](https://teaching.matmat.org/numerical-analysis/12_Initial_value_problems.pdf)
"""

# ╔═╡ d8406b01-e36f-4953-a5af-cd563005c2a1
@@ -293,7 +293,7 @@ Our task is to find $u(t_{n+1})$.
md"""
We make progress by approximating the derivative of $u$
using one of the finite difference formulas
discussed in the chapter on [Numerical differentiation](https://teaching.matmat.org/numerical-analysis/10_Numerical_differentiation.html).
discussed in the chapter on [Numerical differentiation](https://teaching.matmat.org/numerical-analysis/09_Numerical_differentiation.html).

The simplest approach is to employ forward finite differences, i.e.
```math
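Applied to $u' = f(t, u)$, this forward difference yields the forward Euler method; a minimal sketch (the function name is ours, not notebook code):

```julia
# Forward Euler for u'(t) = f(t, u), u(t0) = u0, taking n steps up to tf
function forward_euler(f, u0, t0, tf, n)
    h = (tf - t0) / n
    u, t = float(u0), t0
    for _ in 1:n
        u += h * f(t, u)
        t += h
    end
    u
end

forward_euler((t, u) -> -u, 1.0, 0.0, 1.0, 1000)  # ≈ exp(-1) ≈ 0.3679
```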