ODEs, Polynomial Approximation, Linear Least Squares, and Errors

Machine epsilon
$$fl(x) = x(1+\epsilon) \quad \text{where } |\epsilon|\leq u$$

$$\left|\frac{fl(x)-x}{x}\right|=|\epsilon|\leq u \text{ is called the relative error.}$$

Cancellation occurs when subtracting nearby numbers containing roundoff errors.
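A small sketch of why cancellation matters: evaluating $1-\cos x$ for tiny $x$ subtracts two nearby numbers and loses all relative accuracy, while the algebraically equivalent form $2\sin^2(x/2)$ does not. (The example function is illustrative, not from the notes.)

```python
# Catastrophic cancellation demo: 1 - cos(x) vs the equivalent 2*sin(x/2)^2.
import math

x = 1e-8
naive = 1.0 - math.cos(x)            # cos(x) rounds to 1.0, so this is 0.0
stable = 2.0 * math.sin(x / 2) ** 2  # no subtraction of nearby numbers

print(naive, stable)                 # true value is ~5e-17
```

The naive form returns 0.0 (100% relative error); the rewritten form keeps full relative accuracy.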
Taylor series
$$\begin{aligned}
f(x) &= \sum_{k=0}^{\infty}\frac{f^{(k)}(c)}{k!}(x-c)^k\\
E_{n+1} &= \frac{f^{(n+1)}(\xi)}{(n+1)!}h^{n+1}, \quad h := x-c\\
|E_{n+1}| &\leq Ch^{n+1}
\end{aligned}$$
Polynomial Interpolation
$$\begin{aligned}
v(x) = &\sum_{j=0}^{n}c_j\phi_{j}(x) \rightarrow \{\phi_j\}\ \text{linearly independent iff } (v(x) = 0\ \forall x \rightarrow c_j=0\ \forall j)\\
\text{Linear system: } &\begin{bmatrix} \phi_0(x_0) & \phi_1(x_0) & \cdots & \phi_n(x_0) \\ \phi_0(x_1) & \phi_1(x_1) & \cdots & \phi_n(x_1) \\ \vdots & \vdots & \ddots & \vdots \\ \phi_0(x_n) & \phi_1(x_n) & \cdots & \phi_n(x_n) \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_n \end{bmatrix} = \begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_n \end{bmatrix}
\end{aligned}$$
$$\begin{aligned}
\text{Monomial basis: }&\phi_j(x)=x^j, \ j=0,1,\ldots,n \rightarrow v(x)=\sum_{j=0}^{n}c_jx^j\\
&p_n(x_i) = c_0 + c_1x_i + c_2x_i^2 + \cdots + c_nx_i^n = y_i \\
X:\ &\text{Vandermonde matrix} \rightarrow \det(X)=\prod_{i=0}^{n-1} \left[ \prod_{j=i+1}^{n} (x_j - x_i) \right]\\
\text{if } &x_i\ \text{are distinct:}\\
&\bullet\ \det(X) \neq 0\\
&\bullet\ X\ \text{is nonsingular}\\
&\bullet\ \text{the system has a unique solution}\\
&\bullet\ \text{there is a unique polynomial of degree} \leq n\ \text{that interpolates the data}\\
&\bullet\ \text{can be poorly conditioned; work is } O(n^3)
\end{aligned}$$
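As a sketch of the monomial-basis approach: build the Vandermonde matrix for distinct nodes and solve $Xc=y$ with Gaussian elimination (the $O(n^3)$ work noted above). The helper names are illustrative.

```python
# Monomial-basis interpolation: solve the Vandermonde system X c = y.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]  # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # pivot row
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                # back substitution
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def monomial_interp(xs, ys):
    """Coefficients c_j of p_n(x) = sum_j c_j x^j through the points (xs, ys)."""
    X = [[xi ** j for j in range(len(xs))] for xi in xs]  # Vandermonde matrix
    return solve(X, ys)

# Three points sampled from y = x^2 are reproduced exactly:
c = monomial_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
print(c)  # ~[0, 0, 1]
```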
$$\begin{aligned}
\text{Lagrange basis: }&L_j(x_i) = \begin{cases} 0 & \text{if } i \neq j \\ 1 & \text{if } i = j \end{cases} \\
&L_j(x) = \prod_{i=0,i\neq j}^{n}\frac{x-x_i}{x_j-x_i}\\
&p_n(x_i) = \sum_{j=0}^{n} y_jL_j(x_i) = \sum_{j=0}^{i-1} y_jL_j(x_i) + y_iL_i(x_i) + \sum_{j=i+1}^{n} y_jL_j(x_i) = y_i
\end{aligned}$$
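The Lagrange form can be evaluated directly, with no linear system to solve; each $L_j$ is the product over the other nodes. A minimal sketch (function name is illustrative):

```python
# Direct evaluation of p_n(x) = sum_j y_j L_j(x) in the Lagrange form.

def lagrange_eval(xs, ys, x):
    """Evaluate the interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for j, yj in enumerate(ys):
        Lj = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                Lj *= (x - xi) / (xs[j] - xi)  # L_j(x_i) = 0 for i != j, 1 at x_j
        total += yj * Lj
    return total

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 7.0]  # samples of 1 + x + x^2
print(lagrange_eval(xs, ys, 1.5))          # reproduces 1 + 1.5 + 1.5^2 = 4.75
```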
$$\begin{aligned}
\text{Newton's basis: }&\phi_j(x)=\prod_{i=0}^{j-1}(x-x_i), \quad j=0:n\\
&p_n(x_i)=c_0 + c_1(x_i-x_0)+ \cdots + c_n(x_i-x_0)(x_i-x_1)\cdots(x_i-x_{n-1})=f(x_i)
\end{aligned}$$
$$\begin{aligned}
&\text{Divided differences: }f[x_i,\cdots,x_j] = \frac{f[x_{i+1},\cdots,x_j]-f[x_i,\cdots,x_{j-1}]}{x_j-x_i}\\
&\bullet\ \text{at } x=x_0:\ c_0 = f(x_0) = f[x_0]\\
&\bullet\ \text{at } x=x_1:\ c_1 = \frac{f(x_1)-f(x_0)}{x_1-x_0} = f[x_0, x_1]\\
&\bullet\ \text{at } x=x_2:\ c_2 = \frac{f(x_2)-c_0-c_1(x_2-x_0)}{(x_2-x_0)(x_2-x_1)} = \frac{\frac{f(x_2)-f(x_1)}{x_2-x_1}-\frac{f(x_1)-f(x_0)}{x_1-x_0}}{x_2-x_0} = f[x_0, x_1, x_2]\\
&\therefore\forall x\in[a,b]\ \exists\ \xi=\xi(x)\in(a,b) : f(x)-p_n(x)=\frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i)\\
&\therefore\ \text{Error: } |f(x)-p_n(x)|\leq\frac{M}{4(n+1)}h^{n+1}\\
&\text{where: }\\
&\bullet\ M=\max_{a\leq t\leq b}|f^{(n+1)}(t)|\\
&\bullet\ h=\frac{b-a}{n}\\
&\bullet\ x_i=a+ih \text{ for }i=0,1,\cdots,n
\end{aligned}$$
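The divided-difference table above can be computed column by column in place, and the Newton form evaluated by nested multiplication. A sketch (function names are illustrative):

```python
# Newton divided differences and nested evaluation of
# p_n(x) = c_0 + c_1(x - x_0) + ... + c_n(x - x_0)...(x - x_{n-1}).

def divided_differences(xs, ys):
    """Return c_j = f[x_0, ..., x_j], overwriting a copy of ys column by column."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c

def newton_eval(xs, c, x):
    """Evaluate the Newton form by nested (Horner-like) multiplication."""
    result = c[-1]
    for i in range(len(c) - 2, -1, -1):
        result = result * (x - xs[i]) + c[i]
    return result

xs, ys = [1.0, 2.0, 4.0], [1.0, 4.0, 16.0]  # samples of x^2
c = divided_differences(xs, ys)
print(c, newton_eval(xs, c, 3.0))           # the quadratic reproduces 3^2 = 9
```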
Basic Numeric Integration
$$\begin{aligned}
&I_f = \int_{a}^{b}{f(x)\,dx} \approx \sum_{j=0}^{n}a_jf(x_j)\ \text{(quadrature rule)}\\
&\bullet\ x_0,\cdots,x_n\ \text{are distinct points in } [a,b]\\
&\bullet\ p_n(x)\ \text{is the interpolating polynomial of }f\rightarrow \int_{a}^{b}f(x)\,dx\approx\int_{a}^{b}p_n(x)\,dx\\
&\bullet\ \text{Using the Lagrange form: }\int_{a}^{b}f(x)\,dx\approx\sum_{j=0}^{n}f(x_j)\int_{a}^{b}L_j(x)\,dx=\sum_{j=0}^{n}f(x_j)a_j
\end{aligned}$$
$$\begin{aligned}
\text{Trapezoidal rule: } &f(x) \approx p_1(x)=f(x_0)L_0(x) + f(x_1)L_1(x)\ (n=1,\ x_0=a,\ x_1=b)\\
\therefore\ &I_f=\int_{a}^{b}f(x)\,dx \approx f(a)\int_{a}^{b}{\frac{x-b}{a-b}\,dx} + f(b)\int_{a}^{b}{\frac{x-a}{b-a}\,dx} = \frac{b-a}{2}[f(a) + f(b)]\\
\text{Error: } &f(x) - p_1(x) = \frac{1}{2}f''(\xi(x))(x-a)(x-b)\\
\text{then: }&\int_{a}^{b}{(f(x)-p_1(x))\,dx} = \frac{1}{2}\int_{a}^{b}{f''(\xi(x))(x-a)(x-b)\,dx}\\
\text{From the MVT: } &\exists\ \eta\in(a,b) : \int_{a}^{b}{f''(\xi(x))(x-a)(x-b)\,dx} = f''(\eta)\int_{a}^{b}{(x-a)(x-b)\,dx}\\
\therefore\ &\text{Error of the trapezoidal rule: } I_f - I_{trap} = -\frac{f''(\eta)}{12}(b-a)^3
\end{aligned}$$
$$\begin{aligned}
\text{Midpoint rule: } &I_f \approx I_{mid} = (b-a)f\left(\frac{a+b}{2}\right)\\
&\text{Let } m=\frac{a+b}{2}\rightarrow f(x)=f(m)+f'(m)(x-m)+\frac{1}{2}f''(\xi(x))(x-m)^2\\
\therefore\ &I_f = \int_{a}^{b} f(x)\,dx = (b - a)f(m) + \frac{1}{2} \int_{a}^{b} f''(\xi(x))(x - m)^2 \, dx\\
&\exists\ \eta\in(a,b) : \frac{1}{2} \int_{a}^{b} f''(\xi(x))(x - m)^2 \, dx = \frac{f''(\eta)}{24}(b - a)^3\\
\therefore\ &\text{Error of the midpoint rule: } I_f - I_{mid} = \frac{f''(\eta)}{24}(b-a)^3
\end{aligned}$$
$$\begin{aligned}
\text{Simpson's rule: } &I_f \approx I_{simp} = \frac{b-a}{6}\left[f(a) + 4f\left(\frac{a+b}{2}\right) + f(b)\right]\\
&\left(p_2(x),\ n=2,\ x_0=a,\ x_1=\frac{a+b}{2},\ x_2=b\right)\\
\therefore\ &\text{Error of Simpson's rule: } I_f - I_{simp} = -\frac{f^{(4)}(\eta)}{90}\left(\frac{b-a}{2}\right)^5,\ \eta\in(a,b)
\end{aligned}$$
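The three basic rules, tried on $f(x)=x^3$ over $[0,1]$ (exact integral $1/4$, a test case of my choosing): Simpson's rule is exact here because its error term involves $f^{(4)}$, which vanishes for a cubic.

```python
# The three single-interval rules derived above.

def trapezoid(f, a, b):
    return (b - a) / 2 * (f(a) + f(b))

def midpoint(f, a, b):
    return (b - a) * f((a + b) / 2)

def simpson(f, a, b):
    return (b - a) / 6 * (f(a) + 4 * f((a + b) / 2) + f(b))

f = lambda x: x ** 3  # exact integral over [0, 1] is 0.25
print(trapezoid(f, 0, 1), midpoint(f, 0, 1), simpson(f, 0, 1))
# trapezoid overestimates (0.5), midpoint underestimates (0.125), Simpson is exact (0.25)
```

Note the error signs match the formulas above: for convex $f$ the trapezoidal error $I_f - I_{trap}$ is negative and the midpoint error is positive.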
Composite Numeric Integration
$$\begin{aligned}
&\bullet\ \text{subdivide }[a,b]\ \text{into }r\ \text{subintervals}\\
&\bullet\ h=\frac{b-a}{r}\ \text{is the length of each subinterval}\\
&\bullet\ t_i=a+ih\ \text{for }i=0,1,\cdots,r\\
&t_0=a,\ t_r=b \rightarrow \int_{a}^{b}f(x)\,dx=\sum_{i=1}^{r}\int_{t_{i-1}}^{t_i}f(x)\,dx
\end{aligned}$$
$$\begin{aligned}
\text{Composite trapezoidal rule: } &I_{cf} = \frac{h}{2} [f(a) + f(b)] + h \sum_{i=1}^{r-1} f(t_i)\\
\text{Error: } &I_f - I_{cf} = -\frac{f''(\mu)}{12}(b-a)h^2\\
\text{Composite Simpson rule ($r$ even): } &I_{cs} = \frac{h}{3} \left[f(a) + 2 \sum_{i=1}^{r/2-1} f(t_{2i}) + 4 \sum_{i=1}^{r/2} f(t_{2i-1}) + f(b)\right]\\
\text{Error: } &I_f - I_{cs} = -\frac{f^{(4)}(\zeta)}{180}(b-a)h^4\\
\text{Composite midpoint rule: } &I_{cm} = h \sum_{i=1}^{r} f(a + (i - 1/2)h)\\
\text{Error: } &I_f - I_{cm} = \frac{f''(\eta)}{24}(b-a)h^2
\end{aligned}$$
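A quick check of the composite error orders on $\int_0^1 \sin x\,dx$ (a test integrand of my choosing): halving $h$ should cut the trapezoidal error by about $4$ ($O(h^2)$) and the Simpson error by about $16$ ($O(h^4)$).

```python
# Composite trapezoidal and Simpson rules with r subintervals.
import math

def comp_trapezoid(f, a, b, r):
    h = (b - a) / r
    return h / 2 * (f(a) + f(b)) + h * sum(f(a + i * h) for i in range(1, r))

def comp_simpson(f, a, b, r):  # r must be even
    h = (b - a) / r
    t = lambda i: a + i * h
    return h / 3 * (f(a) + f(b)
                    + 2 * sum(f(t(2 * i)) for i in range(1, r // 2))
                    + 4 * sum(f(t(2 * i - 1)) for i in range(1, r // 2 + 1)))

exact = 1 - math.cos(1.0)  # integral of sin over [0, 1]
e_trap = [abs(comp_trapezoid(math.sin, 0, 1, r) - exact) for r in (8, 16)]
e_simp = [abs(comp_simpson(math.sin, 0, 1, r) - exact) for r in (8, 16)]
print(e_trap[0] / e_trap[1], e_simp[0] / e_simp[1])  # ~4 and ~16
```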
Linear Least Squares
Find $c_j$ such that $\sum_{k=0}^{m}(v(x_k)-y_k)^2=\sum_{k=0}^{m}(\sum_{j=0}^{n}c_j\phi_j(x_k)-y_k)^2$ is minimised.

Conditions: $\frac{\partial \phi}{\partial a} = 0, \quad \frac{\partial \phi}{\partial b} = 0$
$$\begin{aligned}
\text{Linear fit: } y_k&=ax_k+b,\quad k=0,\cdots,m\\
\begin{bmatrix}
\sum_{k=0}^{m} x_k^2 & \sum_{k=0}^{m} x_k \\
\sum_{k=0}^{m} x_k & m + 1
\end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix}
&=
\begin{bmatrix}
\sum_{k=0}^{m} x_k y_k \\
\sum_{k=0}^{m} y_k
\end{bmatrix}\\
p &= \sum_{k=0}^{m} x_k, \quad q = \sum_{k=0}^{m} y_k, \quad r = \sum_{k=0}^{m} x_k y_k, \quad s = \sum_{k=0}^{m} x_k^2\\
\rightarrow\begin{bmatrix} s & p \\ p & m + 1 \end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix}
&=
\begin{bmatrix} r \\ q \end{bmatrix}\\
\leftrightarrow A\mathbf{z} &=
\begin{bmatrix} x_0 & 1 \\ x_1 & 1 \\ \vdots & \vdots \\ x_m & 1 \end{bmatrix}
\begin{bmatrix} a \\ b \end{bmatrix}
=
\begin{bmatrix} y_0 \\ y_1 \\ \vdots \\ y_m \end{bmatrix}
= \mathbf{f}\ \text{is overdetermined}
\end{aligned}$$
$$\begin{aligned}
\text{Solving the linear system: } r &= b - Ax\\
||r||_2^2 &= \sum_{i=1}^{m}r_i^2 = \sum_{i=1}^{m}\left(b_i-\sum_{j=1}^{n}a_{ij}x_j\right)^2\\
\text{Let } \phi(x) &= \frac{1}{2}\|r\|^2_2 = \frac{1}{2} \sum_{i=1}^{m} \left(b_i - \sum_{j=1}^{n} a_{ij}x_j\right)^2\\
\text{Conditions: } \frac{\partial \phi}{\partial x_k} &= 0, \quad k = 1, \cdots, n\\
0&=\sum_{i=1}^{m}\left(b_i-\sum_{j=1}^{n}a_{ij}x_j\right)(-a_{ik})\\
\rightarrow \sum_{i=1}^{m}a_{ik}\sum_{j=1}^{n}a_{ij}x_j &= \sum_{i=1}^{m}a_{ik}b_i, \quad k=1,\cdots,n\ (\text{equivalent to } A^{T}Ax=A^{T}b)
\end{aligned}$$
$$\begin{aligned}
A^T Ax &= A^T b\ \text{is called the normal equations}\\
\text{If }A \text{ has full column rank, } &\min_{x} \|b - Ax\|_2\ \text{has the unique solution:}\\
x&=(A^TA)^{-1}A^Tb=A^{+}b
\end{aligned}$$
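The linear fit can be computed directly from the $2\times 2$ normal-equations system above, using the sums $p, q, r, s$; with $m+1$ data points this is exactly $A^TA\mathbf{z}=A^T\mathbf{f}$ solved by Cramer's rule. A sketch (the function name is illustrative):

```python
# Least-squares line y = a*x + b via the 2x2 normal equations.

def linear_fit(xs, ys):
    mp1 = len(xs)                      # m + 1 data points
    p = sum(xs)
    q = sum(ys)
    r = sum(x * y for x, y in zip(xs, ys))
    s = sum(x * x for x in xs)
    det = s * mp1 - p * p              # determinant of [[s, p], [p, m+1]]
    a = (r * mp1 - p * q) / det        # Cramer's rule for the 2x2 system
    b = (s * q - p * r) / det
    return a, b

# Points lying exactly on y = 2x + 1 are recovered exactly:
a, b = linear_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # 2.0 1.0
```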
$$\begin{aligned}
\text{Adaptive Simpson: find } &Q : |Q - I| \leq \text{tol}\\
I &= \int_{a}^{b} f(x) \, dx = S(a, b) + E(a, b) \\
S_1=S(a, b) &= \frac{h}{6} \left[ f(a) + 4f\left( \frac{a + b}{2} \right) + f(b) \right] \\
E_1=E(a, b) &= -\frac{1}{90} \left( \frac{h}{2} \right)^5 f^{(4)}(\xi), \quad \xi \text{ between } a \text{ and } b
\end{aligned}$$
$$\begin{aligned}
S =\ &\text{quadSimpson}(f, a, b, \text{tol})\\
&h = b - a, \quad c = \frac{a + b}{2}\\
&S_1 = \frac{h}{6} \left[f(a) + 4f\left(\frac{a+b}{2}\right) + f(b)\right]\\
&S_2 = \frac{h}{12} \left[f(a) + 4f\left(\frac{a+c}{2}\right) + 2f(c) + 4f\left(\frac{c+b}{2}\right) + f(b)\right]\\
&\tilde{E}_2 = \frac{1}{15}(S_2 - S_1)\\
&\text{if } |\tilde{E}_2| \leq \text{tol}\\
&\quad\text{return } Q = S_2 + \tilde{E}_2 \\
&\text{else}\\
&\quad Q_1 = \text{quadSimpson}(f, a, c, \text{tol}/2)\\
&\quad Q_2 = \text{quadSimpson}(f, c, b, \text{tol}/2)\\
&\quad\text{return } Q = Q_1 + Q_2
\end{aligned}$$
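The quadSimpson recursion above transcribes directly to code: compare the one- and two-interval Simpson values, accept when the estimate $(S_2-S_1)/15$ is within tol, otherwise split and halve tol. Tried here on $\int_0^1 e^x\,dx = e - 1$ (a test case of my choosing).

```python
# Adaptive Simpson quadrature, as in the quadSimpson pseudocode.
import math

def quad_simpson(f, a, b, tol):
    h, c = b - a, (a + b) / 2
    S1 = h / 6 * (f(a) + 4 * f(c) + f(b))
    S2 = h / 12 * (f(a) + 4 * f((a + c) / 2) + 2 * f(c)
                   + 4 * f((c + b) / 2) + f(b))
    E2 = (S2 - S1) / 15                      # error estimate for S2
    if abs(E2) <= tol:
        return S2 + E2                       # extrapolated value
    return (quad_simpson(f, a, c, tol / 2)   # split and halve the tolerance
            + quad_simpson(f, c, b, tol / 2))

Q = quad_simpson(math.exp, 0.0, 1.0, 1e-8)
print(Q, math.e - 1)  # agree to roughly the tolerance
```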
Newton’s Method for Nonlinear equations
$$x_{n+1}=x_n-\frac{f(x_n)}{f'(x_n)}$$

Convergence: if $f, f', f''$ are continuous in a neighborhood of a root $r$ of $f$ and $f'(r) \neq 0$, then $\exists\,\delta>0$ such that if $|r-x_0|\leq\delta$, then for all $x_n$: $|r-x_n|\leq\delta$ and $|r-x_{n+1}|\leq c(\delta)|r-x_n|^2$

$|e_{n+1}|\leq c(\delta)|e_n|^2$ (quadratic convergence; the order is 2)

where $c(\delta)=\frac{1}{2}\cdot\frac{\max_{|r-x|\leq\delta}|f''(x)|}{\min_{|r-x|\leq\delta}|f'(x)|}$

For nonlinear systems: denote $\mathbf{x}=(x_1,x_2,\cdots,x_n)^T$ and $\mathbf{F}=(f_1,f_2,\cdots,f_n)^T$, and find $\mathbf{x}^{*}$ such that $\mathbf{F}(\mathbf{x}^{*})=\mathbf{0}$
$$\begin{aligned}
F(x^{(k)}) + F'(x^{(k)})(x^{(k+1)}-x^{(k)}) &= 0\\
F'(x^{(k)})\ &\text{is the Jacobian of } \mathbf{F}\ \text{at } x^{(k)}\\
\text{Let } \mathbf{s} &= \mathbf{x}^{(k+1)} - \mathbf{x}^{(k)}\\
\therefore\ F'(x^{(k)})s &= -F(x^{(k)})\\
\mathbf{x}^{(k+1)} &= \mathbf{x}^{(k)} + \mathbf{s}
\end{aligned}$$
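A sketch of the scalar iteration $x_{n+1}=x_n-f(x_n)/f'(x_n)$, stopping when the step is small; near a simple root the number of correct digits roughly doubles per step. Function names and the stopping rule are illustrative.

```python
# Scalar Newton iteration for f(x) = 0.
import math

def newton(f, fprime, x0, tol=1e-12, maxit=50):
    x = x0
    for _ in range(maxit):
        step = f(x) / fprime(x)  # Newton step: f(x_n) / f'(x_n)
        x -= step
        if abs(step) <= tol:     # stop when the update is negligible
            break
    return x

# Root of f(x) = x^2 - 2 starting from x0 = 1:
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # ~1.41421356...
```

For a system, the division by $f'(x_n)$ becomes the linear solve $F'(x^{(k)})\mathbf{s}=-F(x^{(k)})$ shown above.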
IVP in ODEs.
$$\begin{aligned}
&\text{Given } y'=f(t,y),\ y(a)=c, \text{ find } y(t) \text{ for } t\in[a,b]\\
&y' \equiv y'(t) \equiv \frac{dy}{dt}\\
&\text{System of } n \text{ first-order ODEs: } y' = f(t,y),\ f: \mathbb{R} \times \mathbb{R}^n \rightarrow \mathbb{R}^n
\end{aligned}$$
$$\begin{aligned}
\text{Forward Euler's method (explicit): } y_{i+1} &= y_i + hf(t_i, y_i) \approx y(t_{i+1})\\
\text{where: } h &= \frac{b-a}{N},\ N > 1\ (h = \text{step size})\\
t_0 &= a,\ t_i=a+ih,\ i=1,2,\cdots,N
\end{aligned}$$
$$\text{Backward Euler's method (implicit): } y_{i+1} = y_i + hf(t_{i+1}, y_{i+1})$$

If $f$ is nonlinear in $y$, solve for $y_{i+1}$ with Newton's method.
$$\begin{aligned}
\text{FE stability: } y'&=\lambda y,\ y(0)=y_0,\ \lambda<0\\
\text{Exact solution: } y(t)&=y_0e^{\lambda t}\\
\text{FE solution with constant stepsize } h\text{: } y_{i+1}&=(1+h\lambda)y_i=(1+h\lambda)^{i+1}y_0\\
\text{To be numerically stable } (|1+h\lambda|\leq 1)\text{: } h&\leq\frac{2}{|\lambda|}\\
&\\
\text{BE stability: } y'&=\lambda y,\ y(0)=y_0,\ \lambda<0\\
|y_{i+1}| &= \frac{1}{|1-h\lambda|}|y_i| \leq |y_i|\ \forall\ h > 0
\end{aligned}$$
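The stability restriction is easy to see numerically on the test equation $y'=\lambda y$ with $\lambda=-100$ (values chosen for illustration): with $h=0.05$ we have $|1+h\lambda|=4>1$, so forward Euler blows up, while backward Euler decays for every $h>0$.

```python
# FE vs BE on y' = lambda*y, y(0) = 1, with a stepsize violating h <= 2/|lambda|.

lam, h, steps = -100.0, 0.05, 20
y_fe = y_be = 1.0
for _ in range(steps):
    y_fe = (1 + h * lam) * y_fe  # FE amplification factor: 1 + h*lambda = -4
    y_be = y_be / (1 - h * lam)  # BE amplification factor: 1/(1 - h*lambda) = 1/6

print(abs(y_fe), abs(y_be))      # huge vs tiny; the exact solution decays to ~0
```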
Order, Error, Convergence and Stiffness
$$\begin{aligned}
\text{Local truncation error of FE: } &d_i = \frac{y(t_{i+1}) - y(t_i)}{h} - f(t_i, y(t_i)) = \frac{h}{2}y''(\eta_i)\ (q=1)\\
\text{Local truncation error of BE: } &d_i = -\frac{h}{2}y''(\xi_i)\ (q=1)
\end{aligned}$$
A method has order $q$ if $q$ is the lowest positive integer such that, for any smooth exact solution $y(t)$: $\max_{i}|d_i|=O(h^q)$
$$\begin{aligned}
\text{Global error: } e_i &= y(t_i) - y_i,\quad i=0,1,\cdots,N\\
\text{Consider } u' &= f(t,u),\ u(t_{i-1}) = y_{i-1};\ \text{local error: } l_i=u(t_i)-y_i
\end{aligned}$$
$$\text{Convergence: } \max_i |e_i| = \max_i |y(t_i) - y_i| \rightarrow 0 \text{ as } h \rightarrow 0$$
A problem is stiff when the stepsize is restricted by stability rather than accuracy.
Runge-Kutta Methods
$$\begin{aligned}
\text{Implicit trapezoidal: } y'(t) &= f(t,y),\ y(t_i)=y_i\\
y_{i+1} &= y_i + \frac{h}{2} [f(t_i, y_i) + f(t_{i+1}, y_{i+1})]\\
d_i = O(h^2) &= \frac{y(t_{i+1})-y(t_i)}{h}-\frac{1}{2}[f(t_i,y(t_i)) + f(t_{i+1},y(t_{i+1}))]\\
&\\
\text{Explicit trapezoidal: } Y&=y_i+hf(t_i,y_i)\\
y_{i+1} &= y_i + \frac{h}{2} [f(t_i, y_i) + f(t_{i+1}, Y)]\\
d_i = O(h^2) &= \frac{y(t_{i+1})-y(t_i)}{h}-\frac{1}{2}[f(t_i,y(t_i)) + f(t_{i+1},y(t_i)+hf(t_i,y(t_i)))]\\
&\\
\text{Implicit midpoint: } y_{i+1} &= y_i + hf(t_i+h/2, (y_i+y_{i+1})/2)\\
\text{Explicit midpoint: } Y &= y_i + \frac{h}{2}f(t_i, y_i)\\
y_{i+1} &= y_i + hf(t_i + h/2, Y)
\end{aligned}$$
Classical RK4: based on Simpson's quadrature rule, $O(h^4)$ accuracy
$$\begin{aligned}
Y_1 &= y_i \\
Y_2 &= y_i + \frac{h}{2}f(t_i, Y_1) \\
Y_3 &= y_i + \frac{h}{2}f(t_i + \frac{h}{2}, Y_2) \\
Y_4 &= y_i + hf(t_i + \frac{h}{2}, Y_3) \\
y_{i+1} &= y_i + \frac{h}{6} \left[f(t_i, Y_1) + 2f(t_i + \frac{h}{2}, Y_2) + 2f(t_i + \frac{h}{2}, Y_3) + f(t_{i+1}, Y_4)\right]
\end{aligned}$$
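The RK4 step above, applied to $y'=y$, $y(0)=1$ (a test problem of my choosing): halving $h$ should cut the global error at $t=1$ by about $16$, consistent with fourth order.

```python
# Classical RK4 with fixed stepsize; k1..k4 are the slopes f at the stages Y1..Y4.
import math

def rk4_solve(f, a, b, y0, N):
    h = (b - a) / N
    t, y = a, y0
    for _ in range(N):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)  # Simpson-like weights 1,2,2,1
        t += h
    return y

f = lambda t, y: y  # exact solution y(t) = e^t
err = [abs(rk4_solve(f, 0.0, 1.0, 1.0, N) - math.e) for N in (10, 20)]
print(err[0] / err[1])  # ~16, consistent with fourth-order accuracy
```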