
intro-numerical-methods's People

Contributors

beisiegel, ceisenbach, kellyblack, mandli, matthewcarbone, mspieg, partially-compiled, rajathkmp


intro-numerical-methods's Issues

04_error

In the 04_error.ipynb notebook the two categories of numerical error are listed as truncation error and floating-point error. I tend to think of truncation error as the error made in a single step of approximating an ODE, so I would like to split truncation error into two different kinds: discretization error (the error from using a simpler function) and convergence error (errors that accumulate over multiple steps of an algorithm).

Would this be okay?

latex ~ won't render in github

Hi Kyle,

I found that in your notebook '~' is used for empty spaces; however, it does not render correctly on GitHub (right above the second cell block):
https://github.com/mandli/intro-numerical-methods/blob/master/14_LA_iterative.ipynb

It is rendered correctly on nbviewer however:
http://nbviewer.jupyter.org/github/mandli/intro-numerical-methods/blob/master/14_LA_iterative.ipynb

The way to get around this is to use \quad or \ (a backslash followed by a space).
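
For example, a fix along these lines (a minimal sketch, not the notebook's actual equation) renders in both places:

% '~' used as spacing may not render on GitHub:
% $$x_{k+1} = x_k ~~ \text{for all } k$$
% \quad (or '\ ') works in both renderers:
$$x_{k+1} = x_k \quad \text{for all } k$$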

I also use jupyter notebook for self-study and came across your wonderful lecture notes, when you cited an issue I raised for nbviewer:
jupyter/nbviewer#590

Cheers,
Zhangyi

A probable mistake in differentiation at Asymptotic Convergence of Newton's Method

At 5 Root Finding and Optimization, Asymptotic Convergence of Newton's Method

... Let $g(x) = x - \frac{f(x)}{f'(x)}$, then
...
What about $g'(x^*)$ though:
$$g'(x) = 1 - \frac{f'(x)}{f'(x)} + \frac{f(x)}{f''(x)}$$
which simplifies when evaluated at $x = x^*$ to
$$g'(x^*) = \frac{f(x^*)}{f''(x^*)} = 0$$
...

Since the quotient rule gives $$\left( \frac{u}{v} \right)' = \frac{u' v - v' u}{v^2},$$

I think it should be $$g'(x) = 1 - \frac{f'(x) f'(x) - f(x) f''(x)}{f'^2(x)} = 1 - 1 + \frac{f(x) f''(x)}{f'^2(x)}$$.

Nonetheless, $$g'(x^*) = \frac{f(x^*) f''(x^*)}{f'^2(x^*)} = 0$$, $$g''(x^*) = \frac{f''(x^*)}{f'(x^*)}$$, so it won't affect the following result.
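
A quick symbolic check of the corrected derivative (a SymPy sketch, not part of the notebook):

import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)
g = x - f / sp.diff(f, x)            # Newton iteration function

# prints f(x)*f''(x)/f'(x)**2, matching the corrected g'(x)
print(sp.simplify(sp.diff(g, x)))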

16_ODE_BVP

Under the shooting method, instead of

$$\min_{v_2(0)} \left| \pi/2 - v_2(2) \right|$$

it should be

$$\min_{v_2(0)} \left| \pi/2 - v_1(2) \right|$$
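
For reference, a minimal shooting-method sketch in this spirit; the BVP, root bracket, and tolerance below are hypothetical, not the notebook's:

import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def rhs(t, v):
    # v[0] = u, v[1] = u' for the toy problem u'' = -u on [0, 2]
    return [v[1], -v[0]]

def mismatch(slope):
    # integrate with guessed v_2(0) = slope; compare v_1(2), not v_2(2)
    sol = solve_ivp(rhs, [0.0, 2.0], [0.0, slope], rtol=1e-10)
    return sol.y[0, -1] - np.pi / 2.0

print("v_2(0) =", brentq(mismatch, 0.0, 5.0))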

Error in Bracketing Algorithm- Basic Idea

I saw you revised this part before, but it still does not look correct.

If $f(x_2) < f(x_1)$ then we know the minimum is between $x_0$ and $x_2$.
If $f(x_2) > f(x_1)$ then we know the minimum is between $x_1$ and $x_3$.

Shouldn't the inequality signs be reversed?

05_root_finding_optimization

For Bracketing Algorithm - Basic Idea:

"If $f(x_3) > f(x_2)$ then we know the minimum is between $x_1$ and $x_4$.

If $f(x_3) < f(x_2)$ then we know the minimum is between $x_3$ and $x_2$."

Shouldn't this be:

"If $f(x_3) > f(x_2)$ then we know the minimum is between $x_1$ and $x_3$.

If $f(x_3) < f(x_2)$ then we know the minimum is between $x_2$ and $x_4$."

09_ODE_ivp_part2

Should be $du/dt = \lambda u$ in the second line under "Example: Forward Euler on a Linear Problem".
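
As a quick illustration of the corrected equation, forward Euler applied to $du/dt = \lambda u$ (the values of lambda, the step size, and the step count here are made up):

import numpy

lam, delta_t, N = -1.0, 0.1, 10
u = numpy.empty(N + 1)
u[0] = 1.0
for n in range(N):
    u[n + 1] = (1.0 + delta_t * lam) * u[n]   # u_{n+1} = (1 + lambda dt) u_n
print(u[-1], "vs exact", numpy.exp(lam * N * delta_t))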

complement not compliment

This occurs throughout at least 10_LA_QR. You probably want 'complement' (from set theory), not 'compliment', which is an expression of approval.

11_LA_QR

In "Complimentary Projectors", the line that reads

$I - 2P - P$

should be

$I - 2P + P$

05_root_finding_optimization

In the block containing

x = [0.2, None, None, 0.5]

x[1] = x[3] - phi * (x[3] - x[0])

x[2] = x[0] + phi * (x[3] - x[0])

I believe there is an error. The golden ratio phi is defined as 1.61803, but if that is the value used, then we should be dividing by it, not multiplying. The code happens to work because the interval chosen in the notes is small enough that x[1] and x[2] still fall within the interval; making the interval larger prevents this, and the brackets diverge.

Something along these lines was mentioned in class, but I figured I'd raise the potential issue just to be thorough.
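
A sketch of the interior-point placement with the reciprocal golden ratio, assuming that is the intended value of phi (the interval is the one from the notes):

import numpy

phi = (numpy.sqrt(5.0) - 1.0) / 2.0      # 1 / 1.61803... = 0.61803...

x = [0.2, None, None, 0.5]
x[1] = x[3] - phi * (x[3] - x[0])
x[2] = x[0] + phi * (x[3] - x[0])
assert x[0] < x[1] < x[2] < x[3]         # interior points stay inside the bracket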

s/plain/plane/g

The sections "Plotting Stability Regions" and "Absolute Stability of the Forward Euler Method" mention the 'complex plain'; this should be 'complex plane'.

Mistake and typos in Backward Substitution, Solving Ax=b, 15_LA_gaussian

In Backward Substitution, Solving Ax=b, 15_LA_gaussian, it writes:

Backwards substitution requires us to move from the last row of $U$ and move upwards. We can consider again the general $i$th row with
$$
U_{i,i} x_i + U_{i,i-1} x_{i-1} + \ldots + U_{i,m-1} x_{m-1} + U_{i,m} x_m = y_i
$$
noting that we are using the fact that the matrix $L$ has 1 on its diagonal. We can now solve for $y_i$ as
$$
x_i = \frac{1}{U_{i,i}} \left( y_i - ( U_{i,i-1} x_{i-1} + \ldots + U_{i,m-1} x_{m-1} + U_{i,m} x_m) \right )
$$

  1. Both equations have an index mistake ($U_{i,i-1} x_{i-1}$); they should read
    $$U_{i,i} x_i + U_{i,i+1} x_{i+1} + \ldots + U_{i,m-1} x_{m-1} + U_{i,m} x_m = y_i$$,
    $$x_i = \frac{1}{U_{i,i}} \left( y_i - ( U_{i,i+1} x_{i+1} + \ldots + U_{i,m-1} x_{m-1} + U_{i,m} x_m) \right )$$.

  2. The matrix $L$ has nothing to do with backwards substitution, so noting that $L$ has 1 on its diagonal is unnecessary.

  3. The sentence between the two equations should read

    We can now solve for $x_i$ as ...
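
A minimal backward-substitution sketch with the corrected (here 0-based) indices, for illustration only:

import numpy

def backward_substitution(U, y):
    m = U.shape[0]
    x = numpy.zeros(m)
    for i in range(m - 1, -1, -1):
        # x_i = (y_i - sum_{j > i} U_{i,j} x_j) / U_{i,i}
        x[i] = (y[i] - numpy.dot(U[i, i + 1:], x[i + 1:])) / U[i, i]
    return x

U = numpy.array([[2.0, 1.0],
                 [0.0, 3.0]])
print(backward_substitution(U, numpy.array([5.0, 6.0])))   # [1.5, 2.0]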

Lecture 04_error typo

For the floating point arithmetic Ax = b example, the solver I used generated x = [1; 16], and not x = [-0.5; 16].

Modularize Content

Want to take content and make it more modularized and re-orderable for ease of re-use.

Lecture 04_error typo

Typo 1.)

"Smallest number that can be represented is the underflow: $1.0 \times 10^{-2} = 0.01$ Largest number that can be represented is the overflow: $9.9 \times 10^0 = 9.9$".

Should be "Smallest number that can be represented is the underflow: $0.1 \times 10^{-2} = 0.001$ Largest number that can be represented is the overflow: $9.9 \times 10^0 = 9.9".

Typo 2.)

"Smallest number that can be represented is the underflow: $1.0 \times 2^{-1} = 0.25$ Largest number that can be represented is the overflow: $1.1 \times 2^1 = 2.2$".

Should be "Smallest number that can be represented is the underflow: $0.1 \times 2^{-1} = 0.25$ Largest number that can be represented is the overflow: $1.1 \times 2^1 = 3$".

15_LA_gaussian.ipynb

I'm pretty sure there is a typo on the last line of this lecture. Instead of:

$(U_{i,i-1} x_{i-1}$ ...

I think it should be:

$(U_{i,i+1} x_{i+1}$ ...

Minor error 09_ODE

When describing the Adams-Bashforth method, there is a minor formatting error in the list describing the steps.

error in 09_ODE_ivp_part1

In "Example: 4-stage Runge-Kutta Method"

y_2 = u_4[n] + 0.5 * delta_t * f(t_n + 0.5 * delta_t, y_1)
y_3 = u_4[n] + 0.5 * delta_t * f(t_n + 0.5, y_2)

The second line appears to be missing a factor of delta_t in the stage time; presumably it should read f(t_n + 0.5 * delta_t, y_2).
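
For comparison, a standard classical 4-stage Runge-Kutta step (the textbook form, with assumed names f, t_n, u_n, delta_t; the notebook's stage layout may differ cosmetically):

def rk4_step(f, t_n, u_n, delta_t):
    k_1 = f(t_n, u_n)
    k_2 = f(t_n + 0.5 * delta_t, u_n + 0.5 * delta_t * k_1)
    k_3 = f(t_n + 0.5 * delta_t, u_n + 0.5 * delta_t * k_2)   # stage time includes delta_t
    k_4 = f(t_n + delta_t, u_n + delta_t * k_3)
    return u_n + delta_t / 6.0 * (k_1 + 2.0 * k_2 + 2.0 * k_3 + k_4)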

Typo 10_LA

In "Example: Vandermonde Matrix," I believe the y matrix should be y1, y2 ... ym.

Maybe not a mistake in 04_error?

In the code for the "2-digit precision base 2 system" in the 04_error notebook, we see the system defined as follows:

axes.plot( (d1 + d2 * 0.1) * 2**E, 0.0, 'r+', markersize=20)
axes.plot(-(d1 + d2 * 0.1) * 2**E, 0.0, 'r+', markersize=20)

Shouldn't the 0.1 be 0.5, since this is binary and not decimal (or maybe I am missing something)?
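
A quick enumeration along the lines the question suggests (the exponent range here is a guess):

# values of a toy 2-digit base-2 system d1.d2 x 2**E, using 0.5 as the
# place value of the second digit
for E in range(-1, 2):
    for d1 in (0, 1):
        for d2 in (0, 1):
            print((d1 + d2 * 0.5) * 2**E)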

09_ODE_ivp_part1.ipynb

I think there is a typo in the lecture, in the "Truncation Error for Multi-Step Methods" section.

In the last expression I think the general form of the $q$-th term is miswritten: a summation seems to be missing, and the $1/q!$ factor should multiply only the $\alpha$ term, not both the $\alpha$ and $\beta$ terms. For example:

$$\Delta t^{q - 1} \left( \frac{1}{q!} \left( j^q \alpha_j - \frac{1}{(q-1)!} j^{q-1} \beta_j \right) \right) u^{(q)}(t_n)$$

should be:

$$\Delta t^{q - 1} \left( \sum^r_{j=0} \left( \frac{1}{q!} j^q \alpha_j - \frac{1}{(q-1)!} j^{q-1} \beta_j \right) \right) u^{(q)}(t_n)$$

Typo in "04_error"

"Plotting the error as a function of $\Delta x$ is a common way to show that a numerical method is doing what we expect and exhbits the correct convergence behavior" --> "...exhibits the correct..."

09_ODE_ivp_part2

Plots under the L-stability section have typos in their labels:

"Comparison of error for backward euler" should have a label for Backward Euler.

"Comparison of errors for trapezoidal rule" should have a label for Trapezoidal Rule (not Forward Euler).

Lecture 9_ODE graph setup

There seems to be an error in the graphs showing the difference between the Euler and leapfrog methods. The leapfrog graph shows just one data point at 0 and a vertical dashed black line to it.

typo 09_ODE

Near the end of the global error example for forward Euler, there is a typo in the line:
"In other words the global error is bounded by the original global erro(r) and"

09_ODE_ivp_part1

A small typo in line 28 of the code block under Adams-Moulton Methods.
axes.set_xlabel("u(t)") should be axes.set_ylabel

The same typo appears in other similar plots in the file.

15_LA_gaussian

The last equation in the last block has an error. The backwards substitution method starts with i+1, not i-1. See

link

It doesn't make sense as written, so hopefully readers will realize it, but it could still cause confusion!

Possible typo in quadrature lecture

Under 'Newton-Cotes Quadrature,' the lecture contains the phrase:
"evaluate f(x) at these points and exactly integrate the interpolating polynomial exactly."

Was this intended?

04_error

The expression given by

$$f \cdot g = p \cdot q + O(x^{n\cdot m})$$

should be

$$f \cdot g = p \cdot q + O(x^{n + m})$$

Minor Corrections

Covers chapters 00 - 04.

Lecture 04_error typo

The code for Example 2 (last example) should be

numpy.all(error < 8.0 * numpy.finfo(float).eps):

not

numpy.all(error < 100.0 * numpy.finfo(float).eps):

09_ODE_ivp_part1

Typo or unfinished block within Taylor Series Methods?

Example (no math mode):
[ u^{(p)}(t_n) = f^{(p-1)}(t_n, u(t_n)) ]

Why g''(c) not g''(x*) in the Taylor expansion, Analysis of Fixed Point Iteration

In 5_root_finding_optimization.ipynb, Analysis of Fixed Point Iteration part, it writes:

Using a Taylor expansion we know

$$g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(c) e_k^2}{2}$$

$$x^* + e_{k+1} = g(x^*) + g'(x^*) e_k + \frac{g''(c) e_k^2}{2}$$

Why is it $\frac{g''(c) e_k^2}{2}$ instead of $\frac{g''(x^*) e_k^2}{2}$? Where does the $c$ come from?
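
For what it's worth, the $c$ matches the Lagrange form of the Taylor remainder (a standard theorem, not anything notebook-specific): for some $c$ between $x^*$ and $x^* + e_k$,

$$g(x^* + e_k) = g(x^*) + g'(x^*) e_k + \frac{g''(c)}{2} e_k^2$$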

Problems with two equations in Computing Order of Convergence, Differentiation

In 7_Differentiation, Examples, Example 1: 1st order Forward and Backward Differences, Computing Order of Convergence part, there are two equations:

$$e(\Delta x) = \Delta x^n + b$$
$$\log e(\Delta x) = n \log b \log \Delta x$$

However, I don't think $\log e(\Delta x)$ equals that. If I change $e(\Delta x)$, the equations seem to be correct:

$$e(\Delta x) = b \Delta x^n$$
$$\log e(\Delta x) = \log b + n \log \Delta x$$

And the following plot supports that.
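
A short sketch of estimating $n$ and $b$ from a log-log fit, with synthetic data (everything below is illustrative):

import numpy

delta_x = numpy.array([0.1, 0.05, 0.025, 0.0125])
error = 2.0 * delta_x**1.0                 # synthetic data: b = 2, n = 1

n, log_b = numpy.polyfit(numpy.log(delta_x), numpy.log(error), 1)
print("n =", n, " b =", numpy.exp(log_b))  # recovers n = 1, b = 2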
