Unit 5: Beta & Gamma Functions and Numerical Techniques


Beta and Gamma Functions: Definitions

Gamma Function (Γ(n))

The Gamma function is an extension of the factorial function to complex and real numbers. It is defined by a definite integral:

Γ(n) = ∫₀^∞ e^(-x) x^(n-1) dx (for n > 0)

Beta Function (B(m, n))

The Beta function is a related function defined by a different integral:

B(m, n) = ∫₀^1 x^(m-1) (1-x)^(n-1) dx (for m > 0, n > 0)

Properties and Relation between Beta and Gamma

Properties of Gamma Function

Γ(n+1) = n Γ(n) (reduction/recurrence formula)
Γ(n+1) = n! when n is a non-negative integer
Γ(1) = 1
Γ(1/2) = √π

Properties of Beta Function

B(m, n) = B(n, m) (symmetry)
B(m, n) = 2 ∫₀^(π/2) (sinθ)^(2m-1) (cosθ)^(2n-1) dθ (trigonometric form)

Relation Between Beta and Gamma Functions

This is the key identity that connects the two functions and allows for the easy evaluation of many integrals.

B(m, n) = [ Γ(m) Γ(n) ] / Γ(m+n)
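As a quick sanity check, this identity can be verified numerically. The sketch below (the helper name `beta_integral` is illustrative, not from the source) compares the defining integral of B(3, 2) against the gamma-function formula, using Python's standard `math.gamma`:

```python
from math import gamma

def beta_integral(m, n, steps=10000):
    """Approximate B(m, n) = ∫₀¹ x^(m-1) (1-x)^(n-1) dx by the midpoint rule."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h          # midpoint of each subinterval
        total += x**(m - 1) * (1 - x)**(n - 1)
    return total * h

m, n = 3.0, 2.0
via_integral = beta_integral(m, n)
via_gamma = gamma(m) * gamma(n) / gamma(m + n)
print(via_integral, via_gamma)     # both ≈ 1/12, since Γ(3)Γ(2)/Γ(5) = 2/24
```
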

Expression of Integrals in terms of Gamma Functions

We can use the relation `B(m, n) = [ Γ(m) Γ(n) ] / Γ(m+n)` and the trigonometric form of the Beta function to solve a very common class of definite integrals.

We know:

∫₀^(π/2) (sinθ)^p (cosθ)^q dθ = (1/2) * B( (p+1)/2, (q+1)/2 )

...which follows from the trigonometric form of the Beta function by comparing `p = 2m-1` and `q = 2n-1`.

Combining these, we get the Wallis Integral Formula:

∫₀^(π/2) (sinθ)^p (cosθ)^q dθ = [ Γ((p+1)/2) * Γ((q+1)/2) ] / [ 2 * Γ((p+q+2)/2) ]
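The formula is easy to test against direct numerical integration. A minimal sketch (function names `wallis` and `integral` are mine, for illustration), here with p = 2, q = 3, where the exact value is 2/15:

```python
from math import gamma, sin, cos, pi

def wallis(p, q):
    """Right-hand side of the Wallis formula in terms of gamma functions."""
    return gamma((p + 1) / 2) * gamma((q + 1) / 2) / (2 * gamma((p + q + 2) / 2))

def integral(p, q, steps=20000):
    """Midpoint-rule approximation of ∫₀^(π/2) sin^p θ cos^q θ dθ."""
    h = (pi / 2) / steps
    return sum(sin((i + 0.5) * h)**p * cos((i + 0.5) * h)**q
               for i in range(steps)) * h

print(wallis(2, 3), integral(2, 3))  # both ≈ 2/15 ≈ 0.1333
```
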

Example Problem: Evaluate I = ∫₀^∞ e^(-x²) dx (Gaussian Integral)
  1. Let t = x², so x = t^(1/2). Then dx = (1/2) t^(-1/2) dt.
  2. As x → 0, t → 0. As x → ∞, t → ∞.
  3. Substitute: I = ∫₀^∞ e^(-t) (1/2) t^(-1/2) dt = (1/2) ∫₀^∞ e^(-t) t^((1/2) - 1) dt
  4. This is in the form of the Gamma function: (1/2) * Γ(1/2).
  5. Since Γ(1/2) = √π, the integral is √π / 2.
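The steps above can be confirmed numerically: `math.gamma(0.5)` returns √π, and a direct quadrature of e^(-x²) over a large interval should agree with √π / 2 (the cutoff of 10 below is an assumption, justified because e^(-x²) is negligible there):

```python
from math import gamma, exp, sqrt, pi

half_root_pi = gamma(0.5) / 2          # (1/2) Γ(1/2) = √π / 2

steps, upper = 20000, 10.0             # e^(-x²) is negligible beyond x = 10
h = upper / steps
numeric = sum(exp(-((i + 0.5) * h)**2) for i in range(steps)) * h

print(half_root_pi, numeric, sqrt(pi) / 2)  # all ≈ 0.8862
```
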

Numerical Integration

These techniques are used to find the approximate value of a definite integral `I = ∫[a, b] f(x) dx` when `f(x)` is too difficult or impossible to integrate analytically. The method is to divide the interval `[a, b]` into `n` equal subintervals of width `h`.

h = (b - a) / n

Let `y₀ = f(x₀)`, `y₁ = f(x₁)`, ..., `yₙ = f(xₙ)`, where `x₀ = a` and `xₙ = b`.

Trapezoidal Rule

This method approximates the area under the curve in each subinterval as a trapezoid.

I ≈ (h/2) [ (y₀ + yₙ) + 2(y₁ + y₂ + ... + yₙ₋₁) ]

In words: (h/2) * [ (first + last ordinates) + 2 * (sum of all other ordinates) ]
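A minimal implementation of this rule (the function name `trapezoidal` is illustrative), tested on ∫₀¹ x² dx = 1/3:

```python
def trapezoidal(f, a, b, n):
    """Trapezoidal rule: (h/2)[(first + last) + 2*(sum of all other ordinates)]."""
    h = (b - a) / n
    total = f(a) + f(b)            # first + last ordinates
    for i in range(1, n):
        total += 2 * f(a + i * h)  # interior ordinates, each counted twice
    return total * h / 2

print(trapezoidal(lambda x: x * x, 0.0, 1.0, 100))  # ≈ 0.33335 (exact: 1/3)
```
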

Simpson's 1/3 Rule

This method is more accurate than the Trapezoidal rule. It approximates the function over two subintervals at a time using a parabola, and it requires `n` to be an even number.

I ≈ (h/3) [ (y₀ + yₙ) + 4(y₁ + y₃ + y₅ + ...) + 2(y₂ + y₄ + y₆ + ...) ]

In words: (h/3) * [ (first + last) + 4*(sum of odd ordinates) + 2*(sum of even ordinates) ]
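The same pattern in code (sketch; `simpson_13` is my name for it). Because the rule fits parabolas, it is exact even for cubics, as the example shows:

```python
def simpson_13(f, a, b, n):
    """Simpson's 1/3 rule: (h/3)[(first + last) + 4*odd + 2*even ordinates]."""
    if n % 2 != 0:
        raise ValueError("n must be even for Simpson's 1/3 rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 == 1 else 2) * f(a + i * h)
    return total * h / 3

print(simpson_13(lambda x: x**3, 0.0, 1.0, 2))  # 0.25 exactly (∫₀¹ x³ dx = 1/4)
```
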

Simpson's 3/8 Rule

This method approximates the function over three intervals at a time using a cubic. It requires `n` to be a multiple of 3.

I ≈ (3h/8) [ (y₀ + yₙ) + 3(y₁ + y₂ + y₄ + y₅ + ...) + 2(y₃ + y₆ + y₉ + ...) ]

In words: (3h/8) * [ (first + last) + 3*(sum of non-multiple-of-3 ordinates) + 2*(sum of multiple-of-3 ordinates) ]
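And the 3/8 variant as a sketch (`simpson_38` is an illustrative name). Note the coefficient pattern: ordinates at multiples of 3 get weight 2, the rest get 3:

```python
def simpson_38(f, a, b, n):
    """Simpson's 3/8 rule: (3h/8)[(first + last) + 3*non-multiples + 2*multiples of 3]."""
    if n % 3 != 0:
        raise ValueError("n must be a multiple of 3 for Simpson's 3/8 rule")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return total * 3 * h / 8

print(simpson_38(lambda x: x**3, 0.0, 1.0, 3))  # 0.25 exactly (cubic, so no error)
```
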

Key Point: Simpson's 1/3 rule is generally more accurate and more commonly used than the Trapezoidal rule. The 3/8 rule is for specific cases where `n` is a multiple of 3.

Solution of Equations (Root Finding)

(Note: The syllabus has a typo "Solution of linear equations...". Bisection and Newton-Raphson are for non-linear equations of the form `f(x) = 0`.)

Bisection Method

This is a "bracketing" method that is slow but guaranteed to work.

  1. Requirement: Find two points, `a` and `b`, such that `f(a)` and `f(b)` have opposite signs. This guarantees a root exists between them.
  2. Step 1: Calculate the midpoint `c = (a + b) / 2`.
  3. Step 2: Evaluate `f(c)`.
  4. Step 3 (Update):
    • If `f(a)` and `f(c)` have opposite signs, the root is in `[a, c]`. Set `b = c`.
    • If `f(b)` and `f(c)` have opposite signs, the root is in `[c, b]`. Set `a = c`.
  5. Step 4 (Repeat): Go back to Step 1 with the new, smaller interval. Repeat until the interval `[a, b]` is sufficiently small.
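The steps above translate directly into code (a sketch; the name `bisection` and the tolerance defaults are my choices), shown here on the root of x³ - x - 2 = 0 in [1, 2]:

```python
def bisection(f, a, b, tol=1e-8, max_iter=100):
    """Bisection for f(x) = 0, given f(a) and f(b) of opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2            # Step 1: midpoint
        if f(a) * f(c) <= 0:       # Step 3: root is in [a, c]
            b = c
        else:                      # root is in [c, b]
            a = c
        if b - a < tol:            # Step 4: interval sufficiently small
            break
    return (a + b) / 2

print(bisection(lambda x: x**3 - x - 2, 1.0, 2.0))  # ≈ 1.5214
```
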

Newton-Raphson Method

This is an "open" method that is much faster but can sometimes fail to converge. It uses the tangent line to approximate the root.

  1. Requirement: You need the function `f(x)` and its derivative `f'(x)`.
  2. Step 1: Start with an initial "guess" `x₀` that is close to the root.
  3. Step 2 (Iterate): Generate the next, better guess `x₁` using the formula:
    xₙ₊₁ = xₙ - f(xₙ) / f'(xₙ)
  4. Step 3 (Repeat): Use `x₁` to find `x₂`, `x₂` to find `x₃`, and so on.
  5. Step 4 (Stop): Stop when the difference between `xₙ₊₁` and `xₙ` is very small.
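A minimal sketch of the iteration (the name `newton_raphson` and the stopping tolerance are my choices), computing √2 as the root of f(x) = x² - 2:

```python
def newton_raphson(f, df, x0, tol=1e-10, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < tol:   # stop when successive guesses agree
            return x_new
        x = x_new
    return x

print(newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0))  # ≈ 1.41421356
```
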
Comparison: Bisection converges slowly but is guaranteed to find a root once one is bracketed; Newton-Raphson converges much faster near the root but requires the derivative `f'(x)` and can fail to converge from a poor starting guess.

Interpolation

Interpolation is the process of estimating the value of a function `y` at a point `x` that lies between known data points `(x₀, y₀), (x₁, y₁), ...`.

These formulas are for equally spaced data, where `h = x₁ - x₀ = x₂ - x₁`.

Newton-Gregory Forward Difference Formula

This formula is used to interpolate values of `y` near the beginning of the dataset (near `x₀`).

It uses a forward difference table.
First difference: Δy₀ = y₁ - y₀
Second difference: Δ²y₀ = Δy₁ - Δy₀
...and so on.

The formula is:

y(x) = y₀ + p Δy₀ + [p(p-1)/2!] Δ²y₀ + [p(p-1)(p-2)/3!] Δ³y₀ + ...

where p = (x - x₀) / h. (`x` is the point you are interpolating at).
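The forward-difference table and the formula can be combined in a few lines (a sketch; `newton_forward` is an illustrative name, and equally spaced `xs` are assumed). On data from y = x², the result is exact:

```python
def newton_forward(xs, ys, x):
    """Newton-Gregory forward interpolation for equally spaced xs."""
    n = len(ys)
    h = xs[1] - xs[0]
    p = (x - xs[0]) / h
    diffs = list(ys)               # current column of the difference table
    result = diffs[0]              # y₀
    coeff = 1.0
    for k in range(1, n):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        coeff *= (p - (k - 1)) / k  # builds p(p-1)...(p-k+1)/k!
        result += coeff * diffs[0]  # Δᵏy₀ is the top of the k-th column
    return result

# y = x² sampled at x = 0, 1, 2, 3; interpolate at x = 1.5
print(newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5))  # 2.25 (exact for a quadratic)
```
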

Newton-Gregory Backward Difference Formula

This formula is used to interpolate values of `y` near the end of the dataset (near `xₙ`).

It uses a backward difference table.
First difference: ∇yₙ = yₙ - yₙ₋₁
Second difference: ∇²yₙ = ∇yₙ - ∇yₙ₋₁
...and so on.

The formula is:

y(x) = yₙ + p ∇yₙ + [p(p+1)/2!] ∇²yₙ + [p(p+1)(p+2)/3!] ∇³yₙ + ...

where p = (x - xₙ) / h. (Note the different definition of `p`!)
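The backward version mirrors the forward code, except that differences are taken from the bottom of the table and `p` is measured from the last point (a sketch under the same assumptions; `newton_backward` is an illustrative name):

```python
def newton_backward(xs, ys, x):
    """Newton-Gregory backward interpolation for equally spaced xs."""
    n = len(ys)
    h = xs[1] - xs[0]
    p = (x - xs[-1]) / h            # note: p is measured from the LAST point
    diffs = list(ys)
    result = diffs[-1]              # yₙ
    coeff = 1.0
    for k in range(1, n):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        coeff *= (p + (k - 1)) / k  # builds p(p+1)...(p+k-1)/k!
        result += coeff * diffs[-1] # ∇ᵏyₙ is the bottom of the k-th column
    return result

# y = x² sampled at x = 0, 1, 2, 3; interpolate at x = 2.5
print(newton_backward([0, 1, 2, 3], [0, 1, 4, 9], 2.5))  # 6.25 (exact for a quadratic)
```
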