Unit 1: Vector Algebra and Matrices
        
        Vector Products
        
        Scalar (Dot) Product
        The scalar product of two vectors A and B is a scalar quantity defined as:
        
            A · B = |A| |B| cos(θ)
        
        where |A| and |B| are the magnitudes of the vectors and θ is the angle between them.
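        
        As a quick numerical check, here is a minimal Python/NumPy sketch (the vectors are arbitrary examples, not taken from the notes) showing that the equivalent component form AxBx + AyBy + AzBz agrees with the geometric form |A| |B| cos(θ):
        
            import numpy as np
            
            A = np.array([3.0, 0.0, 0.0])   # |A| = 3, along the x-axis
            B = np.array([1.0, 1.0, 0.0])   # |B| = sqrt(2), at 45° to A
            
            # Component form of the scalar product
            print(np.dot(A, B))                                               # 3.0
            
            # Geometric form: |A| |B| cos(θ) with θ = 45°
            print(np.linalg.norm(A) * np.linalg.norm(B) * np.cos(np.pi / 4))  # ≈ 3.0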
        
        Vector (Cross) Product
        The vector product of two vectors A and B is a vector quantity C defined as:
        
            A × B = |A| |B| sin(θ) n̂
        
        where n̂ is a unit vector perpendicular to the plane containing A and B, given by the Right-Hand Rule.
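        
        A minimal NumPy sketch (example vectors chosen for illustration) confirming the definition: the result is perpendicular to both inputs and has magnitude |A| |B| sin(θ):
        
            import numpy as np
            
            A = np.array([1.0, 0.0, 0.0])        # unit vector along x
            B = np.array([0.0, 1.0, 0.0])        # unit vector along y, so θ = 90°
            
            C = np.cross(A, B)                   # Right-Hand Rule gives +z
            print(C)                             # [0. 0. 1.]
            print(np.dot(C, A), np.dot(C, B))    # 0.0 0.0 -> perpendicular to both A and B
            print(np.linalg.norm(C))             # 1.0 = |A| |B| sin(90°)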
        
        
        Scalar and Vector Triple Products
        Scalar Triple Product (Box Product)
        It is defined as the dot product of one vector with the cross product of the other two: A · (B × C).
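        
        Numerically, A · (B × C) equals the determinant of the 3 × 3 matrix whose rows are A, B and C, and its absolute value is the volume of the parallelepiped they span (as noted in the applications table below). A minimal NumPy sketch with example vectors:
        
            import numpy as np
            
            A = np.array([2.0, 0.0, 0.0])
            B = np.array([0.0, 3.0, 0.0])
            C = np.array([0.0, 0.0, 4.0])
            
            box = np.dot(A, np.cross(B, C))            # A · (B × C)
            det = np.linalg.det(np.array([A, B, C]))   # determinant form, same value
            print(box, det)                            # both ≈ 24.0 (volume of the 2×3×4 box)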
        
        Vector Triple Product
        It is defined as the cross product of one vector with the cross product of the other two: A × (B × C).
        
            - Expansion (BAC-CAB Rule): This is a crucial identity to remember (checked numerically below).
                
                    A × (B × C) = B(A · C) - C(A · B)
                
            - Mnemonic: Remember the rule as "BAC-CAB".
            - Properties:
                - The resultant vector lies in the plane formed by B and C.
                - It is not associative: A × (B × C) ≠ (A × B) × C
                - (A × B) × C = -C × (A × B) = -[A(C · B) - B(C · A)] = B(A · C) - A(B · C)
 
            Common Mistake: Confusing the scalar and vector triple products. Remember:
            
                - Scalar Triple Product: `A · (B × C)` → Results in a Scalar (a number, representing volume).
                - Vector Triple Product: `A × (B × C)` → Results in a Vector (using the BAC-CAB rule).
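
        A minimal NumPy check of the BAC-CAB identity and of non-associativity (the vectors are arbitrary, non-coplanar examples):
        
            import numpy as np
            
            A = np.array([1.0, 2.0, 3.0])
            B = np.array([0.0, 1.0, 4.0])
            C = np.array([2.0, 0.0, 1.0])
            
            lhs = np.cross(A, np.cross(B, C))           # A × (B × C)
            rhs = B * np.dot(A, C) - C * np.dot(A, B)   # B(A · C) - C(A · B)
            print(np.allclose(lhs, rhs))                # True
            
            # Not associative: (A × B) × C is a different vector in general
            print(np.allclose(np.cross(np.cross(A, B), C), lhs))   # False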
 
        
        Properties and Applications of Vectors
        This table summarizes common physical applications of the vector products defined above.
        
            
                
            | Concept | Formula | Physical Application |
            | Scalar Product | W = F · d | Calculating Work Done by a force. |
            | Scalar Product | P = F · v | Calculating instantaneous Power. |
            | Scalar Product | ΦB = B · A | Calculating Magnetic Flux through an area A. |
            | Vector Product | τ = r × F | Calculating Torque. |
            | Vector Product | L = r × p | Calculating Angular Momentum. |
            | Vector Product | Fm = q(v × B) | Calculating the magnetic (Lorentz) Force on a charge in a B-field. |
            | Scalar Triple Product | V = |A · (B × C)| | Finding the Volume of a parallelepiped. |
            
        
        
        Scalar and Vector Fields
        A field is a physical quantity that has a value for each point in space and time.
        
            - Scalar Field: A field that associates a scalar value with every point in space.
                
                    - Examples: Temperature in a room (T(x,y,z)), pressure in a fluid (P(x,y,z)), electric potential (V(x,y,z)).
                    - These are represented by level surfaces or contour lines.
            - Vector Field: A field that associates a vector value (magnitude and direction) with every point in space.
                    - Examples: Gravitational field (g(x,y,z)), electric field (E(x,y,z)), velocity field of a flowing river (v(x,y,z)).
                    - These are represented by drawing arrows at various points (see the sketch after this list).
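        
        As a small illustration in Python (the specific field expressions below are invented for demonstration), a scalar field is just a function returning one number per point, while a vector field returns one vector per point:
        
            import numpy as np
            
            # Scalar field: e.g. an invented temperature distribution T(x, y, z)
            def T(x, y, z):
                return 300.0 - 0.5 * z + 0.1 * (x**2 + y**2)
            
            # Vector field: e.g. a uniform gravitational field g(x, y, z)
            def g(x, y, z):
                return np.array([0.0, 0.0, -9.8])
            
            print(T(1.0, 2.0, 0.0))   # one scalar at the point (1, 2, 0)
            print(g(1.0, 2.0, 0.0))   # one vector at the same point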
 
        Matrices: Types and Properties
        
        Different Types of Matrices
        
            - Square Matrix: Number of rows = Number of columns.
            - Diagonal Matrix: A square matrix where all non-diagonal elements are zero.
            - Identity Matrix (I): A diagonal matrix with all diagonal elements equal to 1.
            - Null Matrix (0): A matrix where all elements are zero.
        
        Symmetric and Antisymmetric Matrices
        
            - Transpose (Aᵀ): A matrix formed by interchanging the rows and columns of A. ((Aᵀ)ij = Aji)
            - Symmetric Matrix: A square matrix A is symmetric if A = Aᵀ (i.e., Aij = Aji).
            - Antisymmetric (Skew-Symmetric) Matrix: A square matrix A is antisymmetric if A = -Aᵀ (i.e., Aij = -Aji).
            - Property: For an antisymmetric matrix, all diagonal elements must be zero (since Aii = -Aii implies 2Aii = 0).
        
        Hermitian and Anti-Hermitian Matrices
        These are the complex-number equivalents of symmetric and antisymmetric matrices.
        
            - Conjugate Transpose (A†) (Adjoint): First, take the complex conjugate of all elements, then take the transpose. A† = (A*)ᵀ
            - Hermitian Matrix: A square matrix A is Hermitian if A = A†. (i.e., Aij = Aji*)
                - Property: The diagonal elements of a Hermitian matrix must be real (since Aii = Aii*).
            - Anti-Hermitian (Skew-Hermitian) Matrix: A square matrix A is anti-Hermitian if A = -A†. (i.e., Aij = -Aji*)
                - Property: The diagonal elements of an anti-Hermitian matrix must be purely imaginary or zero (since Aii = -Aii*).
            
            Key Property: Any square matrix A can be written as the sum of a symmetric (S) and an antisymmetric (K) matrix: A = S + K
            
            where S = (1/2)(A + Aᵀ) and K = (1/2)(A - Aᵀ).
            
            Similarly, any square matrix A can be written as the sum of a Hermitian (H) and an anti-Hermitian (K) matrix: A = H + K
            
            where H = (1/2)(A + A†) and K = (1/2)(A - A†).
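        
        A minimal NumPy sketch (with arbitrary example matrices) verifying both decompositions and the diagonal property of Hermitian matrices stated above:
        
            import numpy as np
            
            # Real case: M = S + K with S symmetric, K antisymmetric
            M = np.array([[1.0, 2.0],
                          [4.0, 3.0]])
            S = 0.5 * (M + M.T)
            K = 0.5 * (M - M.T)
            print(np.allclose(M, S + K), np.allclose(S, S.T), np.allclose(K, -K.T))   # True True True
            
            # Complex case: A = H + K with H Hermitian, K anti-Hermitian
            A = np.array([[1.0, 2.0 + 1.0j],
                          [4.0, 3.0j]])
            H = 0.5 * (A + A.conj().T)        # (1/2)(A + A†)
            K = 0.5 * (A - A.conj().T)        # (1/2)(A - A†)
            print(np.allclose(A, H + K))                                      # True
            print(np.allclose(H, H.conj().T), np.allclose(K, -K.conj().T))    # True True
            print(np.allclose(H.diagonal().imag, 0.0))                        # True: Hermitian diagonal is real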
        
        
        Matrix Operations: Inverse and Transpose
        
        Transpose of a Matrix (Aᵀ)
        As defined earlier, (Aᵀ)ij = Aji.
        
            - Properties:
                - (Aᵀ)ᵀ = A
                - (A + B)ᵀ = Aᵀ + Bᵀ
                - (kA)ᵀ = kAᵀ (where k is a scalar)
                - (AB)ᵀ = BᵀAᵀ (Reversal law, checked numerically below)
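        
        A quick NumPy check of the reversal law with arbitrary example matrices:
        
            import numpy as np
            
            A = np.array([[1.0, 2.0], [3.0, 4.0]])
            B = np.array([[0.0, 1.0], [5.0, 2.0]])
            
            print(np.allclose((A @ B).T, B.T @ A.T))   # True: (AB)ᵀ = BᵀAᵀ
            print(np.allclose((A @ B).T, A.T @ B.T))   # False in general: the order must reverse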
 
        Inverse of a Matrix (A⁻¹)
        The inverse of a square matrix A is a matrix A⁻¹ such that A A⁻¹ = A⁻¹ A = I (the identity matrix).
        
            - A matrix has an inverse if and only if its determinant is non-zero (det(A) ≠ 0). Such a matrix is called non-singular.
            - Formula for Inverse (see the sketch after this list):
                
                    A⁻¹ = (1 / det(A)) * adj(A)
                
            - Determinant (det(A)): A scalar value calculated from the elements of a square matrix.
            - Adjugate (adj(A)): The transpose of the cofactor matrix.
                - Minor (Mij): Determinant of the submatrix left after removing row i and column j.
                - Cofactor (Cij): (-1)^(i+j) * Mij
                - Cofactor Matrix (C): The matrix formed by all cofactors.
                - Adjugate (adj(A)): Cᵀ (Transpose of the cofactor matrix).
            - Properties:
                - (A⁻¹)⁻¹ = A
                - (AB)⁻¹ = B⁻¹A⁻¹ (Reversal law)
                - (Aᵀ)⁻¹ = (A⁻¹)ᵀ
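        
        A minimal sketch (the helper name inverse_via_adjugate and the example matrix are illustrative, not library code) building A⁻¹ = adj(A) / det(A) from minors and cofactors, then comparing with NumPy's built-in inverse:
        
            import numpy as np
            
            def inverse_via_adjugate(A):
                n = A.shape[0]
                C = np.zeros((n, n))                     # cofactor matrix
                for i in range(n):
                    for j in range(n):
                        # Minor Mij: delete row i and column j, take the determinant
                        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
                        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
                adj = C.T                                # adjugate = transpose of the cofactor matrix
                return adj / np.linalg.det(A)
            
            A = np.array([[2.0, 1.0, 0.0],
                          [1.0, 3.0, 1.0],
                          [0.0, 1.0, 2.0]])             # det(A) = 8, so A is non-singular
            
            print(np.allclose(inverse_via_adjugate(A), np.linalg.inv(A)))   # True
            print(np.allclose(A @ inverse_via_adjugate(A), np.eye(3)))      # True: A A⁻¹ = I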
 
        Solution of Simultaneous Linear Equations
        A system of linear equations can be written in matrix form as AX = B.
        
    [ a₁₁ a₁₂ ... a₁ₙ ] [ x₁ ]   [ b₁ ]
    [ a₂₁ a₂₂ ... a₂ₙ ] [ x₂ ] = [ b₂ ]
    [ ... ... ... ... ] [ .. ]   [ .. ]
    [ aₘ₁ aₘ₂ ... aₘₙ ] [ xₙ ]   [ bₘ ]
        
        Homogeneous Equations (AX = 0)
        This is when B is a null vector (all bᵢ = 0).
        
            - Trivial Solution: X = 0 (i.e., x₁ = 0, x₂ = 0, ...) is always a solution.
            - Non-Trivial Solution: For a non-trivial solution (X ≠ 0) to exist, the determinant of the coefficient matrix must be zero: det(A) = 0 (see the sketch below).
            - This condition is crucial for finding eigenvectors.
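        
        A tiny NumPy illustration (with a deliberately singular example matrix) of why det(A) = 0 allows non-trivial solutions:
        
            import numpy as np
            
            A = np.array([[1.0, 2.0],
                          [2.0, 4.0]])    # second row is twice the first, so det(A) = 0
            print(np.linalg.det(A))       # ≈ 0.0 -> non-trivial solutions exist
            
            X = np.array([2.0, -1.0])     # one non-trivial solution (any scalar multiple also works)
            print(A @ X)                  # [0. 0.]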
        
        Non-Homogeneous Equations (AX = B)
        This is when B is not a null vector.
        Methods of Solution:
        
            - Matrix Inverse Method:
                - If A is square and non-singular (det(A) ≠ 0), a unique solution exists.
                - Multiply by A⁻¹: A⁻¹(AX) = A⁻¹B  →  (A⁻¹A)X = A⁻¹B  →  IX = A⁻¹B
                - Solution: X = A⁻¹B
            - Gauss-Jordan Elimination (Row Reduction):
                - Form the augmented matrix [A | B].
                - Use elementary row operations (swapping rows, multiplying a row by a scalar, adding a multiple of one row to another) to transform A into the identity matrix I.
                - The augmented matrix will become [I | X], and the right-hand column will be the solution vector X.
            - Cramer's Rule:
                - Used when det(A) ≠ 0.
                - The solution for each variable xᵢ is given by:
                    
                        xᵢ = det(Aᵢ) / det(A)
                    
                - where Aᵢ is the matrix A with its i-th column replaced by the vector B. (All three methods are compared in the sketch below.)
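        
        A minimal NumPy comparison of the three methods on a small example system (2x + y = 5, x + 3y = 10; the numbers are arbitrary):
        
            import numpy as np
            
            A = np.array([[2.0, 1.0],
                          [1.0, 3.0]])
            B = np.array([5.0, 10.0])
            
            # Matrix inverse method: X = A⁻¹B (valid because det(A) ≠ 0)
            X_inv = np.linalg.inv(A) @ B
            
            # Elimination (what Gauss-Jordan does by hand; solve() uses LU factorisation)
            X_solve = np.linalg.solve(A, B)
            
            # Cramer's rule: xᵢ = det(Aᵢ) / det(A)
            X_cramer = np.empty(2)
            for i in range(2):
                Ai = A.copy()
                Ai[:, i] = B                  # replace column i of A by B
                X_cramer[i] = np.linalg.det(Ai) / np.linalg.det(A)
            
            print(X_inv, X_solve, X_cramer)   # all three give [1. 3.]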
 
        Eigenvalues and Eigenvectors of a Matrix
        
        Definition
        For a given square matrix A, a non-zero vector X is an eigenvector of A if it satisfies the following equation for some scalar λ:
        
            AX = λX
        
        The scalar λ is called the eigenvalue corresponding to the eigenvector X. This means that when matrix A "acts" on its eigenvector X, it only scales it by a factor λ, without changing its direction.
        How to Find Eigenvalues and Eigenvectors
        - Step 1: Find the Eigenvalues (λ)
            - Rearrange the equation: AX - λX = 0  →  AX - λIX = 0
            - (A - λI)X = 0
            - This is a homogeneous system of linear equations. For a non-trivial solution (X ≠ 0) to exist, the determinant of the coefficient matrix must be zero.
            - The Characteristic Equation: det(A - λI) = 0
            - Solving this equation (a polynomial of degree n in λ for an n × n matrix) gives the eigenvalues λ₁, λ₂, ...
        - Step 2: Find the Eigenvectors (X)
            - For each eigenvalue λᵢ found in Step 1, substitute it back into the equation:
                
                    (A - λᵢI)X = 0
                
            - Solve this homogeneous system for the vector X = [x₁, x₂, ...]. This will typically involve Gaussian elimination.
            - The solution will have at least one free variable, leading to an infinite number of solutions (all scalar multiples of a base eigenvector). We usually state one normalized eigenvector.
 
            Properties of Eigenvalues:
            
                - The sum of all eigenvalues is equal to the trace of the matrix (sum of diagonal elements): Σλᵢ = Tr(A) (verified in the sketch below)
                - The product of all eigenvalues is equal to the determinant of the matrix: Πλᵢ = det(A)
                - The eigenvalues of a Hermitian (or real symmetric) matrix are always real. This is fundamental in quantum mechanics, where eigenvalues represent measurable quantities.
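        
        A minimal NumPy sketch (with an arbitrary real symmetric example matrix) verifying AX = λX and the trace/determinant properties listed above:
        
            import numpy as np
            
            A = np.array([[2.0, 1.0],
                          [1.0, 2.0]])                # real symmetric, so the eigenvalues are real
            
            eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are the eigenvectors
            print(np.sort(eigvals))                   # [1. 3.]
            
            # Check AX = λX for each eigenpair
            for lam, X in zip(eigvals, eigvecs.T):
                print(np.allclose(A @ X, lam * X))    # True, True
            
            print(np.isclose(eigvals.sum(), np.trace(A)))          # True: Σλᵢ = Tr(A) = 4
            print(np.isclose(eigvals.prod(), np.linalg.det(A)))    # True: Πλᵢ = det(A) = 3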