ResearchForge / Calculators
Tags: linear-algebra, matrices, eigenvalues, diagonalization, determinants · Sat Apr 25

Linear Algebra: Step-by-Step Derivations

Abstract

This article presents core concepts in linear algebra with emphasis on rigorous derivations and computational clarity. We cover matrix multiplication, determinant properties, eigenvalue computation, and diagonalization—foundational techniques essential for applications in engineering, data science, and theoretical mathematics. Each section builds systematically from definitions to worked examples.

Background

Linear algebra provides the mathematical framework for understanding linear transformations and systems of equations. The central objects are matrices: rectangular arrays of scalars that encode transformations between vector spaces. Understanding how to manipulate matrices and extract their intrinsic properties—eigenvalues, eigenvectors, and rank—is essential for both theoretical work and practical computation.

This article assumes familiarity with basic matrix notation and vector operations. We focus on deriving key results step-by-step rather than merely stating them.

Key Results

Matrix Multiplication

[matrix-multiplication] defines the product of two matrices A (size m × n) and B (size n × p) as:

(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}

The element in position (i, j) of the product is the dot product of the i-th row of A with the j-th column of B. This operation is non-commutative: in general, AB ≠ BA. Matrix multiplication is essential for representing linear transformations and solving systems of equations efficiently.
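The definition above translates directly into a triple loop (or nested comprehension). A minimal Python sketch, with matrices represented as lists of rows and the function name chosen for illustration:

```python
def mat_mul(A, B):
    """Multiply an m x n matrix A by an n x p matrix B (lists of rows)."""
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A), "inner dimensions must match"
    # (AB)[i][j] = sum over k of A[i][k] * B[k][j]
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]]  -- AB != BA in general
```

The two printed products differ, illustrating non-commutativity on even a simple column-swap matrix.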

Determinants and Invertibility

The determinant is a scalar-valued function on square matrices that encodes critical information about invertibility and volume scaling. [determinant-of-a-matrix] establishes that for a 2 × 2 matrix:

\det(A) = ad - bc \quad \text{for} \quad A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}

A matrix is invertible if and only if its determinant is nonzero. Geometrically, the absolute value of the determinant is the factor by which the linear transformation scales volumes, and its sign records whether orientation is preserved.

[determinant-properties] establishes key properties:

  1. Row swaps change the sign: if A' is obtained from A by swapping two rows, then det(A') = -det(A)
  2. Scalar multiplication of a row scales the determinant: multiplying a single row of A by c multiplies det(A) by c
  3. The determinant is invariant under row replacement: adding a multiple of one row to another preserves det(A)
  4. Transpose invariance: det(A) = det(A^T)

These properties make determinants computable via row reduction and provide a systematic method for analyzing matrix structure.
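These row-reduction rules can be turned into a working determinant routine. A minimal sketch in Python (function name illustrative), which reduces the matrix to triangular form while tracking sign flips from swaps:

```python
def det_by_row_reduction(M):
    """Determinant via Gaussian elimination, using the row-operation
    properties: swaps flip the sign, row replacement leaves det unchanged,
    and the determinant of a triangular matrix is the product of its
    diagonal entries."""
    A = [row[:] for row in M]          # work on a copy
    n = len(A)
    sign = 1.0
    for col in range(n):
        # choose the largest pivot in this column (partial pivoting)
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if A[pivot][col] == 0:
            return 0.0                 # no pivot -> singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign               # property 1: a row swap flips the sign
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]   # property 3: det unchanged
    prod = sign
    for i in range(n):
        prod *= A[i][i]
    return prod

print(det_by_row_reduction([[3, 2], [1, 4]]))  # ~10.0
```

Elimination costs O(n^3) operations, versus the n! terms of cofactor expansion, which is why row reduction is the practical route for anything beyond small matrices.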

Eigenvalues and Eigenvectors

[eigenvalues-of-a-matrix] defines eigenvalues as scalars λ satisfying:

\det(A - \lambda I) = 0

This characteristic equation yields a polynomial in λ whose roots are the eigenvalues. Each eigenvalue measures how much the corresponding eigenvectors are scaled under the transformation A, since Av = λv. Eigenvalues are crucial in stability analysis, the study of oscillations, and principal component analysis.
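For a 2 × 2 matrix the characteristic polynomial is simply λ² − tr(A)·λ + det(A), so the eigenvalues come straight from the quadratic formula. A small Python sketch (real-eigenvalue case only; the function name is illustrative):

```python
import math

def eigenvalues_2x2(A):
    """Real eigenvalues of a 2x2 matrix from det(A - lambda*I) = 0,
    which expands to lambda^2 - tr(A)*lambda + det(A) = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det           # discriminant of the quadratic
    if disc < 0:
        raise ValueError("eigenvalues are complex")
    root = math.sqrt(disc)
    return (tr + root) / 2, (tr - root) / 2

print(eigenvalues_2x2([[3, 2], [1, 4]]))  # (5.0, 2.0)
```

For larger matrices, root-finding on the characteristic polynomial is numerically fragile; production code uses iterative methods (e.g. the QR algorithm) instead.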

Diagonalization

[diagonalizable-matrix] establishes that a matrix A is diagonalizable if it can be factored as:

A = P D P^{-1}

where D is diagonal (its entries are the eigenvalues) and P is invertible (its columns are the corresponding eigenvectors). Diagonalization simplifies computation: powers of A become A^n = P D^n P^{-1}, and D^n is trivial to compute entrywise. This is invaluable for solving differential equations and analyzing the long-term behavior of dynamical systems.

Column and Null Spaces

[basis-of-column-space] describes the column space Col(A) as the span of the matrix's columns. A basis consists of the pivot columns of A (identified by reducing to row echelon form), and its dimension equals the number of pivots, i.e., the rank.

[basis-of-null-space] defines the null space Null(A) as the solution set of Ax = 0. A basis is found by solving this homogeneous system, and its dimension equals the number of free variables. The two dimensions account for every column: rank plus nullity equals the number of columns (the rank-nullity theorem).
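The rank-nullity bookkeeping can be checked mechanically: count pivots during forward elimination, and the leftover columns are free variables. A minimal Python sketch under that interpretation (function name illustrative):

```python
def rank(M, tol=1e-12):
    """Rank = number of pivot columns found during forward elimination."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0                               # index of the next pivot row
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(A[i][c]) > tol), None)
        if pivot is None:
            continue                    # free column: contributes to nullity
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row is 2x the first
print(rank(A))                           # 2
print(len(A[0]) - rank(A))               # nullity = 1, so rank + nullity = 3
```

Here one dependent row costs a pivot, leaving one free variable: rank 2 plus nullity 1 equals the 3 columns, as the theorem requires.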

Matrix Equation Solutions

[matrix-equation-solution] demonstrates solving B^{-1}(A - X) = AX for X:

Expanding: B^{-1}A - B^{-1}X = AX

Rearranging: B^{-1}A = AX + B^{-1}X = (A + B^{-1})X

Multiplying both sides on the left by B gives (BA + I)X = A, so when BA + I is invertible:

X = (BA + I)^{-1} A

This illustrates systematic matrix algebra: isolating unknowns requires careful attention to non-commutativity and invertibility conditions.
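The closed form can be sanity-checked numerically. A sketch for the 2 × 2 case with concrete matrices chosen so that BA + I is invertible (all helper names and the sample matrices are illustrative):

```python
def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(A):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[2.0, 1.0], [0.0, 1.0]]
B = [[1.0, 0.0], [1.0, 2.0]]

# closed form: X = (BA + I)^{-1} A
BA = mul(B, A)
BApI = [[BA[0][0] + 1, BA[0][1]], [BA[1][0], BA[1][1] + 1]]
X = mul(inv(BApI), A)

# check the original equation B^{-1}(A - X) = A X
lhs = mul(inv(B), [[A[i][j] - X[i][j] for j in range(2)] for i in range(2)])
rhs = mul(A, X)
print(all(abs(lhs[i][j] - rhs[i][j]) < 1e-9
          for i in range(2) for j in range(2)))  # True
```

Both sides of the original equation agree to floating-point precision, confirming the derivation for this pair of matrices.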

Similarly, [matrix-inversion-formula] solves B^{-1}(A + X) = AX, yielding:

X = (BA - I)^{-1} A

Both results depend critically on the invertibility of the final matrix expression.

Worked Examples

Example 1: Computing a 2×2 Determinant

Let A be the matrix:

A = \begin{pmatrix} 3 & 2 \\ 1 & 4 \end{pmatrix}

Using [determinant-of-a-matrix]:

\det(A) = (3)(4) - (2)(1) = 12 - 2 = 10

Since det(A) ≠ 0, the matrix is invertible.

Example 2: Finding Eigenvalues

For the same matrix A, we solve the characteristic equation from [eigenvalues-of-a-matrix]:

\det(A - \lambda I) = \det\begin{pmatrix} 3-\lambda & 2 \\ 1 & 4-\lambda \end{pmatrix} = 0

(3-\lambda)(4-\lambda) - 2 = 0

\lambda^2 - 7\lambda + 12 - 2 = 0

\lambda^2 - 7\lambda + 10 = 0

(\lambda - 5)(\lambda - 2) = 0

The eigenvalues are λ₁ = 5 and λ₂ = 2.

Example 3: Diagonalization

With eigenvalues λ₁ = 5 and λ₂ = 2, we construct:

D = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}

Finding the corresponding eigenvectors and placing them as the columns of P, we obtain A = P D P^{-1} per [diagonalizable-matrix]. Computing A^{10} then becomes:

A^{10} = P D^{10} P^{-1} = P \begin{pmatrix} 5^{10} & 0 \\ 0 & 2^{10} \end{pmatrix} P^{-1}

This is far simpler than multiplying A by itself ten times.
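The eigenvectors here work out to (1, 1) for λ = 5 and (-2, 1) for λ = 2 (solve (A - λI)v = 0 for each). A Python sketch that builds A^{10} from the factorization and cross-checks it against brute-force repeated multiplication:

```python
def mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# A = [[3,2],[1,4]] with eigenpairs (5, [1,1]) and (2, [-2,1])
P     = [[1.0, -2.0], [1.0, 1.0]]            # eigenvectors as columns
P_inv = [[1/3, 2/3], [-1/3, 1/3]]            # inverse of P (det P = 3)
D10   = [[5.0**10, 0.0], [0.0, 2.0**10]]     # D^10 is computed entrywise

A10 = mul(mul(P, D10), P_inv)                # A^10 = P D^10 P^{-1}

# cross-check: multiply A by itself ten times
A, M = [[3.0, 2.0], [1.0, 4.0]], [[1.0, 0.0], [0.0, 1.0]]
for _ in range(10):
    M = mul(M, A)

print(all(abs(A10[i][j] - M[i][j]) < 1e-3
          for i in range(2) for j in range(2)))  # True
```

The diagonalized route needs only two matrix products plus two scalar powers, while the brute-force loop needs ten products; the gap widens quickly as the exponent grows.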


AI Disclosure

This article was drafted with AI assistance (model: claude-haiku-4-5-20251001). The structure, derivations, and explanations were generated based on the provided Zettelkasten notes. All mathematical claims are cited to source notes. The article has been reviewed for technical accuracy and clarity, but readers should verify critical results against standard linear algebra references.
