ResearchForge / Calculators
Tags: linear-algebra · eigenvalues · basis · diagonalization · column-space · null-space — Mon May 04

Linear Algebra: Key Theorems and Proofs

Abstract

This article surveys foundational theorems in linear algebra, focusing on the structure of vector spaces, matrix properties, and diagonalization. We examine the bases of column and null spaces, determinants, eigenvalues, and conditions for diagonalizability, with emphasis on how these concepts interconnect to provide insight into linear transformations.

Background

Linear algebra provides the mathematical framework for understanding linear transformations and vector spaces. At its core are questions about invertibility, solvability of systems, and the geometric meaning of matrix operations. The theorems discussed here form the backbone of computational and theoretical linear algebra, with applications spanning data science, physics, and engineering.

A matrix $A$ encodes a linear transformation, and its properties—whether it is invertible, how it scales space, which vectors it leaves unchanged—determine the behavior of systems it governs. Understanding these properties requires tools that extract structural information from the matrix itself.

Key Results

Column Space and Basis

The column space of a matrix $A$, denoted $\text{Col}(A)$, is the subspace spanned by its columns [basis-of-column-space]. A basis for this space consists of a maximal linearly independent set of those columns.

A practical method for finding this basis is to row reduce $A$ and identify the pivot columns. The columns of $A$ corresponding to pivot positions in the row echelon form constitute a basis for $\text{Col}(A)$ [basis-of-column-space]. Importantly, the dimension of the column space—the rank of $A$—equals the number of pivot columns.

This result is essential for understanding whether a system $Ax = b$ has a solution: the system is consistent if and only if $b$ lies in $\text{Col}(A)$.
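The pivot-column idea can be sketched numerically. In the NumPy sketch below, `pivot_columns` is an illustrative helper (not a library function): a column is a pivot column exactly when it is not a linear combination of the columns before it, which we can detect by checking whether it increases the rank of the leading submatrix.

```python
import numpy as np

def pivot_columns(A, tol=1e-10):
    # Column j is a pivot column iff appending it to the columns
    # before it increases the rank of the leading submatrix.
    pivots, rank = [], 0
    for j in range(A.shape[1]):
        r = np.linalg.matrix_rank(A[:, : j + 1], tol=tol)
        if r > rank:
            pivots.append(j)
            rank = r
    return pivots

# Example: the middle column is twice the first, so it is not a pivot.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0]])
piv = pivot_columns(A)   # -> [0, 2]
basis = A[:, piv]        # these columns form a basis for Col(A)
```

This mirrors the hand method: row reduction would place pivots in columns 0 and 2, and those columns of the original matrix span the column space.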

Null Space and Basis

The null space of $A$, denoted $\text{Null}(A)$, is the set of all vectors $x$ satisfying the homogeneous equation $Ax = 0$ [basis-of-null-space]. Finding a basis for the null space requires solving this equation and expressing the general solution in terms of free variables.

The dimension of the null space equals the number of free variables in the solution [basis-of-null-space]. A non-trivial null space (dimension greater than zero) indicates that the transformation represented by $A$ collapses some directions to zero, revealing linear dependence among the columns.

The relationship between these two spaces is formalized by the Rank-Nullity Theorem: for an $m \times n$ matrix $A$,

$$\text{rank}(A) + \text{nullity}(A) = n$$

where rank is the dimension of the column space and nullity is the dimension of the null space.
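The theorem can be checked numerically. The NumPy sketch below uses an illustrative helper `null_space_basis` (mirroring what `scipy.linalg.null_space` does): the right-singular vectors associated with zero singular values span $\text{Null}(A)$.

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    # Rows of Vt beyond the rank correspond to (near-)zero singular
    # values; their transposes form an orthonormal basis of Null(A).
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 7.0]])
N = null_space_basis(A)
rank = np.linalg.matrix_rank(A)

# Rank-Nullity: rank + nullity = number of columns
assert rank + N.shape[1] == A.shape[1]
# Every basis vector of the null space satisfies A x = 0
assert np.allclose(A @ N, 0)
```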

Determinant

The determinant of a square matrix $A$, denoted $\det(A)$, is a scalar that encodes critical information about the matrix [determinant-of-a-matrix]. For a $2 \times 2$ matrix,

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$

For larger matrices, the determinant can be computed via cofactor expansion or row reduction [determinant-of-a-matrix].

The determinant serves multiple purposes: it indicates invertibility (a matrix is invertible if and only if $\det(A) \neq 0$), it represents the signed volume scaling factor of the linear transformation, and it is used to compute eigenvalues. A zero determinant signals that the transformation collapses space into a lower dimension, corresponding to linear dependence among rows or columns [determinant-of-a-matrix].
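The $2 \times 2$ formula and the invertibility criterion can be verified directly. A small NumPy sketch (`det2` is an illustrative helper for the $ad - bc$ formula):

```python
import numpy as np

def det2(a, b, c, d):
    # 2x2 determinant: ad - bc
    return a * d - b * c

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
d = np.linalg.det(A)
# det(A) = 2*3 - 1*0 = 6, matching the closed-form formula
assert np.isclose(d, det2(2.0, 1.0, 0.0, 3.0))
# Nonzero determinant means A is invertible:
assert np.allclose(A @ np.linalg.inv(A), np.eye(2))
```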

Eigenvalues and Eigenvectors

For a square matrix $A$, an eigenvalue $\lambda$ is a scalar such that there exists a non-zero vector $v$ (called an eigenvector) satisfying

$$Av = \lambda v$$

Eigenvalues are found by solving the characteristic equation [eigenvalues-of-a-matrix]:

$$\det(A - \lambda I) = 0$$

This equation, obtained by rearranging $Av = \lambda v$ as $(A - \lambda I)v = 0$, yields a polynomial in $\lambda$ whose roots are the eigenvalues [eigenvalues-of-a-matrix].

Eigenvalues reveal how the transformation stretches or compresses directions in space. They appear in stability analysis, oscillation frequencies, and growth rates across applications [eigenvalues-of-a-matrix].
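Both routes to the eigenvalues—roots of the characteristic polynomial, or a direct eigensolver—can be compared numerically. A NumPy sketch (`np.poly` of a square matrix returns its characteristic polynomial's coefficients):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
# Characteristic polynomial of A: lambda^2 - 5*lambda + 6
coeffs = np.poly(A)
roots = np.sort(np.roots(coeffs).real)
# Direct eigenvalue computation
direct = np.sort(np.linalg.eigvals(A).real)
assert np.allclose(roots, direct)  # both give [2., 3.]
```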

Diagonalization

A matrix $A$ is diagonalizable if it can be written as $A = PDP^{-1}$, where $D$ is a diagonal matrix and $P$ is an invertible matrix whose columns are eigenvectors of $A$ [diagonalizable-matrix].

Diagonalization is powerful because it simplifies computation: raising $A$ to a power becomes $A^k = PD^kP^{-1}$, and since $D^k$ is diagonal, it is easy to compute [diagonalizable-matrix].

A necessary and sufficient condition for diagonalizability is that $A$ has $n$ linearly independent eigenvectors (where $A$ is $n \times n$). This occurs precisely when the eigenvectors form a basis for the vector space [diagonalizable-matrix].
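The power computation can be sketched with NumPy (`np.linalg.eig` returns the eigenvalues and a matrix whose columns are eigenvectors, i.e. the $P$ above):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigvals, P = np.linalg.eig(A)   # columns of P are eigenvectors
D = np.diag(eigvals)
Pinv = np.linalg.inv(P)

# A = P D P^{-1}
assert np.allclose(A, P @ D @ Pinv)
# A^5 = P D^5 P^{-1}; D^5 just raises each diagonal entry to the 5th power
A5 = P @ np.diag(eigvals ** 5) @ Pinv
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```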

Worked Example

Consider the matrix

$$A = \begin{pmatrix} 2 & 1 \\ 0 & 3 \end{pmatrix}$$

Finding eigenvalues: Compute $\det(A - \lambda I)$:

$$\det\begin{pmatrix} 2-\lambda & 1 \\ 0 & 3-\lambda \end{pmatrix} = (2-\lambda)(3-\lambda) = 0$$

So $\lambda_1 = 2$ and $\lambda_2 = 3$ [eigenvalues-of-a-matrix].

Finding eigenvectors: For $\lambda_1 = 2$, solve $(A - 2I)v = 0$:

$$\begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0 \implies v_2 = 0$$

An eigenvector is $v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

For $\lambda_2 = 3$, solve $(A - 3I)v = 0$:

$$\begin{pmatrix} -1 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0 \implies v_1 = v_2$$

An eigenvector is $v_2 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.

Diagonalization: Since we have two linearly independent eigenvectors, $A$ is diagonalizable [diagonalizable-matrix]:

$$P = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$$

and $A = PDP^{-1}$.
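The worked example can be double-checked numerically with the $P$ and $D$ found above; a NumPy sketch:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
P = np.array([[1.0, 1.0],
              [0.0, 1.0]])
D = np.diag([2.0, 3.0])

# P D P^{-1} reconstructs A
assert np.allclose(P @ D @ np.linalg.inv(P), A)
# Each column of P is an eigenvector: A v = lambda v
assert np.allclose(A @ P[:, 0], 2.0 * P[:, 0])
assert np.allclose(A @ P[:, 1], 3.0 * P[:, 1])
```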

References

AI Disclosure

This article was drafted with AI assistance (model: claude-haiku-4-5-20251001) from personal class notes (Zettelkasten). All mathematical claims are grounded in the cited notes. The article was structured, written, and edited to ensure technical accuracy and clarity.
