Linear Algebra: Key Theorems and Proofs
Abstract
This article surveys foundational theorems in linear algebra, focusing on the structure of vector spaces, matrix properties, and diagonalization. We examine the bases of column and null spaces, determinants, eigenvalues, and conditions for diagonalizability, with emphasis on how these concepts interconnect to provide insight into linear transformations.
Background
Linear algebra provides the mathematical framework for understanding linear transformations and vector spaces. At its core are questions about invertibility, solvability of systems, and the geometric meaning of matrix operations. The theorems discussed here form the backbone of computational and theoretical linear algebra, with applications spanning data science, physics, and engineering.
A matrix encodes a linear transformation, and its properties—whether it is invertible, how it scales space, which vectors it leaves unchanged—determine the behavior of systems it governs. Understanding these properties requires tools that extract structural information from the matrix itself.
Key Results
Column Space and Basis
The column space of a matrix $A$, denoted $\operatorname{Col}(A)$, is the subspace consisting of all linear combinations of its columns [basis-of-column-space]. A basis for this space consists of linearly independent columns that generate it.
A practical method for finding this basis is to perform row reduction on $A$ and identify the pivot columns. The columns of $A$ corresponding to pivot positions in the row echelon form constitute a basis for $\operatorname{Col}(A)$ [basis-of-column-space]. Importantly, the dimension of the column space (the rank of $A$) equals the number of pivot columns.
This result is essential for understanding whether a system $A\mathbf{x} = \mathbf{b}$ has a solution: the system is consistent if and only if $\mathbf{b}$ lies in $\operatorname{Col}(A)$.
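To make the pivot-column method concrete, here is a minimal sketch in Python using sympy; the matrix is an illustrative choice for this article, not one taken from the notes:

```python
from sympy import Matrix

# Illustrative matrix: the third column is the sum of the first two,
# so only two columns are linearly independent.
A = Matrix([
    [1, 2, 3],
    [2, 4, 6],
    [1, 0, 1],
])

# rref() returns the reduced row echelon form together with the
# indices of the pivot columns.
_, pivots = A.rref()

# The pivot columns of the ORIGINAL matrix A form a basis for Col(A).
basis = [A.col(j) for j in pivots]
print(pivots)  # (0, 1)
print(basis)   # [Matrix([[1], [2], [1]]), Matrix([[2], [4], [0]])]
```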
Null Space and Basis
The null space of $A$, denoted $\operatorname{Nul}(A)$, is the set of all vectors $\mathbf{x}$ satisfying the homogeneous equation $A\mathbf{x} = \mathbf{0}$ [basis-of-null-space]. Finding a basis for the null space requires solving this equation and expressing the general solution in terms of free variables.
The dimension of the null space equals the number of free variables in the solution [basis-of-null-space]. A non-trivial null space (dimension greater than zero) indicates that the transformation represented by $A$ collapses some directions to zero, revealing linear dependence among the columns.
The relationship between these two spaces is formalized by the Rank-Nullity Theorem: for an $m \times n$ matrix $A$, $\operatorname{rank}(A) + \operatorname{nullity}(A) = n$, where rank is the dimension of the column space and nullity is the dimension of the null space.
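The same illustrative matrix from the previous sketch shows the null space calculation and the Rank-Nullity Theorem in action, again under the assumed example:

```python
from sympy import Matrix

# Same illustrative matrix as above: rank 2, so one free variable.
A = Matrix([
    [1, 2, 3],
    [2, 4, 6],
    [1, 0, 1],
])

# nullspace() solves A x = 0 and returns one basis vector per free variable.
null_basis = A.nullspace()
print(null_basis)  # [Matrix([[-1], [-1], [1]])]

# Rank-Nullity Theorem: rank(A) + nullity(A) = n, the number of columns.
assert A.rank() + len(null_basis) == A.cols  # 2 + 1 == 3
```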
Determinant
The determinant of a square matrix $A$, denoted $\det(A)$, is a scalar that encodes critical information about the matrix [determinant-of-a-matrix]. For a $2 \times 2$ matrix, $\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$.
For larger matrices, the determinant can be computed via cofactor expansion or row reduction [determinant-of-a-matrix].
The determinant serves multiple purposes: it indicates invertibility ($A$ is invertible if and only if $\det(A) \neq 0$), it represents the signed volume scaling factor of the linear transformation, and it is used to compute eigenvalues. A zero determinant signals that the transformation collapses space into a lower dimension, corresponding to linear dependence among rows or columns [determinant-of-a-matrix].
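A brief numerical illustration of these properties, again a sympy sketch with matrices chosen for this article:

```python
from sympy import Matrix

# Invertible example: det = 3*2 - 1*4 = 2, which is non-zero.
A = Matrix([[3, 1],
            [4, 2]])
print(A.det())  # 2

# Singular example: the second row is twice the first, so the rows
# are linearly dependent and the determinant vanishes.
B = Matrix([[1, 2],
            [2, 4]])
print(B.det())  # 0
print(A.inv())  # exists; B.inv() would raise an error
```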
Eigenvalues and Eigenvectors
For a square matrix $A$, an eigenvalue is a scalar $\lambda$ such that there exists a non-zero vector $\mathbf{v}$ (called an eigenvector) satisfying $A\mathbf{v} = \lambda\mathbf{v}$.
Eigenvalues are found by solving the characteristic equation $\det(A - \lambda I) = 0$ [eigenvalues-of-a-matrix].
This equation is obtained by rearranging $A\mathbf{v} = \lambda\mathbf{v}$ as $(A - \lambda I)\mathbf{v} = \mathbf{0}$: a non-zero solution $\mathbf{v}$ exists exactly when $A - \lambda I$ is singular, that is, when its determinant vanishes. Expanding the determinant yields a polynomial in $\lambda$ whose roots are the eigenvalues [eigenvalues-of-a-matrix].
Eigenvalues reveal how the transformation stretches or compresses directions in space. They appear in stability analysis, oscillation frequencies, and growth rates across applications [eigenvalues-of-a-matrix].
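The characteristic-equation procedure can be traced step by step in code; a sketch with sympy, using a matrix chosen here for illustration:

```python
from sympy import Matrix, eye, symbols, solve

lam = symbols('lam')

# Illustrative symmetric matrix.
A = Matrix([[2, 1],
            [1, 2]])

# Build det(A - lam*I) and solve the characteristic equation det = 0.
char_poly = (A - lam * eye(2)).det()
print(char_poly.expand())     # lam**2 - 4*lam + 3
print(solve(char_poly, lam))  # roots 1 and 3

# Cross-check against sympy's built-in routine (eigenvalue: multiplicity).
print(A.eigenvals())          # {1: 1, 3: 1}
```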
Diagonalization
A matrix $A$ is diagonalizable if it can be written as $A = PDP^{-1}$, where $D$ is a diagonal matrix and $P$ is an invertible matrix whose columns are eigenvectors of $A$ [diagonalizable-matrix].
Diagonalization is powerful because it simplifies computation: raising $A$ to a power becomes $A^k = PD^kP^{-1}$, and since $D$ is diagonal, $D^k$ is computed by raising each diagonal entry to the $k$-th power [diagonalizable-matrix].
An $n \times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors. This occurs exactly when the eigenvectors can be chosen to form a basis for the vector space [diagonalizable-matrix].
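The power-computation payoff is easy to verify in code; a sketch with sympy, reusing the illustrative symmetric matrix from the previous snippet (symmetric real matrices always have a full set of eigenvectors):

```python
from sympy import Matrix

# Illustrative symmetric matrix; eigenvalues 1 and 3.
A = Matrix([[2, 1],
            [1, 2]])

# diagonalize() returns P (eigenvectors as columns) and diagonal D
# such that A = P D P^{-1}.
P, D = A.diagonalize()
assert A == P * D * P.inv()

# Powers become cheap: D**10 just raises each diagonal entry to the 10th power.
assert A**10 == P * D**10 * P.inv()
```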
Worked Example
Consider the matrix
$$A = \begin{pmatrix} 4 & 1 \\ 2 & 3 \end{pmatrix}.$$
Finding eigenvalues: Compute $\det(A - \lambda I) = 0$:
$$\det\begin{pmatrix} 4 - \lambda & 1 \\ 2 & 3 - \lambda \end{pmatrix} = (4 - \lambda)(3 - \lambda) - 2 = \lambda^2 - 7\lambda + 10 = (\lambda - 5)(\lambda - 2) = 0.$$
So $\lambda_1 = 5$ and $\lambda_2 = 2$ [eigenvalues-of-a-matrix].
Finding eigenvectors: For $\lambda_1 = 5$, solve $(A - 5I)\mathbf{v} = \mathbf{0}$:
$$\begin{pmatrix} -1 & 1 \\ 2 & -2 \end{pmatrix}\mathbf{v} = \mathbf{0} \quad\Longrightarrow\quad v_1 = v_2.$$
An eigenvector is $\mathbf{v}_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix}$.
For $\lambda_2 = 2$, solve $(A - 2I)\mathbf{v} = \mathbf{0}$:
$$\begin{pmatrix} 2 & 1 \\ 2 & 1 \end{pmatrix}\mathbf{v} = \mathbf{0} \quad\Longrightarrow\quad 2v_1 + v_2 = 0.$$
An eigenvector is $\mathbf{v}_2 = \begin{pmatrix} 1 \\ -2 \end{pmatrix}$.
Diagonalization: Since we have two linearly independent eigenvectors, $A$ is diagonalizable [diagonalizable-matrix]:
$$P = \begin{pmatrix} 1 & 1 \\ 1 & -2 \end{pmatrix} \quad\text{and}\quad D = \begin{pmatrix} 5 & 0 \\ 0 & 2 \end{pmatrix}, \qquad A = PDP^{-1}.$$
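The hand computation above can be checked mechanically; a short sympy verification using this example's matrix:

```python
from sympy import Matrix

A = Matrix([[4, 1],
            [2, 3]])

# eigenvects() returns (eigenvalue, multiplicity, basis vectors) triples.
for val, mult, vecs in A.eigenvects():
    print(val, vecs)  # directions proportional to (1, -2) for 2 and (1, 1) for 5

# Confirm A = P D P^{-1} with the P and D found above.
P = Matrix([[1, 1],
            [1, -2]])
D = Matrix([[5, 0],
            [0, 2]])
assert A == P * D * P.inv()
```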
References
- [basis-of-column-space]
- [basis-of-null-space]
- [determinant-of-a-matrix]
- [eigenvalues-of-a-matrix]
- [diagonalizable-matrix]
AI Disclosure
This article was drafted with AI assistance from class notes (Zettelkasten). All mathematical claims are grounded in cited notes. The article was structured, written, and edited to ensure technical accuracy and clarity.