Tags: linear-algebra, pedagogy, matrix-operations, eigenvalues, diagonalization · Fri Apr 24

Linear Algebra: Common Mistakes and Misconceptions

Abstract

Linear algebra is foundational to mathematics, engineering, and data science, yet students frequently misunderstand core concepts. This article identifies and clarifies five common misconceptions: the assumption that matrix multiplication is commutative, confusion about when determinants determine invertibility, misidentification of column space bases, conflation of null space with column space, and incorrect conditions for diagonalizability. Each misconception is paired with the correct principle and worked examples to reinforce understanding.

Background

Linear algebra courses introduce students to matrices, determinants, eigenvalues, and vector spaces. While the formal definitions are typically presented clearly, students often develop intuitions that contradict the underlying mathematics. These misconceptions frequently persist because they seem plausible by analogy to scalar arithmetic or because a single counterexample is insufficient to dislodge them. This article draws on course materials to address the most consequential errors.

Key Results and Misconceptions

Misconception 1: Matrix Multiplication is Commutative

The Error: Students often assume that $AB = BA$ for any matrices $A$ and $B$.

Why It's Wrong: [matrix-multiplication] establishes that matrix multiplication combines two matrices via the rule $(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$. This operation is fundamentally asymmetric: $AB$ pairs the rows of $A$ with the columns of $B$, while $BA$ pairs the rows of $B$ with the columns of $A$. In general, these produce different results.

Correct Principle: Matrix multiplication is not commutative. For most matrices, $AB \neq BA$.

Why This Matters: Non-commutativity has profound implications. When solving matrix equations or composing transformations, the order of operations is critical. Swapping the order can yield an entirely different result or even make the product undefined (if dimensions don't align).
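A two-line computation makes the asymmetry concrete. The sketch below uses NumPy with two arbitrarily chosen $2 \times 2$ matrices (illustrations of ours, not examples from the course notes):

```python
import numpy as np

# Two arbitrary 2x2 matrices chosen for illustration
A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])

AB = A @ B  # rows of A paired with columns of B
BA = B @ A  # rows of B paired with columns of A

print(AB)  # [[2 1], [4 3]]
print(BA)  # [[3 4], [1 2]]
print(np.array_equal(AB, BA))  # False
```

Right-multiplying by $B$ here swaps the columns of $A$, while left-multiplying swaps its rows, so the two products cannot agree.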


Misconception 2: Treating the Determinant Test for Invertibility as a One-Way Implication

The Error: Students correctly learn that $\det(A) = 0$ implies $A$ is singular, but sometimes fail to internalize that $\det(A) \neq 0$ is necessary and sufficient for invertibility.

Why It Matters: [determinant-of-a-matrix] establishes that the determinant indicates invertibility. A non-zero determinant guarantees the existence of an inverse; a zero determinant guarantees it does not exist. This is a biconditional relationship, not a one-way implication.

Correct Principle: A square matrix $A$ is invertible if and only if $\det(A) \neq 0$.

Common Slip: Students sometimes treat determinant computation as optional or secondary, when in fact it is the primary test for invertibility.
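The determinant test is easy to mechanize. The helper below is an illustrative sketch (the name `is_invertible` and the tolerance are our own choices; in floating-point arithmetic a rank or condition-number check is more robust than comparing a determinant against a cutoff):

```python
import numpy as np

def is_invertible(M, tol=1e-12):
    """Determinant-based invertibility test (teaching sketch, not production code)."""
    return abs(np.linalg.det(M)) > tol

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])   # second row is twice the first, so det = 0
regular = np.array([[1.0, 2.0],
                    [3.0, 4.0]])    # det = 1*4 - 2*3 = -2

print(is_invertible(singular))  # False
print(is_invertible(regular))   # True
```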


Misconception 3: The Basis of the Column Space Consists of All Columns

The Error: Students identify the column space basis as the set of all columns of a matrix, rather than the linearly independent subset.

Why It's Wrong: [basis-of-column-space] clarifies that the basis of $\text{Col}(A)$ is found by identifying the pivot columns in row echelon form. Not all columns are pivot columns; some are linear combinations of others.

Correct Principle: A basis for the column space consists of the columns of $A$ that correspond to pivot positions in its row echelon form. The dimension of the column space equals the number of pivot columns (the rank of $A$).

Worked Example: Consider $A = \begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$. The matrix has three columns, but only one pivot column (the first). Thus $\dim(\text{Col}(A)) = 1$, and the basis is $\left\{ \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \right\}$, not all three columns.
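This example can be checked numerically: `numpy.linalg.matrix_rank` computes $\dim(\text{Col}(A))$. (It uses an internal tolerance, so treat it as a sanity check rather than exact arithmetic.)

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 0, 0],
              [0, 0, 0]])

# dim Col(A) = rank(A): only the first column is a pivot column
print(np.linalg.matrix_rank(A))  # 1
```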


Misconception 4: The Null Space and Column Space Are the Same Thing

The Error: Students conflate the null space (the solutions to $Ax = 0$) with the column space (all possible outputs $Ax$).

Why It's Wrong: [basis-of-null-space] defines the null space as the set of all vectors $x$ satisfying $Ax = 0$. This is fundamentally different from the column space, which is the set of all possible outputs. The null space lives in the domain; the column space lives in the codomain.

Correct Principle:

  • $\text{Null}(A)$ is a subspace of $\mathbb{R}^n$ (the domain).
  • $\text{Col}(A)$ is a subspace of $\mathbb{R}^m$ (the codomain).
  • For an $m \times n$ matrix, $\dim(\text{Null}(A)) + \dim(\text{Col}(A)) = n$ (the rank-nullity theorem).

Why This Matters: Confusing these spaces leads to errors in understanding solution sets of linear systems and the geometry of linear transformations.
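The rank-nullity theorem is straightforward to verify numerically. The sketch below reuses the rank-1 matrix from Misconception 3:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [0, 0, 0],
              [0, 0, 0]])
m, n = A.shape

rank = np.linalg.matrix_rank(A)  # dim Col(A), a subspace of R^m
nullity = n - rank               # dim Null(A), a subspace of R^n

print(rank, nullity)             # 1 2
print(rank + nullity == n)       # True: rank-nullity theorem
```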


Misconception 5: A Matrix Is Diagonalizable If It Has $n$ Eigenvalues

The Error: Students assume that finding $n$ eigenvalues (counting multiplicities) for an $n \times n$ matrix guarantees diagonalizability.

Why It's Wrong: [diagonalizable-matrix] states that a matrix $A$ is diagonalizable if and only if there exists an invertible matrix $P$ and a diagonal matrix $D$ such that $A = PDP^{-1}$. This requires that the eigenvectors form a basis, that is, there are $n$ linearly independent eigenvectors.

Correct Principle: A matrix is diagonalizable if and only if it has $n$ linearly independent eigenvectors. Having $n$ eigenvalues (with multiplicity) is necessary but not sufficient.

Worked Example: Consider $A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$. The characteristic polynomial is $\det(A - \lambda I) = (1 - \lambda)^2$, giving eigenvalue $\lambda = 1$ with algebraic multiplicity 2. However, solving $(A - I)v = 0$ yields only one linearly independent eigenvector: $v = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$. Since we have only one eigenvector but need two, $A$ is not diagonalizable.
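One way to check this computationally is to compute the geometric multiplicity directly, since it equals $n - \text{rank}(A - \lambda I)$. A NumPy sketch of this example:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
lam = 1.0  # the only eigenvalue, algebraic multiplicity 2

# Geometric multiplicity = dim Null(A - lam*I) = n - rank(A - lam*I)
geom_mult = A.shape[0] - np.linalg.matrix_rank(A - lam * np.eye(2))
print(geom_mult)  # 1: fewer than 2 independent eigenvectors, so not diagonalizable
```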


Determinant Properties and Row Operations

A related source of confusion involves determinant properties under row operations. [determinant-properties] clarifies:

  1. Swapping two rows multiplies the determinant by $-1$.
  2. Multiplying a row by a scalar $c$ multiplies the determinant by $c$.
  3. Adding a multiple of one row to another does not change the determinant.

Common Mistake: Students sometimes apply row operations and forget to track how the determinant changes, leading to incorrect conclusions about invertibility.
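All three rules can be verified numerically. In the sketch below, the matrix and the scalar $c = 5$ are arbitrary illustrations (results are rounded because `numpy.linalg.det` works in floating point):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
d = np.linalg.det(A)  # 1*4 - 2*3 = -2

swapped = A[[1, 0], :]                       # rule 1: swap the two rows
scaled = A.copy(); scaled[0] *= 5            # rule 2: multiply row 1 by c = 5
added = A.copy(); added[1] += 3 * added[0]   # rule 3: add 3*(row 1) to row 2

print(round(np.linalg.det(swapped), 6))  # 2.0   (= -d)
print(round(np.linalg.det(scaled), 6))   # -10.0 (= 5d)
print(round(np.linalg.det(added), 6))    # -2.0  (= d, unchanged)
```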


Worked Examples

Example 1: Identifying a Column Space Basis

Given $A = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 5 \\ 1 & 2 & 1 \end{pmatrix}$, find a basis for $\text{Col}(A)$.

Row reduce to echelon form (subtract $2R_1$ from $R_2$ and $R_1$ from $R_3$, then subtract $2R_2$ from $R_3$): $\begin{pmatrix} 1 & 2 & 3 \\ 0 & 0 & -1 \\ 0 & 0 & 0 \end{pmatrix}$

Pivot columns are the 1st and 3rd. Thus, a basis for $\text{Col}(A)$ is: $\left\{ \begin{pmatrix} 1 \\ 2 \\ 1 \end{pmatrix}, \begin{pmatrix} 3 \\ 5 \\ 1 \end{pmatrix} \right\}$

Note: We use the original columns corresponding to pivot positions, not the reduced form.
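The pivot-finding procedure itself can be sketched in code. The routine below is a teaching implementation of Gauss-Jordan elimination that records which columns receive pivots; the function name and tolerance are our own choices, and it is not numerically robust for ill-conditioned input:

```python
import numpy as np

def pivot_columns(M, tol=1e-10):
    """Return pivot column indices via Gauss-Jordan elimination (teaching sketch)."""
    R = M.astype(float).copy()
    pivots, row = [], 0
    for col in range(R.shape[1]):
        if row >= R.shape[0]:
            break
        # find a row at or below `row` with a nonzero entry in this column
        candidates = np.where(np.abs(R[row:, col]) > tol)[0]
        if candidates.size == 0:
            continue  # no pivot in this column
        pivot_row = row + candidates[0]
        R[[row, pivot_row]] = R[[pivot_row, row]]  # swap pivot row into place
        R[row] = R[row] / R[row, col]              # normalize pivot entry to 1
        for r in range(R.shape[0]):                # eliminate the column elsewhere
            if r != row:
                R[r] -= R[r, col] * R[row]
        pivots.append(col)
        row += 1
    return pivots

A = np.array([[1, 2, 3],
              [2, 4, 5],
              [1, 2, 1]])
cols = pivot_columns(A)
print(cols)          # [0, 2]
print(A[:, cols].T)  # rows show the basis vectors (1, 2, 1) and (3, 5, 1)
```

Note that the basis is read off from the original matrix `A[:, cols]`, matching the remark above about using original columns rather than the reduced form.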

Example 2: Testing Diagonalizability

For $A = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$, find the eigenvalues and determine whether $A$ is diagonalizable.

The characteristic equation is $\det(A - \lambda I) = (2 - \lambda)(3 - \lambda) = 0$, giving $\lambda_1 = 2$ and $\lambda_2 = 3$.

For $\lambda_1 = 2$: $(A - 2I)v = 0$ yields $v_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}$.

For $\lambda_2 = 3$: $(A - 3I)v = 0$ yields $v_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}$.

Since $v_1$ and $v_2$ are linearly independent, $A$ is diagonalizable with $P = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ and $D = \begin{pmatrix} 2 & 0 \\ 0 & 3 \end{pmatrix}$.
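This factorization can be confirmed numerically with `numpy.linalg.eig`, whose returned eigenvector matrix plays the role of $P$:

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [0.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(eigenvalues)

# Verify the factorization A = P D P^{-1}
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```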


References

[matrix-multiplication] [determinant-properties] [determinant-of-a-matrix] [basis-of-column-space] [basis-of-null-space] [eigenvalues-of-a-matrix] [diagonalizable-matrix] [matrix-inversion-formula]


AI Disclosure

This article was drafted with the assistance of an AI language model (claude-haiku-4-5-20251001). The mathematical claims and definitions are derived from the cited course notes and are presented in paraphrased form. All factual statements are attributed to source notes via wikilinks. The worked examples and explanations were generated by the AI but reviewed for technical accuracy against the source material.

