ResearchForge / Calculators
Tags: linear-algebra · pedagogy · matrix-operations · eigenvalues · diagonalization · Sat Apr 25

Linear Algebra: Common Mistakes and Misconceptions

Abstract

Linear algebra is foundational to mathematics, physics, computer science, and engineering, yet students frequently encounter conceptual pitfalls that impede deeper understanding. This article examines three prevalent misconceptions: the assumption that matrix multiplication is commutative, confusion about when a matrix can be inverted, and misunderstanding the conditions for diagonalizability. By clarifying these points with concrete examples, we aim to strengthen intuition and reduce errors in both coursework and applications.

Background

Linear algebra deals with vectors, matrices, and linear transformations. While the formal definitions are precise, students often develop intuitions based on analogies to scalar arithmetic that do not transfer directly. This gap between scalar and matrix operations creates systematic errors. Understanding where these intuitions fail is essential for mastery.

Key Results

Mistake 1: Assuming Matrix Multiplication is Commutative

A common error is treating matrix multiplication as commutative, that is, assuming $AB = BA$ for any matrices $A$ and $B$. This is false.

The definition of matrix multiplication [matrix-multiplication] specifies that for matrices $A$ (size $m \times n$) and $B$ (size $n \times p$), the product $AB$ is computed as:

$$(AB)_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$

This operation is fundamentally directional. The $i$-th row of $A$ is combined with the $j$-th column of $B$. Reversing the order changes which rows and columns interact, and the product may not even be defined if the dimensions do not align.

Example: Let $A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$.

Then $AB = \begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix}$ but $BA = \begin{pmatrix} 1 & 2 \\ 1 & 3 \end{pmatrix}$.

Clearly $AB \neq BA$. This non-commutativity is not an exception; it is the norm. Students must always be careful about the order of matrix multiplication, especially when solving equations or manipulating expressions.
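A quick way to make this concrete is to compute both products in code. The sketch below uses plain Python nested lists for the two $2 \times 2$ matrices from the example; the `matmul` helper is our own, not from any library.

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [0, 1]]
B = [[1, 0], [1, 1]]

AB = matmul(A, B)
BA = matmul(B, A)
print(AB)        # [[3, 2], [1, 1]]
print(BA)        # [[1, 2], [1, 3]]
print(AB == BA)  # False
```

Swapping the arguments changes which rows meet which columns, so the two products differ entry by entry, matching the hand computation above.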

Mistake 2: Confusing Invertibility with Other Matrix Properties

A second misconception is that a matrix is invertible if it "looks" invertible or has certain properties, such as being square or having non-zero entries. In reality, invertibility has a precise criterion: a square matrix is invertible if and only if its determinant is nonzero.

According to [determinant-of-a-matrix], the determinant of a square matrix $A$ is a scalar that indicates invertibility. For a $2 \times 2$ matrix $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$:

$$\det(A) = ad - bc$$

A matrix is invertible if and only if $\det(A) \neq 0$. If $\det(A) = 0$, the matrix is singular and has no inverse.

The determinant also encodes geometric meaning: it is the signed factor by which the transformation scales area (in two dimensions) or volume (in three) [determinant-properties]. A zero determinant means the transformation collapses the space into a lower dimension, destroying information and making inversion impossible.

Common error: Students sometimes assume that because a matrix is square, it must be invertible. This is incorrect. A square matrix with determinant zero is singular.
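The determinant test is easy to automate. Below is a minimal pure-Python sketch for $2 \times 2$ matrices; the helper names `det2` and `inverse2` are our own.

```python
def det2(A):
    """ad - bc for a 2x2 matrix A = [[a, b], [c, d]]."""
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def inverse2(A):
    """Return the inverse of a 2x2 matrix, or None if it is singular."""
    d = det2(A)
    if d == 0:
        return None  # determinant zero: no inverse exists
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

print(det2([[2, 4], [1, 2]]))      # 0 -> square, yet singular
print(inverse2([[2, 4], [1, 2]]))  # None
print(inverse2([[1, 2], [3, 4]]))  # [[-2.0, 1.0], [1.5, -0.5]]
```

The first matrix is square but singular, illustrating the common error directly: squareness alone does not guarantee an inverse.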

Mistake 3: Misunderstanding Diagonalizability

A third misconception is that all square matrices can be diagonalized, or that diagonalizability depends only on having distinct eigenvalues.

According to [diagonalizable-matrix], a matrix $A$ is diagonalizable if there exist an invertible matrix $P$ and a diagonal matrix $D$ such that:

$$A = PDP^{-1}$$

The columns of $P$ must be linearly independent eigenvectors of $A$. This is the key requirement: we need enough linearly independent eigenvectors to form a basis.

A matrix with repeated eigenvalues may or may not be diagonalizable. If an eigenvalue has algebraic multiplicity greater than its geometric multiplicity (the dimension of the corresponding eigenspace), the matrix cannot be diagonalized. Not all square matrices are diagonalizable.

Implication: Diagonalization is a powerful tool [diagonalizable-matrix] for simplifying matrix powers and solving differential equations, but it is not always available. Checking diagonalizability requires computing eigenvalues and verifying that enough linearly independent eigenvectors exist.
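For the $2 \times 2$ case, the check described above can be sketched directly from the trace and determinant of the matrix. This is a simplified illustration, restricted to real matrices and diagonalizability over the reals; the function name is our own.

```python
def diagonalizable_2x2(A):
    """Decide diagonalizability (over the reals) of a real 2x2 matrix."""
    a, b = A[0][0], A[0][1]
    c, d = A[1][0], A[1][1]
    tr = a + d
    det = a * d - b * c
    disc = tr * tr - 4 * det  # discriminant of the characteristic polynomial
    if disc > 0:
        return True   # two distinct real eigenvalues: always diagonalizable
    if disc < 0:
        return False  # complex eigenvalues: not diagonalizable over the reals
    # Repeated eigenvalue lam = tr/2: diagonalizable only if the eigenspace
    # is 2-dimensional, i.e. A is already the scalar matrix lam * I.
    lam = tr / 2
    return b == 0 and c == 0 and a == lam and d == lam

print(diagonalizable_2x2([[2, 1], [0, 2]]))  # False: one eigenvector short
print(diagonalizable_2x2([[2, 0], [0, 2]]))  # True: full eigenspace
print(diagonalizable_2x2([[1, 0], [0, 2]]))  # True: distinct eigenvalues
```

The repeated-eigenvalue branch is where the misconception bites: the first two calls have the same characteristic polynomial but opposite answers, because only the scalar matrix has a two-dimensional eigenspace.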

Worked Examples

Example 1: Non-Commutativity in Practice

Suppose we need to solve $B^{-1}(A - X) = AX$ for $X$, where $A$ and $B$ are invertible matrices.

Multiplying both sides by $B$ on the left: $A - X = B \cdot AX$

Rearranging: $A = X + BAX = (I + BA)X$, since $X$ is the rightmost factor in both terms.

Thus: $X = (I + BA)^{-1}A$ (provided $I + BA$ is invertible).

Note that we cannot write $X = A(I + BA)^{-1}$: the order matters. Indeed, [matrix-equation-solution] gives the solution as $X = (BA + I)^{-1}A$. The matrices $(I + BA)^{-1}$ and $(BA + I)^{-1}$ are the same (addition is commutative), but the position of $A$ relative to the inverse matters crucially.
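To build confidence in the order-sensitive algebra, one can check numerically that $X = (I + BA)^{-1}A$ satisfies the original equation while the wrong-order candidate does not. This is a pure-Python sketch; the sample matrices and the $2 \times 2$ helpers are our own, chosen so that $I + BA$ is invertible.

```python
def matmul(A, B):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    """Inverse of an invertible 2x2 matrix."""
    d = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / d, -A[0][1] / d],
            [-A[1][0] / d, A[0][0] / d]]

def close(M, N, tol=1e-9):
    """Entrywise comparison up to floating-point rounding."""
    return all(abs(M[i][j] - N[i][j]) < tol for i in range(2) for j in range(2))

A = [[1.0, 2.0], [0.0, 1.0]]   # sample invertible matrices (our choice)
B = [[2.0, 1.0], [1.0, 1.0]]
I = [[1.0, 0.0], [0.0, 1.0]]

BA = matmul(B, A)
I_plus_BA = [[I[i][j] + BA[i][j] for j in range(2)] for i in range(2)]

X = matmul(inv2(I_plus_BA), A)        # correct: (I + BA)^{-1} A
X_wrong = matmul(A, inv2(I_plus_BA))  # wrong order: A (I + BA)^{-1}

lhs = [[A[i][j] - X[i][j] for j in range(2)] for i in range(2)]  # A - X
print(close(lhs, matmul(BA, X)))  # True: A - X = BAX holds

lhs_w = [[A[i][j] - X_wrong[i][j] for j in range(2)] for i in range(2)]
print(close(lhs_w, matmul(BA, X_wrong)))  # False: wrong order fails
```

The second check drives the point home: moving $A$ to the other side of the inverse produces a matrix that does not solve the equation at all.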

Example 2: Determinant and Invertibility

Consider the matrix $A = \begin{pmatrix} 2 & 4 \\ 1 & 2 \end{pmatrix}$.

Computing the determinant: $\det(A) = (2)(2) - (4)(1) = 4 - 4 = 0$

Since $\det(A) = 0$, the matrix is singular and has no inverse. The second row is half the first, so the rows (and likewise the columns) are linearly dependent. The transformation collapses 2D space onto a line.

Example 3: Checking Diagonalizability

Consider $A = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$.

The characteristic polynomial is $\det(A - \lambda I) = (2 - \lambda)^2 = 0$, giving $\lambda = 2$ with algebraic multiplicity 2.

To find eigenvectors, solve $(A - 2I)v = 0$: $\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix} = 0$

This gives $v_2 = 0$, so eigenvectors are of the form $\begin{pmatrix} v_1 \\ 0 \end{pmatrix}$. The eigenspace is 1-dimensional.

Since the geometric multiplicity (1) is less than the algebraic multiplicity (2), the matrix is not diagonalizable. We cannot find two linearly independent eigenvectors.
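The multiplicity comparison can be confirmed mechanically: the geometric multiplicity is $2 - \operatorname{rank}(A - 2I)$. A small pure-Python sketch follows; the `rank2` helper is our own and valid only for $2 \times 2$ matrices.

```python
def rank2(M):
    """Rank of a 2x2 matrix given as nested lists."""
    rows = [r for r in M if any(abs(x) > 1e-12 for x in r)]
    if len(rows) < 2:
        return len(rows)  # zero or one nonzero row
    # Two nonzero rows: rank 1 if proportional, else 2.
    det = rows[0][0] * rows[1][1] - rows[0][1] * rows[1][0]
    return 1 if abs(det) < 1e-12 else 2

A_minus_2I = [[0, 1], [0, 0]]  # A - 2I for A = [[2, 1], [0, 2]]
geometric_mult = 2 - rank2(A_minus_2I)
print(geometric_mult)  # 1 < algebraic multiplicity 2: not diagonalizable
```

The rank-nullity view generalizes: for an $n \times n$ matrix, the geometric multiplicity of $\lambda$ is $n - \operatorname{rank}(A - \lambda I)$.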

References

AI Disclosure

This article was drafted with AI assistance. The structure, examples, and explanations were generated based on the provided class notes. All mathematical claims are grounded in the cited notes; no external sources were consulted. The worked examples were constructed to illustrate the misconceptions discussed and have been verified for correctness.

AI disclosure: Generated from personal class notes with AI assistance. Every factual claim cites a note. Model: claude-haiku-4-5-20251001.